Code Analysis and Debugging Tools in Microsoft Dynamics NAV 2009

by David A. Studebaker | September 2010

In the previous article, Microsoft Dynamics NAV 2009 Development Tools, we gained an overall view of NAV as an application software system.

The goal of this article by David Studebaker, author of Programming Microsoft Dynamics NAV 2009, is to learn about many of the debugging tools and techniques available to the NAV developer. As it has been pointed out, "Without programmers, there are no bugs." As we are all developers and therefore a primary source of bugs, we need to be knowledgeable about the tools we can use to stamp out those bugs. Fortunately, NAV has a good arsenal of such tools.


The NAV tools and techniques that you use to determine what code to modify and to help you debug modifications are essentially the same. The goal in the first case is to focus on your modifications, so that you have the minimum effect on the standard code. This results in multiple benefits. Smaller pieces of well focused code are easier to debug, easier to document, easier to maintain, and easier to upgrade.

Because of NAV's relatively tight structure and unique combination of features, it is not unusual to spend significantly more time determining the right way to make a modification than it takes to actually code it. Obviously this depends on the type of modification being made. Unfortunately, the lack of documentation regarding NAV's internals also extends the analysis time required to design modifications. The following sections review some of the tools and techniques you can use to analyze and test.

Developer's Toolkit

To paraphrase the introduction in the NAV Developer's Toolkit documentation, the Toolkit is designed to help you analyze the source code. This makes it easier to design and develop application customizations and to perform updates. The Developer's Toolkit is not part of the standard product distribution, but is available to all Microsoft Partners for NAV for download from the Partner website. While it takes a few minutes to set up the Developer's Toolkit for the database on which you will be working, the investment is worthwhile. Follow the instructions in the Developer's Toolkit manual for creating and loading your Toolkit database. The Help files in the Developer's Toolkit are also useful. As of late 2009, the current NAV Developer's Toolkit is V3.00. V3.00 does not deal with the new features of NAV 2009 associated with the Role Tailored Client or three tier functionality.

The NAV Developer's Toolkit has two major categories of tools—the Compare and Merge Tools, and the Source Analyzer:

  • The Compare and Merge Tools are useful anytime you want to compare a production database's objects to an unmodified set of objects to identify what has been changed. This might be in the process of upgrading the database to a new version or simply to better understand the contents of a database when you are about to embark on a new modification adventure.
  • The Source Analyzer tools are the more general purpose set of tools. Once you have loaded the source information for all your objects into the Developer's Tools database, you will be able to quickly generate a variety of useful code analyses. The starting point for your code analyses will be the Object Administrator view as shown in the following screenshot:


When you get to this point, it's worthwhile experimenting with various menu options for each of the object types to get comfortable with the environment and how the tools work. Not only are there several tool options, but multiple viewing options. Some will be more useful than others depending on the specifics of the modification task you are addressing as well as your working habits.

Relations to Tables

With rare exceptions, table relations are defined between tables. The Toolkit allows you to select an object and request analysis of the defined relations between elements in that object and various tables. As a test of how the Relations to Tables analysis works, we will expand our Table entry in the Object Administrator to show all the tables. Then we will choose the Location table, right-click, and choose the option to view its Relations to other Tables with the result shown in the following screenshot:


If we want to see more detail, we can right-click on the Location table name in the right window, choose the Expand All option, and see the results as shown in the following screenshot:


This shows us the Relations to Tables, with the relating (from) field and the related (to) field both showing in each line.

Relations from Objects

If you are checking to see what objects have a relationship pointing back to a particular table (the inverse of what we just looked at), you can find that out in essentially the same fashion. Right-click on the table of interest and choose the Relations from Objects option. If you want to see both sets of relationships in the same display, you can then right-click on the table name in the right window and choose the Relation to Tables option.

At that point your display will show both sets of relationships as shown in the following screenshot for the table Reservation Entry:


Source Access

On any of these screens, you can select one of the relationships and drill down further into the detail of the underlying C/AL code. When you highlight one of the identified relationships and access the Code Viewer, the Toolkit will show you the object code where the relationship is defined. There is also a separate search tool, the Source Finder, which we will use shortly.

Where Used

The Developer's Toolkit contains other tools that are also quite valuable to you as a developer. The idea of Where Used is fairly simple: list all the places where an element is used within the total library of source information. There are two different types of Where Used.

  • The Toolkit's first type of Where Used is powerful because it can search for uses of whole tables or key sequences or individual fields. Many developers also use other tools (primarily developer's text editors) to accomplish some of this. However, the Developer's Toolkit is specifically designed for use with C/AL and C/SIDE.
  • The second type of "Where Used" is Where Used With. This version of the Toolkit Where Used tool allows you to focus the search. Selecting the Where Used With Options brings up the screen in the following screenshot. As you can see, the degree of control you have over the search is extensive.



Trying it out

To really appreciate the capabilities and flexibility of the Developer's Toolkit, you must work with it on a real-life task. For example, what if your firm were in a market where mergers of firms were a frequent occurrence? To manage this, the manager of accounting might decide that the system needs to be able to merge the data for two customers, including accounting and sales history, under a single customer number. To do that, you must first find all the instances of Customer No. referenced in keys of other tables. The tool to do this in the Developer's Toolkit is the Source Finder.

Call up the Source Finder and first Reset all fields by clearing them. Then enter the name of the field you are looking for, in this case Customer No., as shown in the following screenshot:

Now specify you are only looking for information contained in Tables, as shown in the following screenshot:

Next, specify that the search should only be in Keys, as shown in the following screenshot:

Your initial results will look like those in the following screenshot:

This data can be further constrained through the use of Filters (for example to find only Key 1 entries) and can be sorted by clicking on a column head.

Of course, as mentioned earlier, it will help you to experiment along the way. Don't make the mistake of thinking the Developer's Toolkit is the only tool you need to use. At the same time, don't make the mistake of ignoring this tool just because it won't do everything.

Working in exported text code

As mentioned a little earlier, some developers export objects into text files, then use a text editor to manipulate them. Let us take a look at an object that has been exported into text and opened in a text editor.

We will use one of the tables that are part of our ICAN development, the Donor Type table (50001), as shown in the following screenshot:

The general structure of all exported objects is similar, with differences that you would expect for the different object types. For example, Table objects have no Sections, but Report objects do. You can also see here that this particular table contains no C/AL-coded logic, as any such statements would appear in the text listing.

You can see by looking at this table object text screenshot that you could easily search for instances of the string Code throughout the text export of the entire system, but it would be more difficult to look for references to the Donor Type form/page, Form50002. And, while you can find the instances of Code with your text editor, it would be quite difficult to differentiate those instances that relate to the Donor Type table from those in any other table. This includes those that have nothing to do with our ICAN system enhancement, as well as those simply defined in an object as Global Variables. However, the Developer's Toolkit can make that differentiation.

If you were determined to use a text editor to find all instances of "Donor Type".Code, you could do the following:

Rename the field in question to something unique. C/SIDE will rename all the references to this field. Then export all the objects to text and use your text editor (or even Microsoft Word) to find the unique name. You must either remember to return the field in the database to its original name or you must be working in a temporary "work copy" of the database, which you will shortly discard. Otherwise, you will have quite a mess.

One task that needs to be done occasionally is to renumber an object or to change a reference inside an object that refers to a no longer existing element. The C/SIDE editor may not let you do that easily, or in some cases, not at all. In such a case, the best answer is to export the object into text, make the change there and then import it back in as modified. Be careful though. When you import a text object, C/SIDE does not check to see if you are overwriting another instance of that object number. C/SIDE makes that check when you import a fob (that is a compiled object) and warns you. If you must do renumbering, you should check the NAV forums on the Internet for the renumbering tools that are available there.

Theoretically, you could write all of your C/AL code with a text editor and then import the result. Given the difficulty of such a task and the usefulness of the tools embedded in C/SIDE, such an approach would be foolish. However, there are occasions when it is very helpful to simply view an object "flattened out" in text format. In a report where you may have overlapping logic in multiple data items and in several control triggers as well, the only way to see all the logic at once is in text format.

You can use any text editor you like, Notepad or Word or one of the visual programming editors; the exported object is just text. You need to cope with the fact that when you export a large number of objects in one pass, they all end up in the same text file. That makes the exported file relatively difficult to use. The solution is to split that file into individual text files, named logically, one for each NAV object. There are several freeware tools to do just that, available from the NAV forums on the Internet.

Two excellent NAV forums are www.mibuso.com and www.dynamicsuser.net.


Using Navigate

Navigate (Page 344) is an often under-appreciated tool both for the user and for the developer. Our focus is on its value to the developer. You might enhance your extension of the NAV system by expanding the coverage of the Navigate function.

Testing with Navigate

Navigate searches for and displays the number and types of all the associated entries for a particular posting transaction. The term "associated", in this case, is defined as those entries having the same Document Number and Posting Date.

Navigate can be called from the Navigate action, which appears on each screen that displays any of the entries that a Navigate might find and display. It can also be called directly from various Navigate entries in Action lists. These are generally located within History menu groups as shown in the following screenshot:

If you invoke the Navigate page using the menu action item, you must enter the Posting Date and Document Number for the entries you wish to find. Alternately, you can enter a Business Contact Type (Vendor or Customer), a Business Contact No. (Vendor No. or Customer No.), and optionally, an External Document No. There are occasions when this option is useful, but the Posting Date + Document No. option is needed much more frequently.

Instead of seeking out a Navigate page and entering the critical data fields, it is much easier to call Navigate from a Navigate action on a page showing data. In this case, you just highlight a record and click on Navigate to search for all the related entries. In the following example, the first General Ledger Entry displayed is highlighted.


After clicking on the Navigate action, the Navigate page will pop up, filled in with the completed search, and will look similar to the following screenshot:

Had we accessed the Navigate page through one of the menu entries, we would have filled in the Document No. and Posting Date fields and clicked on Find. As you can see here, the Navigate form shows a list of related, posted entries including the one we highlighted to invoke the Navigate function. If you click on one of the items in the Table Name list at the bottom of the page, you will see an appropriately formatted display of the chosen entries.

For the G/L Entry table in this form, you would see a result like the following screenshot. Note that all the G/L Entry records displayed share the same Posting Date and Document No., matching those specified at the top of the Navigate page.

You may ask "Why is this application page being discussed in a section about C/AL debugging?" The answer is: "When you have to test, you need to check the results. When it is easier to do a thorough check of your test results, your testing will go faster and be of higher quality". Whenever you make a modification that will affect any data that could be displayed through the use of Navigate, it will quickly become one of your favorite testing tools.

Modifying for Navigate

If your modification creates a new table that will contain posted data and the records contain both Document No. and Posting Date fields, you can include this new table in the Navigate function.

The C/AL Code for Posting Date + Document No. Navigate functionality is found in the FindRecords function trigger of Page 344 – Navigate. The following screenshot illustrates the segment of the Navigate CASE statement code for the Check Ledger Entry table:

The code checks READPERMISSION. If that is enabled for this table, then the appropriate filtering is applied. Next, there is a call to the InsertIntoDocEntry function, which fills the temporary table that is displayed in the Navigate page. If you wish to add a new table to the Navigate function, you must replicate this code for your new table. In addition, you must add the code that will call up the appropriate page to display the records that Navigate finds. This code should be inserted in the ShowRecords function trigger of the Navigate page, similar to the lines in the following screenshot:
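For illustration only, here is a minimal sketch of what such an addition might look like for a hypothetical ICAN table named Donor Ledger Entry. The filter variable names and the parameter list of InsertIntoDocEntry should be copied from an existing branch of the actual Navigate code, as they may differ from what is shown here.

// FindRecords trigger of Page 344 - Navigate: sketch of an added CASE branch
// ("Donor Ledger Entry" is a hypothetical ICAN table; DocNoFilter and
// PostingDateFilter are assumed to be the page's existing filter variables)
IF "Donor Ledger Entry".READPERMISSION THEN BEGIN
  "Donor Ledger Entry".RESET;
  "Donor Ledger Entry".SETFILTER("Document No.",DocNoFilter);
  "Donor Ledger Entry".SETFILTER("Posting Date",PostingDateFilter);
  InsertIntoDocEntry(
    DATABASE::"Donor Ledger Entry",0,"Donor Ledger Entry".TABLECAPTION,"Donor Ledger Entry".COUNT);
END;

// ShowRecords trigger: matching CASE branch to display the found records
DATABASE::"Donor Ledger Entry":
  PAGE.RUN(0,"Donor Ledger Entry");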

Making a change like this, when appropriate, will not only provide a powerful tool for your users, but will also provide a powerful tool for you as a developer.

The C/SIDE Debugger

C/SIDE has a built-in Debugger which runs in the Classic Client environment. It is very helpful in tracking down the location of bugs in logic and processes. There are two basic usages of the available debugger. The first is identification of the location of a run-time error. This is a fairly simple process, accomplished by setting the debugger (from the Tools menu) to Active with the Break on Triggers option turned off, as shown in the following screenshot. When the run-time error occurs, the debugger will be activated and will display exactly where the error is occurring.

The second option is the support of traditional tracing of logic. Use of the Debugger for this purpose is hard work. It requires that you, the developer, have firmly in mind how the code logic is supposed to work, what the values of variables should be under all conditions, and the ability to discern when the code is not working as intended.

The Debugger allows you to see what code is being executed, either on a step-by-step basis (by pressing F8) or trigger by trigger (by pressing F5). You can set your own Breakpoints (by pressing F9), points at which the Debugger will break the execution so you can examine the status of the system. The method by which execution is halted in the Debugger doesn't matter (so long as it is not by a run-time error); you have a myriad of tools with which to examine the status of the system at that point.

The C/SIDE Code Coverage tool

Code Coverage is another "Classic Client only" debugging tool. Code Coverage tracks code as it is executed and logs it for your review and analysis. Code Coverage is accessed from the Tools | Debugger option. When you invoke Code Coverage, it simply opens the Code Coverage form. Start Code Coverage by clicking on Start, and stop it by returning to the form via the Windows menu option, as the Code Coverage form will remain open while it is running.

Just like the C/SIDE Debugger, there is no substitute for experimenting to learn more about using Code Coverage. Code Coverage is a tool for gathering a volume of data about the path taken by the code while performing a task or series of tasks. This is very useful for two different purposes. One is simply to determine what code is being executed. This tracking is done at high speed with a log file, whereas if you do the same thing in the Debugger, the process is excruciatingly slow and you have to log the data manually.

The second use is to identify the volume of use of routines. By knowing how often a routine is executed within a particular process, you can make an informed judgement about what routines are using up the most processing time. That, in turn, allows you to focus any speed-up efforts on the code that is used the most. This approach will help you to realize the most benefit for your code acceleration work.

Client Monitor

Client Monitor is a performance analysis tool. It can be very effectively used in combination with Code Coverage to identify what is happening, in what sequence, how it is happening and how long it is taking. Before you start Client Monitor, there are several settings you can adjust to control its behavior, some specific to the SQL Server environment.

Client Monitor is accessed from Tools | Client Monitor. Client Monitor helps identify the code that is taking the major share of the processing time so that you can focus your code design optimization efforts. In the SQL Server environment, Client Monitor will help you see what calls are being made to SQL Server and clearly identify problems with improperly defined or chosen indexes and filters.

If you are familiar with T-SQL and SQL Server calls, the Client Monitor output will be even more meaningful to you. In fact, you may decide to combine these tools with the SQL Server Error Log for a more in-depth analysis of speed or locking problems.

Debugging NAV in Visual Studio

Debugging code running in the Role Tailored Client requires using Visual Studio based debugging tools.

The first step in setting up VS debugging for NAV is to enable debugging via a setting in the CustomSettings.config file that controls various actions of the NAV Server. This file's location and the setting to change are described in the Help. After you change this setting, you need to stop and restart the NAV Server service before the change will take effect.
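As a rough sketch, the change looks something like the following; the exact key name should be verified against the Help and the CustomSettings.config file in your own installation.

<!-- CustomSettings.config on the NAV Server (approximate; verify the key name) -->
<add key="EnableDebugging" value="true" />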

After the VS debugger has been enabled, each time you start up the RTC, all the C# code for all of the NAV objects will be generated and stored. The storage location for the C# code will vary depending on your system setup (OS version and perhaps other factors) – the original Help overlooks this variability. Look for a directory containing the string ...\60\Server\MicrosoftDynamicsNav\Server\source\... The source directory will contain subdirectories of Codeunit, Page, Record, Report, XMLport. Each directory contains a full set of appropriate files with a .cs extension containing the generated C# code. If you are familiar with C#, you may find it useful to study this code.

Visual Studio must then be attached to the NAV Server service before a breakpoint can be set. The Walkthrough referenced in the Help uses, as an example for breakpoint setting, the creation of a Codeunit with an easily identifiable piece of code. The point is to be able to identify a good spot in the C# code to set your breakpoint so that you can productively use the VS Debugger.

Dialog function debugging techniques

Sometimes the simpler methods are more productive than the more sophisticated tools, because you can set up and test quickly, resolve the issue (or answer a question), and move on. All the simpler methods involve using one of the C/AL DIALOG functions such as MESSAGE, CONFIRM, DIALOG, or ERROR. All of these have the advantage of working well in the RTC environment. If you need detailed RTC performance information, remember that the Code Coverage and Client Monitor tools do not work with the RTC; in that case, you should use one of the following tools/techniques to provide performance data (until a better method is provided by Microsoft).

Debugging with MESSAGE

The simplest debug method is to insert MESSAGE statements at key points in your logic. It is very simple and, if structured properly, provides you a simple "trace" of the code logic path. You can number your messages to differentiate them and display any data (in small amounts) as part of a message.
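For example, a numbered trace message might look like the following sketch (the variables shown are illustrative only):

// Trace point 3: show where we are and a couple of current values
MESSAGE('Trace 3: Customer %1 %2',Customer."No.",Customer.Name);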

The disadvantage is that MESSAGE statements do not display until processing either terminates or is interrupted for user interaction. If you force a user interaction at some point, then your accumulated messages will appear prior to the interaction. The simplest way to force user interaction is a CONFIRM message in the format as follows:

IF CONFIRM ('Test 1',TRUE) THEN;

Debugging with CONFIRM

If you want to do a simple trace but want every message to be displayed as it is generated (that is, have the tracking process move at a very measured pace), you could use CONFIRM statements for all the messages. You will then have to respond to each one before your program moves on, but sometimes that is exactly what you want.

Debugging with DIALOG

Another tool that is useful for certain kinds of progress tracking is the DIALOG function. This function is usually set up to display a window containing a small number of variable values. As processing progresses, the values are displayed in real time. This can be useful in several ways. A couple of examples follow:

  • Simply tracking progress of processing through a volume of data. This is the same reason you would provide a DIALOG display for the benefit of the user. The act of displaying does slow down processing somewhat. During debugging that may or may not matter. In production it is often a concern, so you may want to update the DIALOG display occasionally, not on every record.
  • Displaying indicators when processing reaches certain stages. This can be used as a very basic trace with the indicators showing the path taken so you may gauge the relative speed of progress through several steps. For example, you might have a six-step process to analyze. You could define six tracking variables and display all of them in the DIALOG.

    In this case, each tracking variable is initialized with values that are dependent on what you are tracking, but generally each would be different (for example, A1, B2000, C300000, and so on). At each step of your process, you would update the contents and display the current state of one or all of the variables. This can be a very visual and intuitive guide for how your process is operating. A minimal sketch of such a tracking window follows this list.
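The following sketch assumes a global variable Window of type Dialog and illustrative step-tracking variables:

// Open a tracking window with a placeholder for each step indicator
Window.OPEN('Step 1: #1###########\Step 2: #2###########\Step 3: #3###########');

Step1 := 'A1';           // reached the first step
Window.UPDATE(1,Step1);

// ... processing ...

Step2 := 'B2000';        // reached the second step
Window.UPDATE(2,Step2);

// ... processing ...

Window.CLOSE;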

Debugging with text output

You can build a very handy debugging tool by outputting the values of critical variables or other informative indicators of progress either to an external text file or to a table created for this purpose. This approach allows you to run a considerable volume of test data through the system, tracking some important elements while collecting data on the variable values, progress through various sections of code, and so on. You can even timestamp your output records so that you can use this method to look for processing speed problems (a very minimalist code coverage replacement).
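A minimal sketch of this technique, assuming a global variable TraceFile of type File and an illustrative output path:

// Create (or overwrite) the trace file at the start of the test run
TraceFile.TEXTMODE := TRUE;
TraceFile.WRITEMODE := TRUE;
TraceFile.CREATE('C:\Temp\NAVTrace.txt');

// At each point of interest, write a timestamped line with the values being tracked
TraceFile.WRITE(STRSUBSTNO('%1 %2 Step %3 Customer %4',TODAY,TIME,StepNo,Customer."No."));

// Close the file before the run ends (or before an ERROR call) so the contents are retained
TraceFile.CLOSE;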

Following the test run, you can analyze the results of your test more quickly than if you were using displayed information. You can focus on just the items that appear most informative and ignore the rest. This type of debugging is fairly easy to set up and to refine, as you identify the variables or code segments of most interest. This approach can be combined with the following approach using the ERROR statement if you output to an external text file, then close it before invoking the ERROR statement, so that its contents are retained following the termination of the test run.

Debugging with ERROR

One of the challenges of testing is maintaining repeatability. Quite often you need to test several times using the same data, but the test changes the data. If you have a small database, you can always back up the database and start with a fresh copy each time. But that can be inefficient and, if the database is large, impractical. One alternative is to conclude your test with an ERROR function. This allows you to test and retest with exactly the same data.

The ERROR function forces a run-time error status, which means the database is not updated (it is rolled back to the status at the beginning of the process). This works well when your debugging information is provided by using the Debugger or by use of any of the DIALOG functions just mentioned prior to the execution of the ERROR function. If you are using MESSAGE to generate debugging information, you could execute a CONFIRM immediately prior to the ERROR statement and be assured that all of the messages are displayed. Obviously this method won't work well when your testing validation is dependent on checking results using Navigate or your test is a multi-step process such as order entry, review, and posting. But in applicable situations, it is a very handy technique for repeating a test with minimal effort.
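A minimal sketch of ending a test run this way (the message texts are illustrative):

// Display any accumulated MESSAGE output, then force a roll-back of all changes
IF CONFIRM('All trace messages displayed. End the test run?',TRUE) THEN;
ERROR('Test run complete - all database changes have been rolled back.');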

When testing the posting of an item, it can be useful to place the test-concluding ERROR function just before the point in the applicable Posting codeunit where the process would otherwise be completed successfully.

C/SIDE test driven development

New in NAV 2009 SP1 is a C/AL Testability feature set designed to support test-driven C/AL development. Test-driven development is an approach where the application tests are defined prior to the development of the application code. In an ideal situation, the code supporting application tests is written prior to, or at least at the same time as, the code implementing the target application function.

The C/AL Testability feature provides two new types of codeunits—Test Codeunits and Test Runner Codeunits. Test Codeunits contain test functions and the C/AL code that supports them. Test Runner Codeunits are used to invoke Test Codeunits, thus providing test execution management, automation, and integration. Test Runner Codeunits have two special triggers, each of which runs in a separate transaction so that the test execution state and results can be tracked; a sketch of how these triggers might be used follows the list below.

  • OnBeforeTestRun is called before each test. It allows you to decide, via a Boolean return value, whether or not the test should be executed.
  • OnAfterTestRun is called when each test completes and the test results are available. This allows the test results to be logged, or otherwise processed via C/AL code.
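The following is only a sketch of how these two triggers might be used; the parameter lists are abbreviated and TestResultLog is a hypothetical logging table:

// OnBeforeTestRun(CodeunitID; CodeunitName; FunctionName) : Boolean
// Return TRUE to run the test function, FALSE to skip it
EXIT(TRUE);

// OnAfterTestRun(CodeunitID; CodeunitName; FunctionName; Success)
// Record the outcome of each test function in the hypothetical logging table
TestResultLog.INIT;
TestResultLog."Codeunit ID" := CodeunitID;
TestResultLog."Function Name" := FunctionName;
TestResultLog.Success := Success;
TestResultLog."Run Date Time" := CURRENTDATETIME;
TestResultLog.INSERT;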

Among the ultimate goals of the C/AL Testability feature are:

  • The ability to run suites of application tests in an automated and regressive fashion:
    • Automated means that a defined series of tests could be run and results recorded, all without user intervention.
    • Regressively means that the test can be run repeatedly as part of a new testing pass to make sure that features previously tested are still in working order.
  • The ability to design tests in an "atomic" way, matching the granularity of the application code. In this way, the test functions can be focused and simplified. This in turn allows for relatively easy construction of a suite of tests and, in some cases, reuse of test codeunits (or at least reuse of the structure of previously created Test Codeunits).
  • The ability to develop and run the Test and Test Runner Codeunits within the familiar C/SIDE environment. The code for developing these testing codeunits is C/AL.
  • Once the testing codeunits have been developed, the actual testing process should be simple and fast in order to run and evaluate results.

Both positive and negative testing are supported. Positive testing is where you look for a specific result, a correct answer. Negative testing is where you check that errors are presented when expected, especially when data or parameters are out of range. The testing structure is designed to support the logging of test results, both failures and successes, to tables for review, reporting, and analysis.
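As a sketch of a negative test, a test function can use the ASSERTERROR keyword. The Donor Type table here is from our hypothetical ICAN example, and the assumption is that its validation logic rejects a blank Code:

// Negative test sketch: the statement under test is expected to raise an error
// (assumes the Donor Type table's validation rejects a blank Code)
ASSERTERROR DonorType.VALIDATE(Code,'');
// Execution reaches this point only if an error was raised;
// if no error occurs, ASSERTERROR itself causes the test function to fail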

A function property defines functions within Test Codeunits as either Test, TestHandler, or Normal. Another function property, TestMethodType, allows a variety of test function types to be defined. The TestMethodType property options include the following, which are designed to handle user interface events without the need for a person to intervene:

  • CatchMessage—catches and handles the MESSAGE statement
  • ConfirmDialogHandler—catches and handles CONFIRM dialogs
  • StrMenuHandler—catches and handles STRMENU menu dialogs
  • FormRunHandler—catches and handles Forms/Pages that are executed via RUN
  • FormRunModalHandler—catches and handles Forms/Pages that are executed via RUNMODAL
  • ReportHandler—handles reports

Use of the C/SIDE Test Driven Development approach will work along the following lines:

  • Define an application function specification
  • Define the application technical specification
  • Define the testing technical specification including both Positive and Negative tests
  • Develop Test and Test Running codeunits (frequently only one or a few Test Running codeunits will be required)
  • Develop Application objects
  • As soon as feasible, begin running Application object tests by means of the Test Running codeunits, and logging test results for historical and analytical purposes
  • Continue the development—testing cycle, updating the tests and the application as appropriate throughout the process
  • At the end of successful completion of development and testing, retain all the Test and TestRunning codeunits for use in regression testing the next time the application must be modified or upgraded

Summary

In this article we reviewed a host of handy code analysis and debugging techniques, including use of the Developer's Toolkit, working in objects exported as text, using Navigate, using the Debuggers and associated tools, and some handy tips on other creative techniques.



About the Author


David A. Studebaker

David Studebaker is Chief Technical Officer and a founder of Liberty Grove Software with his partner Karen Studebaker. Liberty Grove Software, a Microsoft Partner, provides development, consulting, training, and upgrade services internationally for Microsoft Dynamics NAV resellers and end user customers.

David has been recognized by Microsoft as a Certified Professional for NAV in all three areas: Development, Applications, and Installation & Configuration. He has been honored by Microsoft as a Lead Certified Microsoft Trainer for NAV.

David just celebrated his first half century of programming, having started programming in 1962. He has been developing in C/AL since 1996. David has been an active participant in each step of computing technology from the first solid state mainframes to today's technology, from binary assembly language coding to today's C/AL and C#.

David's special achievements include his role as co-developer of the first production multi-programmed SPOOLing system in 1967. David has worked on a diverse set of software applications including manufacturing, distribution, retail, engineering, general accounting, association management, professional services billing, distribution/inventory management, freight carriage, data collection and production management, among others. Prior to co-authoring this book, David was the author of Programming Microsoft Dynamics NAV (for the Classic Client) and Programming Microsoft Dynamics NAV 2009 (for the Role Tailored Client).

David has had a wide range of development, consulting, sales and management roles throughout his career. He has been partner or owner and manager of several software development businesses, while always maintaining a hands-on role as a business applications developer.

David has a BS in Mechanical Engineering from Purdue University and an MBA from the University of Chicago. He has been writing for publication since he was an undergraduate. David has been a member of the Association for Computing Machinery since 1963 and was a founding officer of two local chapters of the ACM.
