You're reading from Programming Microsoft Dynamics 365 Business Central - Sixth Edition

Product type: Book
Published in: Apr 2019
Publisher: Packt
ISBN-13: 9781789137798
Edition: 6th Edition
Authors (3):
Marije Brummel

Author, programmer, consultant, project manager, presenter, evangelist, sales person, and a trainer. It's next to impossible to find someone as experienced as Marije Brummel in the Business Central community. Marije received the Microsoft MVP and the NAVUG All-Star awards among several others. She has chaired the Dynamics Credentialing committee and has authored official Microsoft Exam materials. She's the go-to girl for performance troubleshooting and upgrade challenges. One of her biggest achievements was introducing Design Patterns into the Business Central community. Her books, blog articles, and YouTube videos have influenced almost everyone involved with Business Central. She enjoys the outdoors with her dog and loves spending time with her family.

David Studebaker

David Studebaker has been designing and developing software since 1962 as a developer, consultant, manager, and business owner. In 1967, David coauthored the first general-purpose SPOOL system, an AT&T / IBM joint project. He has been a founding partner in several firms, most recently Studebaker Technology and Liberty Grove Software. David's publications include a decade of technical reviews for ACM Computing Reviews and a number of articles on shop floor data collection. David originated the Packt series of books on programming Dynamics Business Central (aka Dynamics NAV). He has a BS in mechanical engineering from Purdue University and an MBA from the University of Chicago. He is a life member of the Association for Computing Machinery.

Christopher D. Studebaker

Chris Studebaker was a certified environmental consultant working with manufacturing facilities to meet national and state regulations before he started working with Navision in 1999. After working on regulatory reporting, data analysis, project management, and subcontractor oversight, Chris has used those skills to sell, develop, and implement NAV for the past 20 years. He has specialized in retail, manufacturing, job shop, and distribution implementations, mostly in high-user-count, high-data-volume applications. Chris acts in a consulting and training role for customers and for peer NAV professionals. He has a Bachelor of Science degree from Northern Illinois University and has done graduate work at Denmark Technical University.

Intermediate AL
"You need to 'listen deeply'—listen past what people say they want to hear to what they need."
– Jon Meads
"People's behavior makes sense if you think about it in terms of their goals, needs, and motives."
– Thomas Mann

In the previous chapter, you learned enough AL to create a basic operational set of code. In this chapter, you will learn more AL functions and pick up a few more good habits along the way. If you are getting started as a professional Business Central developer, the built-in AL functions represent a significant portion of the knowledge that you will need on a day-to-day basis. If you are a manager or consultant needing to know what Business Central can do for your business or your customer, an understanding of these functions will help you too.

Our goal is to competently manage I/O, create...

IntelliSense

As an IDE, Visual Studio Code contains a number of tools designed to make our AL development effort easier. One of these is IntelliSense. When we are in one of the areas where AL code is supported, the IntelliSense menu window can be accessed via Ctrl + spacebar:

IntelliSense is a very useful multi-purpose tool for the developer. It serves the following purposes:

  • As a quick reference to the available AL functions
  • As documentation of the syntax of those functions
  • As a gateway to Microsoft Docs for more information on those functions
  • As a source of variable names or function structures to paste into our code

The use of IntelliSense for reference purposes is especially helpful, not only while we are beginning AL developers, but also after we have become experienced. It is a guide to the inventory of available code tools, with some very handy built-in programming aids.

The second use of IntelliSense is as a symbol table. The symbol table for our object is visible in...

Internal documentation

When we are creating or modifying software, we should always document what we have done. It is often difficult for developers to spend much time (time equals money) on documentation because many don't enjoy doing it, and the benefits to customers are difficult to quantify in advance. A reasonable goal is to provide enough documentation so that a knowledgeable person can later understand what we have done, as well as the reasons why.

If we choose good variable names, the AL code will tend to be self-documenting. If we lay our code out neatly, use indentation consistently, and localize logical elements in functions, then the flow of our code should be easy to read. We should also include comments that describe the functional reason for the change. This will help the next person working in this code not only be able to follow the logic of the code, but to understand the business reasons for that code.

In the case of a brand-new function, a simple statement of...

Source code management

Instead of documenting within the code, many developers choose to use source code management tools, such as GitHub or Azure DevOps.

Tools such as these allow you to create change requests and connect changed code to these requests. Once the change is validated, it is connected to the documentation, automatically creating a knowledge database outside of the source code. This keeps the source code clean and readable.

Validation functions

AL includes a number of utility functions that are designed to facilitate data validation or initialization. Some of these functions are as follows:

  • TESTFIELD
  • FIELDERROR
  • INIT
  • VALIDATE

We will discuss these functions more in the following sections.

TESTFIELD

The TESTFIELD function is widely used in standard Business Central code. With TESTFIELD, we can test a variable value and generate an error message in a single statement if the test fails. The syntax is as follows:

Record.TESTFIELD (Field, [Value]) 

If Value is specified and the field does not equal that value, the process terminates with an error condition, and an error message is issued.

If no Value is specified, the field contents are checked for values of zero or blank. If the field is zero or blank, then an error message is issued.
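As a minimal sketch of both forms (the record variables and field choices here are only illustrative):

Customer.TESTFIELD("Customer Posting Group");            // error if the field is blank
SalesHeader.TESTFIELD(Status, SalesHeader.Status::Open); // error if Status is not Open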

The advantage of TESTFIELD is ease of use and consistency in code, and in the message displayed. The disadvantage is the error message is not as informative as a careful developer would provide.

The following screenshot of the TESTFIELD function usage is from Table 18 - Customer. This code checks to make sure that the Sales Order field's Status is equal to the option value Open before allowing the value of the Sell-to Customer...

FIELDERROR

Another function, which is very similar to the TESTFIELD function, is FIELDERROR. However, where TESTFIELD performs a test and terminates with either an error or an OK result, FIELDERROR presumes that the test was already performed and the field failed the test. FIELDERROR is designed to display an error message and then terminate the process. This approach is followed in much of the standard Business Central logic, especially in the posting codeunits (for example, codeunits 12, 80, and 90). The syntax is as follows:

TableName.FIELDERROR(FieldName[,OptionalMsgText]); 

If we include our own message text by defining a text constant in the variable section (so that the message can be multilingual), we will have the following line of code:

Text001    'must be greater than Start Time'

Then, we can reference the text constant in code as follows:

IF Rec."End Time" <= Rec."Start Time" THEN 
  Rec.FIELDERROR("End Time",Text001); 

The result is an error...

INIT

The INIT function initializes a record in preparation for its use, typically in the course of building a record entry to insert in a table. The syntax is as follows:

Record.INIT; 

All the data fields in the record are initialized as follows:

  • Fields that have an InitValue property defined are initialized to the specified value.
  • Fields that do not have a defined InitValue are initialized to the default value for their data type.
  • Primary key fields and timestamps are not automatically initialized. If they contain values, those will remain. If new values are desired, they must be assigned in code.
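A minimal sketch of this pattern (the record variable and NextEntryNo are illustrative), showing that the primary key must be assigned explicitly:

GLEntry.INIT;                        // initializes the non-key fields
GLEntry."Entry No." := NextEntryNo;  // primary key is not touched by INIT
GLEntry.INSERT;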

VALIDATE

The syntax of the VALIDATE function is as follows:

Record.VALIDATE (Field [, Value]) 

VALIDATE will fire the OnValidate trigger of Record.Field. If we have specified a Value, it is assigned to the field and the field validations are invoked.

If we don't specify a Value, then the field validations are invoked using the field value that already exists in the field. This function allows us to easily centralize our code design around the table—one of Business Central's strengths.

For example, if we were to code and change Item."Base Unit of Measure" from one unit of measure to another, the code should make sure that the change is valid. We should get an error if the new unit of measure has any quantity other than 1, because quantity equals 1 is a requirement of the Base Unit of Measurement field. Making the unit of measure change with a simple assignment statement would not catch a quantity value error.
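Both forms can be sketched as follows (the unit of measure code 'PCS' is only an example):

Item.VALIDATE("Base Unit of Measure",'PCS'); // assigns 'PCS', then runs the OnValidate trigger
Item.VALIDATE("Base Unit of Measure");       // runs OnValidate on the value already in the field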

The following are two forms of using VALIDATE...

Date and time functions

Business Central provides a considerable number of date and time functions. In the following sections, we will cover several of those that are more commonly used, especially in the context of accounting date-sensitive activity:

  • TODAY, TIME, and CURRENTDATETIME functions
  • WORKDATE function
  • DATE2DMY, DATE2DWY, DMY2DATE, DWY2DATE, and CALCDATE functions

TODAY, TIME, and CURRENTDATETIME

TODAY retrieves the current system date, as set in the operating system. TIME retrieves the current system time, as set in the operating system. CURRENTDATETIME retrieves the current date and time in the DATETIME format, which is stored in UTC international time (formerly referenced as GMT or Greenwich Mean Time) and then displayed in local time. If we are using the Windows client, these use the time setting in the Business Central client. If the system operates in multiple time zones at one time, search Microsoft Dynamics 365 Business Central Help for time zone topics with several references on how to deal with multiple time zones.

The syntax for each of these is as follows:

DateField := TODAY; 
TimeField := TIME; 
DateTimeField := CURRENTDATETIME; 

These are often used for date- and time-stamping transactions or for filling in default values in fields of the appropriate data type. For data-entry purposes, the current system date can be entered by simply typing...

WORKDATE

Many standard Business Central routines default dates to the work date rather than to the system date. When a user logs into the system, the work date is initially set equal to the system date. However, at any time, the operator can set the work date to any date by accessing the application menu and clicking on Set Work Date..., and then entering the new work date:

The user can also click on the Work Date displayed in the status bar at the bottom of the RTC. The following screenshot shows how to set the work date:

For data-entry purposes, the current work date can be entered by the operator by simply typing the letter w or W or the word WORKDATE in the date entry field. Business Central will automatically convert that entry into the current work date.

The syntax to access the current WorkDate value from within AL code is as follows:

DateField := WORKDATE; 

The syntax to set the WORKDATE to a new date from within AL code is as follows:

WORKDATE(newdate); 

DATE2DMY function

DATE2DMY allows us to extract the sections of a date (day of the month, month, and year) from a Date field. The syntax is as follows:

IntegerVariable := DATE2DMY ( DateField, ExtractionChoice ) 

IntegerVariable and DateField are just as their names imply. The ExtractionChoice parameter allows us to choose which value (day, month, or year) will be assigned to the IntegerVariable field. The following table provides the DATE2DMY extraction choices:

DATE2DMY extraction choice    Integer value result
1                             2-digit day (1–31)
2                             2-digit month (1–12)
3                             4-digit year
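For instance, to pick the month and year out of the work date (the variable names are illustrative):

MonthInt := DATE2DMY(WORKDATE,2); 
YearInt := DATE2DMY(WORKDATE,3); 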

DATE2DWY function

DATE2DWY allows us to extract the sections of a date (day of the week, week of the year, and year) from a DateField in exactly the same fashion as DATE2DMY. The ExtractionChoice parameter allows us to choose which value (day, week, or year) will be assigned to IntegerVariable, as shown in the following table:

DATE2DWY extraction choice    Integer value result
1                             2-digit day (1–7 for Monday to Sunday)
2                             2-digit week (1–53)
3                             4-digit year

DMY2DATE and DWY2DATE functions

DMY2DATE allows us to create a date from integer values (or defaults) representing the day of the month, month of the year, and the four-digit year. If an optional parameter (MonthValue or YearValue) is not specified, the corresponding value from the system date is used. The syntax is as follows:

DateVariable := DMY2DATE ( DayValue [, MonthValue] [, YearValue] ) 

The only way to have the function use work date values for month and year is to extract those values and then use them explicitly. An example is as follows:

DateVariable := DMY2DATE(22,DATE2DMY(WORKDATE,2),DATE2DMY(WORKDATE,3)); 

This example also illustrates how expressions can be built up of nested expressions and functions. We have WORKDATE within DATE2DMY within DMY2DATE.

DWY2DATE operates similarly to DMY2DATE, allowing us to create a date from integer values representing the day of the week (from 1 to 7 representing Monday to Sunday), week of the year (from 1 to 53), followed by the four-digit...

CALCDATE

CALCDATE allows us to calculate a date value that's been assigned to a date data type variable. The calculation is based on DateExpression that's been applied to a base date (reference date). If we don't specify BaseDateValue, the current system date is used as the default date. We can specify BaseDateValue either in the form of a variable of data type date or as a date constant.

The syntax for CALCDATE is as follows:

DateVariable := CALCDATE ( DateExpression [, BaseDateValue]) 

There are a number of ways in which we can build a DateExpression. The rules for the CALCDATE function, DateExpression, are similar to the rules for DateFormula, which are described in Chapter 3, Data Types and Fields.
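For example (the BaseDate variable is illustrative):

DateVariable := CALCDATE('<CM+1D>',WORKDATE); // the day after the end of the current month
DateVariable := CALCDATE('<-1M>',BaseDate);   // one month before BaseDate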

If there is a CW, CM, CP, CQ, or CY (current week, current month, current period, current quarter, or current year, respectively) parameter in an expression, then the result will be evaluated based on the BaseDateValue. If we have more than one of these in our expression...

Data conversion and formatting functions

Some data-type conversions are handled in the normal process flow by Business Central without any particular attention on the part of the developer, such as code to text and character to text. Some data-type conversions can only be handled through AL functions. Formatting is included because it can also include a data-type conversion. Rounding does not do a data-type conversion, but it does result in a change in format (the number of decimal places). Let's review the following functions:

  • ROUND function
  • FORMAT function
  • EVALUATE function

ROUND

The ROUND function allows us to control the rounding precision for a decimal expression. The syntax for the ROUND function is as follows:

DecimalResult := ROUND (Number [, Precision] [, Direction] ) 

Here, Number is rounded, Precision spells out the number of digits of decimal precision, and Direction indicates whether to round up, round down, or round to the nearest number. Some examples of Precision values are shown in the following table:

Precision value    Rounding effect
100                To a multiple of 100
1                  To an integer
.01                To two decimal places (the US default)
0.01               Same as .01
.0001              To four decimal places

If no Precision value is specified, the rounding default is controlled by a value set in General Ledger Setup in the Appln. Rounding Precision field on the Application tab. If no value is specified, rounding will default to two decimal places. If the precision value is, for example, .04 rather than .01, the rounding will be done to...
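A few sketches of Precision and Direction in combination (the variable name is illustrative; '<' rounds down, '>' rounds up, and the default rounds to the nearest value):

AmountRounded := ROUND(123.456789,0.01);      // 123.46 (nearest)
AmountRounded := ROUND(123.456789,0.01,'<');  // 123.45 (rounded down)
AmountRounded := ROUND(123.456789,1,'>');     // 124 (rounded up to an integer)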

FORMAT function

The FORMAT function allows you to convert an expression of any data type (for example, integer, decimal, date, option, time, Boolean) into a formatted string. The syntax is as follows:

StringField := FORMAT( ExpressionToFormat [, OutputLength] [, FormatString or FormatNumber]) 

The formatted output of the ExpressionToFormat will be assigned to the output of StringField. The optional parameters control the conversion according to a complex set of rules. These rules can be found in Microsoft Docs for the FORMAT function and FORMAT property. Whenever possible, we should always apply FORMAT in its simplest form. The best way to determine the likely results of a FORMAT expression is to test it through a range of the values to be formatted. We should make sure that we include the extremes of the range of possible values in our testing.
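Two simple sketches (the variable names are illustrative; the date tokens follow the standard format-string rules in Microsoft Docs):

NoText := FORMAT(1234);                                        // '1234'
DateText := FORMAT(WORKDATE,0,'<Year4>-<Month,2>-<Day,2>');    // e.g. '2019-04-30'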

The optional OutputLength parameter can be zero (the default), a positive integer, or a negative integer. The typical OutputLength value...

EVALUATE function

The EVALUATE function is essentially the reverse of the FORMAT function, allowing for the conversion of a string value into the defined data type. The syntax of the EVALUATE function is as follows:

[ BooleanVariable := ] EVALUATE ( ResultVariable, StringToBeConverted [, 9] ) 

The handling of a runtime error can be done by specifying BooleanVariable or including EVALUATE in an expression to deal with an error, such as an IF statement. The ResultVariable data type will determine what data conversion the EVALUATE function will attempt. The format of the data in StringToBeConverted must be compatible with the data type of ResultVariable; otherwise, a runtime error will occur.
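A minimal sketch of the error-handled form (the variable names are illustrative):

IF NOT EVALUATE(DateVariable,DateString) THEN 
  ERROR('%1 cannot be converted to a date.',DateString); 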

The optional parameter, number 9, is only used for XMLport data exporting. Use of the optional number 9 parameter will convert Business Central format data types into XML standard data types. This deals with the fact that several equivalent XML data types are represented differently at the base system...

FlowField and SumIndexField functions

In Chapter 3, Data Types and Fields, we discussed SumIndexFields and FlowFields in the context of table, field, and key definition. To recap briefly, SumIndexFields are defined in the screen where the table keys are defined. They allow for the very rapid calculation of values in filtered data. In most ERP and accounting software systems, the calculation of group totals, periodic totals, and so on requires time-consuming processing of all the data to be totaled.

SIFT allows a Business Central system to respond almost instantly with totals in any area where the SumIndexField was defined and is maintained. In fact, use of SIFT totals combined with Business Central's retention of detailed data supports totally flexible ad hoc queries of the form "What were our sales for red widgets between the dates of November 15th through December 24th?". And the answer is returned almost instantly! SumIndexFields are the basis of FlowFields that have...

CALCFIELDS function

The syntax for CALCFIELDS is as follows:

[BooleanField := ] Record.CALCFIELDS ( FlowField1 [, FlowField2] ,...) 

Executing the CALCFIELDS function will cause all the specified FlowFields to be calculated. Specification of the BooleanField allows us to handle any runtime error that may occur. Any runtime errors for CALCFIELDS usually result from a coding error or a change in a table key structure.

The FlowField calculation takes into account the filters (including FlowFilters) that are currently applied to the record (we need to be careful not to overlook this). After the CALCFIELDS execution, the included FlowFields can be used similarly to any other data fields. The CALCFIELDS must be executed for each cycle through the subject table.

Whenever the contents of a BLOB field are to be used, CALCFIELDS is used to load the contents of the BLOB field from the database into memory.
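As a sketch of a filtered FlowField calculation using the standard Customer table (the StartDate and EndDate variables are illustrative):

Customer.SETRANGE("Date Filter",StartDate,EndDate);  // the FlowFilter limits the sum
Customer.CALCFIELDS("Sales (LCY)"); 
MESSAGE('Sales in period: %1',Customer."Sales (LCY)"); 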

When the following conditions are true, CALCFIELDS uses dynamically maintained SIFT data:

...

SETAUTOCALCFIELDS function

The syntax for SETAUTOCALCFIELDS is as follows:

[BooleanField := ] Record.SETAUTOCALCFIELDS 
( FlowField1 [, FlowField2] [, FlowField3]...) 

When SETAUTOCALCFIELDS for a table is inserted in code in front of record retrieval, the specified FlowFields are automatically calculated as the record is read. This is more efficient than performing CALCFIELDS on the FlowFields after the record has been read.
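For instance, a sketch using a standard Customer FlowField:

Customer.SETAUTOCALCFIELDS("Balance (LCY)"); 
IF Customer.FINDSET THEN 
  REPEAT 
    // "Balance (LCY)" is already calculated for each record read
    MESSAGE('%1: %2',Customer."No.",Customer."Balance (LCY)"); 
  UNTIL Customer.NEXT = 0; 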

If we want to end the automatic FlowField calculation for a record, call the function without any parameters:

 [BooleanField := ] Record.SETAUTOCALCFIELDS()

An automatic FlowField calculation equivalent to SETAUTOCALCFIELDS is automatically set for the system record variables, Rec and xRec.

CALCSUMS function

The CALCSUMS function is conceptually similar to CALCFIELDS for the calculation of Sums only. However, CALCFIELDS operates on FlowFields and CALCSUMS operates directly on the record where the SumIndexFields are defined for the keys. This difference means that we must specify the proper key, plus any filters to apply when using CALCSUMS (the applicable key and filters to apply are already defined in the properties for the FlowFields).

The syntax for CALCSUMS is as follows:

[ BooleanField := ] Record.CALCSUMS ( SumIndexField1 [,SumIndexField2] ,...) 

Prior to a statement of this type, we should specify a key that has the SumIndexFields defined (to maximize the probability of good performance). Before executing the CALCSUMS function, we also need to specify any filters that we want to apply to the Record from which the sums are to be calculated. The SumIndexFields calculations take into account the filters that are currently applied to the record.
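A sketch of the full sequence (the OrderNo variable is illustrative, and a key with Amount defined as a SumIndexField is assumed):

SalesLine.SETCURRENTKEY("Document Type","Document No."); // key assumed to carry Amount as a SumIndexField
SalesLine.SETRANGE("Document Type",SalesLine."Document Type"::Order); 
SalesLine.SETRANGE("Document No.",OrderNo); 
SalesLine.CALCSUMS(Amount); 
TotalOrderAmount := SalesLine.Amount; 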

Executing the CALCSUMS...

Comparing CALCFIELDS and CALCSUMS

In the Sales Header record, there are FlowFields defined for Amount and "Amount Including VAT". These FlowFields are all based on Sums of entries in the Sales Line table. The CalcFormula for Amount is Sum("Sales Line".Amount WHERE (Document Type=FIELD(Document Type), Document No.=FIELD(No.))).

CALCSUMS can be used on any integer, big integer, or decimal field with any filter on any table, but for larger datasets, creating a key with a SumIndexField is recommended.

To calculate a TotalOrderAmount value while referencing the Sales Header table, the code can be as simple as this:

"Sales Header".CALCFIELDS (Amount); 
TotalOrderAmount := "Sales Header".Amount; 

To calculate the same value from code directly referencing the Sales Line table, the required code is similar to the following (assuming a Sales Header record has already been read):

"Sales Line".SETRANGE("Document Type","Sales Header...

Flow control functions

Process flow control functions are the functions that execute the decision-making and resultant logic branches in executable code. IF-THEN-ELSE, as discussed in Chapter 6, Introduction to AL, is also a member of this class of functions. Here, we will discuss the following:

  • REPEAT-UNTIL
  • WHILE-DO
  • FOR-TO and FOR-DOWNTO
  • CASE-ELSE
  • WITH-DO
  • QUIT, BREAK, EXIT, and SKIP

REPEAT-UNTIL

REPEAT-UNTIL allows us to create a repetitive code loop, which REPEATs a block of code UNTIL a specific conditional expression evaluates to TRUE. In that sense, REPEAT-UNTIL defines a block of code, operating somewhat like the BEGIN-END compound statement structure that we covered in Chapter 6, Introduction to AL. REPEAT tells the system to keep reprocessing the block of code, while the UNTIL serves as the exit doorman, checking whether the conditions for ending the processing are true. Because the exit condition is not evaluated until the end of the loop, a REPEAT-UNTIL structure will always process at least once through the contained code.

REPEAT-UNTIL is very important in Business Central because it is often part of the data input cycle, along with the FIND-NEXT structure, which will be covered shortly.

Here is an example of the REPEAT-UNTIL structure to process and sum data in the 10-element array CustSales:

LoopCount := 0; 
REPEAT 
  LoopCount := LoopCount + 1; 
  TotCustSales...

WHILE-DO

A WHILE-DO control structure allows us to create a repetitive code loop that will DO (execute) a block of code WHILE a specific conditional expression evaluates to TRUE. WHILE-DO is different from REPEAT-UNTIL, both because it may need a BEGIN-END structure to define the block of code to be executed repetitively (REPEAT-UNTIL does not), and because it has different timings for the evaluation of the exit condition.

The syntax of the WHILE-DO control structure is as follows:

WHILE <Condition> DO <Statement> 

The Condition can be any Boolean expression that evaluates to TRUE or FALSE. The Statement can be simple or the most complex compound BEGIN-END statement. Most WHILE-DO loops will be based on a BEGIN-END block of code. The Condition will be evaluated at the beginning of the loop. When it evaluates to FALSE, the loop will terminate. Thus, a WHILE-DO loop can be exited without processing.

A WHILE-DO structure to process data in the 10-element array CustSales ...

FOR-TO or FOR-DOWNTO

The syntax for FOR-TO and FOR-DOWNTO control statements are as follows:

FOR <Control Variable> := <Start Number> TO <End Number> DO <Statement> 
FOR <Control Variable> := <Start Number> DOWNTO <End Number> DO <Statement> 

A FOR control structure is used when we wish to execute a block of code a specific number of times.

The Control Variable is an integer variable. Start Number is the beginning count for the FOR loop and End Number is the final count for the loop. If we wrote the statement FOR LoopCount := 5 TO 7 DO [block of code], then [block of code] would be executed three times.

FOR-TO increments the Control Variable. FOR-DOWNTO decrements the Control Variable.

We must be careful not to manipulate the Control Variable in the middle of our loop. Doing so will likely yield unpredictable results.

CASE-ELSE statement

The CASE-ELSE statement is a conditional expression, which is very similar to IF-THEN-ELSE, except that it allows more than two choices of outcomes for the evaluation of the controlling expression. The syntax of the CASE-ELSE statement is as follows:

CASE <ExpressionToBeEvaluated> OF 
  <Value Set 1> : <Action Statement 1>; 
  <Value Set 2> : <Action Statement 2>; 
  <Value Set 3> : <Action Statement 3>; 
  ... 
  ... 
  <Value Set n> : <Action Statement n>; 
 [ELSE <Action Statement n + 1>;] 
END; 

The ExpressionToBeEvaluated must not be a record. The data type of the Value Set must be able to be automatically converted into the data type of the ExpressionToBeEvaluated. Each Value Set must be an expression, a set of values, or a range of values. The following example illustrates a typical instance of a CASE-ELSE statement:

 CASE Customer."Salesperson Code" OF 
   '2','5',&apos...

WITH-DO statement

When we are writing code referring to fields within a record, the most specific syntax for field references is the fully qualified [ RecordName.FieldName ] reference. When referring to the City field in the Customer record, use the Customer.City reference.

In many AL instances, the record name qualifier is implicit because the compiler assumes a default record qualifier based on code context. This happens automatically for variables within a page bound to a table. The bound table becomes the implicit record qualifier for fields referenced in the page object. In a table object, the table is the implicit record qualifier for fields referenced in the AL in that object. In report and XMLport objects, the DataItem record is the implicit record qualifier for the fields referenced within the triggers of that DataItem, such as OnAfterGetRecord and OnAfterImportRecord.

In all other AL code, the only way to have an implicit record qualifier is to use the...

QUIT, BREAK, EXIT, and SKIP functions

This group of AL functions also controls process flow. Each acts to interrupt flow in different places and with different results. To get a full appreciation for how these functions are used, we should review them in code in Business Central.

QUIT function

The QUIT function is the ultimate processing interrupt for report or XMLport objects. When QUIT is executed, processing immediately terminates, even for the OnPostObject triggers. No database changes are committed. QUIT is often used in reports to terminate processing when the report logic determines that no useful output will be generated by further processing.

The syntax of the QUIT function is as follows:

CurrReport.QUIT; 
CurrXMLport.QUIT; 

BREAK function

The BREAK function terminates the DataItem in which it occurs. BREAK can only be used in DataItem triggers in reports and XMLports. It can be used to terminate the sequence of processing one DataItem segment of a report while allowing subsequent DataItem processing to continue.

The BREAK syntax is as follows:

CurrReport.BREAK; 
CurrXMLport.BREAK; 

EXIT function

EXIT is used to end the processing within an AL trigger. EXIT works the same, regardless of whether it is executed within a loop. It can be used to end the processing of the trigger or to pass a return value from a local function. A return value cannot be used for system-defined triggers or local functions that don't have a return value defined. If EXIT is used without a return value, a default return value of zero is returned.

The syntax for EXIT is as follows:

EXIT([<ReturnValue>])

SKIP function

When executed, the SKIP function will skip the remainder of the processing in the current record cycle of the current trigger. Unlike BREAK, it does not terminate the DataItem processing completely. It can be used only in the OnAfterGetRecord trigger of a report or XMLport object. In reports, when the results of processing in the OnAfterGetRecord trigger are determined not to be useful for output, the SKIP function is used to terminate that single iteration of the trigger, without interfering with any subsequent processing.

The SKIP syntax can be either of the following:

CurrReport.SKIP; 
CurrXMLport.SKIP; 

Input and output functions

In the previous chapter, you learned about the basics of the FIND function. You learned about FIND('-') to read from the beginning of a selected set of records, FINDSET to read a selected set of records, and FIND('+') to begin reading at the far end of the selected set of records. Now we will review additional functions that are generally used with FIND functions in typical production code. While we are designing code that uses the MODIFY and DELETE record functions, we need to consider possible interactions with other users on the system. There might be someone else modifying and deleting records in the same table that our application is updating.

We may want to utilize the LOCKTABLE function to gain total control of the data briefly while updating it. We can find more information on LOCKTABLE in the online AL Reference Guide help (https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-dev-overview). The...

NEXT function with FIND or FINDSET

The syntax defined for the NEXT function is as follows:

IntegerValue := Record.NEXT ( ReadStepSize ) 

The full assignment statement format is rarely used to set an IntegerValue. In addition, there is no documentation for the usage of a non-zero IntegerValue. When IntegerValue goes to zero, it means a NEXT record was not found.

If the ReadStepSize value is negative, the table will be read in reverse; if ReadStepSize is positive (the default), then the table will be read forward. The size of the value in ReadStepSize controls which records should be read. For example, if ReadStepSize is 2 or -2, then every second record will be read. If ReadStepSize is 10 or -10, then every tenth record will be read. The default value is 1, in which case, every record will be read and the read direction will be forward.

In a typical data-read loop, the first read is a FIND or FINDSET function, followed by a REPEAT-UNTIL loop. The exit condition is the UNTIL Record...
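Such a loop typically looks like the following sketch, here using the Customer table:

```al
if Customer.FINDSET then
    repeat
        // process the current Customer record here
    until Customer.NEXT = 0;   // NEXT returns 0 when no more records exist
```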

INSERT function

The purpose of the INSERT function is to add new records to a table. The syntax for the INSERT function is as follows:

[BooleanValue :=] Record.INSERT ( [ TriggerControlBoolean ] )

If BooleanValue is not used and the INSERT function fails (for example, if the insertion would result in a duplicate primary key), the process will terminate with an error. Generally, we should handle a detected error in code using the BooleanValue and supplying our own error-handling logic, rather than allow a default termination.

The TriggerControlBoolean value controls whether or not the table's OnInsert trigger fires when the INSERT occurs. The default value is FALSE. If we accept the default of FALSE, we run the risk of skipping error checking that the table's designer assumed would be run whenever a new record is added.
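A minimal sketch of the recommended pattern, with the OnInsert trigger enabled and an explicit error check (the table comes from our WDTU example; the key value is hypothetical):

```al
PlaylistHeader.INIT;
PlaylistHeader."No." := 'PL0001';           // hypothetical key value
PlaylistHeader."Broadcast Date" := WORKDATE;
if not PlaylistHeader.INSERT(TRUE) then     // TRUE fires the OnInsert trigger
    ERROR('Playlist %1 already exists.', PlaylistHeader."No.");
```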

When we are reading a table, and we also need to INSERT records into that same table, the INSERT should be done to a separate instance of the table. We can...

MODIFY function

The purpose of the MODIFY function is to modify (update) existing data records. The syntax for MODIFY is as follows:

[BooleanValue :=] Record.MODIFY ( [ TriggerControlBoolean ] )

If BooleanValue is not used and MODIFY fails, for example, if another process changes the record after it was read by this process, then the process will terminate with an error statement. The code should either handle a detected error or gracefully terminate the process. The TriggerControlBoolean value controls whether the table's OnModify trigger fires when this MODIFY occurs. The default value is FALSE, which would not perform any OnModify processing. MODIFY cannot be used to cause a change in a primary key field. In that case, the RENAME function must be used.

System-based checking can be performed to make sure that a MODIFY is done using the current version of the data record. This is done by making sure that another process hasn't modified and committed the record after it was...
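A typical guarded MODIFY might look like this sketch (the customer number and new value are hypothetical):

```al
if Customer.GET('10000') then begin
    Customer."Phone No." := '555-0100';     // hypothetical new value
    if not Customer.MODIFY(TRUE) then       // TRUE fires the OnModify trigger
        ERROR('Customer %1 could not be modified.', Customer."No.");
end;
```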

Rec and xRec

In the Table and Page objects, the system automatically provides us with the system variables, Rec and xRec. Until a record has been updated by MODIFY, Rec represents the current record data in process and xRec represents the record data before it was modified. By comparing field values in Rec and xRec, we can determine whether changes have been made to the record in the current process cycle. Rec and xRec records have all the same fields in the same structure as the table to which they relate.
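For example, a table's OnModify trigger can compare the two to detect a change in a specific field. This sketch assumes the Broadcast Date field from our WDTU Playlist Header design:

```al
trigger OnModify()
begin
    // Compare current (Rec) and prior (xRec) values of one field
    if Rec."Broadcast Date" <> xRec."Broadcast Date" then
        MESSAGE('Broadcast Date changed from %1 to %2',
          xRec."Broadcast Date", Rec."Broadcast Date");
end;
```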

DELETE function

The purpose of the DELETE function is to delete existing data records. The syntax for DELETE is as follows:

[BooleanValue :=] Record.DELETE ( [ TriggerControlBoolean ] ) 

If DELETE fails and the BooleanValue option is not used, the process will terminate with an error statement. Our code should handle any detected error or terminate the process gracefully, as appropriate.

The TriggerControlBoolean value is TRUE or FALSE, and it controls whether the table's OnDelete trigger fires when this DELETE occurs. The default value is FALSE. If we let the default FALSE prevail, we run the risk of not performing error checking that the table's designer assumed would be run when a record was deleted.

In Business Central, there is a check to make sure that a DELETE uses the current version of the record, that is, that another process hasn't modified and committed the record after it was read by this process.
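A guarded DELETE follows the same pattern as a guarded MODIFY (the customer number is hypothetical):

```al
if Customer.GET('10000') then
    if not Customer.DELETE(TRUE) then       // TRUE fires the OnDelete trigger
        ERROR('Customer %1 could not be deleted.', Customer."No.");
```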

MODIFYALL function

MODIFYALL is the high-volume version of the MODIFY function. If we wish to set one field to the same new value in a whole group of records, we should use MODIFYALL. The set of records affected is controlled by the filters applied at the time MODIFYALL is invoked. The other choice for a mass modification would be a FIND-NEXT loop in which we modify each record, one at a time. The advantage of MODIFYALL is that it allows the developer and the system to optimize code for the volume update. Any system optimization will be a function of the SQL statements generated by the AL compiler.

The syntax for MODIFYALL is as follows:

Record.MODIFYALL (FieldToBeModified,NewValue  
[,TriggerControlBoolean ] ) 

The TriggerControlBoolean value, a TRUE or FALSE entry, controls whether the table's OnModify trigger fires when this MODIFY occurs. The default value is FALSE, which would result in the OnValidate field trigger not being executed...
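A minimal sketch of a filtered mass update (the salesperson codes are hypothetical):

```al
// Reassign every customer of one salesperson to another
Customer.SETRANGE("Salesperson Code", 'JR');        // hypothetical codes
Customer.MODIFYALL("Salesperson Code", 'MD', TRUE); // TRUE runs the field's OnValidate trigger
```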

DELETEALL function

DELETEALL is the high-volume version of the DELETE function. If we have a group of records that we wish to delete, use DELETEALL. The other choice would be a FIND-NEXT loop, in which we delete each record, one at a time. The advantage of DELETEALL is that it allows the developer and the system to optimize code for the volume deletion. Any system optimization will be a function of what SQL statements are generated by the AL compiler.

The syntax for DELETEALL is as follows:

Record.DELETEALL ( [TriggerControlBoolean] )

The TriggerControlBoolean value, a TRUE or FALSE entry, controls whether the table's OnDelete trigger fires when this DELETE occurs. The default value is FALSE. If the TriggerControlBoolean value is TRUE, then the OnDelete trigger will fire for each record that's deleted. In that case, there is little to no speed advantage for DELETEALL versus the use of a FIND-DELETE-NEXT loop.

In a typical situation, a filter or series of filters would be applied...
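For example (a sketch; the table comes from our WDTU example and the document number is hypothetical):

```al
// Remove all lines belonging to one Playlist document
PlaylistLine.SETRANGE("Document No.", 'PL0001');  // hypothetical document number
PlaylistLine.DELETEALL(TRUE);                     // TRUE fires OnDelete per record
```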

Filtering

Few other systems have filtering implemented as comprehensively as Business Central, nor do they have it tied so neatly to the detailed retention of historical data. The result of Business Central's features is that even the most basic implementation of Business Central includes very powerful data analysis capabilities that are available to the end user.

As developers, we should appreciate the fact that we cannot anticipate every need of any user, let alone anticipate all the needs of all users. We know we should give the users as much freedom as possible to allow them to selectively extract and review data from their system. Wherever feasible, users should be given the opportunity to apply their own filters so that they can determine the optimum selection of data for their particular situation. On the other hand, freedom, here as everywhere, is a double-edged sword. With the freedom to decide just how to segment our data, comes the responsibility for figuring out what...

The SETFILTER function

SETFILTER allows us to define and apply any filter expression that could be created manually, including various combinations of ranges, AL operators, and even wild cards. The SETFILTER syntax is as follows:

Record.SETFILTER ( Field, FilterString [, FilterValue1, FilterValue2, ...] );

SETFILTER also can be applied to Query objects with similar syntax:

Query.SETFILTER ( ColumnName, FilterString [, FilterValue1, FilterValue2, ...] );

FilterString can be a literal, such as '1000..20000' or 'A*|B*|C*', but this is not good practice. Optionally (and preferably), we can use variable tokens in the form of %1, %2, %3, and so forth, representing variables (but not operators) such as FilterValue1, FilterValue2, and so forth to be substituted in the filter string at runtime. This construct allows us to create filters whose data values can be defined dynamically at runtime. A new SETFILTER replaces any previous filtering in the same filter group (this will be discussed...
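Both forms can be illustrated as follows; FromNo and ToNo are assumed to be Code variables defined elsewhere:

```al
// Literal filter string (works, but not good practice)
Customer.SETFILTER(Name, 'A*|B*|C*');
// Preferred: variable tokens substituted at runtime
Customer.SETFILTER("No.", '%1..%2', FromNo, ToNo);
```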

The COPYFILTER and COPYFILTERS functions

These functions allow you to copy the filters of a single field or all the filters on a record (table) and apply those filters to another record. Their syntax is as follows:

FromRecord.COPYFILTER(FromField, ToRecord.ToField) 

The From and To fields must be of the same data type. The From and To tables do not have to be of the same format:

ToRecord.COPYFILTERS(FromRecord) 

Note that the COPYFILTER field-based function begins with the FromRecord variable, while the COPYFILTERS record-based function begins with the ToRecord variable. ToRecord and FromRecord must be different instances of the same table.
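A brief sketch of both calls, using two instances of the Customer table:

```al
// Copy all filters from one Customer instance to another
Customer2.COPYFILTERS(Customer);
// Copy only the filter on the No. field
Customer.COPYFILTER("No.", Customer2."No.");
```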

The GETFILTER and GETFILTERS functions

These functions allow us to retrieve the filters on a single field or all the filters on a record (table) and assign the result to a text variable. Their syntax is as follows:

ResultString := FilteredRecord.GETFILTER(FilteredField) 
ResultString := FilteredRecord.GETFILTERS 

Similar functions exist for query objects. Those syntaxes are as follows:

ResultString := FilteredQuery.GETFILTER(FilteredColumn) 
ResultString := FilteredQuery.GETFILTERS 

The text contents of the ResultString will contain an identifier for each filtered field and the currently applied value of the filter. GETFILTERS is often used to retrieve the filters on a table and print them as part of a report heading. The ResultString will look similar to the following:

Customer: No.: 10000..999999, Balance: >0
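A sketch of how such a string might be produced (FilterText is assumed to be a Text variable):

```al
Customer.SETRANGE("No.", '10000', '999999');
Customer.SETFILTER(Balance, '>0');
FilterText := Customer.GETFILTERS;
// FilterText now holds something similar to:
//   Customer: No.: 10000..999999, Balance: >0
```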

The FILTERGROUP function

The FILTERGROUP function can change or retrieve the filter group that is applied to a table. A filter group contains a set of filters that were previously applied to the table by the SETFILTER or SETRANGE functions, or as table properties defined in an object. The FILTERGROUP syntax is as follows:

[CurrentGroupInteger ] := Record.FILTERGROUP ([NewGroupInteger]) 

Using just Record.FILTERGROUP([NewGroupInteger]), without the assignment, sets the active filter group.

Filter groups can also be used to filter query DataItems. All the currently defined filter groups are active and apply in combination (in other words, they are logically ANDed, resulting in a logical intersection of the sets). The only way to eliminate the effect of a filter group is to remove the filters in a group.

The default filter group for Business Central is 0 (zero). Users have access to the filters in this filter group. Other filter groups, numbered up through 6, have uses assigned by Business Central...
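A common pattern is to place a filter that the user cannot remove in a higher filter group and then return to the default group. This sketch assumes filter group 2, one of the groups reserved for application use:

```al
SalesLine.FILTERGROUP(2);
SalesLine.SETRANGE("Document Type", SalesLine."Document Type"::Order);
SalesLine.FILTERGROUP(0);   // back to the user-accessible filter group
```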

The MARK function

A mark on a record is an indicator that disappears when the current session ends and which is only visible to the process that is setting the mark. The MARK function sets the mark. The syntax is as follows:

[BooleanValue := ] Record.MARK ( [SetMarkBoolean] ) 

If the optional BooleanValue and assignment operator (:=) are present, the MARK function will give us the current MARK status (TRUE or FALSE) of the record. If the optional SetMarkBoolean parameter is present, the record will be marked (or unmarked) according to that value (TRUE or FALSE). The default value for SetMarkBoolean is FALSE. The MARK functions should be used carefully, and only when a simpler solution is not readily available. Marking records can cause significant performance problems on large datasets.

CLEARMARKS function

CLEARMARKS clears all the marks from the specified record, that is, from the particular instance of the table in this instance of the object. The syntax is as follows:

Record.CLEARMARKS 

The MARKEDONLY function

MARKEDONLY is a special filtering function that can apply a mark-based filter.

The syntax for MARKEDONLY is as follows:

[BooleanValue := ] Record.MARKEDONLY  
( [SeeMarkedRecordsOnlyBoolean] ) 

If the optional BooleanValue parameter is defined, it will be assigned a TRUE or FALSE value to tell us whether the special MARKEDONLY filter is active. If the BooleanValue parameter is omitted, MARKEDONLY sets the special filter according to the value of SeeMarkedRecordsOnlyBoolean: TRUE filters to show only marked records, while FALSE removes the marked filter and shows all records. The default value for SeeMarkedRecordsOnlyBoolean is FALSE.

Although it may not seem logical, there is no option to see only the unmarked records.
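A combined sketch of MARK and MARKEDONLY, flagging customers with a positive balance and then restricting the view to them:

```al
// Mark customers with a positive balance ...
if Customer.FINDSET then
    repeat
        Customer.CALCFIELDS(Balance);
        if Customer.Balance > 0 then
            Customer.MARK(TRUE);
    until Customer.NEXT = 0;
// ... then show only the marked records
Customer.MARKEDONLY(TRUE);
```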

For additional information on the MARKEDONLY function, refer to the following blog entry: https://marijebrummel.blog/2014/03/07/tip-36-using-mark-and-markedonly-in-the-role-tailored-client/.

The RESET function

This function allows us to RESET, that is, clear, all filters that are currently applied to a record. RESET also sets the current key back to the primary key, removes any marks, and clears all internal variables in the current instance of the record. Filters in FILTERGROUP 1 are not reset. The syntax is FilteredRecord.RESET;.

InterObject communication

There are several ways for communicating between objects during Business Central processing. We will review some of the more commonly used ways in the following sections.

Communicating through data

The most widely used and simplest communication method is through data tables. For example, the No. Series table is the central control for all document numbers. Each object that assigns numbers to a document (for example, order, invoice, shipment, and so on) uses codeunit 396, NoSeriesManagement, to access the No. Series table for the next number to use, and then updates the No. Series table so that the next object needing to assign a number to the same type of document will have the updated information.

Communicating through function parameters

When an object calls a function in another object, information is generally passed through the calling and return parameters. The calling and return parameter specifications are defined when the function is developed. The generic syntax for a function call is as follows:

[ReturnValue := ] FunctionName ( [ Parameter1 ] [ ,Parameter2 ] ,...) 

The rules for including or omitting the various optional fields are specific to the local variables that are defined for each individual function. As developers, when we design the function, we define the rules and thereby determine just how communications with the function will be handled. It is obviously important to define complete and consistent parameter passing rules prior to beginning a development project.
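For example, a trivial local function and a call to it might be sketched like this (all names are invented for illustration):

```al
local procedure AddTwoValues(Value1: Decimal; Value2: Decimal): Decimal
begin
    EXIT(Value1 + Value2);
end;

// A call from elsewhere in the same object:
// TotalAmount := AddTwoValues(LineAmount, VATAmount);
```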

Communication via object calls

Sometimes, we need to create an object, which in turn calls other objects. We may simply want to allow the user to be able to run a series of processes and reports, but only enter the controlling parameters once. Our user interface object will be responsible for invoking the subordinate objects after having communicated setup and filter parameters.

An important set of standard functions has been designed for various modes and circumstances of invoking other objects. Examples of these functions are SETTABLEVIEW, SETRECORD, and GETRECORD (there are others as well). There are also instances where we will need to build our own data-passing function.

To properly manage these relatively complex processes, we need to be familiar with the various versions of the RUN and RUNMODAL functions. We will also need to understand the meaning and effect of a single instance or multiple instances of an object. Briefly, key differences between invoking a page or report...

Enhancing the WDTU application

Now that we have some new tools to work with, let's enhance our WDTU application. This time, our goal is to implement functionality that allows the program manager to plan the playlist schedules for radio shows. The process, from the user's point of view, will essentially be as follows:

  1. Call up the Playlist document page, which displays the header, details, and FactBox workspaces.
  2. Enter Playlist Header using the Radio Show table data.
  3. Enter Playlist Line using the resource table DJ data; the Radio Show table data for news, weather, or sports shows; and the Item table data for music, PSAs, and advertisements.
  4. The FactBox will display the required program-element fields from Radio Show/Playlist Header. These will include news (yes or no), weather (yes or no), sports (yes or no), and Number of required PSAs and advertisements.
  5. The FactBox will also track each of the five possible required elements.

Since this development effort is an exercise to learn more...

Modifying table fields

Because we want the Business Central tables to be the core of the design and to host as much of the processing as makes sense, we will start our enhancement work with table modifications.

The first table modification is to add the data fields to the Playlist Header, as shown in the following screenshot, to support the definition and tracking of various program segment requirements. In the Radio Show table, each show has requirements defined for a specific number of PSAs and advertisements, and for the presence of news, sports, and weather spots. The Playlist Header needs this requirement information stored, along with the associated PSA and advertisement counts for this show instance. We will obtain the news, sports, and weather line counts by means of a function call:

Because Playlist Line includes an Option field that identifies the PSA and advertisement records, we will use a FlowField to calculate the counts for each of those line types. We will construct...

Adding validation logic

We need validation logic for both our Playlist tables, that is, Header and Line. We will start with Playlist Header validation.

Playlist Header validation

The Playlist Header data fields are as follows:

  • No.: This is the ID number for this instance of a radio show; its contents are user-defined
  • Radio Show No.: This is selected from the Radio Show table
  • Description: This is displayed by means of a FlowField from the Radio Show table
  • Broadcast Date: This is the show's scheduled broadcast date; it also serves as the posting date for any data analysis filtering
  • Start Time: This is the show's scheduled broadcast start time
  • End Time: This is the show's scheduled broadcast end time
  • Duration: This is the show's broadcast length, displayed by means of a FlowField from the Radio Show table
  • PSAs Required and Ads Required: These show whether PSAs and advertisements are required for broadcast during the show; they are copied from the Radio Show table, but are editable by the user
  • News Required, Sports Required, and Weather Required: These indicate whether each of these program segments is required during...

Creating the Playlist subpage

In Chapter 2, Tables, a homework assignment was to create the 50003 Playlist Document page. We should have used the snippet to create that page, giving us something such as this:

Another necessary part of a document page is the subpage. Our subpage can be created using the snippet based on table 50003, Playlist Line. The result will be as follows:

To make the document work the way we are used to having Business Central document forms work, we will need to set some properties for this new page. See the bold properties circled in the following screenshot:

We have set the DelayedInsert and AutoSplitKey properties to true. These settings will allow the Playlist Line to not be saved until the primary key fields are all entered (DelayedInsert), and will support the easy insertion of new entries between two existing lines (AutoSplitKey).

Finally, we will need to connect our new Playlist subpage ListPart page 50109 to the Playlist document page 50104...

Playlist Line validations

The Playlist Line data fields are as follows:

  • Document No.: This is the automatically assigned link to the No. field in the parent Playlist record (the automatic assignment is based on the page, Playlist Subpage properties, which we'll take care of when working on the Playlist pages)
  • Line No.: This is an automatically assigned number (based on page properties), and the rightmost field in the Playlist Line primary key
  • Type: This is a user-selected Option that defines whether this entry is a Resource, such as an announcer; a Show, such as a news show; or an Item, such as a recording to play on the air
  • No.: This is the ID number of the selected entry in its parent table
  • DataFormat: This is information from the Item table for a show or recording
  • Description: This is assigned from the parent table, but can be edited by the user
  • Duration, Start Time and End Time: This is information about a show or recording that indicates the length and its position within...

Creating a function for our FactBox

For this application, we want our FactBox to display information relating to the specific radio show we are scheduling. The information to be displayed includes the five show segment requirements and the status of fulfillment (counts) of those requirements by the data that's been entered to date. The requirements come from the Playlist Header fields: PSAs required, advertisements required, news required, sports required, and weather required. The counts come from summing up data in the Playlist Line records for a show. We can use the Playlist Header field's PSA count and ad count for those two counts. These counts can be obtained through the FlowField property definitions we defined earlier for these two fields.

For the other three counts, we must read through the Playlist Line and sum up each of the counts. To accomplish that, we'll create a function that we can call from the FactBox page. Since our new function is local to the Playlist...

Creating a FactBox page

All the hard work is now done. We just have to define a FactBox page and add it to the Playlist page. We can create a FactBox page using the snippet to define a CardPart:

Our FactBox will contain the fields from the Playlist Header that relate to two of the five required show segments, that is, the PSA Count and Ad Count fields, as shown in the following screenshot:

Once we exit the Card Part Page Wizard and go into the page designer, we will have a page layout that looks as follows:

At this point, we will need to add the logic to take advantage of the NWSRequired function that we created earlier. This function is designed to return the count of the segment type that's identified in the calling parameter. Since a line on a page can be an expression, we can simply code the function calls right on the page lines with an appropriate caption defined, as we can see in the following screenshot. We will intersperse the Count lines for a consistent appearance on...

Summary

In this chapter, we covered a number of practical tools and topics regarding AL coding and development. We started by reviewing methods and then we dived into a long list of functions that we will need on a frequent basis.

We covered a variety of selected data-centric functions, including some for computation and validation, some for data conversion, and others for date handling. Next, we reviewed functions that affect the flow of logic and the flow of data, including FlowFields and SIFT, processing flow control, input and output, and filtering. Finally, we put a number of these to work in an enhancement for our WDTU application.

In the next chapter, we will move from the details of the functions to the broader view of AL development integration into the standard Business Central code, as well as debugging techniques.

Questions

  1. Which three of the following are valid date-related Business Central functions?
    • DATE2DWY
    • CALCDATE
    • DMY2DATE
    • DATE2NUM
  2. RESET is used to clear the current sort key setting from a record. True or false?
  3. Which functions can be used to cause FlowFields to be calculated? Choose two:
    • CALCSUMS
    • CALCFIELDS
    • SETAUTOCALCFIELDS
    • SUMFLOWFIELD
  4. Which of the following functions should be used within a report's OnAfterGetRecord trigger to end processing just for a single iteration of the trigger? Choose one:
    • EXIT
    • BREAK
    • QUIT
    • SKIP
  5. The WORKDATE value can be set to a different value from the system date. True or false?
  6. Only one FactBox is allowed on a page. True or false?
  7. Braces {} are used as a special form of a repeating CASE statement. True or false?
  8. Which of the following is not a valid AL flow control combination? Choose one:
    • REPEAT-UNTIL
    • DO-UNTIL
    • CASE-ELSE
    • IF-THEN
  9. A FILTERGROUP function should not be used in custom code...
You have been reading a chapter from
Programming Microsoft Dynamics 365 Business Central - Sixth Edition
Published in: Apr 2019 | Publisher: Packt | ISBN-13: 9781789137798

Authors (3)

Marije Brummel

Author, programmer, consultant, project manager, presenter, evangelist, sales person, and a trainer. It's next to impossible to find someone as experienced as Marije Brummel in the Business Central community. Marije received the Microsoft MVP and the NAVUG All-Star awards among several others. She has chaired the Dynamics Credentialing committee and has authored official Microsoft Exam materials. She's the go-to girl for performance troubleshooting and upgrade challenges. One of her biggest achievements was introducing Design Patterns into the Business Central community. Her books, blog articles, and YouTube videos have influenced almost everyone involved with Business Central. She enjoys the outdoors with her dog and loves spending time with her family.

David Studebaker

David Studebaker has been designing and developing software since 1962 as a developer, consultant, manager, and business owner. In 1967, David coauthored the first general-purpose SPOOL system, an AT&T / IBM joint project. He has been a founding partner in several firms, most recently Studebaker Technology and Liberty Grove Software. David's publications include a decade of technical reviews for ACM Computing Reviews and a number of articles on shop floor data collection. David originated the Packt series of books on programming Dynamics Business Central (aka Dynamics NAV). He has a BS in mechanical engineering from Purdue University and an MBA from the University of Chicago. He is a life member of the Association for Computing Machinery.

Christopher D. Studebaker

Chris Studebaker was a certified environmental consultant working with manufacturing facilities to meet national and state regulations before he started working with Navision in 1999. After working on regulatory reporting, data analysis, project management, and subcontractor oversight, Chris has used those skills to sell, develop, and implement NAV for the past 20 years. He has specialized in retail, manufacturing, job shop, and distribution implementations, mostly in high-user-count, high-data-volume applications. Chris acts in a consulting and training role for customers and for peer NAV professionals. He has a Bachelor of Science degree from Northern Illinois University and has done graduate work at Denmark Technical University.