How-To Tutorials - Programming

The NetBeans Developer's Life Cycle

Packt
08 Sep 2015
30 min read
In this article by David Salter, the author of Mastering NetBeans, we'll cover the following topics: Running applications Debugging applications Profiling applications Testing applications On a day-to-day basis, developers spend much of their time writing and running applications. While writing applications, they typically debug, test, and profile them to ensure that they provide the best possible application to customers. Running, debugging, profiling, and testing are all integral parts of the development life cycle, and NetBeans provides excellent tooling to help us in all these areas. (For more resources related to this topic, see here.) Running applications Executing applications from within NetBeans is as simple as either pressing the F6 button on the keyboard or selecting the Run menu item or Project Context menu item. Choosing either of these options will launch your application without specifying any additional Java command-line parameters using the default platform JDK that NetBeans is currently using. Sometimes we want to change the options that are used for launching applications. NetBeans allows these options to be easily specified by a project's properties. Right-clicking on a project in the Projects window and selecting the Properties menu option opens the Project Properties dialog. Selecting the Run category allows the configuration options to be defined for launching an application. From this dialog, we can define and select multiple run configurations for the project via the Configuration dropdown. Selecting the New… button to the right of the Configuration dropdown allows us to enter a name for a new configuration. Once a new configuration is created, it is automatically selected as the active configuration. The Delete button can be used for removing any unwanted configurations. The preceding screenshot shows the Project Properties dialog for a standard Java project. Different project types (for example, web or mobile projects) have different options in the Project Properties window. As can be seen from the preceding Project Properties dialog, several pieces of information can be defined for a standard Java project, which together make up the launch configuration for a project: Runtime Platform: This option allows us to define which Java platform we will use when launching the application. From here, we can select from all the Java platforms that are configured within NetBeans. Selecting the Manage Platforms… button opens the Java Platform Manager dialog, allowing full configuration of the different Java platforms available (both Java Standard Edition and Remote Java Standard Edition). Selecting this button has the same effect as selecting the Tools and then Java Platforms menu options. Main Class: This option defines the main class that is used to launch the application. If the project has more than one main class, selecting the Browse… button will cause the Browse Main Classes dialog to be displayed, listing all the main classes defined in the project. Arguments: Different command-line arguments can be passed to the main class as defined in this option. Working Directory: This option allows the working directory for the application to be specified. VM Options: If different VM options (such as heap size) require setting, they can be specified by this option. Selecting the Customize button displays a dialog listing the different standard VM options available which can be selected (ticked) as required. Custom VM properties can also be defined in the dialog. 
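To make this mapping concrete, here is a minimal, hypothetical Java class (not part of any project discussed in the article; the class name Launcher and the property name app.mode are invented for illustration) showing where the values entered in the Arguments and VM Options fields end up at runtime:

public class Launcher {
    public static void main(String[] args) {
        // Values from the Arguments field arrive as the args array
        for (String arg : args) {
            System.out.println("Argument: " + arg);
        }
        // A -D entry in the VM Options field, such as -Dapp.mode=dev,
        // shows up as a system property
        System.out.println("app.mode = " + System.getProperty("app.mode"));
        // A heap setting such as -Xmx256m is reflected in the maximum heap size
        System.out.println("Max heap = " + Runtime.getRuntime().maxMemory() + " bytes");
    }
}

With Main Class set to Launcher, Arguments set to first second, and VM Options set to -Xmx256m -Dapp.mode=dev, running the project is roughly equivalent to executing java -Xmx256m -Dapp.mode=dev Launcher first second from the configured working directory.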
For more information on the different VM properties for Java, check out http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html. From here, the VM properties for Java 7 (and earlier versions) and Java 8 for Windows, Solaris, Linux, and Mac OS X can be referenced. Run with Java Web Start: Selecting this option allows the application to be executed using Java Web Start technologies. This option is only available if Web Start is enabled in the Application | Web Start category. When running a web application, the project properties are different from those of a standalone Java application. In fact, the project properties for a Maven web application are different from those of a standard NetBeans web application. The following screenshot shows the properties for a Maven-based web application; as discussed previously, Maven is the standard project management tool for Java applications, and the recommended tool for creating and managing web applications: Debugging applications In the previous section, we saw how NetBeans provides the easy-to-use features to allow developers to launch their applications, but then it also provides more powerful additional features. The same is true for debugging applications. For simple debugging, NetBeans provides the standard facilities you would expect, such as stepping into or over methods, setting line breakpoints, and monitoring the values of variables. When debugging applications, NetBeans provides several different windows, enabling different types of information to be displayed and manipulated by the developer: Breakpoints Variables Call stack Loaded classes Sessions Threads Sources Debugging Analyze stack All of these windows are accessible from the Window and then Debugging main menu within NetBeans. Breakpoints NetBeans provides a simple approach to set breakpoints and a more comprehensive approach that provides many more useful features. Breakpoints can be easily added into Java source code by clicking on the gutter on the left-hand side of a line of Java source code. When a breakpoint is set, a small pink square is shown in the gutter and the entire line of source code is also highlighted in the same color. Clicking on the breakpoint square in the gutter toggles the breakpoint on and off. Once a breakpoint has been created, instead of removing it altogether, it can be disabled by right-clicking on the bookmark in the gutter and selecting the Breakpoint and then Enabled menu options. This has the effect of keeping the breakpoint within your codebase, but execution of the application does not stop when the breakpoint is hit. Creating a simple breakpoint like this can be a very powerful way of debugging applications. It allows you to stop the execution of an application when a line of code is hit. If we want to add a bit more control onto a simple breakpoint, we can edit the breakpoint's properties by right-clicking on the breakpoint in the gutter and selecting the Breakpoint and then Properties menu options. This causes the Breakpoint Properties dialog to be displayed: In this dialog, we can see the line number and the file that the breakpoint belongs to. The line number can be edited to move the breakpoint if it has been created on the wrong line. However, what's more interesting is the conditions that we can apply to the breakpoint. The Condition entry allows us to define a condition that has to be met for the breakpoint to stop the code execution. 
For example, we can stop the code when the variable i is equal to 20 by adding a condition, i==20. When we add conditions to a breakpoint, the breakpoint becomes known as a conditional breakpoint, and the icon in the gutter changes to a square with the lower-right quadrant removed. We can also cause the execution of the application to halt at a breakpoint when the breakpoint has been hit a certain number of times. The Break when hit count is condition can be set to Equal to, Greater than, or Multiple of to halt the execution of the application when the breakpoint has been hit the requisite number of times. Finally, we can specify what actions occur when a breakpoint is hit. The Suspend dropdown allows us to define what threads are suspended when a breakpoint is hit. NetBeans can suspend All threads, Breakpoint thread, or no threads at all. The text that is displayed in the Output window can be defined via the Print Text edit box and different breakpoint groups can be enabled or disabled via the Enable Group and Disable Group drop-down boxes. But what exactly is a breakpoint group? Simply put, a breakpoint group is a collection of breakpoints that can all be set or unset at the same time. It is a way of categorizing breakpoints into similar collections, for example, all the breakpoints in a particular file, or all the breakpoints relating to exceptions or unit tests. Breakpoint groups are created in the Breakpoints window. This is accessible by selecting the Debugging and then Breakpoints menu options from within the main NetBeans Window menu. To create a new breakpoint group, simply right-click on an existing breakpoint in the Breakpoints window and select the Move Into Group… and then New… menu options. The Set the Name of Breakpoints Group dialog is displayed in which the name of the new breakpoint group can be entered. After creating a breakpoint group and assigning one or more breakpoints into it, the entire group of breakpoints can be enabled or disabled, or even deleted by right-clicking on the group in the Breakpoints window and selecting the appropriate option. Any newly created breakpoint groups will also be available in the Breakpoint Properties window. So far, we've seen how to create breakpoints that stop on a single line of code, and also how to create conditional breakpoints so that we can cause an application to stop when certain conditions occur for a breakpoint. These are excellent techniques to help debug applications. NetBeans, however, also provides the ability to create more advanced breakpoints so that we can get even more control of when the execution of applications is halted by breakpoints. So, how do we create these breakpoints? These different types of breakpoints are all created from in the Breakpoints window by right-clicking and selecting the New Breakpoint… menu option. In the New Breakpoint dialog, we can create different types of breakpoints by selecting the appropriate entry from the Breakpoint Type drop-down list. The preceding screenshot shows an example of creating a Class breakpoint. The following types of breakpoints can be created: Class: This creates a breakpoint that halts execution when a class is loaded, unloaded, or either event occurs. Exception: This stops execution when the specified exception is caught, uncaught, or either event occurs. Field: This creates a breakpoint that halts execution when a field on a class is accessed, modified, or either event occurs. Line: This stops execution when the specified line of code is executed. 
It acts the same way as creating a breakpoint by clicking on the gutter of the Java source code editor window. Method: This creates a breakpoint that halts execution when a method is entered, exited, or when either event occurs. Optionally, the breakpoint can be created for all methods inside a specified class rather than a single method. Thread: This creates a breakpoint that stops execution when a thread is started, finished, or either event occurs. AWT/Swing Component: This creates a breakpoint that stops execution when a GUI component is accessed. For each of these different types of breakpoints, conditions and actions can be specified in the same way as on simple line-based breakpoints. The Variables debug window The Variables debug window lists all the variables that are currently within  the scope of execution of the application. This is therefore thread-specific, so if multiple threads are running at one time, the Variables window will only display variables in scope for the currently selected thread. In the Variables window, we can see the variables currently in scope for the selected thread, their type, and value. To display variables for a different thread to that currently selected, we must select an alternative thread via the Debugging window. Using the triangle button to the left of each variable, we can expand variables and drill down into the properties within them. When a variable is a simple primitive (for example, integers or strings), we can modify it or any property within it by altering the value in the Value column in the Variables window. The variable's value will then be changed within the running application to the newly entered value. By default, the Variables window shows three columns (Name, Type, and Value). We can modify which columns are visible by pressing the selection icon () at the top-right of the window. Selecting this displays the Change Visible Columns dialog, from which we can select from the Name, String value, Type, and Value columns: The Watches window The Watches window allows us to see the contents of variables and expressions during a debugging session, as can be seen in the following screenshot: In this screenshot, we can see that the variable i is being displayed along with the expressions 10+10 and i+20. New expressions can be watched by clicking on the <Enter new watch> option or by right-clicking on the Java source code editor and selecting the New Watch… menu option. Evaluating expressions In addition to watching variables in a debugging session, NetBeans also provides the facility to evaluate expressions. Expressions can contain any Java code that is valid for the running scope of the application. So, for example, local variables, class variables, or new instances of classes can be evaluated. To evaluate variables, open the Evaluate Expression window by selecting the Debug and then Evaluate Expression menu options. Enter an expression to be evaluated in this window and press the Evaluate Code Fragment button at the bottom-right corner of the window. As a shortcut, pressing the Ctrl + Enter keys will also evaluate the code fragment. Once an expression has been evaluated, it is shown in the Evaluation Result window. The Evaluation Result window shows a history of each expression that has previously been evaluated. Expressions can be added to the list of watched variables by right-clicking on the expression and selecting the Create Fixed Watch expression. 
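As a small, hypothetical illustration of the watches described above (the class and variable names are invented for the example), the expressions i, 10+10, and i+20 could be watched while stepping through a loop such as this:

public class WatchDemo {
    public static void main(String[] args) {
        int total = 0;
        for (int i = 0; i < 100; i++) {
            // With a line breakpoint here, watches such as i, 10+10 and i+20
            // are re-evaluated every time execution is suspended
            total += i;
        }
        System.out.println("Total: " + total);
    }
}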
The Call Stack window The Call Stack window displays the call stack for the currently executing thread: The call stack is displayed from top to bottom with the currently executing frame at the top of the list. Double-clicking on any entry in the call stack opens up the corresponding source code in the Java editor within NetBeans. Right-clicking on an entry in the call stack displays a pop-up menu with the choice to: Make Current: This makes the selected thread the current thread Pop To Here: This pops the execution of the call stack to the selected location Go To Source: This displays the selected code within the Java source editor Copy Stack: This copies the stack trace to the clipboard for use elsewhere When debugging, it can be useful to change the stack frame of the currently executing thread by selecting the Pop To Here option from within the stack trace window. Imagine the following code: // Get some magic int magic = getSomeMagicNumber(); // Perform calculation performCalculation(magic); During a debugging session, if after stepping over the getSomeMagicNumber() method, we decided that the method has not worked as expected, our course of action would probably be to debug into the getSomeMagicNumber() method. But, we've just stepped over the method, so what can we do? Well, we can stop the debugging session and start again or repeat the operation that called this section of code and hope there are no changes to the application state that affect the method we want to debug. A better solution, however, would be to select the line of code that calls the getSomeMagicNumber() method and pop the stack frame using the Pop To Here option. This would have the effect of rewinding the code execution so that we can then step into the method and see what is happening inside it. As well as using the Pop To Here functionality, NetBeans also offers several menu options for manipulating the stack frame, namely: Make Callee Current: This makes the callee of the current method the currently executing stack frame Make Caller Current: This makes the caller of the current method the currently executing stack frame Pop Topmost Call: This pops one stack frame, making the calling method the currently executing stack frame When moving around the call stack using these techniques, any operations performed by the currently executing method are not undone. So, for example, strange results may be seen if global or class-based variables are altered within a method and then an entry is popped from the call stack. Popping entries in the call stack is safest when no state changes are made within a method. The call stack displayed in the Debugging window for each thread behaves in the same way as in the Call Stack window itself. The Loaded Classes window The Loaded Classes window displays a list of all the classes that are currently loaded, showing how many instances there are of each class as a number and as a percentage of the total number of classes loaded. Depending upon the number of external libraries (including the standard Java runtime libraries) being used, you may find it difficult to locate instances of your own classes in this window. Fortunately, the filter at the bottom of the window allows the list of classes to be filtered, based upon an entered string. So, for example, entering the filter String will show all the classes with String in the fully qualified class name that are currently loaded, including java.lang.String and java.lang.StringBuffer. 
Since the filter works on the fully qualified name of a class, entering a package name will show all the classes listed in that package and subpackages. So, for example, entering a filter value as com.davidsalter.multithread would show only the classes listed in that package and subpackages. The Sessions window Within NetBeans, it is possible to perform multiple debugging sessions where either one project is being debugged multiple times, or more commonly, multiple projects are being debugged at the same time, where one is acting as a client application and the other is acting as a server application. The Sessions window displays a list of the currently running debug sessions, allowing the developer control over which one is the current session. Right-clicking on any of the sessions listed in the window provides the following options: Make Current: This makes the selected session the currently active debugging session Scope: This debugs the current thread or all the threads in the selected session Language: This options shows the language of the application being debugged—Java Finish: This finishes the selected debugging session Finish All: This finishes all the debugging sessions The Sessions window shows the name of the debug session (for example the main class being executed), its state (whether the application is Stopped or Running) and language being debugged. Clicking the selection icon () at the top-right of the window allows the user to choose which columns are displayed in the window. The default choice is to display all columns except for the Host Name column, which displays the name of the computer the session is running on. The Threads window The Threads window displays a hierarchical list of threads in use by the application currently being debugged. The current thread is displayed in bold. Double-clicking on any of the threads in the hierarchy makes the thread current. Similar to the Debugging window, threads can be made current, suspended, or interrupted by right-clicking on the thread and selecting the appropriate option. The default display for the Threads window is to show the thread's name and its state (Running, Waiting, or Sleeping). Clicking the selection icon () at the top-right of the window allows the user to choose which columns are displayed in the window. The Sources window The Sources window simply lists all of the source roots that NetBeans considers for the selected project. These are the only locations that NetBeans will search when looking for source code while debugging an application. If you find that you are debugging an application, and you cannot step into code, the most likely scenario is that the source root for the code you wish to debug is not included in the Sources window. To add a new source root, right-click in the Sources window and select the Add Source Root option. The Debugging window The Debugging window allows us to see which threads are running while debugging our application. This window is, therefore, particularly useful when debugging multithreaded applications. In this window, we can see the different threads that are running within our application. For each thread, we can see the name of the thread and the call stack leading to the breakpoint. The current thread is highlighted with a green band along the left-hand side edge of the window. Other threads created within our application are denoted with a yellow band along the left-hand side edge of the window. System threads are denoted with a gray band. 
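Each entry in this window is identified by its thread name, so multithreaded code is considerably easier to follow in the debugger when application threads are given meaningful names. The following is a minimal, hypothetical sketch (the thread names are invented for the example):

public class NamedThreads {
    public static void main(String[] args) {
        Runnable task = new Runnable() {
            @Override
            public void run() {
                // A breakpoint here suspends whichever named thread hits it
                System.out.println(Thread.currentThread().getName() + " is running");
            }
        };
        // The second constructor argument sets the name shown in the Debugging window
        new Thread(task, "order-processor").start();
        new Thread(task, "email-sender").start();
    }
}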
We can make any of the threads the current thread by right-clicking on it and selecting the Make Current menu option. When we do this, the Variables and Call Stack windows are updated to show new information for the selected thread. The current thread can also be selected by clicking on the Debug and then Set Current Thread… menu options. Upon selecting this, a list of running threads is shown from which the current thread can be selected. Right-clicking on a thread and selecting the Resume option will cause the selected thread to continue execution until it hits another breakpoint. For each thread that is running, we can also Suspend, Interrupt, and Resume the thread by right-clicking on the thread and choosing the appropriate action. In each thread listing, the current methods call stack is displayed for each thread. This can be manipulated in the same way as from the Call Stack window. When debugging multithreaded applications, new breakpoints can be hit within different threads at any time. NetBeans helps us with multithreaded debugging by not automatically switching the user interface to a different thread when a breakpoint is hit on the non-current thread. When a breakpoint is hit on any thread other than the current thread, an indication is displayed at the bottom of the Debugging window, stating New Breakpoint Hit (an example of this can be seen in the previous window). Clicking on the icon to the right of the message shows all the breakpoints that have been hit together with the thread name in which they occur. Selecting the alternate thread will cause the relevant breakpoint to be opened within NetBeans and highlighted in the appropriate Java source code file. NetBeans provides several filters on the Debugging window so that we can show more/less information as appropriate. From left to right, these images allow us to: Show less (suspended and current threads only) Show thread groups Show suspend/resume table Show system threads Show monitors Show qualified names Sort by suspended/resumed state Sort by name Sort by default Debugging multithreaded applications can be a lot easier if you give your threads names. The thread's name is displayed in the Debugging window, and it's a lot easier to understand what a thread with a proper name is doing as opposed to a thread called Thread-1. Deadlock detection When debugging multithreaded applications, one of the problems that we can see is that a deadlock occurs between executing threads. A deadlock occurs when two or more threads become blocked forever because they are both waiting for a shared resource to become available. In Java, this typically occurs when the synchronized keyword is used. NetBeans allows us to easily check for deadlocks using the Check for Deadlock tool available on the Debug menu. When a deadlock is detected using this tool, the state of the deadlocked threads is set to On Monitor in the Threads window. Additionally, the threads are marked as deadlocked in the Debugging window. Each deadlocked thread is displayed with a red band on the left-hand side border and the Deadlock detected warning message is displayed. Analyze Stack Window When running an application within NetBeans, if an exception is thrown and not caught, the stack trace will be displayed in the Output window, allowing the developer to see exactly where errors have occurred. From the following screenshot, we can easily see that a NullPointerException was thrown from within the FaultyImplementation class in the doUntestedOperation() method at line 16. 
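The screenshot is not reproduced here, but based on the class names and line numbers described, the stack trace printed in the Output window would look roughly like the following (the surrounding frames will vary with how the application is launched):

Exception in thread "main" java.lang.NullPointerException
    at FaultyImplementation.doUntestedOperation(FaultyImplementation.java:16)
    at Main.main(Main.java:21)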
Looking before this in the stack trace (that is the entry underneath), we can see that the doUntestedOperation() method was called from within the main() method of the Main class at line 21: In the preceding example, the FaultyImplementation class is defined as follows: public class FaultyImplementation { public void doUntestedOperation() { throw new NullPointerException(); } } Java is providing an invaluable feature to developers, allowing us to easily see where exceptions are thrown and what the sequence of events was that led to the exception being thrown. NetBeans, however, enhances the display of the stack traces by making the class and line numbers clickable hyperlinks which, when clicked on, will navigate to the appropriate line in the code. This allows us to easily delve into a stack trace and view the code at all the levels of the stack trace. In the previous screenshot, we can click on the hyperlinks FaultyImplementation.java:16 and Main.java:21 to take us to the appropriate line in the appropriate Java file. This is an excellent time-saving feature when developing applications, but what do we do when someone e-mails us a stack trace to look at an error in a production system? How do we manage stack traces that are stored in log files? Fortunately, NetBeans provides an easy way to allow a stack trace to be turned into clickable hyperlinks so that we can browse through the stack trace without running the application. To load and manage stack traces into NetBeans, the first step is to copy the stack trace onto the system clipboard. Once the stack trace has been copied onto the clipboard, Analyze Stack Window can be opened within NetBeans by selecting the Window and then Debugging and then Analyze Stack menu options (the default installation for NetBeans has no keyboard shortcut assigned to this operation). Analyze Stack Window will default to showing the stack trace that is currently in the system clipboard. If no stack trace is in the clipboard, or any other data is in the system's clipboard, Analyze Stack Window will be displayed with no contents. To populate the window, copy a stack trace into the system's clipboard and select the Insert StackTrace From Clipboard button. Once a stack trace has been displayed in Analyze Stack Window, clicking on the hyperlinks in it will navigate to the appropriate location in the Java source files just as it does from the Output window when an exception is thrown from a running application. You can only navigate to source code from a stack trace if the project containing the relevant source code is open in the selected project group. Variable formatters When debugging an application, the NetBeans debugger can display the values of simple primitives in the Variables window. As we saw previously, we can also display the toString() representation of a variable if we select the appropriate columns to display in the window. Sometimes when debugging, however, the toString() representation is not the best way to display formatted information in the Variables window. In this window, we are showing the value of a complex number class that we have used in high school math. 
The ComplexNumber class being debugged in this example is defined as: public class ComplexNumber { private double realPart; private double imaginaryPart; public ComplexNumber(double realPart, double imaginaryPart) { this.realPart = realPart; this.imaginaryPart = imaginaryPart; } @Override public String toString() { return "ComplexNumber{" + "realPart=" + realPart + ", imaginaryPart=" + imaginaryPart + '}'; } // Getters and Setters omitted for brevity… } Looking at this class, we can see that it essentially holds two members—realPart and imaginaryPart. The toString() method outputs a string, detailing the name of the object and its parameters which would be very useful when writing ComplexNumbers to log files, for example: ComplexNumber{realPart=1.0, imaginaryPart=2.0} When debugging, however, this is a fairly complicated string to look at and comprehend—particularly, when there is a lot of debugging information being displayed. NetBeans, however, allows us to define custom formatters for classes that detail how an object will be displayed in the Variables window when being debugged. To define a custom formatter, select the Java option from the NetBeans Options dialog and then select the Java Debugger tab. From this tab, select the Variable Formatters category. On this screen, all the variable formatters that are defined within NetBeans are shown. To create a new variable formatter, select the Add… button to display the Add Variable Formatter dialog. In the Add Variable Formatter dialog, we need to enter Formatter Name and a list of Class types that NetBeans will apply the formatting to when displaying values in the debugger. To apply the formatter to multiple classes, enter the different classes, separated by commas. The value that is to be formatted is entered in the Value formatted as a result of code snippet field. This field takes the scope of the object being debugged. So, for example, to output the ComplexNumber class, we can enter the custom formatter as: "("+realPart+", "+imaginaryPart+"i)" We can see that the formatter is built up from concatenating static strings and the values of the members realPart and imaginaryPart. We can see the results of debugging variables using custom formatters in the following screenshot: Debugging remote applications The NetBeans debugger provides rapid access for debugging local applications that are executing within the same JVM as NetBeans. What happens though when we want to debug a remote application? A remote application isn't necessarily hosted on a separate server to your development machine, but is defined as any application running outside of the local JVM (that is the one that is running NetBeans). To debug a remote application, the NetBeans debugger can be "attached" to the remote application. Then, to all intents, the application can be debugged in exactly the same way as a local application is debugged, as described in the previous sections of this article. To attach to a remote application, select the Debug and then Attach Debugger… menu options. On the Attach dialog, the connector (SocketAttach, ProcessAttach, or SocketListen) must be specified to connect to the remote application. The appropriate connection details must then be entered to attach the debugger. For example, the process ID must be entered for the ProcessAttach connector and the host and port must be specified for the SocketAttach connector. Profiling applications Learning how to debug applications is an essential technique in software development. 
Another essential technique that is often overlooked is profiling applications. Profiling applications involves measuring various metrics such as the amount of heap memory used or the number of loaded classes or running threads. By profiling applications, we can gain an understanding of what our applications are actually doing and as such we can optimize them and make them function better. NetBeans provides first class profiling tools that are easy to use and provide results that are easy to interpret. The NetBeans profiler allows us to profile three specific areas: Application monitoring Performance monitoring Memory monitoring Each of these monitoring tools is accessible from the Profile menu within NetBeans. To commence profiling, select the Profile and then Profile Project menu options. After instructing NetBeans to profile a project, the profiler starts providing the choice of the type of profiling to perform. Testing applications Writing tests for applications is probably one of the most important aspects of modern software development. NetBeans provides the facility to write and run both JUnit and TestNG tests and test suites. In this section, we'll provide details on how NetBeans allows us to write and run these types of tests, but we'll assume that you have some knowledge of either JUnit or TestNG. TestNG support is provided by default with NetBeans, however, due to license concerns, JUnit may not have been installed when you installed NetBeans. If JUnit support is not installed, it can easily be added through the NetBeans Plugins system. In a project, NetBeans creates two separate source roots: one for application sources and the other for test sources. This allows us to keep tests separate from application source code so that when we ship applications, we do not need to ship tests with them. This separation of application source code and test source code enables us to write better tests and have less coupling between tests and applications. The best situation is for the test source root to have a dependency on application classes and the application classes to have no dependency on the tests that we have written. To write a test, we must first have a project. Any type of Java project can have tests added into it. To add tests into a project, we can use the New File wizard. In the Unit Tests category, there are templates for: JUnit Tests Tests for Existing Class (this is for JUnit tests) Test Suite (this is for JUnit tests) TestNG Test Case TestNG Test Suite When creating classes for these types of tests, NetBeans provides the option to automatically generate code; this is usually a good starting point for writing classes. When executing tests, NetBeans iterates through the test packages in a project looking for the classes that are suffixed with the word Test. It is therefore essential to properly name tests to ensure they are executed correctly. Once tests have been created, NetBeans provides several methods for running the tests. The first method is to run all the tests that we have defined for an application. Selecting the Run and then Test Project menu options runs all of the tests defined for a project. The type of the project doesn't matter (Java SE or Java EE), nor whether a project uses Maven or the NetBeans project build system (Ant projects are even supported if they have a valid test activity), all tests for the project will be run when selecting this option. 
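As a minimal sketch of what such a test might look like, the following JUnit 4 test exercises the ComplexNumber class shown earlier in this article; it assumes JUnit is available on the project's test classpath, and the class name carries the Test suffix so that NetBeans picks it up:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ComplexNumberTest {

    @Test
    public void toStringListsBothParts() {
        ComplexNumber number = new ComplexNumber(1.0, 2.0);
        // The expected text matches the toString() implementation shown earlier
        assertEquals("ComplexNumber{realPart=1.0, imaginaryPart=2.0}",
                number.toString());
    }
}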
After running the tests, the Test Results window will be displayed, highlighting successful tests in green and failed tests in red. In the Test Results window, we have several options to help categorize and manage the tests: Rerun all of the tests Rerun the failed tests Show only the passed tests Show only the failed tests Show errors Show aborted tests Show skipped tests Locate previous failure Locate next failure Always open test result window Always open test results in a new tab The second option within NetBeans for running tests is to run all the tests in a package or class. To perform these operations, simply right-click on a package in the Projects window and select Test Package or right-click on a Java class in the Projects window and select Test File. The final option for running tests is to execute a single test in a class. To perform this operation, right-click on a test in the Java source code editor and select the Run Focussed Test Method menu option. After creating tests, how do we keep them up to date when we add new methods to application code? We can keep test suites up to date by manually editing them and adding new methods corresponding to new application code, or we can use the Create/Update Tests menu. Selecting the Tools and then Create/Update Tests menu options displays the Create Tests dialog that allows us to edit the existing test classes and add new methods into them, based upon the existing application classes. Summary In this article, we looked at the typical tasks that a developer does on a day-to-day basis when writing applications. We saw how NetBeans can help us to run and debug applications and how to profile applications and write tests for them. Finally, we took a brief look at TDD, and saw how the Red-Green-Refactor cycle can be used to help us develop more stable applications. Resources for Article: Further resources on this subject: Contexts and Dependency Injection in NetBeans [article] Creating a JSF composite component [article] Getting to know NetBeans [article]

Go Programming Control Flow

Packt
10 Aug 2016
13 min read
In this article by Vladimir Vivien, the author of the book Learning Go Programming, we cover some basic control flow of the Go programming language. Go borrows several of its control flow constructs from its C-family of languages. It supports all of the expected control structures, including if-else, switch, for-loop, and even goto. Conspicuously absent, though, are while or do-while statements. The following topics examine Go's control flow elements, some of which you may already be familiar with, and others that bring a new set of functionalities not found in other languages: The if statement The switch statement The type switch (For more resources related to this topic, see here.) The If Statement The if-statement, in Go, borrows its basic structural form from other C-like languages. The statement conditionally executes a code block when the Boolean expression that follows the if keyword evaluates to true, as illustrated in the following abbreviated program that displays information about the world currencies. import "fmt" type Currency struct { Name string Country string Number int } var CAD = Currency{ Name: "Canadian Dollar", Country: "Canada", Number: 124} var FJD = Currency{ Name: "Fiji Dollar", Country: "Fiji", Number: 242} var JMD = Currency{ Name: "Jamaican Dollar", Country: "Jamaica", Number: 388} var USD = Currency{ Name: "US Dollar", Country: "USA", Number: 840} func main() { num0 := 242 if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) } else { fmt.Println("Currency unknown") } if num1 := 388; num1 > 100 || num1 < 900 { fmt.Println("Currency:", num1) printCurr(num1) } } func printCurr(number int) { if CAD.Number == number { fmt.Printf("Found: %+v\n", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+v\n", FJD) } else if JMD.Number == number { fmt.Printf("Found: %+v\n", JMD) } else if USD.Number == number { fmt.Printf("Found: %+v\n", USD) } else { fmt.Println("No currency found with number", number) } } The if statement in Go looks similar to that of other languages. However, it sheds a few syntactic rules while enforcing new ones. The parentheses around the test expression are not necessary. While the following if-statement will compile, it is not idiomatic: if (num0 > 100 || num0 < 900) { fmt.Println("Currency: ", num0) printCurr(num0) } Use instead: if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) } The curly braces for the code block are always required. The following snippet will not compile: if num0 > 100 || num0 < 900 printCurr(num0) However, this will compile: if num0 > 100 || num0 < 900 {printCurr(num0)} It is idiomatic, however, to write the if statement on multiple lines (no matter how simple the statement block may be). This encourages good style and clarity. The following snippet will compile with no issues: if num0 > 100 || num0 < 900 {printCurr(num0)} However, the preferred idiomatic layout for the statement is to use multiple lines as follows: if num0 > 100 || num0 < 900 { printCurr(num0) } The if statement may include an optional else block which is executed when the expression in the if block evaluates to false. The code in the else block must be wrapped in curly braces using multiple lines, as shown in the following: if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) } else { fmt.Println("Currency unknown") } The else keyword may be immediately followed by another if statement, forming an if-else-if chain, as used in the function printCurr() from the source code listed earlier.
if CAD.Number == number { fmt.Printf("Found: %+v\n", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+v\n", FJD) The if-else-if statement chain can grow as long as needed and may be terminated by an optional else statement to express all other untested conditions. Again, this is done in the printCurr() function, which tests four conditions using the if-else-if blocks. Lastly, it includes an else statement block to catch any other untested conditions: func printCurr(number int) { if CAD.Number == number { fmt.Printf("Found: %+v\n", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+v\n", FJD) } else if JMD.Number == number { fmt.Printf("Found: %+v\n", JMD) } else if USD.Number == number { fmt.Printf("Found: %+v\n", USD) } else { fmt.Println("No currency found with number", number) } } In Go, however, the idiomatic and cleaner way to write such a deep if-else-if code block is to use an expressionless switch statement. This is covered later in the section on switch statements. If Statement Initialization The if statement supports a composite syntax where the tested expression is preceded by an initialization statement. At runtime, the initialization is executed before the test expression is evaluated, as illustrated in this code snippet (from the program listed earlier): if num1 := 388; num1 > 100 || num1 < 900 { fmt.Println("Currency:", num1) printCurr(num1) } The initialization statement follows normal variable declaration and initialization rules. The scope of the initialized variables is bound to the if statement block, beyond which they become unreachable. This is a commonly used idiom in Go and is supported in other flow control constructs covered in this article. Switch Statements Go also supports a switch statement similar to that found in other languages such as C or Java. The switch statement in Go achieves multi-way branching by evaluating values or expressions from case clauses, as shown in the following abbreviated source code: import "fmt" type Curr struct { Currency string Name string Country string Number int } var currencies = []Curr{ Curr{"DZD", "Algerian Dinar", "Algeria", 12}, Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"EUR", "Euro", "Belgium", 978}, Curr{"CLP", "Chilean Peso", "Chile", 152}, Curr{"EUR", "Euro", "Greece", 978}, Curr{"HTG", "Gourde", "Haiti", 332}, ... } func isDollar(curr Curr) bool { var result bool switch curr { default: result = false case Curr{"AUD", "Australian Dollar", "Australia", 36}: result = true case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: result = true case Curr{"USD", "US Dollar", "United States", 840}: result = true } return result } func isDollar2(curr Curr) bool { dollars := []Curr{currencies[2], currencies[6], currencies[9]} switch curr { default: return false case dollars[0]: fallthrough case dollars[1]: fallthrough case dollars[2]: return true } return false } func isEuro(curr Curr) bool { switch curr { case currencies[2], currencies[4], currencies[10]: return true default: return false } } func main() { curr := Curr{"EUR", "Euro", "Italy", 978} if isDollar(curr) { fmt.Printf("%+v is Dollar currency\n", curr) } else if isEuro(curr) { fmt.Printf("%+v is Euro currency\n", curr) } else { fmt.Println("Currency is not Dollar or Euro") } dol := Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344} if isDollar2(dol) { fmt.Println("Dollar currency found:", dol) } } The switch statement in Go has some interesting properties and rules that make it easy to use and reason about.
Semantically, Go's switch-statement can be used in two contexts: an expression-switch statement and a type-switch statement. The break statement can be used to escape out of a switch code block early. The switch statement can include a default case when no other case expressions evaluate to a match. There can only be one default case and it may be placed anywhere within the switch block. Using Expression Switches Expression switches are flexible and can be used in many contexts where the control flow of a program needs to follow multiple paths. An expression switch supports many attributes, as outlined in the following points. Expression switches can test values of any type. For instance, the following code snippet (from the previous program listing) tests values of the struct type Curr: func isDollar(curr Curr) bool { var result bool switch curr { default: result = false case Curr{"AUD", "Australian Dollar", "Australia", 36}: result = true case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: result = true case Curr{"USD", "US Dollar", "United States", 840}: result = true } return result } The expressions in case clauses are evaluated from left to right, top to bottom, until a value (or expression) is found that is equal to that of the switch expression. Upon encountering the first case that matches the switch expression, the program will execute the statements for the case block and then immediately exit the switch block. Unlike other languages, the Go case statement does not need to use a break to avoid falling through to the next case. For instance, calling isDollar(Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}) will match the second case statement in the function above. The code will set result to true and exit the switch code block immediately. Case clauses can have multiple values (or expressions) separated by commas, with a logical OR operator implied between them. For instance, in the following snippet, the switch expression curr is tested against the values currencies[2], currencies[4], or currencies[10] using one case clause until a match is found: func isEuro(curr Curr) bool { switch curr { case currencies[2], currencies[4], currencies[10]: return true default: return false } } The switch statement is the cleaner and preferred idiomatic approach to writing complex conditional statements in Go. This is evident when the snippet above is compared to the following, which does the same comparison using if statements: func isEuro(curr Curr) bool { if curr == currencies[2] || curr == currencies[4] || curr == currencies[10] { return true } else { return false } } Fallthrough Cases There is no automatic fall through in Go's case clauses as there is in the C or Java switch statements. Recall that a switch block will exit after executing its first matching case. The code must explicitly place the fallthrough keyword, as the last statement in a case block, to force the execution flow to fall through to the successive case block. The following code snippet shows a switch statement with a fallthrough in each case block: func isDollar2(curr Curr) bool { switch curr { case Curr{"AUD", "Australian Dollar", "Australia", 36}: fallthrough case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: fallthrough case Curr{"USD", "US Dollar", "United States", 840}: return true default: return false } } When a case is matched, the fallthrough statements cascade down to the first statement of the successive case block. So if curr = Curr{"AUD", "Australian Dollar", "Australia", 36}, the first case will be matched.
Then the flow cascades down to the first statement of the second case block, which is also a fallthrough statement. This causes the first statement, return true, of the third case block to execute. This is functionally equivalent to the following snippet: switch curr { case Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}, Curr{"USD", "US Dollar", "United States", 840}: return true default: return false } Expressionless Switches Go supports a form of the switch statement that does not specify an expression. In this format, each case expression must evaluate to a Boolean true value. The following abbreviated source code illustrates the use of an expressionless switch statement, as listed in the function find(). The function loops through the slice of Curr values to search for a match based on field values in the struct passed in: import ( "fmt" "strings" ) type Curr struct { Currency string Name string Country string Number int } var currencies = []Curr{ Curr{"DZD", "Algerian Dinar", "Algeria", 12}, Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"EUR", "Euro", "Belgium", 978}, Curr{"CLP", "Chilean Peso", "Chile", 152}, ... } func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] switch { case strings.Contains(c.Currency, name), strings.Contains(c.Name, name), strings.Contains(c.Country, name): fmt.Println("Found", c) } } } Notice that in the previous example, the switch statement in the function find() does not include an expression. Each case expression is separated by a comma and must evaluate to a Boolean value, with an implied OR operator between each case. The previous switch statement is equivalent to the following use of an if statement to achieve the same logic: func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] if strings.Contains(c.Currency, name) || strings.Contains(c.Name, name) || strings.Contains(c.Country, name) { fmt.Println("Found", c) } } } Switch Initializer The switch keyword may be immediately followed by a simple initialization statement where variables, local to the switch code block, may be declared and initialized. This convenient syntax uses a semicolon between the initializer statement and the switch expression to declare variables that may appear anywhere in the switch code block. The following code sample shows how this is done by initializing two variables, name and curr, as part of the switch declaration: func assertEuro(c Curr) bool { switch name, curr := "Euro", "EUR"; { case c.Name == name: return true case c.Currency == curr: return true } return false } The previous code snippet uses an expressionless switch statement with an initializer. Notice the trailing semicolon to indicate the separation between the initialization statement and the expression area for the switch. In the example, however, the switch expression is empty. Type Switches Given Go's strong type support, it should be of little surprise that the language supports the ability to query type information. The type switch is a statement that uses the Go interface type to compare the underlying type information of values (or expressions). A full discussion on interface types and type assertion is beyond the scope of this section. For now, all you need to know is that Go offers the type interface{}, or the empty interface, as a super type that is implemented by all other types in the type system.
When a value is assigned type interface{}, it can be queried using the type switch, as shown in the function findAny() in the following code snippet, to query information about its underlying type. func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] switch { case strings.Contains(c.Currency, name), strings.Contains(c.Name, name), strings.Contains(c.Country, name): fmt.Println("Found", c) } } } func findNumber(num int) { for _, curr := range currencies { if curr.Number == num { fmt.Println("Found", curr) } } } func findAny(val interface{}) { switch i := val.(type) { case int: findNumber(i) case string: find(i) default: fmt.Printf("Unable to search with type %T\n", val) } } func main() { findAny("Peso") findAny(404) findAny(978) findAny(false) } The function findAny() takes an interface{} as its parameter. The type switch is used to determine the underlying type and value of the variable val using the type assertion expression: switch i := val.(type) Notice the use of the keyword type in the type assertion expression. Each case clause will be tested against the type information queried from val.(type). Variable i will be assigned the actual value of the underlying type and is used to invoke a function with the respective value. The default block is invoked to guard against any unexpected type assigned to the val parameter. The function findAny() may then be invoked with values of diverse types, as shown in the following code snippet. findAny("Peso") findAny(404) findAny(978) findAny(false) Summary This article gave a walkthrough of the mechanisms of control flow in Go, including the if and switch statements. While Go’s flow control constructs appear simple and easy to use, they are powerful and implement all the branching primitives expected of a modern language. Resources for Article: Further resources on this subject: Game Development Using C++ [Article] Boost.Asio C++ Network Programming [Article] Introducing the Boost C++ Libraries [Article]

Solving Many-to-Many Relationship in Dimensional Modeling

Packt
28 Dec 2009
3 min read
Bridge table solution We will use a simplified book sales dimensional model as an example to demonstrate our bridge solution. Our book sales model initially has the SALES_FACT fact table and two dimension tables: BOOK_DIM and DATE_DIM. The granularity of the model is sales amount by date (daily) and by book.

Assume the BOOK_DIM table has five rows:

BOOK_SK  TITLE                 AUTHOR
1        Programming in Java   King, Chan
3        Learning Python       Simpson
2        Introduction to BIRT  Chan, Gupta, Simpson (Editor)
4        Advanced Java         King, Chan
5        Beginning XML         King, Chan (Foreword)

The DATE_DIM table has three rows:

DATE_SK  DT
1        11-DEC-2009
2        12-DEC-2009
3        13-DEC-2009

And, the SALES_FACT table has ten rows:

DATE_SK  BOOK_SK  SALES_AMT
1        1        1000
1        2        2000
1        3        3000
1        4        4000
2        2        2500
2        3        3500
2        4        4500
2        5        5500
3        3        8000
3        4        8500

Note that:

The columns with _sk suffixes in the dimension tables are surrogate keys of the dimension tables; these surrogate keys relate the rows of the fact table to the rows in the dimension tables.

King and Chan have collaborated in three books; two as co-authors, while in the “Beginning XML” Chan’s contribution is writing its foreword. Chan also co-authors the “Introduction to BIRT”.

Simpson singly writes the “Learning Python” and is an editor for “Introduction to BIRT”.

To analyze daily book sales, you simply run a query, joining the dimension tables to the fact table:

SELECT dt, title, sales_amt
FROM sales_fact s, date_dim d, book_dim b
WHERE s.date_sk = d.date_sk
AND s.book_sk = b.book_sk

This query produces the result showing the daily sales amount of every book that has a sale:

DT         TITLE                 SALES_AMT
11-DEC-09  Advanced Java         4000
11-DEC-09  Introduction to BIRT  2000
11-DEC-09  Learning Python       3000
11-DEC-09  Programming in Java   1000
12-DEC-09  Advanced Java         4500
12-DEC-09  Beginning XML         5500
12-DEC-09  Introduction to BIRT  2500
12-DEC-09  Learning Python       3500
13-DEC-09  Advanced Java         8500
13-DEC-09  Learning Python       8000

You will notice that the model does not allow you to readily analyze the sales by individual writer—the AUTHOR column is multi-value, not normalized, which violates the dimensional modeling rule (we can resolve this by creating a view to “bundle” the AUTHOR_DIM with the SALES_FACT tables such that the AUTHOR table connects to the view as a normal dimension. We will create the view a bit later in this section). We can solve this issue by adding an AUTHOR_DIM and its AUTHOR_GROUP bridge table.

The AUTHOR_DIM must contain all individual contributors, which you will have to extract from the books and enter into the table. In our example we have four authors:

AUTHOR_SK  NAME
1          Chan
2          King
3          Gupta
4          Simpson

The weighting_factor column in the AUTHOR_GROUP bridge table contains a fractional numeric value that determines the contribution of an author to a book. Typically the authors have equal contribution to the book they write, but you might want to have different weighting_factors for different roles; for example, an editor and a foreword writer have smaller weighting_factors than that of an author. The total of the weighting_factors for a book must always equal 1. The AUTHOR_GROUP bridge table has one surrogate key for every group of authors (a single author is considered a group that has one author only), and as many rows with that surrogate key as there are contributors in the group.
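Although the view itself is created later in the article, the general shape of an author-level query over the bridge is worth sketching here. The following is a rough sketch only: it assumes that BOOK_DIM carries the group's surrogate key in an author_group_sk column, and that the bridge table is named AUTHOR_GROUP with author_group_sk, author_sk, and weighting_factor columns; adjust the names to match the actual design.

-- Hypothetical author-level sales query; the bridge column names are assumed
SELECT a.name, SUM(s.sales_amt * g.weighting_factor) AS weighted_sales_amt
FROM sales_fact s, book_dim b, author_group g, author_dim a
WHERE s.book_sk = b.book_sk
AND b.author_group_sk = g.author_group_sk
AND g.author_sk = a.author_sk
GROUP BY a.name

Multiplying each sales amount by weighting_factor spreads a book's sales across its contributors, so the totals are not double-counted when a book has several authors.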


Performance by Design

Packt
15 Sep 2015
9 min read
In this article by Shantanu Kumar, author of the book, Clojure High Performance Programming - Second Edition, we learn how Clojure is a safe, functional programming language that brings great power and simplicity to the user. Clojure is also dynamically and strongly typed, and has very good performance characteristics. Naturally, every activity performed on a computer has an associated cost. What constitutes acceptable performance varies from one use-case and workload to another. In today's world, performance is even the determining factor for several kinds of applications. We will discuss Clojure (which runs on the JVM (Java Virtual Machine)), and its runtime environment in the light of performance, which is the goal of the book. In this article, we will study the basics of performance analysis, including the following: A whirlwind tour of how the application stack impacts performance Classifying the performance anticipations by the use cases types (For more resources related to this topic, see here.) Use case classification The performance requirements and priority vary across the different kinds of use cases. We need to determine what constitutes acceptable performance for the various kinds of use cases. Hence, we classify them to identify their performance model. When it comes to details, there is no sure shot performance recipe of any kind of use case, but it certainly helps to study their general nature. Note that in real life, the use cases listed in this section may overlap with each other. The user-facing software The performance of user facing applications is strongly linked to the user's anticipation. Having a difference of a good number of milliseconds may not be perceptible for the user but at the same time, a wait for more than a few seconds may not be taken kindly. One important element to normalize the anticipation is to engage the user by providing a duration-based feedback. A good idea to deal with such a scenario would be to start the task asynchronously in the background, and poll it from the UI layer to generate duration-based feedback for the user. Another way could be to incrementally render the results to the user to even out the anticipation. Anticipation is not the only factor in user facing performance. Common techniques like staging or precomputation of data, and other general optimization techniques can go a long way to improve the user experience with respect to performance. Bear in mind that all kinds of user facing interfaces fall into this use case category—the Web, mobile web, GUI, command line, touch, voice-operated, gesture...you name it. Computational and data-processing tasks Non-trivial compute intensive tasks demand a proportional amount of computational resources. All of the CPU, cache, memory, efficiency and the parallelizability of the computation algorithms would be involved in determining the performance. When the computation is combined with distribution over a network or reading from/staging to disk, I/O bound factors come into play. This class of workloads can be further subclassified into more specific use cases. A CPU bound computation A CPU bound computation is limited by the CPU cycles spent on executing it. Arithmetic processing in a loop, small matrix multiplication, determining whether a number is a Mersenne prime, and so on, would be considered CPU bound jobs. 
If the algorithm complexity is linked to the number of iterations/operations N, such as O(N), O(N2) and more, then the performance depends on how big N is, and how many CPU cycles each step takes. For parallelizable algorithms, performance of such tasks may be enhanced by assigning multiple CPU cores to the task. On virtual hardware, the performance may be impacted if the CPU cycles are available in bursts. A memory bound task A memory bound task is limited by the availability and bandwidth of the memory. Examples include large text processing, list processing, and more. For example, specifically in Clojure, the (reduce f (pmap g coll)) operation would be memory bound if coll is a large sequence of big maps, even though we parallelize the operation using pmap here. Note that higher CPU resources cannot help when memory is the bottleneck, and vice versa. Lack of availability of memory may force you to process smaller chunks of data at a time, even if you have enough CPU resources at your disposal. If the maximum speed of your memory is X and your algorithm on single the core accesses the memory at speed X/3, the multicore performance of your algorithm cannot exceed three times the current performance, no matter how many CPU cores you assign to it. The memory architecture (for example, SMP and NUMA) contributes to the memory bandwidth in multicore computers. Performance with respect to memory is also subject to page faults. A cache bound task A task is cache bound when its speed is constrained by the amount of cache available. When a task retrieves values from a small number of repeated memory locations, for example, a small matrix multiplication, the values may be cached and fetched from there. Note that CPUs (typically) have multiple layers of cache, and the performance will be at its best when the processed data fits in the cache, but the processing will still happen, more slowly, when the data does not fit into the cache. It is possible to make the most of the cache using cache-oblivious algorithms. A higher number of concurrent cache/memory bound threads than CPU cores is likely to flush the instruction pipeline, as well as the cache at the time of context switch, likely leading to a severely degraded performance. An input/output bound task An input/output (I/O) bound task would go faster if the I/O subsystem, that it depends on, goes faster. Disk/storage and network are the most commonly used I/O subsystems in data processing, but it can be serial port, a USB-connected card reader, or any I/O device. An I/O bound task may consume very few CPU cycles. Depending on the speed of the device, connection pooling, data compression, asynchronous handling, application caching, and more, may help in performance. One notable aspect of I/O bound tasks is that performance is usually dependent on the time spent waiting for connection/seek, and the amount of serialization that we do, and hardly on the other resources. In practice, many data processing workloads are usually a combination of CPU bound, memory bound, cache bound, and I/O bound tasks. The performance of such mixed workloads effectively depends on the even distribution of CPU, cache, memory, and I/O resources over the duration of the operation. A bottleneck situation arises only when one resource gets too busy to make way for another. Online transaction processing The online transaction processing (OLTP) systems process the business transactions on demand. 
It can sit behind systems such as a user-facing ATM machine, point-of-sale terminal, a network-connected ticket counter, ERP systems, and more. The OLTP systems are characterized by low latency, availability, and data integrity. They run day-to-day business transactions. Any interruption or outage is likely to have a direct and immediate impact on the sales or service. Such systems are expected to be designed for resiliency rather than the delayed recovery from failures. When the performance objective is unspecified, you may like to consider graceful degradation as a strategy. It is a common mistake to ask the OLTP systems to answer analytical queries; something that they are not optimized for. It is desirable of an informed programmer to know the capability of the system, and suggest design changes as per the requirements. Online analytical processing The online analytical processing (OLAP) systems are designed to answer analytical queries in short time. They typically get data from the OLTP operations, and their data model is optimized for querying. They basically provide for consolidation (roll-up), drill-down and slicing, and dicing of data for analytical purposes. They often use specialized data stores that can optimize the ad-hoc analytical queries on the fly. It is important for such databases to provide pivot-table like capability. Often, the OLAP cube is used to get fast access to the analytical data. Feeding the OLTP data into the OLAP systems may entail workflows and multistage batch processing. The performance concern of such systems is to efficiently deal with large quantities of data, while also dealing with inevitable failures and recovery. Batch processing Batch processing is automated execution of predefined jobs. These are typically bulk jobs that are executed during off-peak hours. Batch processing may involve one or more stages of job processing. Often batch processing is clubbed with work-flow automation, where some workflow steps are executed offline. Many of the batch processing jobs work on staging of data, and on preparing data for the next stage of processing to pick up. Batch jobs are generally optimized for the best utilization of the computing resources. Since there is little to moderate the demand to lower the latencies of some particular subtasks, these systems tend to optimize for throughput. A lot of batch jobs involve largely I/O processing and are often distributed over a cluster. Due to distribution, the data locality is preferred when processing the jobs; that is, the data and processing should be local in order to avoid network latency in reading/writing data. Summary We learned about the basics of what it is like to think more deeply about performance. The performance of Clojure applications depend on various factors. For a given application, understanding its use cases, design and implementation, algorithms, resource requirements and alignment with the hardware, and the underlying software capabilities, is essential. Resources for Article: Further resources on this subject: Big Data [article] The Observer Pattern [article] Working with Incanter Datasets [article]
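The article's examples are in Clojure, but the use-case classification itself is language-neutral. As a rough illustration only, here is a Python sketch of the usual rule of thumb: CPU bound work benefits from more cores (separate processes here), while I/O bound work benefits from overlapping its waits (threads or asynchronous I/O). The workloads below are toy stand-ins, not benchmarks.

# A rough, illustrative Python sketch of the CPU-bound vs I/O-bound rule of
# thumb discussed above; the workloads are toy stand-ins, not benchmarks.
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n):
    # Pure arithmetic in a loop: limited by CPU cycles.
    total = 0
    for i in range(n):
        total += i * i
    return total

def io_bound(delay):
    # Simulated network/disk wait: the CPU is mostly idle.
    time.sleep(delay)
    return delay

if __name__ == "__main__":
    # CPU-bound work: assign more cores (separate processes sidestep the GIL).
    with ProcessPoolExecutor() as pool:
        t0 = time.perf_counter()
        list(pool.map(cpu_bound, [2_000_000] * 8))
        print("cpu-bound with processes:", round(time.perf_counter() - t0, 2), "s")

    # I/O-bound work: threads (or async I/O) overlap the waiting time.
    with ThreadPoolExecutor(max_workers=8) as pool:
        t0 = time.perf_counter()
        list(pool.map(io_bound, [0.5] * 8))
        print("io-bound with threads:", round(time.perf_counter() - t0, 2), "s")

As the article points out, adding CPU cores does not help a memory- or I/O-bound task; measuring which resource is actually saturated should come before choosing a strategy like this.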


Using Groovy Closures Instead of Template Method

Packt
23 Dec 2010
3 min read
(For more resources on Groovy, see here.)

Template Method Pattern Overview

The template method pattern often starts with the thought: "I have a piece of code that I want to use again, but I can't use it 100%. I want to change a few lines to make it useful." In general, using this pattern involves creating an abstract class and varying its implementation through abstract hook methods. Subclasses implement these abstract hook methods to solve their specific problem. This approach is very effective and is used extensively in frameworks. However, closures provide an elegant alternative.

Sample HttpBuilder Request

It is best to illustrate the closure approach with an example. Recently I was developing a consumer of REST web services with HttpBuilder. With HttpBuilder, the client simply creates the class and issues an HTTP call. The framework waits for a response and provides hooks for processing. Many of the requests being made were very similar to one another; only the URI was different. In addition, each request needed to process the returned XML differently, as the XML received would vary. I wanted to use the same request code, but vary the XML processing. To summarize the problem:

HttpBuilder code should be reused
Different URIs should be sent out with the same HttpBuilder code
Different XML should be processed with the same HttpBuilder code

Here is my first draft of HttpBuilder code. Note the call to convertXmlToCompanyDomainObject(xml).

static String URI_PREFIX = '/someApp/restApi/'

private List issueHttpBuilderRequest(RequestObject requestObj, String uriPath) {
    def http = new HTTPBuilder("http://localhost:8080/")
    def parsedObjectsFromXml = []
    http.request(Method.POST, ContentType.XML) { req ->
        // set uri path on the delegate
        uri.path = URI_PREFIX + uriPath
        uri.query = [
            company: requestObj.company,
            date: requestObj.date,
            type: requestObj.type
        ]
        headers.'User-Agent' = 'Mozilla/5.0'
        // when response is a success, parse the gpath xml
        response.success = { resp, xml ->
            assert resp.statusLine.statusCode == 200
            // store the list
            parsedObjectsFromXml = convertXmlToCompanyDomainObject(xml)
        }
        // called only for a 404 (not found) status code:
        response.'404' = { resp ->
            log.info 'HTTP status code: 404 Not found'
        }
    }
    parsedObjectsFromXml
}

private List convertXmlToCompanyDomainObject(GPathResult xml) {
    def list = []
    // .. implementation to parse the xml and turn into objects
}

As you can see, the URI is passed as a parameter to issueHttpBuilderRequest. This solves the problem of sending different URIs, but what about parsing the different XML formats that are returned?

Using Template Method Pattern

The following diagram illustrates applying the template method pattern to this problem. In summary, we need to move the issueHttpBuilderRequest code to an abstract class and provide an abstract method convertXmlToDomainObjects(). Subclasses would provide the appropriate XML conversion implementation.
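Before moving the request code into an abstract class, it may help to see the two shapes side by side in miniature. The sketch below is plain Python rather than Groovy and is only an illustration of the idea, not the book's code: first the template method form with an abstract hook, then the closure-style form where the varying step is simply passed in as a function. All names and the fake response map are invented.

# Illustration only (the article's real example uses Groovy and HttpBuilder):
# the same request/parse split expressed first as Template Method, then by
# passing the varying step in as a function. All names here are made up.
from abc import ABC, abstractmethod

FAKE_RESPONSES = {"/companies": "<xml>companies</xml>",
                  "/orders": "<xml>orders</xml>"}

class BaseRequest(ABC):                      # Template Method version
    def issue_request(self, uri_path):
        xml = FAKE_RESPONSES[uri_path]       # shared "HTTP" plumbing
        return self.convert_xml(xml)         # varying step = abstract hook

    @abstractmethod
    def convert_xml(self, xml): ...

class CompanyRequest(BaseRequest):
    def convert_xml(self, xml):
        return ["company parsed from " + xml]

def issue_request(uri_path, convert_xml):    # closure/function version
    xml = FAKE_RESPONSES[uri_path]           # same shared plumbing
    return convert_xml(xml)                  # varying step passed in

print(CompanyRequest().issue_request("/companies"))
print(issue_request("/orders", lambda xml: ["order parsed from " + xml]))

The second form needs no subclass per XML format; each caller supplies only the parsing step, which is essentially what the Groovy closure version developed in this article achieves.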


Observability as code, secrets as a service, and chaos katas: ThoughtWorks outlines key engineering techniques to trial and assess

Richard Gall
14 Nov 2018
5 min read
ThoughtWorks has just published vol. 19 of its essential radar report. As always, it's a vital insight into what's beginning to emerge in the technology field. In the techniques quadrant of its radar, there were some really interesting new entries. Let's take a look at some of them now, so you can better plan and evaluate your roadmap and skill set for 2019. 8 of the best new techniques you should be trialling (according to ThoughtWorks) 1% canary: a way to build better feedback loops This sounds like a weird one, but the concept is simple. It's essentially about building a quick feedback loop to a tiny segment of customers - say, 1%. This can allow engineering teams to learn things quickly and make changes on other aspects of the project as it evolves. Bounded buy: a smarter way to buy out-of-the-box software solutions Bounded buy mitigates the scope creep that can cause headaches for businesses dealing with out-of-the-box software. It means those responsible for purchasing software focus only on solutions that are modular, with each 'piece' directly connecting into a particular department's needs or workflow. Crypto shredding: securing sensitive data Crypto shredding is a method of securing data that might otherwise be easily replicated or copied. Essentially, it overwrites sensitive data with encryption keys which can easily be removed or deleted. It adds an extra layer of control over a large data set - a technique that could be particularly useful in a field like healthcare. Four key metrics - focus on what's most important to build a high performance team Building a high performance team, can be challenging. Accelerate, the team behind the State of DevOps report, highlighted key drivers that engineers and team leaders should focus on: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage. According to ThoughtWorks "each metric creates a virtuous cycle and focuses the teams on continuous improvement." Observability as code - breaking through the limits of traditional monitoring tools Observability has emerged as a bit of a buzzword over the last 12 months. But in the context of microservices, and increased complexity in software architecture, it is nevertheless important. However, the means through which you 'do' observability - a range of monitoring tools and dashboards - can be limiting in terms of making adjustments and replicating dashboards. This is why treating observability as code is going to become increasingly more important. It makes sense - if infrastructure as code is the dominant way we think about building software, why shouldn't it be the way we monitor it too? Run cost as architecture fitness function There's a wide assumption that serverless can save you money. This is true when you're starting out, or want to do something quickly, but it's less true as you scale up. If you're using serverless functions repeatedly, you're likely to be paying a lot - more than if you has a slightly less fashionable cloud or on premise server. To combat this complacency, you should instead watch how much services cost against the benefit delivered by them. Seems obvious, but easy to miss if you've just got excited about going serverless. Secrets as a service Without wishing to dampen what sounds incredibly cool, secrets as a service are ultimately just elaborate password managers. They can help organizations more easily decouple credentials, API keys from their source code, a move which should ensure improved security - and simplicity. 
By using credential rotation, organizations can be much better prepared at tackling and mitigating any security issues. AWS has 'Secrets Manager' while HashiCorp's Vault offers similar functionality. Security chaos engineering In the last edition of Radar, security chaos engineering was in the assess phase - which means ThoughtWorks thinks it's worth looking at, but maybe too early to deploy. With volume 19, security chaos engineering has moved into trial. Clearly, while chaos engineering more broadly has seen slower adoption, it would seem that over the last 12 months the security field has taken chaos engineering to heart. 2 new software engineering techniques to assess Chaos katas If chaos engineering is finding it hard to gain mainstream adoption, perhaps chaos katas is the way forward. This is essentially a technique that helps engineers deploy chaos practices in their respective domains using the training approach known as kata - a Japanese word that simply refers to a set of choreographed movements. In this context, the 'katas' are a set of code patterns that implement failures in a structured way, which engineers can then identify and explore. This is essentially a bottom up way of doing chaos engineering that also gives engineers a deeper insight into their software infrastructure. Infrastructure configuration scanner The question of who should manage your infrastructure is still a tricky one, with plenty of conflicting perspectives. However, from a productivity and agility perspective, putting the infrastructure in the hands of engineers makes a lot of sense. Of course, this could feel like an extra burden - but with an infrastructure configuration scanner, like Scout2 or Watchmen, engineers can ensure that everything is configured correctly. Software engineering techniques need to maintain simplicity as complexity increases There's clearly a diverse range of techniques on the ThoughtWorks Radar. Ultimately, however, the picture that emerges is one where efficiency and observability are key. A crucial part of software engineering will managing increased complexity and developing new tools and processes to instil some degree of simplicity and clarity. Was there anything ThoughtWorks missed?
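Most of these techniques are process-level rather than code-level, but the four key metrics are easy to make concrete. The following Python sketch computes lead time, deployment frequency, change fail percentage, and MTTR from a hypothetical list of deployment records; the record format and numbers are invented purely for illustration.

# Hypothetical sketch: computing the four key metrics from a made-up list of
# deployment records. The data shape and values are invented for illustration.
from datetime import datetime, timedelta
from statistics import mean

deployments = [
    # (commit time, deploy time, failed?, minutes to restore if failed)
    (datetime(2018, 11, 1, 9),  datetime(2018, 11, 1, 15), False, 0),
    (datetime(2018, 11, 2, 10), datetime(2018, 11, 3, 11), True, 45),
    (datetime(2018, 11, 5, 8),  datetime(2018, 11, 5, 12), False, 0),
]

lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
failures = [d for d in deployments if d[2]]

days_covered = (max(d[1] for d in deployments) -
                min(d[1] for d in deployments)).days or 1

print("lead time (avg):", sum(lead_times, timedelta()) / len(lead_times))
print("deployment frequency:", len(deployments) / days_covered, "per day")
print("change fail percentage:", 100 * len(failures) / len(deployments), "%")
print("MTTR:", mean(d[3] for d in failures), "minutes")

In practice the records would come from a CI/CD or incident system rather than being hard-coded; the point is only that the four metrics are cheap to compute once deployments are tracked.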

Learning RSLogix 5000 – Buffering I/O Module Input and Output Values

Packt
03 Sep 2015
10 min read
 In the following article by Austin Scott, the author of Learning RSLogix 5000 Programming, you will be introduced to the high performance, asynchronous nature of the Logix family of controllers and the requirement for the buffering I/O module data it drives. You will learn various techniques for the buffering I/O module values in RSLogix 5000 and Studio 5000 Logix Designer. You will also learn about the IEC Languages that do not require the input or output module buffering techniques to be applied to them. In order to understand the need for buffering, let's start by exploring the evolution of the modern line of Rockwell Automation Controllers. (For more resources related to this topic, see here.) ControlLogix controllers The ControlLogix controller was first launched in 1997 as a replacement for Allen Bradley's previous large-scale control platform. The PLC-5. ControlLogix represented a significant technological step forward, which included a 32-bit ARM-6 RISC-core microprocessor and the ABrisc Boolean processor combined with a bus interface on the same silicon chip. At launch, the Series 5 ControlLogix controllers (also referred to as L5 and ControlLogix 5550, which has now been superseded by the L6 and L7 series controllers) were able to execute code three times faster than PLC-5. The following is an illustration of the original ControlLogix L5 Controller: ControlLogix Logix L5 Controller The L5 controller is considered to be a PAC (Programmable Automation Controller) rather than a traditional PLC (Programmable Logic Controller), due to its modern design, power, and capabilities beyond a traditional PLC (such as motion control, advanced networking, batching and sequential control). ControlLogix represented a significant technological step forward for Rockwell Automation, but this new technology also presented new challenges for automation professionals. ControlLogix was built using a modern asynchronous operating model rather than the more traditional synchronous model used by all the previous generations of controllers. The asynchronous operating model requires a different approach to real-time software development in RSLogix 5000 (now known in version 20 and higher as Studio 5000 Logix Designer). Logix operating cycle The entire Logix family of controllers (ControlLogix and CompactLogix) have diverged from the traditional synchronous PLC scan architecture in favor of a more efficient asynchronous operation. Like most modern computer systems, asynchronous operation allows the Logix controller to handle multiple Tasks at the same time by slicing the processing time between each task. The continuous update of information in an asynchronous processor creates some programming challenges, which we will explore in this article. The following diagram illustrates the difference between synchronous and asynchronous operation. Synchronous versus Asynchronous Processor Operation Addressing module I/O data Individual channels on a module can be referenced in your Logix Designer / RSLogix 5000 programs using it's address. An address gives the controller directions to where it can find a particular piece of information about a channel on one of your modules. The following diagram illustrates the components of an address in RsLogix 5000 or Studio 5000 Logix Designer: The components of an I/O Module Address in Logix Module I/O tags can be viewed using the Controller Tags window, as the following screen shot illustrates. 
I/O Module Tags in Studio 5000 Logix Designer Controller Tags Window Using the module I/O tags, input and output module data can be directly accessed anywhere within a logic routine. However, it is recommended that we buffer module I/O data before we evaluate it in Logic. Otherwise, due to the asynchronous tag value updates in our I/O modules, the state of our process values could change part way through logic execution, thus creating unpredictable results. In the next section, we will introduce the concept of module I/O data buffering. Buffering module I/O data In the olden days of PLC5s and SLC500s, before we had access to high-performance asynchronous controllers like the ControlLogix, SoftLogix and CompactLogix families, program execution was sequential (synchronous) and very predictable. In asynchronous controllers, there are many activities happening at the same time. Input and output values can change in the middle of a program scan and put the program in an unpredictable state. Imagine a program starting a pump in one line of code and closing a valve directly in front of that pump in the next line of code, because it detected a change in process conditions. In order to address this issue, we use a technique call buffering and, depending on the version of Logix you are developing on, there are a few different methods of achieving this. Buffering is a technique where the program code does not directly access the real input or output tags on the modules during the execution of a program. Instead, the input and output module tags are copied at the beginning of a programs scan to a set of base tags that will not change state during the program's execution. Think of buffering as taking a snapshot of the process conditions and making decisions on those static values rather than live values that are fluctuating every millisecond. Today, there is a rule in most automation companies that require programmers to write code that "Buffers I/O" data to base tags that will not change during a programs execution. The two widely accepted methods of buffering are: Buffering to base tags Program parameter buffering (only available in the Logix version 24 and higher) Do not underestimate the importance of buffering a program's I/O. I worked on an expansion project for a process control system where the original programmers had failed to implement buffering. Once a month, the process would land in a strange state, which the program could not recover from. The operators had attributed these problem to "Gremlins" for years, until I identified and corrected the issue. Buffering to base tags Logic can be organized into manageable pieces and executed based on different intervals and conditions. The buffering to base tags practice takes advantage of Logix's ability to organize code into routines. The default ladder logic routine that is created in every new Logix project is called MainRoutine. 
The recommended best practice for buffering tags in ladder logic is to create three routines: One for reading input values and buffering them One for executing logic One for writing the output values from the buffered values The following ladder logic excerpt is from MainRoutine of a program that implements Input and Output Buffering: MainRoutine Ladder Logic Routine with Input and Output Buffering Subroutine Calls The following ladder logic is taken from the BufferInputs routine and demonstrates the buffering of digital input module tag values to Local tags prior to executing our PumpControl routine: Ladder Logic Routine with Input Module Buffering After our input module values have been buffered to Local tags, we can execute our processlogic in our PumpControl routine without having to worry about our values changing in the middle of the routine's execution. The following ladder logic code determines whether all the conditions are met to run a pump: Pump Control Ladder Logic Routine Finally, after all of our Process Logic has finished executing, we can write the resulting values to our digital output modules. The following ladder logic BufferOutputs, routine copies the resulting RunPump value to the digital output module tag. Ladder Logic Routine with Output Module Buffering We have now buffered our module inputs and module outputs in order to ensure they do not change in the middle of a program execution and potentially put our process into an undesired state. Buffering Structured Text I/O module values Just like ladder logic, Structured Text I/O module values should be buffered at the beginning of a routine or prior to executing a routine in order to prevent the values from changing mid-execution and putting the process into a state you could not have predicted. Following is an example of the ladder logic buffering routines written in Structured Text (ST)and using the non-retentive assignment operator: (* I/O Buffering in Structured Text Input Buffering *) StartPump [:=] Local:2:I.Data[0].0; HighPressure [:=] Local:2:I.Data[0].1; PumpStartManualOverride [:=] Local:2:I.Data[0].2; (* I/O Buffering in Structured Text Output Buffering *) Local:3:O.Data[0].0 [:=] RunPump; Function Block Diagram (FBD) and Sequential Function Chart (SFC) I/O module buffering Within Rockwell Automation's Logix platform, all of the supported IEC languages (ladder logic, structured text, function block, and sequential function chart) will compile down to the same controller bytecode language. The available functions and development interface in the various Logix programming languages are vastly different. Function Block Diagrams (FBD) and Sequential Function Charts (SFC) will always automatically buffer input values prior to executing Logic. Once a Function Block Diagram or a Sequential Function Chart has completed the execution of their logic, they will write all Output Module values at the same time. There is no need to perform Buffering on FBD or SFC routines, as it is automatically handled. Buffering using program parameters A program parameter is a powerful new feature in Logix that allows the association of dynamic values to tags and programs as parameters. The importance of program parameters is clear by the way they permeate the user interface in newer versions of Logix Designer (version 24 and higher). Program parameters are extremely powerful, but the key benefit to us for using them is that they are automatically buffered. 
This means that we could have effectively created the same result in one ladder logic rung rather than the eight we created in the previous exercise. There are four types of program parameters: Input: This program parameter is automatically buffered and passed into a program on each scan cycle. Output: This program parameter is automatically updated at the end of a program (as a result of executing that program) on each scan cycle. It is similar to the way we buffered our output module value in the previous exercise. InOut: This program parameter is updated at the start of the program scan and the end of the program scan. It is also important to note that, unlike the input and output parameters; the InOut parameter is passed as a pointer in memory. A pointer shares a piece of memory with other processes rather than creating a copy of it, which means that it is possible for an InOut parameter to change its value in the middle of a program scan. This makes InOut program parameters unsuitable for buffering when used on their own. Public: This program parameterbehaves like a normal controller tag and can be connected to input, output, and InOut parameters. it is similar to the InOut parameter, public parameters that are updated globally as their values are changed. This makes program parameters unsuitable for buffering, when used on their own. Primarily public program parameters are used for passing large data structures between programs on a controller. In Logix Designer version 24 and higher, a program parameter can be associated with a local tag using the Parameters and Local Tags in the Control Organizer (formally called "Program Tags"). The module input channel can be associated with a base tag within your program scope using the Parameter Connections. Add the module input value as a parameter connection. The previous screenshot demonstrates how we would associate the input module channel with our StartPump base tag using the Parameter Connection value. Summary In this article, we explored the asynchronous nature of the Logix family of controllers. We learned the importance of buffering input module and output module values for ladder logic routines and structured text routines. We also learned that, due to the way Function Block Diagrams (FBD/FB) and Sequential Function Chart (SFC) Routines execute, there is no need to buffer input module or output module tag values. Finally, we introduced the concept of buffering tags using program parameters in version 24 and high of Studio 5000 Logix Designer. Resources for Article: Further resources on this subject: vROps – Introduction and Architecture[article] Ensuring Five-star Rating in the MarketPlace[article] Building Ladder Diagram programs (Simple) [article]
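The buffering idea itself is not tied to ladder logic or Structured Text. Purely as a conceptual illustration, and not as controller code, the following Python sketch mimics a scan cycle that snapshots the inputs, runs the logic against the frozen copy, and only then writes the outputs. The tag names follow the pump example above, but the exact interlock logic shown is a guess made for illustration.

# Conceptual sketch only - ordinary Python, not controller code. It mimics the
# buffer-inputs / execute-logic / buffer-outputs scan cycle described above.
import random

def read_module_inputs():
    # Stand-in for asynchronously updated module tags; real values may change
    # at any moment, which is exactly why we snapshot them first.
    return {
        "StartPump": random.choice([True, False]),
        "HighPressure": random.choice([True, False]),
        "PumpStartManualOverride": False,
    }

def pump_control(inp):
    # Logic runs only against the frozen snapshot. The interlock below is a
    # guessed example, not the book's actual rung logic.
    run_pump = (inp["StartPump"] or inp["PumpStartManualOverride"]) \
               and not inp["HighPressure"]
    return {"RunPump": run_pump}

def write_module_outputs(out):
    print("writing outputs:", out)

def scan_cycle():
    buffered_inputs = read_module_inputs()    # BufferInputs routine
    outputs = pump_control(buffered_inputs)   # PumpControl routine
    write_module_outputs(outputs)             # BufferOutputs routine

for _ in range(3):
    scan_cycle()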


Intro to Docker Part 2: Developing a Simple Application

Julian Gindi
30 Oct 2015
5 min read
In my last post, we learned some basic concepts related to Docker, and we learned a few basic operations for using Docker containers. In this post, we will develop a simple application using Docker. Along the way we will learn how to use Dockerfiles and Docker's amazing 'compose' feature to link multiple containers together. The Application We will be building a simple clone of Reddit's very awesome and mysterious "The Button". The application will be written in Python using the Flask web framework, and will use Redis as it's storage backend. If you do not know Python or Flask, fear not, the code is very readable and you are not required to understand the code to follow along with the Docker-specific sections. Getting Started Before we get started, we need to create a few files and directories. First, go ahead and create a Dockerfile, requirements.txt (where we will specify project-specific dependencies), and a main app.py file. touch Dockerfile requirements.txt app.py Next we will create a simple endpoint that will return "Hello World". Go ahead and edit your app.py file to look like such: from flask import Flask app = Flask(__name__) @app.route('/') def main(): return 'Hello World!' if __name__ == '__main__': app.run('0.0.0.0') Now we need to tell Docker how to build a container containing all the dependencies and code needed to run the app. Edit your Dockerfile to look like such: 1 FROM python:2.7 2 3 RUN mkdir /code 4 WORKDIR /code 5 6 ADD requirements.txt /code/ 7 RUN pip install -r requirements.txt 8 9 ADD . /code/1011 EXPOSE 5000 Before we move on, let me explain the basics of Dockerfiles. Dockerfiles A Dockerfile is a configuration file that specifies instructions on how to build a Docker container. I will now explain each line in the Dockerfile we just created (I will reference individual lines). 1: First, we specify the base image to use as our starting point (we discussed this in more detail in the last post). Here we are using a stock Python 2.7 image. 3: Dockerfiles can container a few 'directives' that dictate certain behaviors. RUN is one such directive. It does exactly what it sounds like - runs an arbitrary command. Here, were are just making a working directory. 4: We use WORKDIR to specify the main working directory. 6: ADD allows us to selectively add files to the container during the build process. Currently, we just need to add the requirements file to tell Docker while dependencies to install. 7: We use the RUN command and python's pip package manager to install all the needed dependencies. 9: Here we add all the code in our current directory into the Docker container (add /code). 11: Finally we 'expose' the ports we will need to access. In this case, Flask will run on port 5000. Building from a Dockerfile We are almost ready to build an image from this Dockerfile, but first, let's specify the dependencies we will need in our requirements.txt file. flask==0.10.1 redis==2.10.3 I am using specific versions here to ensure that your version will work just like mine does. Once we have all these pieces in place we can build the image with the following command. > docker build -t thebutton . We are 'tagging' this image with an easy-to-remember name that we can use later. Once the build completes, we can run the container and see our message in the browser. > docker run -p 5000:5000 thebutton python app.py We are doing a few things here: The -p flag tells Docker to expose port 5000 inside the container, to port 5000 outside the container (this just makes our lives easier). 
Next we specify the image name (thebutton) and finally the command to run inside the container - python app.py - this will start the web server and server for our page. We are almost ready to view our page but first, we must discover which IP the site will be on. For linux-based systems, you can use localhost but for Mac you will need to run boot2docker ip to discover the IP address to visit. Navigate to your site (in my case it's 192.168.59.103:5000) and you should see "Hello World" printed. Congrats! You are running your first site from inside a Docker container. Putting it All Together Now, we are going to complete the app, and use Docker Compose to launch the entire project for us. This will contain two containers, one running our Flask app, and another running an instance of Redis. The great thing about docker-compose is that you can specify a system to create, and how to connect all the containers. Let's create our docker-compose.yml file now. redis: image: redis:2.8.19 web: build: . command: python app.py ports: - "5000:5000" links: - redis:redis This file specifies the two containers (web and redis). It specifies how to build each container (we are just using the stock redis image here). The web container is a bit more involved since we first build the container using our local Dockerfile (the build: . line). Than we expose port 5000 and link the Redis container to our web container. The awesome thing about linking containers this way, is that the web container automatically gets information about the redis container. In this case, there is an /etc/host called 'redis' that points to our Redis container. This allows us to configure Redis easily in our application: db = redis.StrictRedis('redis', 6379, 0) To test this all out, you can grab the complete source here. All you will need to run is docker-compose up and than access the site the same way we did before. Congratulations! You now have all the tools you need to use docker effectively! About the author Julian Gindi is a Washington DC-based software and infrastructure engineer. He currently serves as Lead Infrastructure Engineer at [iStrategylabs](isl.co) where he does everything from system administration to designing and building deployment systems. He is most passionate about Operating System design and implementation, and in his free time contributes to the Linux Kernel.
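The post stops at the Hello World endpoint plus the Redis link, so as a guess at where the tutorial is heading, and not the author's final code, here is one possible shape of the finished app.py for the button clone. It relies on the redis hostname provided by the compose link described above.

# One possible shape of the finished app.py - an illustrative guess, not the
# author's final code. It uses the 'redis' hostname provided by the compose link.
from flask import Flask
import redis

app = Flask(__name__)
db = redis.StrictRedis('redis', 6379, 0)

@app.route('/')
def main():
    presses = db.get('presses') or b'0'
    return 'The button has been pressed %s times.' % presses.decode()

@app.route('/press', methods=['POST'])
def press():
    count = db.incr('presses')   # atomic increment in Redis
    return 'Pressed! Total presses: %d' % count

if __name__ == '__main__':
    app.run('0.0.0.0')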


Machine learning and Python – the Dream Team

Packt
16 Feb 2016
3 min read
In this article we will be learning more about machine learning and Python. Machine learning (ML) teaches machines how to carry out tasks by themselves. It is that simple. The complexity comes with the details, and that is most likely the reason you are reading this article. (For more resources related to this topic, see here.) Machine learning and Python – the dream team The goal of machine learning is to teach machines (software) to carry out tasks by providing them a couple of examples (how to do or not do a task). Let us assume that each morning when you turn on your computer, you perform the same task of moving e-mails around so that only those e-mails belonging to a particular topic end up in the same folder. After some time, you feel bored and think of automating this chore. One way would be to start analyzing your brain and writing down all the rules your brain processes while you are shuffling your e-mails. However, this will be quite cumbersome and always imperfect. While you will miss some rules, you will over-specify others. A better and more future-proof way would be to automate this process by choosing a set of e-mail meta information and body/folder name pairs and let an algorithm come up with the best rule set. The pairs would be your training data, and the resulting rule set (also called model) could then be applied to future e-mails, which we have not yet seen. This is machine learning in its simplest form. Of course, machine learning (often also referred to as data mining or predictive analysis) is not a brand new field in itself. Quite the contrary, its success over the recent years can be attributed to the pragmatic way of using rock-solid techniques and insights from other successful fields; for example, statistics. There, the purpose is for us humans to get insights into the data by learning more about the underlying patterns and relationships. As you read more and more about successful applications of machine learning (you have checked out kaggle.com already, haven't you?), you will see that applied statistics is a common field among machine learning experts. As you will see later, the process of coming up with a decent ML approach is never a waterfall-like process. Instead, you will see yourself going back and forth in your analysis, trying out different versions of your input data on diverse sets of ML algorithms. It is this explorative nature that lends itself perfectly to Python. Being an interpreted high-level programming language, it may seem that Python was designed specifically for the process of trying out different things. What is more, it does this very fast. Sure enough, it is slower than C or similar statically typed programming languages; nevertheless, with a myriad of easy-to-use libraries that are often written in C, you don't have to sacrifice speed for agility. Summary In this is article we learned about machine learning and its goals. To learn more please refer to the following books: Building Machine Learning Systems with Python - Second Edition (https://www.packtpub.com/big-data-and-business-intelligence/building-machine-learning-systems-python-second-edition) Expert Python Programming (https://www.packtpub.com/application-development/expert-python-programming) Resources for Article:   Further resources on this subject: Python Design Patterns in Depth – The Observer Pattern [article] Python Design Patterns in Depth: The Factory Pattern [article] Customizing IPython [article]
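To make the e-mail example concrete, here is a small sketch using scikit-learn; it is not part of the original article, and the handful of subject/folder pairs is invented and far too small for real use. It only shows the train-then-predict shape of the task.

# Tiny illustrative sketch (not from the article): learning folder rules from
# example e-mails with scikit-learn. The data below is invented and far too
# small to be meaningful - it only shows the train/predict shape of the task.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

subjects = [
    "Invoice for March attached",
    "Your invoice is overdue",
    "Team offsite next Friday",
    "Offsite agenda and travel details",
]
folders = ["finance", "finance", "events", "events"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(subjects)        # bag-of-words features

model = MultinomialNB()
model.fit(X, folders)                         # learn the rules from the pairs

new_subject = ["Invoice correction for April"]
print(model.predict(vectorizer.transform(new_subject)))  # expected: ['finance']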


Consuming Diagnostic Analyzers in .NET projects

Packt
20 Feb 2018
6 min read
We know how to write diagnostic analyzers to analyze and report issues about .NET source code and contribute them to the .NET developer community. In this article by the author Manish Vasani, of the book Roslyn Cookbook, we will show you how to search, install, view and configure the analyzers that have already been published by various analyzer authors on NuGet and VS Extension gallery. We will cover the following recipes: (For more resources related to this topic, see here.) Searching and installing analyzers through the NuGet package manager. Searching and installing VSIX analyzers through the VS extension gallery. Viewing and configuring analyzers in solution explorer in Visual Studio. Using ruleset file and ruleset editor to configure analyzers. Diagnostic analyzers are extensions to the Roslyn C# compiler and Visual Studio IDE to analyze user code and report diagnostics. User will see these diagnostics in the error list after building the project from Visual Studio and even when building the project on the command line. They will also see the diagnostics live while editing the source code in the Visual Studio IDE. Analyzers can report diagnostics to enforce specific code styles, improve code quality and maintenance, recommend design guidelines or even report very domain specific issues which cannot be covered by the core compiler. Analyzers can be installed to a .NET project either as a NuGet package or as a VSIX. To get a better understanding of these packaging schemes and learn about the differences in the analyzer experience when installed as a NuGet package versus a VSIX. Analyzers are supported on various different flavors of .NET standard, .NET core and .NET framework projects, for example, class library, console app, etc. Searching and installing analyzers through the NuGet package manager In this recipe we will show you how to search and install analyzer NuGet packages in the NuGet package manager in Visual Studio and see how the analyzer diagnostics from an installed NuGet package light up in project build and as live diagnostics during code editing in Visual Studio. Getting ready You will need to have Visual Studio 2017 installed on your machine to this recipe. You can install a free community version of Visual Studio 2017 from https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15.  How to do it… Create a C# class library project, say ClassLibrary, in Visual Studio 2017. In solution explorer, right click on the solution or project node and execute Manage NuGet Packages command.  This brings up the NuGet Package Manager, which can be used to search and install NuGet packages to the solution or project. In the search bar type the following text to find NuGet packages tagged as analyzers: Tags:"analyzers" Note that some of the well known packages are tagged as analyzer, so you may also want to search:Tags:"analyzer" Check or uncheck the Include prerelease checkbox to the right of the search bar to search or hide the prerelease analyzer packages respectively. The packages are listed based on the number of downloads, with the highest downloaded package at the top. Select a package to install, say System.Runtime.Analyzers, and pick a specific version, say 1.1.0, and click Install. Click on I Accept button on the License Acceptance dialog to install the NuGet package. Verify the installed analyzer(s) show up under the Analyzers node in the solution explorer. 
Verify the project file has a new ItemGroup with the following analyzer references from the installed analyzer package: <ItemGroup> <Analyzer Include="..packagesSystem.Runtime.Analyzers.1.1.0analyzersdotnetcsSystem.Runtime.Analyzers.dll" /> <Analyzer Include="..packagesSystem.Runtime.Analyzers.1.1.0analyzersdotnetcsSystem.Runtime.CSharp.Analyzers.dll" /> </ItemGroup> Add the following code to your C# project: namespace ClassLibrary { public class MyAttribute : System.Attribute { } } Verify the analyzer diagnostic from the installed analyzer is shown in the error list: Open a Visual Studio 2017 Developer Command Prompt and build the project to verify that the analyzer is executed on the command line build and the analyzer diagnostic is reported: Create a new C# project in VS2017 and add the same code to it as step 9 and verify no analyzer diagnostic shows up in error list or command line, confirming that the analyzer package was only installed to the selected project in steps 1-6. Note that CA1018 (Custom attribute should have AttributeUsage defined) has been moved to a separate analyzer assembly in future versions of FxCop/System.Runtime.Analyzers package. It is recommended that you install Microsoft.CodeAnalysis.FxCopAnalyzers NuGet package to get the latest group of Microsoft recommended analyzers. Searching and installing VSIX analyzers through the VS extension gallery In this recipe we will show you how to search and install analyzer VSIX packages in the Visual Studio Extension manager and see how the analyzer diagnostics from an installed VSIX light up as live diagnostics during code editing in Visual Studio. Getting ready You will need to have Visual Studio 2017 installed on your machine to this recipe. You can install a free community version of Visual Studio 2017 from https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15. How to do it… Create a C# class library project, say ClassLibrary, in Visual Studio 2017. From the top level menu, execute Tools | Extensions and Updates Navigate to Online | Visual Studio Marketplace on the left tab of the dialog to view the available VSIXes in the Visual Studio extension gallery/marketplace. Search analyzers in the search text box in the upper right corner of the dialog and download an analyzer VSIX, say Refactoring Essentials for Visual Studio. Once the download completes, you will get a message at the bottom of the dialog that the install will be scheduled to execute once Visual Studio and related windows are closed. Close the dialog and then close the Visual Studio instance to start the install. In the VSIX Installer dialog, click Modify to start installation. The subsequent message prompts you to kill all the active Visual Studio and satellite processes. Save all your relevant work in all the open Visual Studio instances, and click End Tasks to kill these processes and install the VSIX. After installation, restart VS, click Tools | Extensions And Updates, and verify Refactoring Essentials VSIX is installed. Create a new C# project with the following source code and verify analyzer diagnostic RECS0085 (Redundant array creation expression) in the error list: namespace ClassLibrary { public class Class1 { void Method() { int[] values = new int[] { 1, 2, 3 }; } } } Build the project from Visual Studio 2017 or command line and confirm no analyzer diagnostic shows up in the Output Window or the command line respectively, confirming that the VSIX analyzer did not execute as part of the build. 
Resources for Article: Further resources on this subject: C++, SFML, Visual Studio, and Starting the first game [article] Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [article] Creating efficient reports with Visual Studio [article]

Preparing Your Forms Conversion Using Oracle Application Express (APEX)

Packt
09 Oct 2009
9 min read
When we are participating in a Forms Conversion project, it means we take the source files of our application, turn them into XML files, and upload them into the Forms Conversion part of APEX. This article describes what we do before uploading the XML files and starting our actual Forms Conversion project. Get your stuff! When we talk about source files, it would come in very handy if we got all the right versions of these files. In order to do the Conversion project, we need the same components that are used in the production environment. For these components, we have to get the source files of the components we want to convert. This means we have no use of the runtime files (Oracle Forms runtime files have the FMX extension). In other words, for Forms components we don't need the FMX files, but the FMB source files. These are a few ground rules we have to take into account: We need to make sure that there's no more development on the components we are about to use in our Conversion project. This is because we are now going to freeze our sources and new developments won't be taken into the Conversion project at all. So there will be no changes in our project. Put all the source files in a safe place. In other words, copy the latest version of your files into a new directory to which only you, and perhaps your teammates, have access. If the development team of your organization is using Oracle Designer for the development of its applications, it would be a good idea to generate all the modules from scratch. You would like to use the source on which the runtime files were created only if there are post-generation adjustments to be made in the modules. We need the following files for our Conversion project: Forms Modules: With the FMB extension Object Libraries: With the OLB extension Forms Menus: With the MMB extension PL/SQL Libraries: With the PLL extension Report Files: With the RDF, REX, or JSP extensions When we take these source files, we will be able to create all the necessary XML files that we need for the Forms Conversion project. Creating XML files To create XML files, we need three parts of the Oracle Developer Suite. All of these parts come with a normal 10g or 9i installation of the Developer Suite. These three parts are the Forms Builder, the Reports Builder, and the Forms2XML conversion tool. The Forms2XML conversion tool is the most extensive to understand and is used to create XML files from Form modules, Object Libraries, and Forms Menus. So, we will first discuss the possibilities of this tool. The Forms2XML conversion tool This tool can be used both from the command line as well as a Java applet. As the command line gives us all the possibilities we need and is as easy as a Java applet, we will only use the command-line possibilities. The frmf2xml command comes with some options. The following syntax is used while converting the Forms Modules, the Object Libraries, and the Forms Menus to an XML structure: frmf2xml [option] file [file] In other words, we follow these steps: We first type frmf2xml. Alternatively, we give one of the options with it. We tell the command which file we want to convert, and we have the option to address more than one file for the conversion to XML. We probably want to give the OVERWRITE=YES option with our command. This property ensures that the newly created XML file will overwrite the one with the same name in the directory where we are working. 
If another file with the same name already exists in this directory and we don't give the OVERWRITE option the value YES (the default is NO), the file will not be generated, as we see in the following screenshot: If there are any images used in modules (Forms or Object Libraries), the Forms2XML tool will refer to the image in the XML file created, and that file will create a TIF file of the image in the directory. The XML files that are created will be stored in the same directory from which we call the command. It will use the following syntax for the name of the XML file: formname.fmb will become formname_fmb.xml libraryname.olb will become libraryname_olb.xml menuname.mmb will become menuname_mmb.xml To convert the .FMB, OLB and, MMB files to XML, we need to do the following steps in the command prompt: Forms Modules The following steps are done in order to convert the .FMB file to XML: We will change the working directory to the directory that has the FMB file. In my example, I have stored all the files in a directory called summit directly under the C drive, like this: C:>cd C:summit Now, we can call the frmf2xml command to convert one of our Forms Modules to an XML file. In this example, we convert the orders.fmb module: C:summit>frmf2xml OVERWRITE=YES orders.fmb As we see in the following screenshot, this command creates an XML file called orders_fmb.xml in the working directory: Object Libraries To convert the .OLB file to XML, the following steps are needed: We first change the working directory to the directory that the OLB file is in. It's done like this: C:>cd C:summit Now we can call the frmf2xml command to convert one of our Object Libraries to an XML file. In this example, we convert the Form_Builder_II.olb library as follows: C:summit>frmf2xml OVERWRITE=YES Form_Builder_II.olb As we see in the following screenshot, the command creates an XML file calledForm_Builder_II_olb.xml and two images as .tif files in the working directory: Forms Menus To convert the MMB file to XML, we follow these steps: We change the working directory to the directory that the .MMB file is in, like this: C:>cd C:summit Now we can call the frmf2xml command to convert one of our Forms Menus to an XML file. In this example we convert the customers.mmb menu: C:summit>frmf2xml OVERWRITE=YES customers.mmb As we can see in the following screenshot, the command creates an XML file called customers_mmb.xml in the working directory: Report Files In our example, we will convert the Customers Report from a RDF file to an XML file. To do this, we follow the steps given here: We need to open the Employees.rdf file with Reports Builder. Open Reports Builder from your Start menu. If Reports Builder is opened, we need to cancel the wizard that asks us if we want to create a new report. After this we use Ctrl+O to open the Report File (or in the menu, File | Open) which we want to convert to XML as we see in the following screenshot: After this we use Shift+Ctrl+S (or in the File | Save As menu) to save the Report. We choose that we want to save the report as a Reports XML (*.xml) file and we click on the Save button as shown in the following screenshot: PL/SQL Libraries To convert PL/SQL Libraries to an XML format, it's easiest to use the convert command that comes with the Report Builder. With this command called rwconverter, we define the source type, call the source, and define the destination type and the destination. 
In this way, we have control over the way we convert the original .pll file to a .pld flat file that we can upload into the APEX Forms converter. It is possible to convert the PL/SQL Libraries with the convert option in Forms Builder, but, personally, I think this option works better. The rwconverter command has a few parameters that we pass to it when we execute it. They are given as follows:

stype: This is the type of source file we need to convert. In our situation, this will be a .pll file, and so the value we need to set is pllfile.
source: This is the name of the source file, including the extension. In our case, it is wizard.pll.
dtype: This is the file type we want to convert our source file to. In our case, it is a .pld file, and so the value becomes pldfile.
dest: This is the name, including the extension, of the destination file. In our case, it is wizard.pld.

In our example, we use the wizard.pll file that's in our summit files directory. This PL/SQL Library (.pll file) is normally used to create a PL/SQL Library in the Oracle Database. But this time, we will use it to create a .pld flat file that we will upload to APEX.

First, we change to the working directory that has the original .pll file. In our case, this is the summit directory directly under the C drive, shown as follows:

C:\>cd C:\summit

After this, we call rwconverter in the command prompt as shown here:

C:\summit>rwconverter stype=pllfile source=wizard.pll dtype=pldfile dest=wizard.pld

When you press the Enter key, a screen will open that is used to do the conversion. We will see that the types and names of the files are the same as we entered them on the command line. We need to click on the OK button to convert the file from .pll to .pld. The conversion may take a few seconds, but when the file has been converted, we will see a confirmation that the conversion was successful. After this, we can look in the C:\summit directory and we will see that a file called wizard.pld has been created.
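If we have to repeat these conversions every time the sources are re-frozen, it can help to script them. The following Windows batch file is a minimal sketch, not part of the original example, that runs the Forms, Object Library, Menu, and PL/SQL Library conversions in one go; it assumes that the Developer Suite bin directory (where frmf2xml and rwconverter live) is on the PATH and that the file names match the summit example used above.

@echo off
rem Convert the frozen summit sources in one run
cd /d C:\summit

rem Forms Modules, Object Libraries, and Forms Menus go through Forms2XML
frmf2xml OVERWRITE=YES orders.fmb
frmf2xml OVERWRITE=YES Form_Builder_II.olb
frmf2xml OVERWRITE=YES customers.mmb

rem PL/SQL Libraries go through rwconverter to a .pld flat file
rem (this still opens the rwconverter confirmation screen, as described above)
rwconverter stype=pllfile source=wizard.pll dtype=pldfile dest=wizard.pld

echo Conversion finished - check C:\summit for the generated files

The Report Files are not included in the script because, as described above, they are saved as Reports XML through the Reports Builder Save As dialog.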

Microsoft LightSwitch: Querying and Filtering Data

Packt
16 Sep 2011
5 min read
(For more resources on this topic, see here.)

Querying in LightSwitch

The following figure is based on the one you may review at the link mentioned earlier and schematically summarizes the architectural details: Each entity set has a default All and Single query, as shown for the entity Category. All entity sets have a Save operation that saves the changes. As defined, the entity sets are queryable, and therefore query operations on these sets are allowed and supported. A query (query operation) requests one or more entity sets with optional filtering and sorting, as shown, for example, in a simple, filtered, and sorted query on the Category entity. Queries can be parameterized with one or more parameters returning single or multiple results (result sets). In addition to the defaults (for example, Category*(SELECT All) and Category), additional filtering and sorting predicates can be defined. Although queries are based on LINQ, not all of the IQueryable LINQ operations are supported. The query passes through the following steps (the pipeline) before the results are returned:

Pre-processing
CanExecute: called to determine whether this operation may be called or not
Executing: called before the query is processed
Pre-process query expression: builds up the final query expression
Execution: LightSwitch passes the query expression to the data provider for execution
Post-processing
Executed: called after the query is processed but before returning the results
ExecuteFailed: called if the query operation failed

Querying a Single Entity

We will start off by creating a Visual Studio LightSwitch project LSQueries6 using the Visual Basic template (the same can be carried out with a C# template). We will attach this application to the SQL Server Express server's Northwind database and bring in the Products (table) entity. We will create a screen, EditableProductList, which brings up all the data in the Products entity as shown in the previous screenshot. The above screen was created using the Editable Grid Screen template, as shown next, with the source of data being the Products entity. We see that the EditableProductList screen displays all columns, including the discontinued items, and it is editable, as seen from the controls on the displayed screen. This is equivalent to the SQL query Select * from Products as far as display is concerned.

Filtering and sorting the data

Often you do not need all the columns, but only the few columns that matter for your immediate needs; besides being sufficient, this enormously reduces the cost of running a query. What do you do to achieve this? Of course, you filter the data by posing a query to the entity. Let us now say we want a product listing with ProductID and ProductName, excluding the discontinued items. We also need the list sorted. In SQL syntax, this reduces to:

SELECT [Product List].ProductID, [Product List].ProductName
FROM Products AS [Product List]
WHERE ((([Product List].Discontinued)=0))
ORDER BY [Product List].ProductName;

This is a typical filtering of data followed by sorting the filtered data.

Filtering the data

In LightSwitch, this filtering is carried out as shown in the following steps: Click on the Query menu item in the LSQueries Designer as shown: The Designer (short for Query Designer) pops up as shown, and the following changes are made in the IDE: A default Query1 gets added to the Products entity on which it is based, as shown; the Query1 property window is displayed and the Query Designer window is displayed.
Query1 can be renamed in its Properties window (this will be renamed as Product List). The query target is the Products table and the return type is Product. As you can see, Microsoft has provided all the necessary basic querying support in this designer. If the query has to be changed to something more complicated, the Edit Additional Query Code link can be clicked to access the ProductListDataService as shown:

Well, this is not a SQL query but a LINQ query working in the IDE. We know that entities are not just for relational data, and this makes perfect sense because of the known advantages of LINQ for queries (review the following link: http://msdn.microsoft.com/en-us/library/bb425822.aspx). One of the main advantages is that you can write the query in VB or C#, and the DataContext, the main player, translates it to SQL and runs queries that SQL databases understand. It's more like a language translation for queries, with many more advantages than the one mentioned.

Hover over Add Filter to review what this will do as shown: This control will add a new filter condition. Note that Query1 has been renamed (right-click on Query1 and choose Rename) to ProductList. Click on the Add Filter button. The Filter area changes to display the following: The first field in the entity comes up by default as the filtered field for the 'Where' clause. The GUI is helping to build up "Where CategoryID = ". However, as you can see from the composite screenshot (four screens were integrated to create this screenshot) built by using all the drop-down options, you can indeed filter any of the columns and choose any of the built-in criteria. Depending on the choice, you can also add parameter(s) with this UI. For the particular SQL query we started with, choose the drop-down as shown. Notice that LightSwitch was intelligent enough to pick the right data type for the Boolean field Discontinued. You also have an icon (in red, to the left of Where) to click on should you desire to delete the query.

Add a Search Data Screen using the previous query as the source by providing the following information to the screen designer (associating the ProductList query with the Screen Data). This screen, when displayed, shows all products that are not discontinued, as shown. The Discontinued column has been dragged to the position shown in the displayed screen.
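For readers who prefer to see the filter expressed in code rather than in the designer, the following Visual Basic sketch shows roughly the LINQ shape that the ProductList query corresponds to. It is illustrative only: the designer generates the actual query for you, and the Products collection name here is simply assumed from the entity we imported above.

' Illustrative LINQ shape of the ProductList query (not designer-generated code)
Dim productList = From p In Products
                  Where p.Discontinued = False
                  Order By p.ProductName
                  Select p.ProductID, p.ProductName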

Getting Started with Mockito

Packt
19 Jun 2014
14 min read
(For more resources related to this topic, see here.)

Mockito is an open source framework for Java that allows you to easily create test doubles (mocks). What makes Mockito so special is that it eliminates the common expect-run-verify pattern (which was present, for example, in EasyMock; please refer to http://monkeyisland.pl/2008/02/24/can-i-test-what-i-want-please for more details), which in effect leads to lower coupling of the test code to the production code. In other words, one does not have to define the expectations of how the mock should behave in order to verify its behavior. That way, the code is clearer and more readable for the user. On one hand, Mockito has a very active group of contributors and is actively maintained. On the other hand, at the time of writing, the most recent Mockito release (Version 1.9.5) dates from October 2012.

You may ask yourself the question, "Why should I even bother to use Mockito in the first place?" Among many others, Mockito offers the following key features:

There is no expectation phase in Mockito; you can either stub or verify the mock's behavior
You are able to mock both interfaces and classes
You can produce little boilerplate code while working with Mockito by means of annotations
You can easily verify or stub with intuitive argument matchers

Before diving into Mockito as such, one has to understand the concepts of System Under Test (SUT) and test doubles. We will base our terminology on what Gerard Meszaros has defined in xUnit Patterns (http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html). SUT (http://xunitpatterns.com/SUT.html) describes the system that we are testing. It doesn't necessarily have to be a class; it can be any part of the application that we are testing, or even the whole application as such. A test double (http://www.martinfowler.com/bliki/TestDouble.html) is an object that is used in place of a real object, only for testing purposes. Let's take a look at the different types of test doubles:

Dummy: This is an object that is used only for the code to compile; it doesn't have any business logic (for example, an object passed as a parameter to a method)
Fake: This is an object that has an implementation, but it's not production ready (for example, using an in-memory database instead of communicating with a standalone one)
Stub: This is an object that has predefined answers to method executions made during the test
Mock: This is an object that has predefined answers to method executions made during the test and has recorded expectations of these executions
Spy: This is an object that is similar to a stub, but it additionally records how it was executed (for example, a service that holds a record of the number of sent messages)

One additional remark relates to testing the output of our application. The more decoupled your test code is from your production code, the better, since you will have to spend less time (or even none) on modifying your tests after you change the implementation of the code. Coming back to the article's content: this article is all about getting started with Mockito. We will begin with how to add Mockito to your classpath. Then, we'll see a simple setup of tests for both the JUnit and TestNG test frameworks. Next, we will check why it is crucial to assert the behavior of the system under test instead of verifying its implementation details. Finally, we will check out some of Mockito's experimental features that add hints and warnings to the exception messages.
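To make the stub versus mock distinction concrete before the recipes start, here is a minimal, illustrative Mockito sketch (it is not taken from the article's sample code, and the List example is purely hypothetical). It shows that stubbing predefines answers up front, while verification checks the recorded interactions after the fact, with no expectation phase in between.

import static org.mockito.Mockito.*;
import java.util.List;

public class StubVersusMockExample {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // Stubbing: predefine an answer, no expectations recorded up front
        List<String> stubbedList = mock(List.class);
        when(stubbedList.get(0)).thenReturn("first");
        System.out.println(stubbedList.get(0)); // prints "first"

        // Verification: exercise the mock, then check the recorded interaction
        List<String> verifiedList = mock(List.class);
        verifiedList.add("element");
        verify(verifiedList).add("element");
    }
}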
The very idea of the following recipes is to prepare your test classes to work with Mockito and to show you how to do this with as little boilerplate code as possible. Due to my fondness of the behavior driven development (http://dannorth.net/introducing-bdd/ first introduced by Dan North), I'm using Mockito's BDDMockito and AssertJ's BDDAssertions static methods to make the code even more readable and intuitive in all the test cases. Also, please read Szczepan Faber's blog (author of Mockito) about the given, when, then separation in your test methods—http://monkeyisland.pl/2009/12/07/given-when-then-forever/—since these are omnipresent throughout the article. I don't want the article to become a duplication of the Mockito documentation, which is of high quality—I would like you to take a look at good tests and get acquainted with Mockito syntax from the beginning. What's more, I've used static imports in the code to make it even more readable, so if you get confused with any of the pieces of code, it would be best to consult the repository and the code as such. Adding Mockito to a project's classpath Adding Mockito to a project's classpath is as simple as adding one of the two jars to your project's classpath: mockito-all: This is a single jar with all dependencies (with the hamcrest and objenesis libraries—as of June 2011). mockito-core: This is only Mockito core (without hamcrest or objenesis). Use this if you want to control which version of hamcrest or objenesis is used. How to do it... If you are using a dependency manager that connects to the Maven Central Repository, then you can get your dependencies as follows (examples of how to add mockito-all to your classpath for Maven and Gradle): For Maven, use the following code: <dependency> <groupId>org.mockito</groupId> <artifactId>mockito-all</artifactId> <version>1.9.5</version> <scope>test</scope> </dependency> For Gradle, use the following code: testCompile "org.mockito:mockito-all:1.9.5" If you are not using any of the dependency managers, you have to either download mockito-all.jar or mockito-core.jar and add it to your classpath manually (you can download the jars from https://code.google.com/p/mockito/downloads/list). Getting started with Mockito for JUnit Before going into details regarding Mockito and JUnit integration, it is worth mentioning a few words about JUnit. JUnit is a testing framework (an implementation of the xUnit famework) that allows you to create repeatable tests in a very readable manner. In fact, JUnit is a port of Smalltalk's SUnit (both the frameworks were originally implemented by Kent Beck). What is important in terms of JUnit and Mockito integration is that under the hood, JUnit uses a test runner to run its tests (from xUnit—test runner is a program that executes the test logic and reports the test results). Mockito has its own test runner implementation that allows you to reduce boilerplate in order to create test doubles (mocks and spies) and to inject them (either via constructors, setters, or reflection) into the defined object. What's more, you can easily create argument captors. 
All of this is feasible by means of proper annotations as follows: @Mock: This is used for mock creation @Spy: This is used to create a spy instance @InjectMocks: This is used to instantiate the @InjectMock annotated field and inject all the @Mock or @Spy annotated fields into it (if applicable) @Captor: This is used to create an argument captor By default, you should profit from Mockito's annotations to make your code look neat and to reduce the boilerplate code in your application. Getting ready In order to add JUnit to your classpath, if you are using a dependency manager that connects to the Maven Central Repository, then you can get your dependencies as follows (examples for Maven and Gradle): To add JUnit in Maven, use the following code: <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> To add JUnit in Gradle, use the following code: testCompile('junit:junit:4.11') If you are not using any of the dependency managers, you have to download the following jars: junit.jar hamcrest-core.jar Add the downloaded files to your classpath manually (you can download the jars from https://github.com/junit-team/junit/wiki/Download-and-Install). For this recipe, our system under test will be a MeanTaxFactorCalculator class that will call an external service, TaxService, to get the current tax factor for the current user. It's a tax factor and not tax as such since, for simplicity, we will not be using BigDecimals but doubles, and I'd never suggest using doubles to anything related to money, as follows: public class MeanTaxFactorCalculator { private final TaxService taxService; public MeanTaxFactorCalculator(TaxService taxService) { this.taxService = taxService; } public double calculateMeanTaxFactorFor(Person person) { double currentTaxFactor = taxService.getCurrentTaxFactorFor(person); double anotherTaxFactor = taxService.getCurrentTaxFactorFor(person); return (currentTaxFactor + anotherTaxFactor) / 2; } } How to do it... To use Mockito's annotations, you have to perform the following steps: Annotate your test with the @RunWith(MockitoJUnitRunner.class). Annotate the test fields with the @Mock or @Spy annotation to have either a mock or spy object instantiated. Annotate the test fields with the @InjectMocks annotation to first instantiate the @InjectMock annotated field and then inject all the @Mock or @Spy annotated fields into it (if applicable). The following snippet shows the JUnit and Mockito integration in a test class that verifies the SUT's behavior (remember that I'm using BDDMockito.given(...) and AssertJ's BDDAssertions.then(...) static methods: @RunWith(MockitoJUnitRunner.class) public class MeanTaxFactorCalculatorTest { static final double TAX_FACTOR = 10; @Mock TaxService taxService; @InjectMocks MeanTaxFactorCalculator systemUnderTest; @Test public void should_calculate_mean_tax_factor() { // given given(taxService.getCurrentTaxFactorFor(any(Person.class))).willReturn(TAX_FACTOR); // when double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person()); // then then(meanTaxFactor).isEqualTo(TAX_FACTOR); } } To profit from Mockito's annotations using JUnit, you just have to annotate your test class with @RunWith(MockitoJUnitRunner.class). How it works... The Mockito test runner will adapt its strategy depending on the version of JUnit. 
If there exists a org.junit.runners.BlockJUnit4ClassRunner class, it means that the codebase is using at least JUnit in Version 4.5.What eventually happens is that the MockitoAnnotations.initMocks(...) method is executed for the given test, which initializes all the Mockito annotations (for more information, check the subsequent There's more… section). There's more... You may have a situation where your test class has already been annotated with a @RunWith annotation and seemingly, you may not profit from Mockito's annotations. In order to achieve this, you have to call the MockitoAnnotations.initMocks method manually in the @Before annotated method of your test, as shown in the following code: public class MeanTaxFactorCalculatorTest { static final double TAX_FACTOR = 10; @Mock TaxService taxService; @InjectMocks MeanTaxFactorCalculator systemUnderTest; @Before public void setup() { MockitoAnnotations.initMocks(this); } @Test public void should_calculate_mean_tax_factor() { // given given(taxService.getCurrentTaxFactorFor(Mockito.any(Person.class))).willReturn(TAX_FACTOR); // when double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person()); // then then(meanTaxFactor).isEqualTo(TAX_FACTOR); } } To use Mockito's annotations without a JUnit test runner, you have to call the MockitoAnnotations.initMocks method and pass the test class as its parameter. Mockito checks whether the user has overridden the global configuration of AnnotationEngine, and if this is not the case, the InjectingAnnotationEngine implementation is used to process annotations in tests. What is done internally is that the test class fields are scanned for annotations and proper test doubles are initialized and injected into the @InjectMocks annotated object (either by a constructor, property setter, or field injection, in that precise order). You have to remember several factors related to the automatic injection of test doubles as follows: If Mockito is not able to inject test doubles into the @InjectMocks annotated fields through either of the strategies, it won't report failure—the test will continue as if nothing happened (and most likely, you will get NullPointerException). For constructor injection, if arguments cannot be found, then null is passed For constructor injection, if nonmockable types are required in the constructor, then the constructor injection won't take place. For other injection strategies, if you have properties with the same type (or same erasure) and if Mockito matches mock names with a field/property name, it will inject that mock properly. Otherwise, the injection won't take place. For other injection strategies, if the @InjectMocks annotated object wasn't previously initialized, then Mockito will instantiate the aforementioned object using a no-arg constructor if applicable. See also JUnit documentation at https://github.com/junit-team/junit/wiki Martin Fowler's article on xUnit at http://www.martinfowler.com/bliki/Xunit.html Gerard Meszaros's xUnit Test Patterns at http://xunitpatterns.com/ @InjectMocks Mockito documentation (with description of injection strategies) at http://docs.mockito.googlecode.com/hg/1.9.5/org/mockito/InjectMocks.html Getting started with Mockito for TestNG Before going into details regarding Mockito and TestNG integration, it is worth mentioning a few words about TestNG. 
TestNG is a unit testing framework for Java that was created, as the author defines it on the tool's website (refer to the See also section for the link), out of frustration for some JUnit deficiencies. TestNG was inspired by both JUnit and TestNG and aims at covering the whole scope of testing—from unit, through functional, integration, end-to-end tests, and so on. However, the JUnit library was initially created for unit testing only. The main differences between JUnit and TestNG are as follows: The TestNG author disliked JUnit's approach of having to define some methods as static to be executed before the test class logic gets executed (for example, the @BeforeClass annotated methods)—that's why in TestNG you don't have to define these methods as static TestNG has more annotations related to method execution before single tests, suites, and test groups TestNG annotations are more descriptive in terms of what they do; for example, the JUnit's @Before versus TestNG's @BeforeMethod Mockito in Version 1.9.5 doesn't provide any out-of-the-box solution to integrate with TestNG in a simple way, but there is a special Mockito subproject for TestNG (refer to the See also section for the URL) that should be part one of the subsequent Mockito releases. In the following recipe, we will take a look at how to profit from that code and that very elegant solution. Getting ready When you take a look at Mockito's TestNG subproject on the Mockito GitHub repository, you will find that there are three classes in the org.mockito.testng package, as follows: MockitoAfterTestNGMethod MockitoBeforeTestNGMethod MockitoTestNGListener Unfortunately, until this project eventually gets released you have to just copy and paste those classes to your codebase. How to do it... To integrate TestNG and Mockito, perform the following steps: Copy the MockitoAfterTestNGMethod, MockitoBeforeTestNGMethod, and MockitoTestNGListener classes to your codebase from Mockito's TestNG subproject. Annotate your test class with @Listeners(MockitoTestNGListener.class). Annotate the test fields with the @Mock or @Spy annotation to have either a mock or spy object instantiated. Annotate the test fields with the @InjectMocks annotation to first instantiate the @InjectMock annotated field and inject all the @Mock or @Spy annotated fields into it (if applicable). Annotate the test fields with the @Captor annotation to make Mockito instantiate an argument captor. Now let's take a look at this snippet that, using TestNG, checks whether the mean tax factor value has been calculated properly (remember that I'm using the BDDMockito.given(...) and AssertJ's BDDAssertions.then(...) static methods: @Listeners(MockitoTestNGListener.class) public class MeanTaxFactorCalculatorTestNgTest { static final double TAX_FACTOR = 10; @Mock TaxService taxService; @InjectMocks MeanTaxFactorCalculator systemUnderTest; @Test public void should_calculate_mean_tax_factor() { // given given(taxService.getCurrentTaxFactorFor(any(Person.class))).willReturn(TAX_FACTOR); // when double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person()); // then then(meanTaxFactor).isEqualTo(TAX_FACTOR); } } How it works... TestNG allows you to register custom listeners (your listener class has to implement the IInvokedMethodListener interface). Once you do this, the logic inside the implemented methods will be executed before and after every configuration and test methods get called. 
Mockito provides you with a listener whose responsibilities are as follows: Initialize mocks annotated with the @Mock annotation (it is done only once) Validate the usage of Mockito after each test method Remember that with TestNG, all mocks are reset (or initialized if it hasn't already been done so) before any TestNG method! See also The TestNG homepage at http://testng.org/doc/index.html The Mockito TestNG subproject at https://github.com/mockito/mockito/tree/master/subprojects/testng The Getting started with Mockito for JUnit recipe on the @InjectMocks analysis

Custom Coding with Apex

Packt
27 Apr 2015
18 min read
In this article by Chamil Madusanka, author of the book Learning Force.com Application Development, you will learn about the custom coding in Apex and also about triggers. We have used many declarative methods such as creating the object's structure, relationships, workflow rules, and approval process to develop the Force.com application. The declarative development method doesn't require any coding skill and specific Integrated Development Environment (IDE). This article will show you how to extend the declarative capabilities using custom coding of the Force.com platform. Apex controllers and Apex triggers will be explained with examples of the sample application. The Force.com platform query language and data manipulation language will be described with syntaxes and examples. At the end of the article, there will be a section to describe bulk data handling methods in Apex. This article covers the following topics: Introducing Apex Working with Apex (For more resources related to this topic, see here.) Introducing Apex Apex is the world's first on-demand programming language that allows developers to implement and execute business flows, business logic, and transactions on the Force.com platform. There are two types of Force.com application development methods: declarative developments and programmatic developments. Apex is categorized under the programmatic development method. Since Apex is a strongly-typed, object-based language, it is connected with data in the Force.com platform and data manipulation using the query language and the search language. The Apex language has the following features: Apex provides a lot of built-in support for the Force.com platform features such as: Data Manipulation Language (DML) with the built-in exception handling (DmlException) to manipulate the data during the execution of the business logic. Salesforce Object Query Language (SOQL) and Salesforce Object Search Language (SOSL) to query and retrieve the list of sObjects records. Bulk data processing on multiple records at a time. Apex allows handling errors and warning using an in-built error-handling mechanism. Apex has its own record-locking mechanism to prevent conflicts of record updates. Apex allows building custom public Force.com APIs from stored Apex methods. Apex runs in a multitenant environment. The Force.com platform has multitenant architecture. Therefore, the Apex runtime engine obeys the multitenant environment. It prevents monopolizing of shared resources using the guard with limits. If any particular Apex code violates the limits, error messages will be displayed. Apex is hosted in the Force.com platform. Therefore, the Force.com platform interprets, executes, and controls Apex. Automatically upgradable and versioned: Apex codes are stored as metadata in the platform. Therefore, they are automatically upgraded with the platform. You don't need to rewrite your code when the platform gets updated. Each code is saved with the current upgrade version. You can manually change the version. It is easy to maintain the Apex code with the versioned mechanism. Apex can be used easily. Apex is similar to Java syntax and variables. The syntaxes and semantics of Apex are easy to understand and write codes. Apex is a data-focused programming language. Apex is designed for multithreaded query and DML statements in a single execution context on the Force.com servers. Many developers can use database stored procedures to run multiple transaction statements on the database server. 
Apex is different from other databases when it comes to stored procedures; it doesn't attempt to provide general support for rendering elements in the user interface. The execution context is one of the key concepts in Apex programming. It influences every aspect of software development on the Force.com platform. Apex is a strongly-typed language that directly refers to schema objects and object fields. If there is any error, it fails the compilation. All the objects, fields, classes, and pages are stored in metadata after successful compilation. Easy to perform unit testing. Apex provides a built-in feature for unit testing and test execution with the code coverage. Apex allows developers to write the logic in two ways: As an Apex class: The developer can write classes in the Force.com platform using Apex code. An Apex class includes action methods which related to the logic implementation. An Apex class can be called from a trigger. A class can be associated with a Visualforce page (Visualforce Controllers/Extensions) or can act as a supporting class (WebService, Email-to-Apex service/Helper classes, Batch Apex, and Schedules). Therefore, Apex classes are explicitly called from different places on the Force.com platform. As a database trigger: A trigger is executed related to a particular database interaction of a Force.com object. For example, you can create a trigger on the Leave Type object that fires whenever the Leave Type record is inserted. Therefore, triggers are implicitly called from a database action. Apex is included in the Unlimited Edition, Developer Edition, Enterprise Edition, Database.com, and Performance Edition. The developer can write Apex classes or Apex triggers in a developer organization or a sandbox of a production organization. After you finish the development of the Apex code, you can deploy the particular Apex code to the production organization. Before you deploy the Apex code, you have to write test methods to cover the implemented Apex code. Apex code in the runtime environment You already know that Apex code is stored and executed on the Force.com platform. Apex code also has a compile time and a runtime. When you attempt to save an Apex code, it checks for errors, and if there are no errors, it saves with the compilation. The code is compiled into a set of instructions that are about to execute at runtime. Apex always adheres to built-in governor limits of the Force.com platform. These governor limits protect the multitenant environment from runaway processes. Apex code and unit testing Unit testing is important because it checks the code and executes the particular method or trigger for failures and exceptions during test execution. It provides a structured development environment. We gain two good requirements for this unit testing, namely, best practice for development and best practice for maintaining the Apex code. The Force.com platform forces you to cover the Apex code you implemented. Therefore, the Force.com platform ensures that you follow the best practices on the platform. Apex governors and limits Apex codes are executed on the Force.com multitenant infrastructure and the shared resources are used across all customers, partners, and developers. When we are writing custom code using Apex, it is important that the Apex code uses the shared resources efficiently. Apex governors are responsible for enforcing runtime limits set by Salesforce. It discontinues the misbehaviors of the particular Apex code. 
If the code exceeds a limit, a runtime exception is thrown that cannot be handled. This error will be seen by the end user. Limit warnings can be sent via e-mail, but they also appear in the logs. Governor limits are specific to a namespace, so AppExchange certified managed applications have their own set of limits, independent of the other applications running in the same organization. Therefore, the governor limits have their own scope. The limit scope will start from the beginning of the code execution. It will be run through the subsequent blocks of code until the particular code terminates. Apex code and security The Force.com platform has a component-based security, record-based security and rich security framework, including profiles, record ownership, and sharing. Normally, Apex codes are executed as a system mode (not as a user mode), which means the Apex code has access to all data and components. However, you can make the Apex class run in user mode by defining the Apex class with the sharing keyword. The with sharing/without sharing keywords are employed to designate that the sharing rules for the running user are considered for the particular Apex class. Use the with sharing keyword when declaring a class to enforce the sharing rules that apply to the current user. Use the without sharing keyword when declaring a class to ensure that the sharing rules for the current user are not enforced. For example, you may want to explicitly turn off sharing rule enforcement when a class acquires sharing rules after it is called from another class that is declared using with sharing. The profile also can maintain the permission for developing Apex code and accessing Apex classes. The author's Apex permission is required to develop Apex codes and we can limit the access of Apex classes through the profile by adding or removing the granted Apex classes. Although triggers are built using Apex code, the execution of triggers cannot be controlled by the user. They depend on the particular operation, and if the user has permission for the particular operation, then the trigger will be fired. Apex code and web services Like other programming languages, Apex supports communication with the outside world through web services. Apex methods can be exposed as a web service. Therefore, an external system can invoke the Apex web service to execute the particular logic. When you write a web service method, you must use the webservice keyword at the beginning of the method declaration. The variables can also be exposed with the webservice keyword. After you create the webservice method, you can generate the Web Service Definition Language (WSDL), which can be consumed by an external application. Apex supports both Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) web services. Apex and metadata Because Apex is a proprietary language, it is strongly typed to Salesforce metadata. The same sObject and fields that are created through the declarative setup menu can be referred to through Apex. Like other Force.com features, the system will provide an error if you try to delete an object or field that is used within Apex. Apex is not technically autoupgraded with each new Salesforce release, as it is saved with a specific version of the API. Therefore, Apex, like other Force.com features, will automatically work with future versions of Salesforce applications. Force.com application development tools use the metadata. 
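To tie the with sharing and webservice keywords together, here is a small, illustrative sketch rather than code from the book's sample application; the Leave_Type__c API name is assumed for the example. The class simply counts the records that the calling user is allowed to see.

// Illustrative only: a global class exposing a web service method,
// declared with sharing so the calling user's sharing rules are enforced.
global with sharing class LeaveTypeService {
    webservice static Integer countLeaveTypes() {
        // SOQL aggregate count of the records visible to the caller
        return [SELECT COUNT() FROM Leave_Type__c];
    }
}

An external system could then consume the WSDL generated for this class and invoke countLeaveTypes() over SOAP, as described above.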
Working with Apex Before you start coding with Apex, you need to learn a few basic things. Apex basics Apex has come up with a syntactical framework. Similar to Java, Apex is strongly typed and is an object-based language. If you have some experience with Java, it will be easy to understand Apex. The following table explains the similarities and differences between Apex and Java: Similarities Differences Both languages have classes, inheritance, polymorphism, and other common object oriented programming features Apex runs in a multitenant environment and is very controlled in its invocations and governor limits Both languages have extremely similar syntax and notations Apex is case sensitive Both languages are compiled, strongly-typed, and transactional Apex is on-demand and is compiled and executed in the cloud   Apex is not a general purpose programming language, but is instead a proprietary language used for specific business logic functions   Apex requires unit testing for deployment into a production environment This section will not discuss everything that is included in the Apex documentation from Salesforce, but it will cover topics that are essential for understanding concepts discussed in this article. With this basic knowledge of Apex, you can create Apex code in the Force.com platform. Apex data types In Apex classes and triggers, we use variables that contain data values. Variables must be bound to a data type and that particular variable can hold the values with the same data type. All variables and expressions have one of the following data types: Primitives Enums sObjects Collections An object created from the user or system-defined classes Null (for the null constant) Primitive data types Apex uses the same primitive data types as the web services API, most of which are similar to their Java counterparts. It may seem that Apex primitive variables are passed by value, but they actually use immutable references, similar to Java string behavior. The following are the primitive data types of Apex: Boolean: A value that can only be assigned true, false, or null. Date, Datetime, and Time: A Date value indicates particular day and not contains any information about time. A Datetime value indicates a particular day and time. A Time value indicates a particular time. Date, Datetime and Time values must always be created with a system static method. ID: 18 or 15 digits version. Integer, Long, Double, and Decimal: Integer is a 32-bit number that does not include decimal points. Integers have a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647. Long is a 64-bit number that does not include a decimal point. Use this datatype when you need a range of values wider than those provided by Integer. Double is a 64-bit number that includes a decimal point. Both Long and Doubles have a minimum value of -263 and a maximum value of 263-1. Decimal is a number that includes a decimal point. Decimal is an arbitrary precision number. String: String is any set of characters surrounded by single quotes. Strings have no limit on the number of characters that can be included. But the heap size limit is used to ensure to the particular Apex program do not grow too large. Blob: Blob is a collection of binary data stored as a single object. Blog can be accepted as Web service argument, stored in a document or sent as attachments. Object: This can be used as the base type for any other data type. Objects are supported for casting. 
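Before moving on to enums, the following short snippet shows sample declarations for most of the primitive types listed above; the variable names are made up for illustration, and the Date and Datetime values are created with the system static methods mentioned earlier.

// Sample declarations of Apex primitive data types
Boolean isApproved = true;
Date startDate = Date.today();            // created with a system static method
Datetime submittedAt = Datetime.now();
Time startTime = Time.newInstance(9, 30, 0, 0);
Integer leaveDays = 14;
Long bigNumber = 2147483648L;             // beyond the Integer range
Double ratio = 0.75;
Decimal taxFactor = 10.5;
String employeeName = 'Chamil';
Id currentUserId = UserInfo.getUserId();  // an 18-character ID value
Blob payload = Blob.valueOf(employeeName);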
Enum data types Enum (or enumerated list) is an abstract data type that stores one value of a finite set of specified identifiers. To define an Enum, use the enum keyword in the variable declaration and then define the list of values. You can define and use enum in the following way: Public enum Status {NEW, APPROVED, REJECTED, CANCELLED} The preceding enum has four values: NEW, APPROVED, REJECTED, CANCELLED. By creating this enum, you have created a new data type called Status that can be used as any other data type for variables, return types, and method arguments. Status leaveStatus = Status. NEW; Apex provides Enums for built-in concepts such as API error (System.StatusCode). System-defined enums cannot be used in web service methods. sObject data types sObjects (short for Salesforce Object) are standard or custom objects that store record data in the Force.com database. There is also an sObject data type in Apex that is the programmatic representation of these sObjects and their data in code. Developers refer to sObjects and their fields by their API names, which can be found in the schema browser. sObject and field references within Apex are validated against actual object and field names when code is written. Force.com tracks the objects and fields used within Apex to prevent users from making the following changes: Changing a field or object name Converting from one data type to another Deleting a field or object Organization-wide changes such as record sharing It is possible to declare variables of the generic sObject data type. The new operator still requires a concrete sObject type, so the instances are all specific sObjects. The following is a code example: sObject s = new Employee__c(); Casting will be applied as expected as each row knows its runtime type and can be cast back to that type. The following casting works fine: Employee__c e = (Employee__c)s; However, the following casting will generate a runtime exception for data type collision: Leave__c leave = (Leave__c)s; sObject super class only has the ID variable. So we can only access the ID via the sObject class. This method can also be used with collections and DML operations, although only concrete types can be instantiated. Collection will be described in the upcoming section and DML operations will be discussed in the Data manipulation section on the Force.com platform. Let's have a look at the following code: sObject[] sList = new Employee__c[0]; List<Employee__c> = (List<Employee__c>)sList; Database.insert(sList); Collection data types Collection data types store groups of elements of other primitive, composite, or collection data types. There are three different types of collections in Apex: List: A list is an ordered collection of primitives or composite data types distinguished by its index. Each element in a list contains two pieces of information; an index (this is an integer) and a value (the data). The index of the first element is zero. You can define an Apex list in the following way: List<DataType> listName = new List<DataType>(); List<String> sList = new List< String >(); There are built-in methods that can be used with lists adding/removing elements from the end of the list, getting/setting values at a particular index, and sizing the list by obtaining the number of elements. A full set of list methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_list.htm. 
The Apex list is defined in the following way: List<String> sList = new List< String >(); sList.add('string1'); sList.add('string2'); sList.add('string3'); sList.add('string4'); Integer sListSize = sList.size(); // this will return the   value as 4 sList.get(3); //This method will return the value as   "string4" Apex allows developers familiar with the standard array syntax to use that interchangeably with the list syntax. The main difference is the use of square brackets, which is shown in the following code: String[] sList = new String[4]; sList [0] = 'string1'; sList [1] = 'string2'; sList [2] = 'string3'; sList [3] = 'string4'; Integer sListSize = sList.size(); // this will return the   value as 4 Lists, as well as maps, can be nested up to five levels deep. Therefore, you can create a list of lists in the following way: List<List<String>> nestedList = new List<List<String>> (); Set: A set is an unordered collection of data of one primitive data type or sObjects that must have unique values. The Set methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex230/Content/apex_methods_system_set.htm. Similar to the declaration of List, you can define a Set in the following way: Set<DataType> setName = new Set<DataType>(); Set<String> setName = new Set<String>(); There are built-in methods for sets, including add/remove elements to/from the set, check whether the set contains certain elements, and the size of the set. Map: A map is an unordered collection of unique keys of one primitive data type and their corresponding values. The Map methods are listed in the following link at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_map.htm. You can define a Map in the following way: Map<PrimitiveKeyDataType, DataType> = mapName = new   Map<PrimitiveKeyDataType, DataType>(); Map<Integer, String> mapName = new Map<Integer, String>(); Map<Integer, List<String>> sMap = new Map<Integer,   List<String>>(); Maps are often used to map IDs to sObjects. There are built-in methods that you can use with maps, including adding/removing elements on the map, getting values for a particular key, and checking whether the map contains certain keys. You can use these methods as follows: Map<Integer, String> sMap = new Map<Integer, String>(); sMap.put(1, 'string1'); // put key and values pair sMap.put(2, 'string2'); sMap.put(3, 'string3'); sMap.put(4, 'string4'); sMap.get(2); // retrieve the value of key 2 Apex logics and loops Like all programming languages, Apex language has the syntax to implement conditional logics (IF-THEN-ELSE) and loops (for, Do-while, while). The following table will explain the conditional logic and loops in Apex: IF Conditional IF statements in Apex are similar to Java. The IF-THEN statement is the most basic of all the control flow statements. It tells your program to execute a certain section of code only if a particular test evaluates to true. The IF-THEN-ELSE statement provides a secondary path of execution when an IF clause evaluates to false. 
if (Boolean_expression){ statement; statement; statement; statement;} else { statement; statement;} For There are three variations of the FOR loop in Apex, which are as follows: FOR(initialization;Boolean_exit_condition;increment) {     statement; }   FOR(variable : list_or_set) {     statement; }   FOR(variable : [inline_soql_query]) {     statement; } All loops allow for the following commands: break: This is used to exit the loop continue: This is used to skip to the next iteration of the loop While The while loop is similar, but the condition is checked before the first loop, as shown in the following code: while (Boolean_condition) { code_block; }; Do-While The do-while loop repeatedly executes as long as a particular Boolean condition remains true. The condition is not checked until after the first pass is executed, as shown in the following code: do { //code_block; } while (Boolean_condition); Summary In this article, you have learned to develop custom coding in the Force.com platform, including the Apex classes and triggers. And you learned two query languages in the Force.com platform. Resources for Article: Further resources on this subject: Force.com: Data Management [article] Configuration in Salesforce CRM [article] Learning to Fly with Force.com [article]

Python Multimedia: Enhancing Images

Packt
20 Jan 2011
5 min read
Adjusting brightness and contrast

One often needs to tweak the brightness and contrast level of an image. For example, you may have a photograph that was taken with a basic camera, when there was insufficient light. How would you correct that digitally? The brightness adjustment helps make the image brighter or darker, whereas the contrast adjustment emphasizes differences between the color and brightness levels within the image data. The image can be made lighter or darker using the ImageEnhance module in PIL. The same module provides a class that can auto-contrast an image.

Time for action – adjusting brightness and contrast

Let's learn how to modify the image brightness and contrast. First, we will write code to adjust brightness. The ImageEnhance module makes our job easier by providing the Brightness class. Download image 0165_3_12_Before_BRIGHTENING.png and rename it to Before_BRIGHTENING.png. Use the following code:

1 import Image
2 import ImageEnhance
3
4 brightness = 3.0
5 peak = Image.open("C:\images\Before_BRIGHTENING.png")
6 enhancer = ImageEnhance.Brightness(peak)
7 bright = enhancer.enhance(brightness)
8 bright.save("C:\images\BRIGHTENED.png")
9 bright.show()

On line 6 in the code snippet, we created an instance of the class Brightness. It takes an Image instance as an argument. Line 7 creates a new image, bright, by using the specified brightness value. A value between 0.0 and 1.0 gives a darker image, whereas a value greater than 1.0 makes it brighter. A value of 1.0 keeps the brightness of the image unchanged. The original and resultant images are shown in the next illustration. Comparison of images before and after brightening.

Let's move on and adjust the contrast of the brightened image. We will append the following lines of code to the code snippet that brightened the image.

10 contrast = 1.3
11 enhancer = ImageEnhance.Contrast(bright)
12 con = enhancer.enhance(contrast)
13 con.save("C:\images\CONTRAST.png")
14 con.show()

Thus, similar to what we did to brighten the image, the image contrast was tweaked by using the ImageEnhance.Contrast class. A contrast value of 0.0 creates a black image. A value of 1.0 keeps the current contrast. The resultant image is compared with the original in the following illustration. The original image compared with the image displaying the increased contrast.

In the preceding code snippet, we were required to specify a contrast value. If you prefer to let PIL decide an appropriate contrast level, there is a way to do this. The ImageOps.autocontrast functionality sets an appropriate contrast level. This function normalizes the image contrast. Let's use this functionality now. Use the following code:

import ImageOps
bright = Image.open("C:\images\BRIGHTENED.png")
con = ImageOps.autocontrast(bright, cutoff = 0)
con.show()

The highlighted line in the code is where contrast is automatically set. The autocontrast function computes a histogram of the input image. The cutoff argument represents the percentage of lightest and darkest pixels to be trimmed from this histogram. The image is then remapped.

What just happened?

Using the classes and functionality in the ImageEnhance module, we learned how to increase or decrease the brightness and the contrast of the image. We also wrote code to auto-contrast an image using functionality provided in the ImageOps module.

Tweaking colors

Another useful operation performed on the image is adjusting the colors within an image. The image may contain one or more bands, containing image data.
The image mode contains information about the depth and type of the image pixel data. The most common modes we will use are RGB (true color, 3x8 bit pixel data), RGBA (true color with transparency mask, 4x8 bit), and L (black and white, 8 bit). In PIL, you can easily get information about the band data within an image. To get the name and number of bands, the getbands() method of the class Image can be used. Here, img is an instance of the class Image.

>>> img.getbands()
('R', 'G', 'B', 'A')

Time for action – swap colors within an image!

To understand some basic concepts, let's write code that just swaps the image band data. Download the image 0165_3_15_COLOR_TWEAK.png and rename it as COLOR_TWEAK.png. Type the following code:

1 import Image
2
3 img = Image.open("C:\images\COLOR_TWEAK.png")
4 img = img.convert('RGBA')
5 r, g, b, alpha = img.split()
6 img = Image.merge("RGBA", (g, r, b, alpha))
7 img.show()

Let's analyze this code now. On line 3, the Image instance is created as usual. Then, we change the mode of the image to RGBA. Here we should check whether the image already has that mode or whether this conversion is possible. You can add that check as an exercise! Next, the call to Image.split() creates separate instances of the Image class, each containing a single band of data. Thus, we have four Image instances: r, g, b, and alpha, corresponding to the red, green, and blue bands, and the alpha channel respectively. The code in line 6 does the main image processing. Image.merge takes the mode as the first argument, whereas the second argument is a tuple of Image instances containing the band information. All the bands are required to have the same size. As you can see, we have swapped the order of the band data in the Image instances r and g while specifying the second argument. The original and resultant images thus obtained are compared in the next illustration. The color of the flower now has a shade of green and the grass behind the flower is rendered with a shade of red. Please download and refer to the supplementary PDF file Chapter 3 Supplementary Material.pdf. The color images provided there will help you see the difference. Original (left) and the color swapped image (right).

What just happened?

We created an image with its band data swapped. We learned how to use PIL's Image.split() and Image.merge() to achieve this. However, this operation was performed on the whole image. In the next section, we will learn how to apply color changes to a specific color region.
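As a quick follow-up to the exercise suggested above, here is a small sketch of the band swap with the mode check added; it assumes the same COLOR_TWEAK.png file and the old-style PIL imports used throughout this article.

import Image

img = Image.open("C:\images\COLOR_TWEAK.png")
# Only convert when the image is not already in RGBA mode
if img.mode != 'RGBA':
    img = img.convert('RGBA')
r, g, b, alpha = img.split()
swapped = Image.merge("RGBA", (g, r, b, alpha))
swapped.show()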