
How-To Tutorials - Application Development

357 Articles

Interactive Documents

Packt
02 Nov 2015
5 min read
This article by Julian Hillebrand and Maximilian H. Nierhoff, authors of the book Mastering RStudio for R Development, covers the following topics:

- The two main ways to create interactive R Markdown documents
- Creating R Markdown and Shiny documents and presentations
- Using the ggvis package with R Markdown
- Embedding different types of interactive charts in documents
- Deploying interactive R Markdown documents

(For more resources related to this topic, see here.)

Creating interactive documents with R Markdown

In this article, we want to focus on the opportunities to create interactive documents with R Markdown and RStudio. This is, of course, particularly interesting for the readers of a document, since it enables them to interact with the document by changing chart types, parameters, values, or other similar things. In principle, there are two ways to make an R Markdown document interactive. Firstly, you can use the Shiny web application framework of RStudio, or secondly, there is the possibility of incorporating various interactive chart types by using corresponding packages.

Using R Markdown and Shiny

Besides building complete web applications, there is also the possibility of integrating entire Shiny applications into R Markdown documents and presentations. Since we have already learned all the basic functions of R Markdown, and the use and logic of Shiny, we will focus in the following lines on integrating a simple Shiny app into an R Markdown file. In order for Shiny and R Markdown to work together, the argument runtime: shiny must be added to the YAML header of the file. Of course, the RStudio IDE offers a quick way to create a new Shiny document or presentation. Click on the new file, choose R Markdown, and in the popup window, select Shiny from the left-hand side menu. In the Shiny menu, you can decide whether you want to start with a Shiny Document option or a Shiny Presentation option.

Shiny Document

After choosing the Shiny Document option, a prefilled .Rmd file opens. It differs from the familiar R Markdown interface in that there is a Run Document button instead of the Knit button and icon. The prefilled .Rmd file produces an R Markdown document with a working, interactive Shiny application. You can change the number of bins in the plot and also adjust the bandwidth. All these changes get rendered in real time, directly in your document.

Shiny Presentation

Also, when you click on Shiny Presentation in the selection menu, a prefilled .Rmd file opens. Because it is a presentation, the output format is changed to ioslides_presentation in the YAML header. The button in the code pane is now called Run Presentation. Otherwise, a Shiny Presentation looks just like the normal R Markdown presentations. The Shiny app gets embedded in a slide and you can again interact with the underlying data of the application.

Disassembling a Shiny R Markdown document

Of course, the question arises: how is it possible to embed a whole Shiny application into an R Markdown document without the two usual basic files, ui.R and server.R? In fact, the rmarkdown package creates an invisible server.R file by extracting the R code from the code chunks. Reactive elements get placed into the index.html file of the HTML output, while the whole R Markdown document acts as the ui.R file.

Embedding interactive charts into R Markdown

The next way is to embed interactive chart types into R Markdown documents by using various R packages that enable us to create interactive charts. Some packages we have already come across are as follows:

- ggvis
- rCharts
- googleVis
- dygraphs

Therefore, we will not introduce them again, but will instead introduce some more packages that enable us to build interactive charts. They are:

- threejs
- networkD3
- metricsgraphics
- plotly

Please keep in mind that the interactivity logically only works with the HTML output of R Markdown.

Using ggvis for interactive R Markdown documents

Broadly speaking, ggvis is the successor of the well-known graphics package, ggplot2. The interactivity options of ggvis, which are based on the reactive programming model of the Shiny framework, are also useful for creating interactive R Markdown documents. To create an interactive R Markdown document with ggvis, you need to click on the new file, then on R Markdown..., choose Shiny in the left menu of the new window, and finally, click on OK to create the document. As mentioned before, since ggvis uses the reactive model of Shiny, we need to create an R Markdown document with ggvis this way. If you want to include an interactive ggvis plot within a normal R Markdown file, make sure to include the runtime: shiny argument in the YAML header.

As shown in the sketch that follows, readers of this R Markdown document can easily adjust the bandwidth, and also the kernel model. The interactive controls are created with input_. In our example, we used the controls input_slider() and input_select(). For example, some of the other controls are input_checkbox(), input_numeric(), and so on. These controls have different arguments depending on the type of input. For both controls in our example, we used the label argument, which is just a text label shown next to the controls. Other arguments are id (a unique identifier for the assigned control) and map (a function that remaps the output).
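The chunk that produced the authors' density plot is not reproduced in this extract. The following is a minimal reconstruction of that kind of chunk (the dataset and the exact slider ranges are illustrative, not the authors' original code); it behaves as described when placed in an R Markdown document whose YAML header contains runtime: shiny:

    library(ggvis)

    # An interactive density plot: the bandwidth and the kernel can be changed
    # by the reader of the rendered HTML document.
    mtcars %>%
      ggvis(x = ~wt) %>%
      layer_densities(
        adjust = input_slider(0.1, 2, value = 1, step = 0.1,
                              label = "Bandwidth adjustment"),
        kernel = input_select(
          c("Gaussian" = "gaussian",
            "Epanechnikov" = "epanechnikov",
            "Rectangular" = "rectangular"),
          label = "Kernel")
      )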
Summary

In this article, we have learned the two main ways to create interactive R Markdown documents. On the one hand, there is the versatile, usable Shiny framework. This includes the built-in Shiny document and presentation options in RStudio, and also the ggvis package, which takes advantage of the Shiny framework to build its interactivity. On the other hand, we introduced several already known, and also some new, R packages that make it possible to create several different types of interactive charts. Most of them achieve this by binding R to existing JavaScript libraries.

Resources for Article:

Further resources on this subject:

- Jenkins Continuous Integration [article]
- Aspects of Data Manipulation in R [article]
- Find Friends on Facebook [article]


The NetBeans Developer's Life Cycle

Packt
08 Sep 2015
30 min read
In this article by David Salter, the author of Mastering NetBeans, we'll cover the following topics:

- Running applications
- Debugging applications
- Profiling applications
- Testing applications

On a day-to-day basis, developers spend much of their time writing and running applications. While writing applications, they typically debug, test, and profile them to ensure that they provide the best possible application to customers. Running, debugging, profiling, and testing are all integral parts of the development life cycle, and NetBeans provides excellent tooling to help us in all these areas.

(For more resources related to this topic, see here.)

Running applications

Executing applications from within NetBeans is as simple as either pressing the F6 button on the keyboard or selecting the Run menu item or Project Context menu item. Choosing either of these options will launch your application without specifying any additional Java command-line parameters, using the default platform JDK that NetBeans is currently using.

Sometimes we want to change the options that are used for launching applications. NetBeans allows these options to be easily specified by a project's properties. Right-clicking on a project in the Projects window and selecting the Properties menu option opens the Project Properties dialog. Selecting the Run category allows the configuration options to be defined for launching an application. From this dialog, we can define and select multiple run configurations for the project via the Configuration dropdown. Selecting the New… button to the right of the Configuration dropdown allows us to enter a name for a new configuration. Once a new configuration is created, it is automatically selected as the active configuration. The Delete button can be used for removing any unwanted configurations.

The preceding screenshot shows the Project Properties dialog for a standard Java project. Different project types (for example, web or mobile projects) have different options in the Project Properties window. As can be seen from the preceding Project Properties dialog, several pieces of information can be defined for a standard Java project, which together make up the launch configuration for a project:

- Runtime Platform: This option allows us to define which Java platform we will use when launching the application. From here, we can select from all the Java platforms that are configured within NetBeans. Selecting the Manage Platforms… button opens the Java Platform Manager dialog, allowing full configuration of the different Java platforms available (both Java Standard Edition and Remote Java Standard Edition). Selecting this button has the same effect as selecting the Tools and then Java Platforms menu options.
- Main Class: This option defines the main class that is used to launch the application. If the project has more than one main class, selecting the Browse… button will cause the Browse Main Classes dialog to be displayed, listing all the main classes defined in the project.
- Arguments: Different command-line arguments can be passed to the main class as defined in this option.
- Working Directory: This option allows the working directory for the application to be specified.
- VM Options: If different VM options (such as heap size) require setting, they can be specified by this option. Selecting the Customize button displays a dialog listing the different standard VM options available, which can be selected (ticked) as required. Custom VM properties can also be defined in the dialog. For more information on the different VM properties for Java, check out http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html. From here, the VM properties for Java 7 (and earlier versions) and Java 8 for Windows, Solaris, Linux, and Mac OS X can be referenced.
- Run with Java Web Start: Selecting this option allows the application to be executed using Java Web Start technologies. This option is only available if Web Start is enabled in the Application | Web Start category.
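Taken together, these fields describe the java command line that NetBeans assembles when the project is run. As a purely hypothetical example (the class name, arguments, and options below are illustrative and not taken from the book), a configuration with Main Class com.example.inventory.Main, Arguments --input data/orders.csv --verbose, and VM Options -Xmx512m -Denv=dev corresponds roughly to:

    java -Xmx512m -Denv=dev -cp <project classpath> com.example.inventory.Main --input data/orders.csv --verbose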
When running a web application, the project properties are different from those of a standalone Java application. In fact, the project properties for a Maven web application are different from those of a standard NetBeans web application. The following screenshot shows the properties for a Maven-based web application; as discussed previously, Maven is the standard project management tool for Java applications, and the recommended tool for creating and managing web applications.

Debugging applications

In the previous section, we saw how NetBeans provides easy-to-use features that allow developers to launch their applications, but it also provides more powerful additional features. The same is true for debugging applications. For simple debugging, NetBeans provides the standard facilities you would expect, such as stepping into or over methods, setting line breakpoints, and monitoring the values of variables. When debugging applications, NetBeans provides several different windows, enabling different types of information to be displayed and manipulated by the developer:

- Breakpoints
- Variables
- Call stack
- Loaded classes
- Sessions
- Threads
- Sources
- Debugging
- Analyze stack

All of these windows are accessible from the Window and then Debugging main menu within NetBeans.

Breakpoints

NetBeans provides a simple approach to set breakpoints and a more comprehensive approach that provides many more useful features. Breakpoints can be easily added into Java source code by clicking on the gutter on the left-hand side of a line of Java source code. When a breakpoint is set, a small pink square is shown in the gutter and the entire line of source code is also highlighted in the same color. Clicking on the breakpoint square in the gutter toggles the breakpoint on and off.

Once a breakpoint has been created, instead of removing it altogether, it can be disabled by right-clicking on the bookmark in the gutter and selecting the Breakpoint and then Enabled menu options. This has the effect of keeping the breakpoint within your codebase, but execution of the application does not stop when the breakpoint is hit.

Creating a simple breakpoint like this can be a very powerful way of debugging applications. It allows you to stop the execution of an application when a line of code is hit. If we want to add a bit more control onto a simple breakpoint, we can edit the breakpoint's properties by right-clicking on the breakpoint in the gutter and selecting the Breakpoint and then Properties menu options. This causes the Breakpoint Properties dialog to be displayed. In this dialog, we can see the line number and the file that the breakpoint belongs to. The line number can be edited to move the breakpoint if it has been created on the wrong line. However, what's more interesting is the conditions that we can apply to the breakpoint. The Condition entry allows us to define a condition that has to be met for the breakpoint to stop the code execution. For example, we can stop the code when the variable i is equal to 20 by adding the condition i==20. When we add conditions to a breakpoint, the breakpoint becomes known as a conditional breakpoint, and the icon in the gutter changes to a square with the lower-right quadrant removed.
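As a small, self-contained illustration (this class is hypothetical and not from the book), a conditional breakpoint is useful when a plain line breakpoint would stop far too often:

    public class LoopDemo {

        public static void main(String[] args) {
            int total = 0;
            for (int i = 0; i < 1000; i++) {
                // Set a breakpoint on the next line with the condition i == 20:
                // the debugger then suspends only on that iteration instead of
                // stopping on every single pass through the loop.
                total += i;
            }
            System.out.println(total);
        }
    }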
We can also cause the execution of the application to halt at a breakpoint when the breakpoint has been hit a certain number of times. The Break when hit count is condition can be set to Equal to, Greater than, or Multiple of to halt the execution of the application when the breakpoint has been hit the requisite number of times.

Finally, we can specify what actions occur when a breakpoint is hit. The Suspend dropdown allows us to define what threads are suspended when a breakpoint is hit. NetBeans can suspend All threads, Breakpoint thread, or no threads at all. The text that is displayed in the Output window can be defined via the Print Text edit box, and different breakpoint groups can be enabled or disabled via the Enable Group and Disable Group drop-down boxes. But what exactly is a breakpoint group? Simply put, a breakpoint group is a collection of breakpoints that can all be set or unset at the same time. It is a way of categorizing breakpoints into similar collections, for example, all the breakpoints in a particular file, or all the breakpoints relating to exceptions or unit tests.

Breakpoint groups are created in the Breakpoints window. This is accessible by selecting the Debugging and then Breakpoints menu options from within the main NetBeans Window menu. To create a new breakpoint group, simply right-click on an existing breakpoint in the Breakpoints window and select the Move Into Group… and then New… menu options. The Set the Name of Breakpoints Group dialog is displayed in which the name of the new breakpoint group can be entered. After creating a breakpoint group and assigning one or more breakpoints into it, the entire group of breakpoints can be enabled or disabled, or even deleted by right-clicking on the group in the Breakpoints window and selecting the appropriate option. Any newly created breakpoint groups will also be available in the Breakpoint Properties window.

So far, we've seen how to create breakpoints that stop on a single line of code, and also how to create conditional breakpoints so that we can cause an application to stop when certain conditions occur for a breakpoint. These are excellent techniques to help debug applications. NetBeans, however, also provides the ability to create more advanced breakpoints so that we can get even more control of when the execution of applications is halted by breakpoints. So, how do we create these breakpoints? These different types of breakpoints are all created from within the Breakpoints window by right-clicking and selecting the New Breakpoint… menu option. In the New Breakpoint dialog, we can create different types of breakpoints by selecting the appropriate entry from the Breakpoint Type drop-down list. The preceding screenshot shows an example of creating a Class breakpoint. The following types of breakpoints can be created:

- Class: This creates a breakpoint that halts execution when a class is loaded, unloaded, or either event occurs.
- Exception: This stops execution when the specified exception is caught, uncaught, or either event occurs.
- Field: This creates a breakpoint that halts execution when a field on a class is accessed, modified, or either event occurs.
- Line: This stops execution when the specified line of code is executed. It acts the same way as creating a breakpoint by clicking on the gutter of the Java source code editor window.
- Method: This creates a breakpoint that halts execution when a method is entered, exited, or when either event occurs. Optionally, the breakpoint can be created for all methods inside a specified class rather than a single method.
- Thread: This creates a breakpoint that stops execution when a thread is started, finished, or either event occurs.
- AWT/Swing Component: This creates a breakpoint that stops execution when a GUI component is accessed.

For each of these different types of breakpoints, conditions and actions can be specified in the same way as on simple line-based breakpoints.

The Variables debug window

The Variables debug window lists all the variables that are currently within the scope of execution of the application. This is therefore thread-specific, so if multiple threads are running at one time, the Variables window will only display variables in scope for the currently selected thread. In the Variables window, we can see the variables currently in scope for the selected thread, their type, and value. To display variables for a different thread to that currently selected, we must select an alternative thread via the Debugging window.

Using the triangle button to the left of each variable, we can expand variables and drill down into the properties within them. When a variable is a simple primitive (for example, integers or strings), we can modify it or any property within it by altering the value in the Value column in the Variables window. The variable's value will then be changed within the running application to the newly entered value.

By default, the Variables window shows three columns (Name, Type, and Value). We can modify which columns are visible by pressing the selection icon at the top-right of the window. Selecting this displays the Change Visible Columns dialog, from which we can select from the Name, String value, Type, and Value columns.

The Watches window

The Watches window allows us to see the contents of variables and expressions during a debugging session, as can be seen in the following screenshot. In this screenshot, we can see that the variable i is being displayed along with the expressions 10+10 and i+20. New expressions can be watched by clicking on the <Enter new watch> option or by right-clicking on the Java source code editor and selecting the New Watch… menu option.

Evaluating expressions

In addition to watching variables in a debugging session, NetBeans also provides the facility to evaluate expressions. Expressions can contain any Java code that is valid for the running scope of the application. So, for example, local variables, class variables, or new instances of classes can be evaluated. To evaluate variables, open the Evaluate Expression window by selecting the Debug and then Evaluate Expression menu options. Enter an expression to be evaluated in this window and press the Evaluate Code Fragment button at the bottom-right corner of the window. As a shortcut, pressing the Ctrl + Enter keys will also evaluate the code fragment. Once an expression has been evaluated, it is shown in the Evaluation Result window. The Evaluation Result window shows a history of each expression that has previously been evaluated. Expressions can be added to the list of watched variables by right-clicking on the expression and selecting the Create Fixed Watch expression.
The Call Stack window

The Call Stack window displays the call stack for the currently executing thread. The call stack is displayed from top to bottom with the currently executing frame at the top of the list. Double-clicking on any entry in the call stack opens up the corresponding source code in the Java editor within NetBeans. Right-clicking on an entry in the call stack displays a pop-up menu with the choice to:

- Make Current: This makes the selected thread the current thread
- Pop To Here: This pops the execution of the call stack to the selected location
- Go To Source: This displays the selected code within the Java source editor
- Copy Stack: This copies the stack trace to the clipboard for use elsewhere

When debugging, it can be useful to change the stack frame of the currently executing thread by selecting the Pop To Here option from within the stack trace window. Imagine the following code:

    // Get some magic
    int magic = getSomeMagicNumber();

    // Perform calculation
    performCalculation(magic);

During a debugging session, if after stepping over the getSomeMagicNumber() method, we decided that the method has not worked as expected, our course of action would probably be to debug into the getSomeMagicNumber() method. But, we've just stepped over the method, so what can we do? Well, we can stop the debugging session and start again, or repeat the operation that called this section of code and hope there are no changes to the application state that affect the method we want to debug. A better solution, however, would be to select the line of code that calls the getSomeMagicNumber() method and pop the stack frame using the Pop To Here option. This would have the effect of rewinding the code execution so that we can then step into the method and see what is happening inside it.

As well as using the Pop To Here functionality, NetBeans also offers several menu options for manipulating the stack frame, namely:

- Make Callee Current: This makes the callee of the current method the currently executing stack frame
- Make Caller Current: This makes the caller of the current method the currently executing stack frame
- Pop Topmost Call: This pops one stack frame, making the calling method the currently executing stack frame

When moving around the call stack using these techniques, any operations performed by the currently executing method are not undone. So, for example, strange results may be seen if global or class-based variables are altered within a method and then an entry is popped from the call stack. Popping entries in the call stack is safest when no state changes are made within a method. The call stack displayed in the Debugging window for each thread behaves in the same way as in the Call Stack window itself.

The Loaded Classes window

The Loaded Classes window displays a list of all the classes that are currently loaded, showing how many instances there are of each class as a number and as a percentage of the total number of classes loaded. Depending upon the number of external libraries (including the standard Java runtime libraries) being used, you may find it difficult to locate instances of your own classes in this window. Fortunately, the filter at the bottom of the window allows the list of classes to be filtered, based upon an entered string. So, for example, entering the filter String will show all the classes with String in the fully qualified class name that are currently loaded, including java.lang.String and java.lang.StringBuffer.
Since the filter works on the fully qualified name of a class, entering a package name will show all the classes listed in that package and subpackages. So, for example, entering a filter value of com.davidsalter.multithread would show only the classes listed in that package and subpackages.

The Sessions window

Within NetBeans, it is possible to perform multiple debugging sessions where either one project is being debugged multiple times, or more commonly, multiple projects are being debugged at the same time, where one is acting as a client application and the other is acting as a server application. The Sessions window displays a list of the currently running debug sessions, allowing the developer control over which one is the current session. Right-clicking on any of the sessions listed in the window provides the following options:

- Make Current: This makes the selected session the currently active debugging session
- Scope: This debugs the current thread or all the threads in the selected session
- Language: This option shows the language of the application being debugged—Java
- Finish: This finishes the selected debugging session
- Finish All: This finishes all the debugging sessions

The Sessions window shows the name of the debug session (for example, the main class being executed), its state (whether the application is Stopped or Running), and the language being debugged. Clicking the selection icon at the top-right of the window allows the user to choose which columns are displayed in the window. The default choice is to display all columns except for the Host Name column, which displays the name of the computer the session is running on.

The Threads window

The Threads window displays a hierarchical list of threads in use by the application currently being debugged. The current thread is displayed in bold. Double-clicking on any of the threads in the hierarchy makes the thread current. Similar to the Debugging window, threads can be made current, suspended, or interrupted by right-clicking on the thread and selecting the appropriate option. The default display for the Threads window is to show the thread's name and its state (Running, Waiting, or Sleeping). Clicking the selection icon at the top-right of the window allows the user to choose which columns are displayed in the window.

The Sources window

The Sources window simply lists all of the source roots that NetBeans considers for the selected project. These are the only locations that NetBeans will search when looking for source code while debugging an application. If you find that you are debugging an application and you cannot step into code, the most likely scenario is that the source root for the code you wish to debug is not included in the Sources window. To add a new source root, right-click in the Sources window and select the Add Source Root option.

The Debugging window

The Debugging window allows us to see which threads are running while debugging our application. This window is, therefore, particularly useful when debugging multithreaded applications. In this window, we can see the different threads that are running within our application. For each thread, we can see the name of the thread and the call stack leading to the breakpoint. The current thread is highlighted with a green band along the left-hand side edge of the window. Other threads created within our application are denoted with a yellow band along the left-hand side edge of the window. System threads are denoted with a gray band.
We can make any of the threads the current thread by right-clicking on it and selecting the Make Current menu option. When we do this, the Variables and Call Stack windows are updated to show new information for the selected thread. The current thread can also be selected by clicking on the Debug and then Set Current Thread… menu options. Upon selecting this, a list of running threads is shown from which the current thread can be selected.

Right-clicking on a thread and selecting the Resume option will cause the selected thread to continue execution until it hits another breakpoint. For each thread that is running, we can also Suspend, Interrupt, and Resume the thread by right-clicking on the thread and choosing the appropriate action. In each thread listing, the current method's call stack is displayed for each thread. This can be manipulated in the same way as from the Call Stack window.

When debugging multithreaded applications, new breakpoints can be hit within different threads at any time. NetBeans helps us with multithreaded debugging by not automatically switching the user interface to a different thread when a breakpoint is hit on the non-current thread. When a breakpoint is hit on any thread other than the current thread, an indication is displayed at the bottom of the Debugging window, stating New Breakpoint Hit (an example of this can be seen in the previous window). Clicking on the icon to the right of the message shows all the breakpoints that have been hit together with the thread name in which they occur. Selecting the alternate thread will cause the relevant breakpoint to be opened within NetBeans and highlighted in the appropriate Java source code file.

NetBeans provides several filters on the Debugging window so that we can show more/less information as appropriate. From left to right, these filter buttons allow us to:

- Show less (suspended and current threads only)
- Show thread groups
- Show suspend/resume table
- Show system threads
- Show monitors
- Show qualified names
- Sort by suspended/resumed state
- Sort by name
- Sort by default

Debugging multithreaded applications can be a lot easier if you give your threads names. The thread's name is displayed in the Debugging window, and it's a lot easier to understand what a thread with a proper name is doing as opposed to a thread called Thread-1.

Deadlock detection

When debugging multithreaded applications, one of the problems that we can see is that a deadlock occurs between executing threads. A deadlock occurs when two or more threads become blocked forever because they are both waiting for a shared resource to become available. In Java, this typically occurs when the synchronized keyword is used. NetBeans allows us to easily check for deadlocks using the Check for Deadlock tool available on the Debug menu. When a deadlock is detected using this tool, the state of the deadlocked threads is set to On Monitor in the Threads window. Additionally, the threads are marked as deadlocked in the Debugging window. Each deadlocked thread is displayed with a red band on the left-hand side border, and the Deadlock detected warning message is displayed.
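To make the scenario concrete, here is a small, self-contained example (not taken from the book) of the classic lock-ordering deadlock that the Check for Deadlock tool will flag: two threads acquire the same two monitors in opposite order.

    public class DeadlockDemo {

        private static final Object LOCK_A = new Object();
        private static final Object LOCK_B = new Object();

        public static void main(String[] args) {
            // Thread 1 locks A and then tries to lock B...
            new Thread(() -> {
                synchronized (LOCK_A) {
                    pause();
                    synchronized (LOCK_B) { System.out.println("thread 1 done"); }
                }
            }, "worker-1").start();

            // ...while thread 2 locks B and then tries to lock A. Each thread ends up
            // waiting forever for the monitor the other one already holds.
            new Thread(() -> {
                synchronized (LOCK_B) {
                    pause();
                    synchronized (LOCK_A) { System.out.println("thread 2 done"); }
                }
            }, "worker-2").start();
        }

        private static void pause() {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }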
Analyze Stack Window

When running an application within NetBeans, if an exception is thrown and not caught, the stack trace will be displayed in the Output window, allowing the developer to see exactly where errors have occurred. From the following screenshot, we can easily see that a NullPointerException was thrown from within the FaultyImplementation class in the doUntestedOperation() method at line 16. Looking before this in the stack trace (that is, the entry underneath), we can see that the doUntestedOperation() method was called from within the main() method of the Main class at line 21:
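The screenshot is not reproduced here; the Output window content it shows is an ordinary Java stack trace along the following lines (the thread name and exact formatting are illustrative, while the class names, methods, and line numbers come from the example described in the text):

    Exception in thread "main" java.lang.NullPointerException
        at FaultyImplementation.doUntestedOperation(FaultyImplementation.java:16)
        at Main.main(Main.java:21)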
In the preceding example, the FaultyImplementation class is defined as follows:

    public class FaultyImplementation {
        public void doUntestedOperation() {
            throw new NullPointerException();
        }
    }

Java provides an invaluable feature to developers, allowing us to easily see where exceptions are thrown and what the sequence of events was that led to the exception being thrown. NetBeans, however, enhances the display of the stack traces by making the class and line numbers clickable hyperlinks which, when clicked on, will navigate to the appropriate line in the code. This allows us to easily delve into a stack trace and view the code at all the levels of the stack trace. In the previous screenshot, we can click on the hyperlinks FaultyImplementation.java:16 and Main.java:21 to take us to the appropriate line in the appropriate Java file.

This is an excellent time-saving feature when developing applications, but what do we do when someone e-mails us a stack trace to look at an error in a production system? How do we manage stack traces that are stored in log files? Fortunately, NetBeans provides an easy way to allow a stack trace to be turned into clickable hyperlinks so that we can browse through the stack trace without running the application. To load and manage stack traces into NetBeans, the first step is to copy the stack trace onto the system clipboard. Once the stack trace has been copied onto the clipboard, Analyze Stack Window can be opened within NetBeans by selecting the Window and then Debugging and then Analyze Stack menu options (the default installation for NetBeans has no keyboard shortcut assigned to this operation).

Analyze Stack Window will default to showing the stack trace that is currently in the system clipboard. If no stack trace is in the clipboard, or any other data is in the system's clipboard, Analyze Stack Window will be displayed with no contents. To populate the window, copy a stack trace into the system's clipboard and select the Insert StackTrace From Clipboard button. Once a stack trace has been displayed in Analyze Stack Window, clicking on the hyperlinks in it will navigate to the appropriate location in the Java source files just as it does from the Output window when an exception is thrown from a running application. You can only navigate to source code from a stack trace if the project containing the relevant source code is open in the selected project group.

Variable formatters

When debugging an application, the NetBeans debugger can display the values of simple primitives in the Variables window. As we saw previously, we can also display the toString() representation of a variable if we select the appropriate columns to display in the window. Sometimes when debugging, however, the toString() representation is not the best way to display formatted information in the Variables window. In this window, we are showing the value of a complex number class that we have used in high school math.

The ComplexNumber class being debugged in this example is defined as:

    public class ComplexNumber {

        private double realPart;
        private double imaginaryPart;

        public ComplexNumber(double realPart, double imaginaryPart) {
            this.realPart = realPart;
            this.imaginaryPart = imaginaryPart;
        }

        @Override
        public String toString() {
            return "ComplexNumber{" + "realPart=" + realPart + ", imaginaryPart=" + imaginaryPart + '}';
        }

        // Getters and Setters omitted for brevity…
    }

Looking at this class, we can see that it essentially holds two members—realPart and imaginaryPart. The toString() method outputs a string, detailing the name of the object and its parameters, which would be very useful when writing ComplexNumbers to log files, for example:

    ComplexNumber{realPart=1.0, imaginaryPart=2.0}

When debugging, however, this is a fairly complicated string to look at and comprehend—particularly when there is a lot of debugging information being displayed. NetBeans, however, allows us to define custom formatters for classes that detail how an object will be displayed in the Variables window when being debugged. To define a custom formatter, select the Java option from the NetBeans Options dialog and then select the Java Debugger tab. From this tab, select the Variable Formatters category. On this screen, all the variable formatters that are defined within NetBeans are shown. To create a new variable formatter, select the Add… button to display the Add Variable Formatter dialog.

In the Add Variable Formatter dialog, we need to enter a Formatter Name and a list of Class types that NetBeans will apply the formatting to when displaying values in the debugger. To apply the formatter to multiple classes, enter the different classes, separated by commas. The value that is to be formatted is entered in the Value formatted as a result of code snippet field. This field takes the scope of the object being debugged. So, for example, to output the ComplexNumber class, we can enter the custom formatter as:

    "("+realPart+", "+imaginaryPart+"i)"

We can see that the formatter is built up from concatenating static strings and the values of the members realPart and imaginaryPart. We can see the results of debugging variables using custom formatters in the following screenshot:
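The screenshot is not reproduced in this extract, but the effect of the formatter is straightforward to describe: with it registered, a ComplexNumber appears in the Value column in the compact form produced by the snippet rather than via toString(). For the values used earlier (illustrative layout):

    // Variables window, Value column, for realPart = 1.0 and imaginaryPart = 2.0:
    //   with the default toString():   ComplexNumber{realPart=1.0, imaginaryPart=2.0}
    //   with the custom formatter:     (1.0, 2.0i)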
Debugging remote applications

The NetBeans debugger provides rapid access for debugging local applications that are executing within the same JVM as NetBeans. What happens, though, when we want to debug a remote application? A remote application isn't necessarily hosted on a separate server to your development machine, but is defined as any application running outside of the local JVM (that is, the one that is running NetBeans). To debug a remote application, the NetBeans debugger can be "attached" to the remote application. Then, to all intents and purposes, the application can be debugged in exactly the same way as a local application is debugged, as described in the previous sections of this article.

To attach to a remote application, select the Debug and then Attach Debugger… menu options. On the Attach dialog, the connector (SocketAttach, ProcessAttach, or SocketListen) must be specified to connect to the remote application. The appropriate connection details must then be entered to attach the debugger. For example, the process ID must be entered for the ProcessAttach connector, and the host and port must be specified for the SocketAttach connector.

Profiling applications

Learning how to debug applications is an essential technique in software development. Another essential technique that is often overlooked is profiling applications. Profiling applications involves measuring various metrics such as the amount of heap memory used or the number of loaded classes or running threads. By profiling applications, we can gain an understanding of what our applications are actually doing and, as such, we can optimize them and make them function better. NetBeans provides first-class profiling tools that are easy to use and provide results that are easy to interpret. The NetBeans profiler allows us to profile three specific areas:

- Application monitoring
- Performance monitoring
- Memory monitoring

Each of these monitoring tools is accessible from the Profile menu within NetBeans. To commence profiling, select the Profile and then Profile Project menu options. After instructing NetBeans to profile a project, the profiler starts by providing a choice of the type of profiling to perform.

Testing applications

Writing tests for applications is probably one of the most important aspects of modern software development. NetBeans provides the facility to write and run both JUnit and TestNG tests and test suites. In this section, we'll provide details on how NetBeans allows us to write and run these types of tests, but we'll assume that you have some knowledge of either JUnit or TestNG. TestNG support is provided by default with NetBeans; however, due to license concerns, JUnit may not have been installed when you installed NetBeans. If JUnit support is not installed, it can easily be added through the NetBeans Plugins system.

In a project, NetBeans creates two separate source roots: one for application sources and the other for test sources. This allows us to keep tests separate from application source code so that when we ship applications, we do not need to ship tests with them. This separation of application source code and test source code enables us to write better tests and have less coupling between tests and applications. The best situation is for the test source root to have a dependency on application classes and the application classes to have no dependency on the tests that we have written.

To write a test, we must first have a project. Any type of Java project can have tests added into it. To add tests into a project, we can use the New File wizard. In the Unit Tests category, there are templates for:

- JUnit Tests
- Tests for Existing Class (this is for JUnit tests)
- Test Suite (this is for JUnit tests)
- TestNG Test Case
- TestNG Test Suite

When creating classes for these types of tests, NetBeans provides the option to automatically generate code; this is usually a good starting point for writing classes. When executing tests, NetBeans iterates through the test packages in a project looking for the classes that are suffixed with the word Test. It is therefore essential to properly name tests to ensure they are executed correctly.
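As a minimal illustration of that naming convention (this class is hypothetical and not taken from the book), a JUnit 4 test that NetBeans will discover and run looks like the following; it lives under the test source root and its name ends with Test:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // The Test suffix is what NetBeans (and the underlying test runner) looks for.
    public class StringUtilsTest {

        @Test
        public void upperCasesGreeting() {
            assertEquals("HELLO", "hello".toUpperCase());
        }
    }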
Once tests have been created, NetBeans provides several methods for running the tests. The first method is to run all the tests that we have defined for an application. Selecting the Run and then Test Project menu options runs all of the tests defined for a project. The type of the project doesn't matter (Java SE or Java EE), nor whether a project uses Maven or the NetBeans project build system (Ant projects are even supported if they have a valid test activity); all tests for the project will be run when selecting this option.

After running the tests, the Test Results window will be displayed, highlighting successful tests in green and failed tests in red. In the Test Results window, we have several options to help categorize and manage the tests:

- Rerun all of the tests
- Rerun the failed tests
- Show only the passed tests
- Show only the failed tests
- Show errors
- Show aborted tests
- Show skipped tests
- Locate previous failure
- Locate next failure
- Always open test result window
- Always open test results in a new tab

The second option within NetBeans for running tests is to run all the tests in a package or class. To perform these operations, simply right-click on a package in the Projects window and select Test Package, or right-click on a Java class in the Projects window and select Test File. The final option for running tests is to execute a single test in a class. To perform this operation, right-click on a test in the Java source code editor and select the Run Focussed Test Method menu option.

After creating tests, how do we keep them up to date when we add new methods to application code? We can keep test suites up to date by manually editing them and adding new methods corresponding to new application code, or we can use the Create/Update Tests menu. Selecting the Tools and then Create/Update Tests menu options displays the Create Tests dialog that allows us to edit the existing test classes and add new methods into them, based upon the existing application classes.

Summary

In this article, we looked at the typical tasks that a developer does on a day-to-day basis when writing applications. We saw how NetBeans can help us to run and debug applications and how to profile applications and write tests for them. Finally, we took a brief look at TDD, and saw how the Red-Green-Refactor cycle can be used to help us develop more stable applications.

Resources for Article:

Further resources on this subject:

- Contexts and Dependency Injection in NetBeans [article]
- Creating a JSF composite component [article]
- Getting to know NetBeans [article]


Solving Many-to-Many Relationship in Dimensional Modeling

Packt
28 Dec 2009
3 min read
Bridge table solution

We will use a simplified book sales dimensional model as an example to demonstrate our bridge solution. Our book sales model initially has the SALES_FACT fact table and two dimension tables: BOOK_DIM and DATE_DIM. The granularity of the model is sales amount by date (daily) and by book.

Assume the BOOK_DIM table has five rows:

    BOOK_SK  TITLE                  AUTHOR
    1        Programming in Java    King, Chan
    3        Learning Python        Simpson
    2        Introduction to BIRT   Chan, Gupta, Simpson (Editor)
    4        Advanced Java          King, Chan
    5        Beginning XML          King, Chan (Foreword)

The DATE_DIM has three rows:

    DATE_SK  DT
    1        11-DEC-2009
    2        12-DEC-2009
    3        13-DEC-2009

And, the SALES_FACT table has ten rows:

    DATE_SK  BOOK_SK  SALES_AMT
    1        1        1000
    1        2        2000
    1        3        3000
    1        4        4000
    2        2        2500
    2        3        3500
    2        4        4500
    2        5        5500
    3        3        8000
    3        4        8500

Note that:

- The columns with _sk suffixes in the dimension tables are surrogate keys of the dimension tables; these surrogate keys relate the rows of the fact table to the rows in the dimension tables.
- King and Chan have collaborated in three books; two as co-authors, while in the “Beginning XML” Chan’s contribution is writing its foreword. Chan also co-authors the “Introduction to BIRT”.
- Simpson singly writes the “Learning Python” and is an editor for “Introduction to BIRT”.

To analyze daily book sales, you simply run a query, joining the dimension tables to the fact table:

    SELECT dt, title, sales_amt
    FROM sales_fact s, date_dim d, book_dim b
    WHERE s.date_sk = d.date_sk
    AND s.book_sk = b.book_sk

This query produces the result showing the daily sales amount of every book that has a sale:

    DT         TITLE                  SALES_AMT
    11-DEC-09  Advanced Java          4000
    11-DEC-09  Introduction to BIRT   2000
    11-DEC-09  Learning Python        3000
    11-DEC-09  Programming in Java    1000
    12-DEC-09  Advanced Java          4500
    12-DEC-09  Beginning XML          5500
    12-DEC-09  Introduction to BIRT   2500
    12-DEC-09  Learning Python        3500
    13-DEC-09  Advanced Java          8500
    13-DEC-09  Learning Python        8000

You will notice that the model does not allow you to readily analyze the sales by individual writer—the AUTHOR column is multi-value, not normalized, which violates the dimensional modeling rule. (We can resolve this by creating a view to “bundle” the AUTHOR_DIM with the SALES_FACT tables such that the AUTHOR_DIM table connects to the view as a normal dimension. We will create the view a bit later in this section.)

We can solve this issue by adding an AUTHOR_DIM and its AUTHOR_GROUP bridge table. The AUTHOR_DIM must contain all individual contributors, which you will have to extract from the books and enter into the table. In our example we have four authors:

    AUTHOR_SK  NAME
    1          Chan
    2          King
    3          Gupta
    4          Simpson

The weighting_factor column in the AUTHOR_GROUP bridge table contains a fractional numeric value that determines the contribution of an author to a book. Typically, the authors have equal contribution to the book they write, but you might want to have different weighting_factors for different roles; for example, an editor and a foreword writer have smaller weighting_factors than that of an author. The total weighting_factors for a book must always equal 1. The AUTHOR_GROUP bridge table has one surrogate key for every group of authors (a single author is considered a group that has one author only), and as many rows with that surrogate key as there are contributors in the group.
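The extract ends before the bridge table is populated and queried. Purely as an illustration of where the design is heading (the column names, the group assignments, and the query below are a sketch of mine, not taken from the book), the pieces could fit together like this, assuming BOOK_DIM is given an AUTHOR_GROUP_SK column that points at the bridge:

    -- Hypothetical bridge rows: group 1 = King and Chan as equal co-authors,
    -- group 2 = Chan, Gupta and Simpson, with a smaller weighting for the editor role.
    AUTHOR_GROUP_SK  AUTHOR_SK  WEIGHTING_FACTOR
    1                1          0.5
    1                2          0.5
    2                1          0.4
    2                3          0.4
    2                4          0.2

    -- Sales by individual author, weighted by each author's contribution:
    SELECT a.name, SUM(s.sales_amt * g.weighting_factor) AS weighted_sales
    FROM sales_fact s, book_dim b, author_group g, author_dim a
    WHERE s.book_sk = b.book_sk
    AND b.author_group_sk = g.author_group_sk
    AND g.author_sk = a.author_sk
    GROUP BY a.name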


Using Groovy Closures Instead of Template Method

Packt
23 Dec 2010
3 min read
Groovy for Domain-Specific Languages

- Extend and enhance your Java applications with Domain Specific Languages in Groovy
- Build your own Domain Specific Languages on top of Groovy
- Integrate your existing Java applications using Groovy-based Domain Specific Languages (DSLs)
- Develop a Groovy scripting interface to Twitter
- A step-by-step guide to building Groovy-based Domain Specific Languages that run seamlessly in the Java environment

(For more resources on Groovy, see here.)

Template Method Pattern Overview

The template method pattern often applies when you catch yourself thinking, "Well, I have a piece of code that I want to use again, but I can't use it 100%. I want to change a few lines to make it useful." In general, using this pattern involves creating an abstract class and varying its implementation through abstract hook methods. Subclasses implement these abstract hook methods to solve their specific problem. This approach is very effective and is used extensively in frameworks. However, closures provide an elegant solution.

Sample HttpBuilder Request

It is best to illustrate the closure approach with an example. Recently I was developing a consumer of REST webservices with HttpBuilder. With HttpBuilder, the client simply creates the class and issues an HTTP call. The framework waits for a response and provides hooks for processing. Many of the requests being made were very similar to one another; only the URI was different. In addition, each request needed to process the returned XML differently, as the XML received would vary. I wanted to use the same request code, but vary the XML processing. To summarize the problem:

- HttpBuilder code should be reused
- Different URIs should be sent out with the same HttpBuilder code
- Different XML should be processed with the same HttpBuilder code

Here is my first draft of HttpBuilder code. Note the call to convertXmlToCompanyDomainObject(xml).

    static String URI_PREFIX = '/someApp/restApi/'

    private List issueHttpBuilderRequest(RequestObject requestObj, String uriPath) {
        def http = new HTTPBuilder("http://localhost:8080/")
        def parsedObjectsFromXml = []

        http.request(Method.POST, ContentType.XML) { req ->
            // set uri path on the delegate
            uri.path = URI_PREFIX + uriPath
            uri.query = [company: requestObj.company,
                         date: requestObj.date,
                         type: requestObj.type]
            headers.'User-Agent' = 'Mozilla/5.0'

            // when response is a success, parse the gpath xml
            response.success = { resp, xml ->
                assert resp.statusLine.statusCode == 200
                // store the list
                parsedObjectsFromXml = convertXmlToCompanyDomainObject(xml)
            }

            // called only for a 404 (not found) status code:
            response.'404' = { resp ->
                log.info 'HTTP status code: 404 Not found'
            }
        }
        parsedObjectsFromXml
    }

    private List convertXmlToCompanyDomainObject(GPathResult xml) {
        def list = []
        // .. implementation to parse the xml and turn into objects
    }

As you can see, the URI is passed as a parameter to issueHttpBuilderRequest. This solves the problem of sending different URIs, but what about parsing the different XML formats that are returned?

Using Template Method Pattern

The following diagram illustrates applying the template method pattern to this problem. In summary, we need to move the issueHttpBuilderRequest code to an abstract class, and provide an abstract method convertXmlToDomainObjects(). Subclasses would provide the appropriate XML conversion implementation.
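The diagram itself is not included in this extract. As a rough Groovy sketch of the template method design just described (an illustration of mine, not the author's exact code), the abstract class owns the request logic and defers the XML conversion to subclasses:

    import groovyx.net.http.HTTPBuilder
    import groovyx.net.http.Method
    import groovyx.net.http.ContentType
    import groovy.util.slurpersupport.GPathResult

    abstract class AbstractHttpBuilderRequest {
        static String URI_PREFIX = '/someApp/restApi/'

        // The template method: the HTTP request logic stays fixed here...
        protected List issueHttpBuilderRequest(RequestObject requestObj, String uriPath) {
            def http = new HTTPBuilder("http://localhost:8080/")
            def parsedObjectsFromXml = []
            http.request(Method.POST, ContentType.XML) { req ->
                uri.path = URI_PREFIX + uriPath
                uri.query = [company: requestObj.company, date: requestObj.date, type: requestObj.type]
                headers.'User-Agent' = 'Mozilla/5.0'
                response.success = { resp, xml ->
                    parsedObjectsFromXml = convertXmlToDomainObjects(xml)
                }
            }
            parsedObjectsFromXml
        }

        // ...while the XML conversion is the abstract hook that each subclass implements.
        abstract List convertXmlToDomainObjects(GPathResult xml)
    }

    class CompanyRequest extends AbstractHttpBuilderRequest {
        List convertXmlToDomainObjects(GPathResult xml) {
            // parse the company-specific XML here
            xml.company.collect { node -> node.name.text() }
        }
    }

The closure-based alternative that this article advocates removes the need for one subclass per XML format by passing the conversion logic into issueHttpBuilderRequest as a closure parameter instead.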


Intro to Docker Part 2: Developing a Simple Application

Julian Gindi
30 Oct 2015
5 min read
In my last post, we learned some basic concepts related to Docker, and we learned a few basic operations for using Docker containers. In this post, we will develop a simple application using Docker. Along the way we will learn how to use Dockerfiles and Docker's amazing 'compose' feature to link multiple containers together.

The Application

We will be building a simple clone of Reddit's very awesome and mysterious "The Button". The application will be written in Python using the Flask web framework, and will use Redis as its storage backend. If you do not know Python or Flask, fear not, the code is very readable and you are not required to understand the code to follow along with the Docker-specific sections.

Getting Started

Before we get started, we need to create a few files and directories. First, go ahead and create a Dockerfile, requirements.txt (where we will specify project-specific dependencies), and a main app.py file.

    touch Dockerfile requirements.txt app.py

Next we will create a simple endpoint that will return "Hello World". Go ahead and edit your app.py file to look like such:

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def main():
        return 'Hello World!'

    if __name__ == '__main__':
        app.run('0.0.0.0')

Now we need to tell Docker how to build a container containing all the dependencies and code needed to run the app. Edit your Dockerfile to look like such:

    1 FROM python:2.7
    2
    3 RUN mkdir /code
    4 WORKDIR /code
    5
    6 ADD requirements.txt /code/
    7 RUN pip install -r requirements.txt
    8
    9 ADD . /code/
    10
    11 EXPOSE 5000

Before we move on, let me explain the basics of Dockerfiles.

Dockerfiles

A Dockerfile is a configuration file that specifies instructions on how to build a Docker container. I will now explain each line in the Dockerfile we just created (I will reference individual lines).

1: First, we specify the base image to use as our starting point (we discussed this in more detail in the last post). Here we are using a stock Python 2.7 image.

3: Dockerfiles can contain a few 'directives' that dictate certain behaviors. RUN is one such directive. It does exactly what it sounds like - runs an arbitrary command. Here, we are just making a working directory.

4: We use WORKDIR to specify the main working directory.

6: ADD allows us to selectively add files to the container during the build process. Currently, we just need to add the requirements file to tell Docker which dependencies to install.

7: We use the RUN command and Python's pip package manager to install all the needed dependencies.

9: Here we add all the code in our current directory into the Docker container (ADD . /code/).

11: Finally we 'expose' the ports we will need to access. In this case, Flask will run on port 5000.

Building from a Dockerfile

We are almost ready to build an image from this Dockerfile, but first, let's specify the dependencies we will need in our requirements.txt file.

    flask==0.10.1
    redis==2.10.3

I am using specific versions here to ensure that your version will work just like mine does. Once we have all these pieces in place we can build the image with the following command.

    > docker build -t thebutton .

We are 'tagging' this image with an easy-to-remember name that we can use later. Once the build completes, we can run the container and see our message in the browser.

    > docker run -p 5000:5000 thebutton python app.py

We are doing a few things here: The -p flag tells Docker to expose port 5000 inside the container, to port 5000 outside the container (this just makes our lives easier). Next we specify the image name (thebutton) and finally the command to run inside the container - python app.py - this will start the web server and serve our page.

We are almost ready to view our page but first, we must discover which IP the site will be on. For Linux-based systems, you can use localhost, but for Mac you will need to run boot2docker ip to discover the IP address to visit. Navigate to your site (in my case it's 192.168.59.103:5000) and you should see "Hello World" printed. Congrats! You are running your first site from inside a Docker container.

Putting it All Together

Now, we are going to complete the app, and use Docker Compose to launch the entire project for us. This will contain two containers, one running our Flask app, and another running an instance of Redis. The great thing about docker-compose is that you can specify a system to create, and how to connect all the containers. Let's create our docker-compose.yml file now.

    redis:
      image: redis:2.8.19
    web:
      build: .
      command: python app.py
      ports:
        - "5000:5000"
      links:
        - redis:redis

This file specifies the two containers (web and redis). It specifies how to build each container (we are just using the stock redis image here). The web container is a bit more involved since we first build the container using our local Dockerfile (the build: . line). Then we expose port 5000 and link the Redis container to our web container.

The awesome thing about linking containers this way is that the web container automatically gets information about the redis container. In this case, there is an /etc/hosts entry called 'redis' that points to our Redis container. This allows us to configure Redis easily in our application:

    db = redis.StrictRedis('redis', 6379, 0)
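The completed source is only linked, not listed, in this post. As a rough sketch of what the finished app.py could look like (the Redis key name and the response text below are guesses of mine, not the author's actual code), the connection above would be used along these lines:

    from flask import Flask
    import redis

    app = Flask(__name__)
    # 'redis' resolves to the linked Redis container created by docker-compose
    db = redis.StrictRedis('redis', 6379, 0)

    @app.route('/')
    def main():
        # increment and display a counter stored in Redis
        count = db.incr('button_presses')
        return 'The button has been pressed {0} times!'.format(count)

    if __name__ == '__main__':
        app.run('0.0.0.0')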
To test this all out, you can grab the complete source here. All you will need to run is docker-compose up and then access the site the same way we did before. Congratulations! You now have all the tools you need to use Docker effectively!

About the author

Julian Gindi is a Washington DC-based software and infrastructure engineer. He currently serves as Lead Infrastructure Engineer at iStrategylabs (isl.co), where he does everything from system administration to designing and building deployment systems. He is most passionate about Operating System design and implementation, and in his free time contributes to the Linux Kernel.

Machine learning and Python – the Dream Team

Packt
16 Feb 2016
3 min read
In this article we will be learning more about machine learning and Python. Machine learning (ML) teaches machines how to carry out tasks by themselves. It is that simple. The complexity comes with the details, and that is most likely the reason you are reading this article.

(For more resources related to this topic, see here.)

Machine learning and Python – the dream team

The goal of machine learning is to teach machines (software) to carry out tasks by providing them with a few examples (of how to do or not do a task). Let us assume that each morning when you turn on your computer, you perform the same task of moving e-mails around so that only those e-mails belonging to a particular topic end up in the same folder. After some time, you feel bored and think of automating this chore. One way would be to start analyzing your brain and writing down all the rules your brain processes while you are shuffling your e-mails. However, this will be quite cumbersome and always imperfect. While you will miss some rules, you will over-specify others.

A better and more future-proof way would be to automate this process by choosing a set of e-mail meta information and body/folder name pairs and letting an algorithm come up with the best rule set. The pairs would be your training data, and the resulting rule set (also called the model) could then be applied to future e-mails that we have not yet seen. This is machine learning in its simplest form.

Of course, machine learning (often also referred to as data mining or predictive analysis) is not a brand new field in itself. Quite the contrary, its success over recent years can be attributed to the pragmatic way it uses rock-solid techniques and insights from other successful fields, such as statistics. There, the purpose is for us humans to get insights into the data by learning more about the underlying patterns and relationships. As you read more and more about successful applications of machine learning (you have checked out kaggle.com already, haven't you?), you will see that applied statistics is a common field among machine learning experts.

As you will see later, the process of coming up with a decent ML approach is never a waterfall-like process. Instead, you will find yourself going back and forth in your analysis, trying out different versions of your input data on diverse sets of ML algorithms. It is this explorative nature that lends itself perfectly to Python. Being an interpreted, high-level programming language, Python may seem to have been designed specifically for this process of trying out different things. What is more, it does this very fast. Sure enough, it is slower than C or similar statically typed programming languages; nevertheless, with a myriad of easy-to-use libraries that are often written in C, you don't have to sacrifice speed for agility.

Summary

In this article we learned about machine learning and its goals. To learn more, please refer to the following books:

Building Machine Learning Systems with Python - Second Edition (https://www.packtpub.com/big-data-and-business-intelligence/building-machine-learning-systems-python-second-edition)

Expert Python Programming (https://www.packtpub.com/application-development/expert-python-programming)

Resources for Article:

Further resources on this subject:

Python Design Patterns in Depth – The Observer Pattern [article]

Python Design Patterns in Depth: The Factory Pattern [article]

Customizing IPython [article]

Preparing Your Forms Conversion Using Oracle Application Express (APEX)

Packt
09 Oct 2009
9 min read
When we are participating in a Forms Conversion project, it means we take the source files of our application, turn them into XML files, and upload them into the Forms Conversion part of APEX. This article describes what we do before uploading the XML files and starting our actual Forms Conversion project. Get your stuff! When we talk about source files, it would come in very handy if we got all the right versions of these files. In order to do the Conversion project, we need the same components that are used in the production environment. For these components, we have to get the source files of the components we want to convert. This means we have no use of the runtime files (Oracle Forms runtime files have the FMX extension). In other words, for Forms components we don't need the FMX files, but the FMB source files. These are a few ground rules we have to take into account: We need to make sure that there's no more development on the components we are about to use in our Conversion project. This is because we are now going to freeze our sources and new developments won't be taken into the Conversion project at all. So there will be no changes in our project. Put all the source files in a safe place. In other words, copy the latest version of your files into a new directory to which only you, and perhaps your teammates, have access. If the development team of your organization is using Oracle Designer for the development of its applications, it would be a good idea to generate all the modules from scratch. You would like to use the source on which the runtime files were created only if there are post-generation adjustments to be made in the modules. We need the following files for our Conversion project: Forms Modules: With the FMB extension Object Libraries: With the OLB extension Forms Menus: With the MMB extension PL/SQL Libraries: With the PLL extension Report Files: With the RDF, REX, or JSP extensions When we take these source files, we will be able to create all the necessary XML files that we need for the Forms Conversion project. Creating XML files To create XML files, we need three parts of the Oracle Developer Suite. All of these parts come with a normal 10g or 9i installation of the Developer Suite. These three parts are the Forms Builder, the Reports Builder, and the Forms2XML conversion tool. The Forms2XML conversion tool is the most extensive to understand and is used to create XML files from Form modules, Object Libraries, and Forms Menus. So, we will first discuss the possibilities of this tool. The Forms2XML conversion tool This tool can be used both from the command line as well as a Java applet. As the command line gives us all the possibilities we need and is as easy as a Java applet, we will only use the command-line possibilities. The frmf2xml command comes with some options. The following syntax is used while converting the Forms Modules, the Object Libraries, and the Forms Menus to an XML structure: frmf2xml [option] file [file] In other words, we follow these steps: We first type frmf2xml. Alternatively, we give one of the options with it. We tell the command which file we want to convert, and we have the option to address more than one file for the conversion to XML. We probably want to give the OVERWRITE=YES option with our command. This property ensures that the newly created XML file will overwrite the one with the same name in the directory where we are working. 
If another file with the same name already exists in this directory and we don't give the OVERWRITE option the value YES (the default is NO), the file will not be generated, as we see in the following screenshot: If there are any images used in modules (Forms or Object Libraries), the Forms2XML tool will refer to the image in the XML file created, and that file will create a TIF file of the image in the directory. The XML files that are created will be stored in the same directory from which we call the command. It will use the following syntax for the name of the XML file: formname.fmb will become formname_fmb.xml libraryname.olb will become libraryname_olb.xml menuname.mmb will become menuname_mmb.xml To convert the .FMB, OLB and, MMB files to XML, we need to do the following steps in the command prompt: Forms Modules The following steps are done in order to convert the .FMB file to XML: We will change the working directory to the directory that has the FMB file. In my example, I have stored all the files in a directory called summit directly under the C drive, like this: C:>cd C:summit Now, we can call the frmf2xml command to convert one of our Forms Modules to an XML file. In this example, we convert the orders.fmb module: C:summit>frmf2xml OVERWRITE=YES orders.fmb As we see in the following screenshot, this command creates an XML file called orders_fmb.xml in the working directory: Object Libraries To convert the .OLB file to XML, the following steps are needed: We first change the working directory to the directory that the OLB file is in. It's done like this: C:>cd C:summit Now we can call the frmf2xml command to convert one of our Object Libraries to an XML file. In this example, we convert the Form_Builder_II.olb library as follows: C:summit>frmf2xml OVERWRITE=YES Form_Builder_II.olb As we see in the following screenshot, the command creates an XML file calledForm_Builder_II_olb.xml and two images as .tif files in the working directory: Forms Menus To convert the MMB file to XML, we follow these steps: We change the working directory to the directory that the .MMB file is in, like this: C:>cd C:summit Now we can call the frmf2xml command to convert one of our Forms Menus to an XML file. In this example we convert the customers.mmb menu: C:summit>frmf2xml OVERWRITE=YES customers.mmb As we can see in the following screenshot, the command creates an XML file called customers_mmb.xml in the working directory: Report Files In our example, we will convert the Customers Report from a RDF file to an XML file. To do this, we follow the steps given here: We need to open the Employees.rdf file with Reports Builder. Open Reports Builder from your Start menu. If Reports Builder is opened, we need to cancel the wizard that asks us if we want to create a new report. After this we use Ctrl+O to open the Report File (or in the menu, File | Open) which we want to convert to XML as we see in the following screenshot: After this we use Shift+Ctrl+S (or in the File | Save As menu) to save the Report. We choose that we want to save the report as a Reports XML (*.xml) file and we click on the Save button as shown in the following screenshot: PL/SQL Libraries To convert PL/SQL Libraries to an XML format, it's easiest to use the convert command that comes with the Report Builder. With this command called rwconverter, we define the source type, call the source, and define the destination type and the destination. 
In this way, we have control over the way we need to convert the original .pll file to a .pld flat file that we can upload into the APEX Forms converter. It is possible to convert the PL/SQL Libraries with the convert option in Forms Builder, but, personally, I think this option works better. The rwconverter command has a few parameters we give with it to execute. They are given as follows: stype: This is the type of source file we need to convert. In our situation, this will be a .pll file and so the value we need to set is pllfile. source: This is the name of the source file, including the extension. In our case, it is wizard.pll. dtype: This is the file type we want to convert our source file to. In our case, it is a .pld file and so the value becomes pldfile. dest: This is the name, including the extension, of the destination file. In our case, it is wizard.pld. In our example, we use the wizard.pll file that's in our summit files directory. This PL/SQL Library that contains .pll files is normally used to create a PL/SQL Library in the Oracle Database. But this time, we will use it to create a .pld flat file that we will upload to APEX. First, we change the directory to work directory which has the original .pll file. In our case, the summit directory directly under the C drive, shown as follows: C:>cd C:summit After this, we call rwconverter in the command prompt as shown here: C:summit> rwconverter stype=pllfile source=wizard.pll dtype=pldfile dest=wizard.pld When you press the Enter key, a screen will open that is used to do the conversion. We will see that the types and names of the files are the same as we entered them in the command line. We need to click on the OK button to convert the file from .pll to .pld. The conversion may take a few seconds, but when the file has been converted we will see a confirmation that the conversion was successful. After this, we can look in the C:summit directory and we will see that a file wizard.pld is created.
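Converting the modules one at a time from the command prompt works fine for a handful of files, but a larger application may have dozens of FMB, OLB, and MMB files. The following is a small Python sketch, not part of the original toolset, that loops over a source directory and calls the same frmf2xml command for each file. It assumes Python is available on the machine with the Developer Suite, that frmf2xml can be resolved from the PATH, and that C:\summit is your frozen source directory; PL/SQL libraries still go through rwconverter as described above, because that tool opens its own conversion window.

    import glob
    import os
    import subprocess

    # Directory that holds the frozen source files (FMB, OLB, MMB); adjust as needed.
    SOURCE_DIR = r"C:\summit"

    def convert_all(source_dir):
        # Collect every Forms module, object library, and menu in the directory.
        files = []
        for pattern in ("*.fmb", "*.olb", "*.mmb"):
            files.extend(glob.glob(os.path.join(source_dir, pattern)))

        for path in files:
            # Same call the article uses interactively; OVERWRITE=YES replaces any
            # previously generated XML file of the same name.
            print("Converting", path)
            subprocess.run(
                "frmf2xml OVERWRITE=YES " + os.path.basename(path),
                cwd=source_dir,
                shell=True,   # frmf2xml may be a .bat wrapper, so go through the shell
                check=True,
            )

    if __name__ == "__main__":
        convert_all(SOURCE_DIR)

Run it once after freezing the sources and the generated XML files will sit next to the originals, ready to be uploaded to the Forms Conversion part of APEX.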

Microsoft LightSwitch: Querying and Filtering Data

Packt
16 Sep 2011
5 min read
  (For more resources on this topic, see here.)   Querying in LightSwitch The following figure is based on the one you may review on the link mentioned earlier and schematically summarizes the architectural details: Each entity set has a default All and Single as shown in the entity Category. All entity sets have a Save operation that saves the changes. As defined, the entity sets are queryable and therefore query operations on these sets are allowed and supported. A query (query operation) requests an entity set(s) with optional filtering and sorting as shown, for example, in a simple, filtered, and sorted query on the Category entity. Queries can be parameterized with one or more parameters returning single or multiple results (result sets). In addition to the defaults (for example, Category*(SELECT All) and Category), additional filtering and sorting predicates can be defined. Although queries are based on LINQ, all of the IQueryable LINQ operations are not supported. The query passes through the following steps (the pipeline) before the results are returned. Pre-processing CanExecute—called to determine if this operation may be called or not Executing—called before the query is processed Pre-process query expression—builds up the final query expression Execution—LightSwitch passes the query expression to the data provider for execution Post-processing Executed—after the query is processed but before returning the results ExecuteFailed—if the query operation failed   Querying a Single Entity We will start off creating a Visual Studio LightSwitch project LSQueries6 using the Visual Basic Template as shown (the same can be carried out with a C# template). We will attach this application to the SQL Server Express server's Northwind database and bring in the Products (table) entity. We will create a screen EditableProductList which brings up all the data in the Products entity as shown in the previous screenshot. The above screen was created using the Editable Grid Screen template as shown next with the source of data being the Products entity. We see that the EditableProductList screen is displaying all columns including those discontinued items and it is editable as seen by the controls on the displayed screen. This is equivalent to the SQL query, Select * from Products as far as display is concerned.   Filtering and sorting the data Often you do not need all the columns but only a few columns of importance for your immediate needs, which besides being sufficient, enormously reduces the cost of running a query. What do you do to achieve this? Of course, you filter the data by posing a query to the entity. Let us now say, we want products listing ProductID, ProductName excluding the discontinued items. We also need the list sorted. In SQL Syntax, this reduces to: SELECT [Product List].ProductID, [Product List].ProductNameFROM Products AS [Product List]WHERE ((([Product List].Discontinued) =0))ORDER BY [Product List].ProductName; This is a typical filtering of data followed by sorting the filtered data. Filtering the data In LightSwitch, this filtering is carried out as shown in the following steps: Click on Query menu item in the LSQueries Designer as shown: The Designer (short for Query Designer) pops-up as shown and the following changes are made in the IDE: A default Query1 gets added to the Products entity on which it is based as shown; the Query1 property window is displayed and the Query Designer window is displayed. 
Query1 can be renamed in its Properties window (this will be renamed as Product List). The query target is the Products table and the return type is Product. As you can see Microsoft has provided all the necessary basic querying in this designer. If the query has to be changed to something more complicated, the Edit Additional Query Code link can be clicked to access the ProductListDataService as shown: Well, this is not a SQL Query but a LINQ Query working in the IDE. We know that entities are not just for relational data and this makes perfect sense because of the known advantages of LINQ for queries (review the following link: http://msdn.microsoft.com/en-us/library/bb425822.aspx). One of those main advantages is that you can write the query inVB or C#, and the DataContext, the main player takes it to SQL and runs queries that SQL Databases understand. It's more like a language translation for queries with many more advantages than the one mentioned. Hover over Add Filter to review what this will do as shown: This control will add a new filter condition. Note that Query1 has been renamed (right-click on Query1 and choose Rename) to ProductList. Click on the Add Filter button. The Filter area changes to display the following: The first field in the entity will come up by default as shown for the filtered field for the 'Where' clause. The GUI is helping to build up "Where CategoryID = ". However, as you can see from the composite screenshot (four screens were integrated to create this screenshot) built from using all the drop-down options that you can indeed filter any of the columns and choose any of the built-in criteria. Depending on the choice, you can also add parameter(s) with this UI. For the particular SQL Query we started with, choose the drop-down as shown. Notice that LightSwitch was intelligent enough to get the right data type of value for the Boolean field Discontinued. You also have an icon (in red, left of Where) to click on should you desire to delete the query. Add a Search Data Screen using the previous query as the source by providing the following information to the screen designer (associating the ProductList query for the Screen Data). This screen when displayed shows all products not discontinued as shown. The Discontinued column has been dragged to the position shown in the displayed screen.  

Custom Coding with Apex

Packt
27 Apr 2015
18 min read
In this article by Chamil Madusanka, author of the book Learning Force.com Application Development, you will learn about the custom coding in Apex and also about triggers. We have used many declarative methods such as creating the object's structure, relationships, workflow rules, and approval process to develop the Force.com application. The declarative development method doesn't require any coding skill and specific Integrated Development Environment (IDE). This article will show you how to extend the declarative capabilities using custom coding of the Force.com platform. Apex controllers and Apex triggers will be explained with examples of the sample application. The Force.com platform query language and data manipulation language will be described with syntaxes and examples. At the end of the article, there will be a section to describe bulk data handling methods in Apex. This article covers the following topics: Introducing Apex Working with Apex (For more resources related to this topic, see here.) Introducing Apex Apex is the world's first on-demand programming language that allows developers to implement and execute business flows, business logic, and transactions on the Force.com platform. There are two types of Force.com application development methods: declarative developments and programmatic developments. Apex is categorized under the programmatic development method. Since Apex is a strongly-typed, object-based language, it is connected with data in the Force.com platform and data manipulation using the query language and the search language. The Apex language has the following features: Apex provides a lot of built-in support for the Force.com platform features such as: Data Manipulation Language (DML) with the built-in exception handling (DmlException) to manipulate the data during the execution of the business logic. Salesforce Object Query Language (SOQL) and Salesforce Object Search Language (SOSL) to query and retrieve the list of sObjects records. Bulk data processing on multiple records at a time. Apex allows handling errors and warning using an in-built error-handling mechanism. Apex has its own record-locking mechanism to prevent conflicts of record updates. Apex allows building custom public Force.com APIs from stored Apex methods. Apex runs in a multitenant environment. The Force.com platform has multitenant architecture. Therefore, the Apex runtime engine obeys the multitenant environment. It prevents monopolizing of shared resources using the guard with limits. If any particular Apex code violates the limits, error messages will be displayed. Apex is hosted in the Force.com platform. Therefore, the Force.com platform interprets, executes, and controls Apex. Automatically upgradable and versioned: Apex codes are stored as metadata in the platform. Therefore, they are automatically upgraded with the platform. You don't need to rewrite your code when the platform gets updated. Each code is saved with the current upgrade version. You can manually change the version. It is easy to maintain the Apex code with the versioned mechanism. Apex can be used easily. Apex is similar to Java syntax and variables. The syntaxes and semantics of Apex are easy to understand and write codes. Apex is a data-focused programming language. Apex is designed for multithreaded query and DML statements in a single execution context on the Force.com servers. Many developers can use database stored procedures to run multiple transaction statements on the database server. 
Apex is different from other databases when it comes to stored procedures; it doesn't attempt to provide general support for rendering elements in the user interface. The execution context is one of the key concepts in Apex programming. It influences every aspect of software development on the Force.com platform. Apex is a strongly-typed language that directly refers to schema objects and object fields. If there is any error, it fails the compilation. All the objects, fields, classes, and pages are stored in metadata after successful compilation. Easy to perform unit testing. Apex provides a built-in feature for unit testing and test execution with the code coverage. Apex allows developers to write the logic in two ways: As an Apex class: The developer can write classes in the Force.com platform using Apex code. An Apex class includes action methods which related to the logic implementation. An Apex class can be called from a trigger. A class can be associated with a Visualforce page (Visualforce Controllers/Extensions) or can act as a supporting class (WebService, Email-to-Apex service/Helper classes, Batch Apex, and Schedules). Therefore, Apex classes are explicitly called from different places on the Force.com platform. As a database trigger: A trigger is executed related to a particular database interaction of a Force.com object. For example, you can create a trigger on the Leave Type object that fires whenever the Leave Type record is inserted. Therefore, triggers are implicitly called from a database action. Apex is included in the Unlimited Edition, Developer Edition, Enterprise Edition, Database.com, and Performance Edition. The developer can write Apex classes or Apex triggers in a developer organization or a sandbox of a production organization. After you finish the development of the Apex code, you can deploy the particular Apex code to the production organization. Before you deploy the Apex code, you have to write test methods to cover the implemented Apex code. Apex code in the runtime environment You already know that Apex code is stored and executed on the Force.com platform. Apex code also has a compile time and a runtime. When you attempt to save an Apex code, it checks for errors, and if there are no errors, it saves with the compilation. The code is compiled into a set of instructions that are about to execute at runtime. Apex always adheres to built-in governor limits of the Force.com platform. These governor limits protect the multitenant environment from runaway processes. Apex code and unit testing Unit testing is important because it checks the code and executes the particular method or trigger for failures and exceptions during test execution. It provides a structured development environment. We gain two good requirements for this unit testing, namely, best practice for development and best practice for maintaining the Apex code. The Force.com platform forces you to cover the Apex code you implemented. Therefore, the Force.com platform ensures that you follow the best practices on the platform. Apex governors and limits Apex codes are executed on the Force.com multitenant infrastructure and the shared resources are used across all customers, partners, and developers. When we are writing custom code using Apex, it is important that the Apex code uses the shared resources efficiently. Apex governors are responsible for enforcing runtime limits set by Salesforce. It discontinues the misbehaviors of the particular Apex code. 
If the code exceeds a limit, a runtime exception is thrown that cannot be handled. This error will be seen by the end user. Limit warnings can be sent via e-mail, but they also appear in the logs. Governor limits are specific to a namespace, so AppExchange certified managed applications have their own set of limits, independent of the other applications running in the same organization. Therefore, the governor limits have their own scope. The limit scope will start from the beginning of the code execution. It will be run through the subsequent blocks of code until the particular code terminates. Apex code and security The Force.com platform has a component-based security, record-based security and rich security framework, including profiles, record ownership, and sharing. Normally, Apex codes are executed as a system mode (not as a user mode), which means the Apex code has access to all data and components. However, you can make the Apex class run in user mode by defining the Apex class with the sharing keyword. The with sharing/without sharing keywords are employed to designate that the sharing rules for the running user are considered for the particular Apex class. Use the with sharing keyword when declaring a class to enforce the sharing rules that apply to the current user. Use the without sharing keyword when declaring a class to ensure that the sharing rules for the current user are not enforced. For example, you may want to explicitly turn off sharing rule enforcement when a class acquires sharing rules after it is called from another class that is declared using with sharing. The profile also can maintain the permission for developing Apex code and accessing Apex classes. The author's Apex permission is required to develop Apex codes and we can limit the access of Apex classes through the profile by adding or removing the granted Apex classes. Although triggers are built using Apex code, the execution of triggers cannot be controlled by the user. They depend on the particular operation, and if the user has permission for the particular operation, then the trigger will be fired. Apex code and web services Like other programming languages, Apex supports communication with the outside world through web services. Apex methods can be exposed as a web service. Therefore, an external system can invoke the Apex web service to execute the particular logic. When you write a web service method, you must use the webservice keyword at the beginning of the method declaration. The variables can also be exposed with the webservice keyword. After you create the webservice method, you can generate the Web Service Definition Language (WSDL), which can be consumed by an external application. Apex supports both Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) web services. Apex and metadata Because Apex is a proprietary language, it is strongly typed to Salesforce metadata. The same sObject and fields that are created through the declarative setup menu can be referred to through Apex. Like other Force.com features, the system will provide an error if you try to delete an object or field that is used within Apex. Apex is not technically autoupgraded with each new Salesforce release, as it is saved with a specific version of the API. Therefore, Apex, like other Force.com features, will automatically work with future versions of Salesforce applications. Force.com application development tools use the metadata. 
Working with Apex Before you start coding with Apex, you need to learn a few basic things. Apex basics Apex has come up with a syntactical framework. Similar to Java, Apex is strongly typed and is an object-based language. If you have some experience with Java, it will be easy to understand Apex. The following table explains the similarities and differences between Apex and Java: Similarities Differences Both languages have classes, inheritance, polymorphism, and other common object oriented programming features Apex runs in a multitenant environment and is very controlled in its invocations and governor limits Both languages have extremely similar syntax and notations Apex is case sensitive Both languages are compiled, strongly-typed, and transactional Apex is on-demand and is compiled and executed in the cloud   Apex is not a general purpose programming language, but is instead a proprietary language used for specific business logic functions   Apex requires unit testing for deployment into a production environment This section will not discuss everything that is included in the Apex documentation from Salesforce, but it will cover topics that are essential for understanding concepts discussed in this article. With this basic knowledge of Apex, you can create Apex code in the Force.com platform. Apex data types In Apex classes and triggers, we use variables that contain data values. Variables must be bound to a data type and that particular variable can hold the values with the same data type. All variables and expressions have one of the following data types: Primitives Enums sObjects Collections An object created from the user or system-defined classes Null (for the null constant) Primitive data types Apex uses the same primitive data types as the web services API, most of which are similar to their Java counterparts. It may seem that Apex primitive variables are passed by value, but they actually use immutable references, similar to Java string behavior. The following are the primitive data types of Apex: Boolean: A value that can only be assigned true, false, or null. Date, Datetime, and Time: A Date value indicates particular day and not contains any information about time. A Datetime value indicates a particular day and time. A Time value indicates a particular time. Date, Datetime and Time values must always be created with a system static method. ID: 18 or 15 digits version. Integer, Long, Double, and Decimal: Integer is a 32-bit number that does not include decimal points. Integers have a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647. Long is a 64-bit number that does not include a decimal point. Use this datatype when you need a range of values wider than those provided by Integer. Double is a 64-bit number that includes a decimal point. Both Long and Doubles have a minimum value of -263 and a maximum value of 263-1. Decimal is a number that includes a decimal point. Decimal is an arbitrary precision number. String: String is any set of characters surrounded by single quotes. Strings have no limit on the number of characters that can be included. But the heap size limit is used to ensure to the particular Apex program do not grow too large. Blob: Blob is a collection of binary data stored as a single object. Blog can be accepted as Web service argument, stored in a document or sent as attachments. Object: This can be used as the base type for any other data type. Objects are supported for casting. 
Enum data types Enum (or enumerated list) is an abstract data type that stores one value of a finite set of specified identifiers. To define an Enum, use the enum keyword in the variable declaration and then define the list of values. You can define and use enum in the following way: Public enum Status {NEW, APPROVED, REJECTED, CANCELLED} The preceding enum has four values: NEW, APPROVED, REJECTED, CANCELLED. By creating this enum, you have created a new data type called Status that can be used as any other data type for variables, return types, and method arguments. Status leaveStatus = Status. NEW; Apex provides Enums for built-in concepts such as API error (System.StatusCode). System-defined enums cannot be used in web service methods. sObject data types sObjects (short for Salesforce Object) are standard or custom objects that store record data in the Force.com database. There is also an sObject data type in Apex that is the programmatic representation of these sObjects and their data in code. Developers refer to sObjects and their fields by their API names, which can be found in the schema browser. sObject and field references within Apex are validated against actual object and field names when code is written. Force.com tracks the objects and fields used within Apex to prevent users from making the following changes: Changing a field or object name Converting from one data type to another Deleting a field or object Organization-wide changes such as record sharing It is possible to declare variables of the generic sObject data type. The new operator still requires a concrete sObject type, so the instances are all specific sObjects. The following is a code example: sObject s = new Employee__c(); Casting will be applied as expected as each row knows its runtime type and can be cast back to that type. The following casting works fine: Employee__c e = (Employee__c)s; However, the following casting will generate a runtime exception for data type collision: Leave__c leave = (Leave__c)s; sObject super class only has the ID variable. So we can only access the ID via the sObject class. This method can also be used with collections and DML operations, although only concrete types can be instantiated. Collection will be described in the upcoming section and DML operations will be discussed in the Data manipulation section on the Force.com platform. Let's have a look at the following code: sObject[] sList = new Employee__c[0]; List<Employee__c> = (List<Employee__c>)sList; Database.insert(sList); Collection data types Collection data types store groups of elements of other primitive, composite, or collection data types. There are three different types of collections in Apex: List: A list is an ordered collection of primitives or composite data types distinguished by its index. Each element in a list contains two pieces of information; an index (this is an integer) and a value (the data). The index of the first element is zero. You can define an Apex list in the following way: List<DataType> listName = new List<DataType>(); List<String> sList = new List< String >(); There are built-in methods that can be used with lists adding/removing elements from the end of the list, getting/setting values at a particular index, and sizing the list by obtaining the number of elements. A full set of list methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_list.htm. 
The Apex list is defined in the following way: List<String> sList = new List< String >(); sList.add('string1'); sList.add('string2'); sList.add('string3'); sList.add('string4'); Integer sListSize = sList.size(); // this will return the   value as 4 sList.get(3); //This method will return the value as   "string4" Apex allows developers familiar with the standard array syntax to use that interchangeably with the list syntax. The main difference is the use of square brackets, which is shown in the following code: String[] sList = new String[4]; sList [0] = 'string1'; sList [1] = 'string2'; sList [2] = 'string3'; sList [3] = 'string4'; Integer sListSize = sList.size(); // this will return the   value as 4 Lists, as well as maps, can be nested up to five levels deep. Therefore, you can create a list of lists in the following way: List<List<String>> nestedList = new List<List<String>> (); Set: A set is an unordered collection of data of one primitive data type or sObjects that must have unique values. The Set methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex230/Content/apex_methods_system_set.htm. Similar to the declaration of List, you can define a Set in the following way: Set<DataType> setName = new Set<DataType>(); Set<String> setName = new Set<String>(); There are built-in methods for sets, including add/remove elements to/from the set, check whether the set contains certain elements, and the size of the set. Map: A map is an unordered collection of unique keys of one primitive data type and their corresponding values. The Map methods are listed in the following link at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_map.htm. You can define a Map in the following way: Map<PrimitiveKeyDataType, DataType> = mapName = new   Map<PrimitiveKeyDataType, DataType>(); Map<Integer, String> mapName = new Map<Integer, String>(); Map<Integer, List<String>> sMap = new Map<Integer,   List<String>>(); Maps are often used to map IDs to sObjects. There are built-in methods that you can use with maps, including adding/removing elements on the map, getting values for a particular key, and checking whether the map contains certain keys. You can use these methods as follows: Map<Integer, String> sMap = new Map<Integer, String>(); sMap.put(1, 'string1'); // put key and values pair sMap.put(2, 'string2'); sMap.put(3, 'string3'); sMap.put(4, 'string4'); sMap.get(2); // retrieve the value of key 2 Apex logics and loops Like all programming languages, Apex language has the syntax to implement conditional logics (IF-THEN-ELSE) and loops (for, Do-while, while). The following table will explain the conditional logic and loops in Apex: IF Conditional IF statements in Apex are similar to Java. The IF-THEN statement is the most basic of all the control flow statements. It tells your program to execute a certain section of code only if a particular test evaluates to true. The IF-THEN-ELSE statement provides a secondary path of execution when an IF clause evaluates to false. 
if (Boolean_expression){ statement; statement; statement; statement;} else { statement; statement;} For There are three variations of the FOR loop in Apex, which are as follows: FOR(initialization;Boolean_exit_condition;increment) {     statement; }   FOR(variable : list_or_set) {     statement; }   FOR(variable : [inline_soql_query]) {     statement; } All loops allow for the following commands: break: This is used to exit the loop continue: This is used to skip to the next iteration of the loop While The while loop is similar, but the condition is checked before the first loop, as shown in the following code: while (Boolean_condition) { code_block; }; Do-While The do-while loop repeatedly executes as long as a particular Boolean condition remains true. The condition is not checked until after the first pass is executed, as shown in the following code: do { //code_block; } while (Boolean_condition); Summary In this article, you have learned to develop custom coding in the Force.com platform, including the Apex classes and triggers. And you learned two query languages in the Force.com platform. Resources for Article: Further resources on this subject: Force.com: Data Management [article] Configuration in Salesforce CRM [article] Learning to Fly with Force.com [article]

Python Multimedia: Enhancing Images

Packt
20 Jan 2011
5 min read
Adjusting brightness and contrast

One often needs to tweak the brightness and contrast level of an image. For example, you may have a photograph that was taken with a basic camera when there was insufficient light. How would you correct that digitally? The brightness adjustment helps make the image brighter or darker, whereas the contrast adjustment emphasizes differences between the color and brightness levels within the image data. The image can be made lighter or darker using the ImageEnhance module in PIL. The same module provides a class that can auto-contrast an image.

Time for action – adjusting brightness and contrast

Let's learn how to modify the image brightness and contrast. First, we will write code to adjust brightness. The ImageEnhance module makes our job easier by providing the Brightness class.

Download image 0165_3_12_Before_BRIGHTENING.png and rename it to Before_BRIGHTENING.png. Use the following code:

    1 import Image
    2 import ImageEnhance
    3
    4 brightness = 3.0
    5 peak = Image.open("C:\images\Before_BRIGHTENING.png")
    6 enhancer = ImageEnhance.Brightness(peak)
    7 bright = enhancer.enhance(brightness)
    8 bright.save("C:\images\BRIGHTENED.png")
    9 bright.show()

On line 6 in the code snippet, we created an instance of the class Brightness. It takes an Image instance as an argument. Line 7 creates a new image, bright, by using the specified brightness value. A value between 0.0 and 1.0 gives a darker image, whereas a value greater than 1.0 makes it brighter. A value of 1.0 keeps the brightness of the image unchanged. The original and resultant images are shown in the next illustration.

Comparison of images before and after brightening.

Let's move on and adjust the contrast of the brightened image. We will append the following lines of code to the code snippet that brightened the image:

    10 contrast = 1.3
    11 enhancer = ImageEnhance.Contrast(bright)
    12 con = enhancer.enhance(contrast)
    13 con.save("C:\images\CONTRAST.png")
    14 con.show()

Thus, similar to what we did to brighten the image, the image contrast was tweaked by using the ImageEnhance.Contrast class. A contrast value of 0.0 creates a black image, while a value of 1.0 keeps the current contrast. The resultant image is compared with the original in the following illustration.

The original image compared with the image displaying the increased contrast.

In the preceding code snippet, we were required to specify a contrast value. If you prefer to let PIL decide an appropriate contrast level, there is a way to do this. The ImageOps.autocontrast functionality sets an appropriate contrast level. This function normalizes the image contrast. Let's use this functionality now. Use the following code:

    import ImageOps
    bright = Image.open("C:\images\BRIGHTENED.png")
    con = ImageOps.autocontrast(bright, cutoff = 0)
    con.show()

The highlighted line in the code is where the contrast is automatically set. The autocontrast function computes a histogram of the input image. The cutoff argument represents the percentage of the lightest and darkest pixels to be trimmed from this histogram. The image is then remapped.

What just happened?

Using the classes and functionality in the ImageEnhance module, we learned how to increase or decrease the brightness and the contrast of an image. We also wrote code to auto-contrast an image using functionality provided in the ImageOps module.

Tweaking colors

Another useful operation performed on an image is adjusting the colors within it. An image may contain one or more bands of image data. The image mode contains information about the depth and type of the image pixel data. The most common modes we will use are RGB (true color, 3x8-bit pixel data), RGBA (true color with transparency mask, 4x8-bit), and L (black and white, 8-bit). In PIL, you can easily get information about the band data within an image. To get the names and number of bands, the getbands() method of the class Image can be used. Here, img is an instance of the class Image:

    >>> img.getbands()
    ('R', 'G', 'B', 'A')

Time for action – swap colors within an image!

To understand some basic concepts, let's write code that just swaps the image band data. Download the image 0165_3_15_COLOR_TWEAK.png and rename it as COLOR_TWEAK.png. Type the following code:

    1 import Image
    2
    3 img = Image.open("C:\images\COLOR_TWEAK.png")
    4 img = img.convert('RGBA')
    5 r, g, b, alpha = img.split()
    6 img = Image.merge("RGBA", (g, r, b, alpha))
    7 img.show()

Let's analyze this code now. On line 3, the Image instance is created as usual. Then, on line 4, we change the mode of the image to RGBA. Here we should check whether the image already has that mode or whether this conversion is possible; you can add that check as an exercise! Next, the call to Image.split() creates separate instances of the Image class, each containing the data of a single band. Thus, we have four Image instances (r, g, b, and alpha) corresponding to the red, green, and blue bands, and the alpha channel, respectively. The code on line 6 does the main image processing. Image.merge takes the mode as its first argument, whereas the second argument is a tuple of Image instances containing the band data. All the bands are required to have the same size. As you can notice, we have swapped the order of the band data by exchanging the Image instances r and g while specifying the second argument. The original and resultant images thus obtained are compared in the next illustration. The color of the flower now has a shade of green, and the grass behind the flower is rendered with a shade of red. Please download and refer to the supplementary PDF file Chapter 3 Supplementary Material.pdf, where the color images are provided to help you see the difference.

Original (left) and the color-swapped image (right).

What just happened?

We created an image with its band data swapped, and learned how to use PIL's Image.split() and Image.merge() to achieve this. However, this operation was performed on the whole image. In the next section, we will learn how to apply color changes to a specific color region.
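As a bridge to the next section, here is a compact sketch that chains the operations covered above (brightening, contrast adjustment, auto-contrast, and a band swap) into one script. Note that it is written against the Pillow fork (from PIL import ...); with the original PIL installation you would keep the standalone import Image style used in the snippets above. The file paths and enhancement factors are placeholders of my own choosing.

    from PIL import Image, ImageEnhance, ImageOps

    # Placeholder paths; point these at your own image files.
    SRC = r"C:\images\Before_BRIGHTENING.png"
    DST = r"C:\images\ENHANCED.png"

    img = Image.open(SRC).convert("RGB")

    # Brighten first (values > 1.0 brighten, < 1.0 darken), as in the article.
    img = ImageEnhance.Brightness(img).enhance(1.5)

    # Then bump the contrast slightly.
    img = ImageEnhance.Contrast(img).enhance(1.3)

    # Alternatively, let PIL normalize the contrast itself, trimming the
    # brightest and darkest 2% of pixels from the histogram.
    auto = ImageOps.autocontrast(img, cutoff=2)

    # Swap the red and green bands, as in the color-tweaking example.
    r, g, b = auto.split()
    swapped = Image.merge("RGB", (g, r, b))

    swapped.save(DST)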

Data Storage in Force.com

Packt
09 Jan 2017
14 min read
In this article by Andrew Fawcett, author of the book Force.com Enterprise Architecture - Second Edition, we will discuss how it is important to consider your customers' storage needs and use cases around their data creation and consumption patterns early in the application design phase. This ensures that your object schema is the most optimum one with respect to large data volumes, data migration processes (inbound and outbound), and storage cost. In this article, we will extend the Custom Objects in the FormulaForce application as we explore how the platform stores and manages data. We will also explore the difference between your applications operational data and configuration data and the benefits of using Custom Metadata Types for configuration management and deployment. (For more resources related to this topic, see here.) You will obtain a good understanding of the types of storage provided and how the costs associated with each are calculated. It is also important to understand the options that are available when it comes to reusing or attempting to mirror the Standard Objects such as Account, Opportunity, or Product, which extend the discussion further into license cost considerations. You will also become aware of the options for standard and custom indexes over your application data. Finally, we will have some insight into new platform features for consuming external data storage from within the platform. In this article, we will cover the following topics: Mapping out end user storage requirements Understanding the different storage types Reusing existing Standard Objects Importing and exporting application data Options for replicating and archiving data External data sources Mapping out end user storage requirements During the initial requirements and design phase of your application, the best practice is to create user categorizations known as personas. Personas consider the users' typical skills, needs, and objectives. From this information, you should also start to extrapolate their data requirements, such as the data they are responsible for creating (either directly or indirectly, by running processes) and what data they need to consume (reporting). Once you have done this, try to provide an estimate of the number of records that they will create and/or consume per month. Share these personas and their data requirements with your executive sponsors, your market researchers, early adopters, and finally the whole development team so that they can keep them in mind and test against them as the application is developed. For example, in our FormulaForce application, it is likely that managers will create and consume data, whereas race strategists will mostly consume a lot of data. Administrators will also want to manage your applications configuration data. Finally, there will likely be a background process in the application, generating a lot of data, such as the process that records Race Data from the cars and drivers during the qualification stages and the race itself, such as sector (a designated portion of the track) times. You may want to capture your conclusions regarding personas and data requirements in a spreadsheet along with some formulas that help predict data storage requirements. This will help in the future as you discuss your application with Salesforce during the AppExchange Listing process and will be a useful tool during the sales cycle as prospective customers wish to know how to budget their storage costs with your application installed. 
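If you prefer a quick script to a spreadsheet, the sketch below shows one way to turn per-persona record estimates into a rough storage projection. The persona names echo the FormulaForce examples above, but the monthly record counts are invented for illustration, and the flat 2 KB per record figure is the general rule discussed in the next section (some Standard Objects are exceptions).

    # Rough data storage projection per persona; counts are illustrative only.
    KB_PER_RECORD = 2  # typical fixed record size used in storage calculations

    personas = {
        # persona name: estimated records created per month
        "Team manager": 500,
        "Race strategist": 50,
        "Race Data capture process": 200000,
    }

    def projected_storage_mb(months):
        # Total records created over the horizon, at 2 KB each, expressed in MB.
        total_records = sum(personas.values()) * months
        return total_records * KB_PER_RECORD / 1024.0

    for horizon in (1, 12, 36):
        print("%2d month(s): %.1f MB" % (horizon, projected_storage_mb(horizon)))

Sharing the output of something like this alongside your persona definitions makes the storage conversation with prospective customers much more concrete.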
Understanding the different storage types The storage used by your application records contributes to the most important part of the overall data storage allocation on the platform. There is also another type of storage used by the files uploaded or created on the platform. From the Storage Usage page under the Setup menu, you can see a summary of the records used, including those that reside in the Salesforce Standard Objects. Later in this article, we will create a Custom Metadata Type object to store configuration data. Storage consumed by this type of object is not reflected on the Storage Usage page and is managed and limited in a different way. The preceding page also shows which users are using the most amount of storage. In addition to the individual's User details page, you can also locate the Used Data Space and Used File Space fields; next to these are the links to view the users' data and file storage usage. The limit shown for each is based on a calculation between the minimum allocated data storage depending on the type of organization or the number of users multiplied by a certain number of MBs, which also depends on the organization type; whichever is greater becomes the limit. For full details of this, click on the Help for this Page link shown on the page. Data storage Unlike other database platforms, Salesforce typically uses a fixed 2 KB per record size as part of its storage usage calculations, regardless of the actual number of fields or the size of the data within them on each record. There are some exceptions to this rule, such as Campaigns that take up 8 KB and stored Email Messages use up the size of the contained e-mail, though all Custom Object records take up 2 KB. Note that this record size also applies even if the Custom Object uses large text area fields. File storage Salesforce has a growing number of ways to store file-based data, ranging from the historic Document tab, to the more sophisticated Content tab, to using the Files tab, and not to mention Attachments, which can be applied to your Custom Object records if enabled. Each has its own pros and cons for end users and file size limits that are well defined in the Salesforce documentation. From the perspective of application development, as with data storage, be aware of how much your application is generating on behalf of the user and give them a means to control and delete that information. In some cases, consider if the end user would be happy to have the option to recreate the file on demand (perhaps as a PDF) rather than always having the application to store it. Reusing the existing Standard Objects When designing your object model, a good knowledge of the existing Standard Objects and their features is the key to knowing when and when not to reference them. Keep in mind the following points when considering the use of Standard Objects: From a data storage perspective: Ignoring Standard Objects creates a potential data duplication and integration effort for your end users if they are already using similar Standard Objects as pre-existing Salesforce customers. Remember that adding additional custom fields to the Standard Objects via your package will not increase the data storage consumption for those objects. From a license cost perspective: Conversely, referencing some Standard Objects might cause additional license costs for your users, since not all are available to the users without additional licenses from Salesforce. 
Make sure that you understand the differences between Salesforce (CRM) and Salesforce Platform licenses with respect to the Standard Objects available. Currently, the Salesforce Platform license provides Accounts and Contacts; however, to use the Opportunity or Product objects, a Salesforce (CRM) license is needed by the user. Refer to the Salesforce documentation for the latest details on these. Use your user personas to define what Standard Objects your users use and reference them via lookups, Apex code, and Visualforce accordingly. You may wish to use extension packages and/or dynamic Apex and SOQL to make these kind of references optional. Since Developer Edition orgs have all these licenses and objects available (although in a limited quantity), make sure that you review your Package dependencies before clicking on the Upload button each time to check for unintentional references. Importing and exporting data Salesforce provides a number of its own tools for importing and exporting data as well as a number of third-party options based on the Salesforce APIs; these are listed on AppExchange. When importing records with other record relationships, it is not possible to predict and include the IDs of related records, such as the Season record ID when importing Race records; in this section, we will present a solution to this. Salesforce provides Data Import Wizard, which is available under the Setup menu. This tool only supports Custom Objects and Custom Settings. Custom Metadata Type records are essentially considered metadata by the platform, and as such, you can use packages, developer tools, and Change Sets to migrate these records between orgs. There is an open source CSV data loader for Custom Metadata Types at https://github.com/haripriyamurthy/CustomMetadataLoader. It is straightforward to import a CSV file with a list of race Season since this is a top-level object and has no other object dependencies. However, to import the Race information (which is a child object related to Season), the Season and Fasted Lap By record IDs are required, which will typically not be present in a Race import CSV file by default. Note that IDs are unique across the platform and cannot be shared between orgs. External ID fields help address this problem by allowing Salesforce to use the existing values of such fields as a secondary means to associate records being imported that need to reference parent or related records. All that is required is that the related record Name or, ideally, a unique external ID be included in the import data file. This CSV file includes three columns: Year, Name, and Fastest Lap By (of the driver who performed the fastest lap of that race, indicated by their Twitter handle). You may remember that a Driver record can also be identified by this since the field has been defined as an External ID field. Both the 2014 Season record and the Lewis Hamilton Driver record should already be present in your packaging org. Now, run Data Import Wizard and complete the settings as shown in the following screenshot: Next, complete the field mappings as shown in the following screenshot: Click on Start Import and then on OK to review the results once the data import has completed. You should find that four new Race records have been created under 2014 Season, with the Fasted Lap By field correctly associated with the Lewis Hamilton Driver record. 
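If you would rather script the import file than maintain it by hand, a few lines of Python can generate it. The sketch below writes the same three columns used in the walkthrough; the race names, the output file name, and the @driver_handle placeholder are illustrative and must be replaced with values that match the records in your org, in particular the Twitter handle stored as the External ID on the Driver record.

    import csv

    # Example rows for the Race import file. The Fastest Lap By value must match
    # the Twitter handle held in the Driver object's External ID field.
    rows = [
        {"Year": "2014", "Name": "Spa", "Fastest Lap By": "@driver_handle"},
        {"Year": "2014", "Name": "Monza", "Fastest Lap By": "@driver_handle"},
    ]

    with open("races.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["Year", "Name", "Fastest Lap By"])
        writer.writeheader()
        writer.writerows(rows)

The resulting file can then be loaded with Data Import Wizard or the Data Loader exactly as described above.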
Note that these tools will also stress your Apex Trigger code for volume, as they typically have bulk mode enabled and insert records in chunks of 200 records. Thus, it is recommended that you test your triggers to at least this level of record volumes.

Options for replicating and archiving data

Enterprise customers often have legacy and/or external systems that are still being used, or that they wish to phase out in the future. As such, they may have requirements to replicate aspects of the data stored on the Salesforce platform to another system. Likewise, in order to move unwanted data off the platform and manage their data storage costs, there is a need to archive data. The following platform and API facilities can help you and/or your customers build solutions to replicate or archive data; there are, of course, a number of AppExchange solutions that already provide applications built on these APIs:

Replication API: This API exists in both web service SOAP and Apex forms. It allows you to develop a scheduled process that queries the platform for any new, updated, or deleted records of a specific object within a given time period. The getUpdated and getDeleted API methods return only the IDs of the records, requiring you to use the conventional Salesforce APIs to query the remaining data for the replication. The frequency with which this API is called is important in order to avoid gaps. Refer to the Salesforce documentation for more details; a minimal client sketch is shown after this list.

Outbound Messaging: This feature offers a more real-time alternative to the Replication API. An outbound message event can be configured using the standard workflow feature of the platform. Once configured against a given object, this event provides a Web Service Definition Language (WSDL) file that describes a web service endpoint to be called when records are created and updated. It is the responsibility of a web service developer to create the endpoint based on this definition. Note that there is no provision for deletion with this option.

Bulk API: This API provides a means to move up to 5,000 chunks of Salesforce data (up to 10 MB or 10,000 records per chunk) per rolling 24-hour period. Salesforce and third-party data loader tools, including the Salesforce Data Loader tool, offer this as an option. It can also be used to delete records without them going into the Recycle Bin. This API is ideal for building solutions to archive data.

Heroku Connect: This is a seamless data synchronization solution between Salesforce and Heroku Postgres. For further information, refer to https://www.heroku.com/connect.
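The following is a minimal sketch of the kind of scheduled replication client described above. It assumes the Force.com Web Service Connector (WSC) library with generated partner API stubs (com.sforce.soap.partner) is on the classpath; the login endpoint, API version, credentials, and object name are placeholders, and you should check the method signatures generated for your own API version:

import java.util.Calendar;
import com.sforce.soap.partner.Connector;
import com.sforce.soap.partner.GetUpdatedResult;
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.ws.ConnectorConfig;

public class RaceReplicationJob {
    public static void main(String[] args) throws Exception {
        // Authenticate against the SOAP partner API (credentials are placeholders)
        ConnectorConfig config = new ConnectorConfig();
        config.setUsername("integration.user@example.com");
        config.setPassword("passwordAndSecurityToken");
        config.setAuthEndpoint("https://login.salesforce.com/services/Soap/u/36.0");
        PartnerConnection connection = Connector.newConnection(config);

        // Ask for the IDs of Race__c records changed in the last hour
        Calendar end = Calendar.getInstance();
        Calendar start = (Calendar) end.clone();
        start.add(Calendar.HOUR, -1);
        GetUpdatedResult updated = connection.getUpdated("Race__c", start, end);

        // Only IDs are returned; query the full rows with the regular APIs to replicate them
        for (String recordId : updated.getIds()) {
            System.out.println("Replicate record: " + recordId);
        }

        // Persist updated.getLatestDateCovered() as the start time of the next scheduled run
    }
}

Remember that the time windows used across successive calls to getUpdated and getDeleted must join up safely; otherwise, changes can be missed between runs.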
External data sources

One of the downsides of moving data off the platform in an archive use case, or of not being able to replicate data onto the platform, is that end users have to move between applications and logins to view it; this causes overhead because the process and the data are not connected. Salesforce Connect (previously known as Lightning Connect) is a chargeable add-on feature of the platform that provides the ability to surface external data within the Salesforce user interface via the so-called External Objects and External Data Sources configurations under Setup. These offer functionality similar to Custom Objects, such as List Views, Layouts, and Custom Buttons. Currently, Reports and Dashboards are not supported, though it is possible to build custom reporting solutions via Apex, Visualforce, or Lightning Components.

External Data Sources can be connected to existing OData-based endpoints and secured through OAuth or Basic Authentication. Alternatively, Apex provides a Connector API whereby developers can implement adapters to connect to other HTTP-based APIs. Depending on the capabilities of the associated External Data Source, users accessing External Objects through the data source can read, and even update, records through the standard Salesforce UIs, such as the Salesforce Mobile and desktop interfaces.

Summary

This article explored the declarative aspects of developing an application on the platform: how an application's data is stored, and how relational data integrity is enforced through the use of lookup field deletion constraints and unique fields.

Upload the latest version of the FormulaForce package and install it into your test org. The summary page shown during the installation of new and upgraded components should look something like the following screenshot. Note that the permission sets are upgraded during the install. Once you have installed the package in your test org, visit the Custom Metadata Types page under Setup and click on Manage Records next to the object. You will see that the records are shown as managed and cannot be deleted. Click on one of the records to see that the field values themselves also cannot be edited. This is the effect of the Field Manageability checkbox used when defining the fields. The Namespace Prefix shown here will differ from yours.

Try changing or adding Track Lap Time records in your packaging org, for example, by updating a track time on an existing record. Upload the package again and then upgrade your test org. You will see that the records are automatically updated. Conversely, any records you created in your test org will be retained between upgrades.

In this article, we have covered some major aspects of the platform with respect to packaging, platform alignment, and how your application data is stored, as well as the key aspects of your application's architecture.
Introduction to Enterprise Business Messages

Packt
27 Feb 2012
4 min read
Before we jump into the AIA Enterprise Business Message (EBM) standards, let us understand a little more about Business Messages. In general, a Business Message is information shared between people, organizations, systems, or processes; any information communicated to another party in a standard, understandable format is called a message. In the application integration world, various information-sharing approaches are followed, so we need not go through them all again here; in a service-oriented environment, however, message exchange between systems is the fundamental characteristic. A standard approach should be followed across an enterprise so that every existing or new business system can understand and follow the same uniform method. XML is a message format widely accepted by all technologies and tools, and the Oracle AIA framework provides a standard messaging format for sharing information between AIA components.

Overview of Enterprise Business Message (EBM)

Enterprise Business Messages (EBMs) are the business information exchanged between enterprise business systems as messages. EBMs define the elements that are used to form the messages in service-oriented operations. EBM payloads represent the specific content of an EBO that is required to perform a specific service. In an AIA infrastructure, EBMs are the messages exchanged between all components in the Canonical Layer. Enterprise Business Services (EBS) accept an EBM as the request message and respond with an EBM as the output payload. In an Application Business Connector Service (ABCS), the provider ABCS accepts messages in the EBM format and translates them into the provider application's Application Business Message (ABM) format. Conversely, the requester ABCS receives an ABM as the request message, transforms it into an EBM, and calls the EBS to submit the EBM message. The EBM is therefore the widely accepted message standard across AIA components. Context-oriented EBMs are built using a set of common components and EBO business components, and some EBMs may require more than one EBO to fulfill the business integration needs. The following diagram describes the role of an EBM in the AIA architecture:

EBM characteristics

The fundamentals of an EBM and its characteristics are as follows:

Each business service request and response should be represented in an EBM format using a unique combination of an action and an EBO instance.
One EBM can support only one action or verb.
The EBM component should import the common component to make use of metadata and data types across the EBM structure.
EBMs are application-independent. Any requester application that invokes Enterprise Business Services (EBS) through an ABCS should follow the EBM format standards when passing the payload in an integration.
The action embedded in the EBM is the only action that the sender or requester application can execute to perform the integration.
The action in the EBM may also carry additional data about what has to be done as part of service execution. For example, an update action may carry information about whether the system should send a notification after the successful execution of the update.
The information that exists in the EBM header is common to all EBMs; however, the information existing in the data area and the corresponding actions are specific to only one EBM.
EBM headers may carry tracking information, auditing information, source and target system information, and error-handling information.
EBM components do not rely on the underlying transport protocol; any service protocol, such as HTTP, HTTPS, SMTP, SOAP, or JMS, can carry EBM payload documents.

Exploring AIA EBMs

We explored the physical structure of the Oracle AIA EBO in the previous chapter; EBMs do not have a separate structure. EBMs are also part of the EBO's physical package structure, and every EBO is bound to an EBM. The following screenshot shows the physical structure of the EBM groups as directories:

As EBOs are grouped into packages based on the business model, EBMs are also part of that structure and can be located along with the EBO schema under the Core EBO package.
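To give a feel for what an EBM payload looks like on the wire, the following is a heavily simplified, illustrative skeleton. The element names and structure are condensed for readability and are not copied from the actual AIA XSDs, so treat this only as a sketch and refer to the EBM schemas shipped with your AIA installation for the exact definitions:

<QueryCustomerPartyEBM>
  <EBMHeader>
    <EBMID>unique message identifier</EBMID>
    <EBOName>CustomerPartyEBO</EBOName>
    <CreationDateTime>2012-02-27T10:15:00Z</CreationDateTime>
    <Sender>details of the requesting application</Sender>
    <Target>details of the providing application</Target>
  </EBMHeader>
  <DataArea>
    <Query>the action (verb) being requested</Query>
    <QueryCustomerParty>EBO business components carried by this action</QueryCustomerParty>
  </DataArea>
</QueryCustomerPartyEBM>

The header section is common to every EBM, while the data area and its verb are specific to that particular EBM, which mirrors the characteristics listed above.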
Setting up GlassFish for JMS and Working with Message Queues

Packt
30 Jul 2010
4 min read
Setting up GlassFish for JMS

Before we start writing code to take advantage of the JMS API, we need to configure some GlassFish resources. Specifically, we need to set up a JMS connection factory, a message queue, and a message topic.

Setting up a JMS connection factory

The easiest way to set up a JMS connection factory is via GlassFish's web console. The web console can be accessed by starting our domain, entering the following command in the command line:

asadmin start-domain domain1

Then point the browser to http://localhost:4848 and log in:

A connection factory can be added by expanding the Resources node in the tree at the left-hand side of the web console, expanding the JMS Resources node and clicking on the Connection Factories node, then clicking on the New... button in the main area of the web console. For our purposes, we can take most of the defaults. The only thing we need to do is enter a Pool Name and pick a Resource Type for our connection factory.

It is always a good idea to use a Pool Name starting with "jms/" when picking a name for JMS resources. This way, JMS resources can be easily identified when browsing a JNDI tree.

In the text field labeled Pool Name, enter jms/GlassFishBookConnectionFactory. Our code examples later in this article will use this JNDI name to obtain a reference to this connection factory. The Resource Type drop-down menu has three options:

javax.jms.TopicConnectionFactory - used to create a connection factory that creates JMS topics for JMS clients using the pub/sub messaging domain
javax.jms.QueueConnectionFactory - used to create a connection factory that creates JMS queues for JMS clients using the PTP messaging domain
javax.jms.ConnectionFactory - used to create a connection factory that creates either JMS topics or JMS queues

For our example, we will select javax.jms.ConnectionFactory. This way, we can use the same connection factory for all our examples, both those using the PTP messaging domain and those using the pub/sub messaging domain. After entering the Pool Name for our connection factory, selecting a connection factory type, and optionally entering a description for our connection factory, we must click on the OK button for the changes to take effect. We should then see our newly created connection factory listed in the main area of the GlassFish web console.

Setting up a JMS message queue

A JMS message queue can be added by expanding the Resources node in the tree at the left-hand side of the web console, expanding the JMS Resources node and clicking on the Destination Resources node, then clicking on the New... button in the main area of the web console. In our example, the JNDI name of the message queue is jms/GlassFishBookQueue. The resource type for message queues must be javax.jms.Queue. Additionally, a Physical Destination Name must be entered; in this example, we use GlassFishBookQueue as the value for this field. After clicking on the New... button, entering the appropriate information for our message queue, and clicking on the OK button, we should see the newly created queue:

Setting up a JMS message topic

Setting up a JMS message topic in GlassFish is very similar to setting up a message queue. In the GlassFish web console, expand the Resources node in the tree at the left-hand side, then expand the JMS Resources node and click on the Destination Resources node, then click on the New... button in the main area of the web console. Our examples will use a JNDI Name of jms/GlassFishBookTopic.
As this is a message topic, Resource Type must be javax.jms.Topic. The Description field is optional. The Physical Destination Name property is required. For our example, we will use GlassFishBookTopic as the value for this property. After clicking on the OK button, we can see our newly created message topic: Now that we have set up a connection factory, a message queue, and a message topic, we are ready to start writing code using the JMS API.
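As a first taste of that code, the following is a minimal sketch of a message sender that uses the resources we just configured. It assumes the class runs inside a component deployed to GlassFish (for example, a servlet or an EJB), so that the default InitialContext can resolve the JNDI names jms/GlassFishBookConnectionFactory and jms/GlassFishBookQueue without any extra configuration:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class MessageSender {

    public void sendMessage(String text) throws Exception {
        // Look up the resources we configured in the GlassFish web console
        InitialContext ctx = new InitialContext();
        ConnectionFactory connectionFactory =
                (ConnectionFactory) ctx.lookup("jms/GlassFishBookConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/GlassFishBookQueue");

        Connection connection = connectionFactory.createConnection();
        try {
            // Non-transacted session with automatic message acknowledgement
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage(text);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}

Because we chose the generic javax.jms.ConnectionFactory resource type, sending to jms/GlassFishBookTopic works the same way; you simply look up a Topic instead of a Queue and attach the producer to it.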
Team Project Setup

Packt
29 Oct 2015
5 min read
In this article by Tarun Arora and Ahmed Al-Asaad, authors of the book Microsoft Team Foundation Server Cookbook, we will cover:

Using Team Explorer to connect to Team Foundation Server
Creating and setting up a new Team Project for a scrum team

Microsoft Visual Studio Team Foundation Server 2015 is the backbone of Microsoft's Application Lifecycle Management (ALM) solution, providing core services such as version control, work item tracking, reporting, and automated builds. Team Foundation Server helps organizations communicate and collaborate more effectively throughout the process of designing, building, testing, and deploying software, ultimately leading to increased productivity and team output, improved quality, and greater visibility into the application life cycle. Team Foundation Server is Microsoft's on-premises application lifecycle management offering; Visual Studio Online is a collection of developer services that runs on Microsoft Azure and extends the development experience into the cloud. Team Foundation Server is very flexible and supports a broad spectrum of topologies: while a simple one-machine setup may suffice for small teams, you'll see enterprises using scaled-out, complex topologies. You'll find that TFS topologies are largely driven by the scope and scale of its use in an organization. Ensure that you have the details of your Team Foundation Server handy. Please refer to the Microsoft Visual Studio Licensing guide, available at the following link, to learn about the license requirements for Team Foundation Server: http://www.microsoft.com/en-gb/download/details.aspx?id=13350.

Using Team Explorer to connect to Team Foundation Server 2015 and GitHub

To build, plan, and track your software development project using Team Foundation Server, you'll need to connect the client of your choice to Team Foundation Server. In this recipe, we'll focus on connecting Team Explorer to Team Foundation Server.

Getting ready

Team Explorer is installed with each version of Visual Studio; alternatively, you can also install Team Explorer from the Microsoft download center as a standalone client. When you start Visual Studio for the first time, you'll be asked to sign in with a Microsoft account, such as Live or Hotmail, and provide some basic registration information. You should choose a Microsoft account that best represents you. If you already have an MSDN account, it's recommended that you sign in with its associated Microsoft account. If you don't have a Microsoft account, you can create one for free. Logging in is advisable, but not mandatory.

How to do it...

Open Visual Studio 2015.
Click on the Team toolbar and select Connect to Team Foundation Server.
In Team Explorer, click on Select Team Projects....
In the Connect to Team Foundation Server form, the dropdown shows a list of all the TFS servers you have connected to before. If you can't see the server you want to connect to in the dropdown, click on Servers to enter the details of the Team Foundation Server.
Click on Add and enter the details of your TFS server, then click on OK. You may be required to enter your login details to authenticate against the TFS server.
Click Close on the Add/Remove Team Foundation Server form. You should now see the details of your server in the Connect to Team Foundation Server form. At the bottom left, you'll see the user ID being used to establish this connection.
Click on Connect to complete the connection; this will navigate you back to Team Explorer. At this point, you have successfully connected Team Explorer to Team Foundation Server.

Creating and setting up a new Team Project for a Scrum Team

Software projects require a logical container to store project artifacts such as work items, code, builds, releases, and documents. In Team Foundation Server, this logical container is referred to as a Team Project. Different teams follow different processes to organize, manage, and track their work; Team Projects can be customized to specific project delivery frameworks through process templates. This recipe explains how to create a new Team Project for a scrum team in Team Foundation Server.

Getting ready

The new Team Project creation action needs to be triggered from Team Explorer. Before you can create a new Team Project, you need to connect Team Explorer to Team Foundation Server; the recipe Connecting Team Explorer to Team Foundation Server explains how this can be done. In order to create a new Team Project, you will need the following permissions:

You must have the Create new projects permission on the TFS application tier. This permission is granted by adding users to the Project Collection Administrators TFS group. The Team Foundation Administrators global group also includes this permission.
You must have the Create new team sites permission within the SharePoint site collection that corresponds to the TFS team project collection. This permission is granted by adding the user to a SharePoint group with Full Control rights on the SharePoint site collection.
In order to use the SQL Server Reporting Services features, you must be a member of the Team Foundation Content Manager role in Reporting Services.

To verify whether you have the correct permissions, you can download the Team Foundation Server Administration Tool from CodePlex, available at https://tfsadmin.codeplex.com/. TFS Admin is an open source tool available under the Microsoft Public License (Ms-PL).

Summary

In this article, we have looked at setting up a Team Project in Team Foundation Server 2015. We started off by connecting Team Explorer to Team Foundation Server and GitHub, and then looked at creating a team project and setting up a scrum team.
Getting started with Kinect for Windows SDK Programming

Packt
20 Feb 2013
12 min read
System requirements for the Kinect for Windows SDK

While developing applications for any device using an SDK, compatibility plays a pivotal role. It is really important that your development environment fulfills the following set of requirements before you start working with the Kinect for Windows SDK.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

Supported operating systems

The Kinect for Windows SDK, as its name suggests, runs only on the Windows operating system. The following are the supported operating systems for development:

Windows 7
Windows Embedded 7
Windows 8

The Kinect for Windows sensor will also work on Windows operating systems running in a virtual machine such as Microsoft Hyper-V, VMware, and Parallels.

System configuration

The hardware requirements are not as stringent as the software requirements and can be met by most of the hardware available in the market. The following are the minimum configurations required for development with Kinect for Windows:

A 32-bit (x86) or 64-bit (x64) processor
Dual-core 2.66 GHz or faster processor
Dedicated USB 2.0 bus
2 GB RAM

The Kinect sensor

It goes without saying that you need a Kinect sensor for your development. You can use either the Kinect for Windows or the Kinect for Xbox sensor. Before choosing a sensor for your development, make sure you are clear about the limitations of the Kinect for Xbox sensor compared to the Kinect for Windows sensor, in terms of features, API support, and licensing mechanisms.

The Kinect for Windows sensor

By now, you are already familiar with the Kinect for Windows sensor and its different components. The Kinect for Windows sensor comes with an external power supply, which supplies the additional power, and a USB adapter to connect with the system. For the latest updates and availability of the Kinect for Windows sensor, you can refer to http://www.microsoft.com/en-us/kinectforwindows/site.

The Kinect for Xbox sensor

If you already have a Kinect sensor with your Xbox gaming console, you may use it for development. Similar to the Kinect for Windows sensor, you will require a separate power supply for the device so that it can power up the motor, camera, IR sensor, and so on. If you have bought a Kinect sensor bundled with an Xbox, you will need to buy the adapter/power supply separately; you can check out the external power supply adapter at http://www.microsoftstore.com. If you have bought the standalone Kinect for Xbox sensor, it will come with everything required to connect with a PC, including the external power cable.

Development tools and software

The following software is required for development with the Kinect SDK:

Microsoft Visual Studio 2010 Express or higher editions of Visual Studio
Microsoft .NET Framework 4.0 or higher
Kinect for Windows SDK

The Kinect for Windows SDK uses the underlying speech capability of the Windows operating system to interact with the Kinect audio system. This requires the Microsoft Speech Platform Server Runtime, the Microsoft Speech Platform SDK, and a language pack to be installed in the system; these will be installed along with the Kinect for Windows SDK. The system requirements for the SDK may change with upcoming releases.
Refer to http://www.microsoft.com/en-us/kinectforwindows/ for the latest system requirements.

Evaluation of the Kinect for Windows SDK

Though the Kinect for Xbox sensor has been on the market for quite some time, the Kinect for Windows SDK is still fairly new to the developer community, and it's evolving. This book is written against Kinect for Windows SDK v1.6. The Kinect for Windows SDK was first launched as a Beta 1 version in June 2011, and after a thunderous response from the developer community, the updated Beta 2 version was launched in November 2011. Both Beta versions were non-commercial releases and were meant only for hobbyists. The first commercial version of the Kinect for Windows SDK (v1.0) was launched in February 2012, along with a separate commercial hardware device. SDK v1.5 was released in May 2012 with a bunch of new features, and the current version of the Kinect for Windows SDK (v1.6) was launched in October 2012. The hardware hasn't changed since its first release. The sensor was initially limited to only 12 countries across the globe; now the new Kinect for Windows sensor is available in more than 40 countries. The current version of the SDK also supports speech recognition for multiple languages.

Downloading the SDK and the Developer Toolkit

The Kinect SDK and the Developer Toolkit are available for free and can be downloaded from http://www.microsoft.com/en-us/kinectforwindows/. The installer will automatically install the 64- or 32-bit version of the SDK depending on your operating system. The Kinect for Windows Developer Toolkit is an additional installer that includes samples, tools, and other development extensions. The following diagram shows these components:

The main reason for keeping the SDK and the Developer Toolkit in two different installers is to allow the Developer Toolkit to be updated independently of the SDK. This helps keep the toolkit and samples updated and distributed to the community without changing or updating the actual SDK version. The version of the Kinect for Windows SDK and that of the Kinect for Windows Developer Toolkit might not be the same.

Installing Kinect for Windows SDK

Before running the installation, make sure of the following:

You have uninstalled all the previous versions of the Kinect for Windows SDK
The Kinect sensor is not plugged into the USB port on the computer
There are no Visual Studio instances currently running

Start the installer, which will display the End User License Agreement as its start screen. You need to read and accept this agreement to proceed with the installation. The following screenshot shows the license agreement:

Accept the agreement by selecting the checkbox and clicking on the Install option, which will do the rest of the job automatically. Before the installation, your computer may show the User Account Control (UAC) dialog to confirm that you are authorizing the installer to make changes to your computer. Once the installation is over, you will be notified and given an option to install the Developer Toolkit, as shown in the next screenshot:

Is it mandatory to uninstall the previous version of the SDK before installing the new one? The upgrade will happen without any hassle if your current version is a non-Beta version. As a standard procedure, it is always recommended to uninstall the older SDK prior to installing the newer one if your current version is a Beta version.
Installing the Developer Toolkit

If you didn't download the Developer Toolkit installer earlier, you can click on the Download the Developer Toolkit option of the SDK setup wizard (refer to the previous screenshot); this will first download and then install the Developer Toolkit setup. If you have already downloaded the setup, you can close the current window and execute the standalone Toolkit installer. The installation process for the Developer Toolkit is similar to the process for the SDK installer.

Components installed by the SDK and the Developer Toolkit

The Kinect for Windows SDK and the Kinect for Windows Developer Toolkit install the drivers, assemblies, samples, and documentation. To check which components are installed, you can navigate to the Programs and Features section of Control Panel and search for Kinect. The following screenshot shows the list of components that are installed with the SDK and Toolkit installers:

The default location for the SDK and Toolkit installation is %ProgramFiles%\Microsoft SDKs\Kinect.

Kinect management service

The Kinect for Windows SDK also installs Kinect Management, a Windows service that runs in the background while your PC communicates with the device. This service is responsible for the following tasks:

Listening to the Kinect device for any status changes
Interacting with the COM Server for any native support
Managing the Kinect audio components by interacting with the Windows audio drivers

You can view this service by launching Services from Control Panel | Administrative Tools, or by typing Services.msc in the Run command.

Is it necessary to install the Kinect SDK on end users' systems? The answer is no. When you install the Kinect for Windows SDK, it creates a Redist directory containing an installer that is designed to be deployed with Kinect applications and that installs the runtime and drivers. This is the path where you can find the setup file after the SDK is installed: %ProgramFiles%\Microsoft SDKs\Kinect\v1.6\Redist\KinectRuntime-v1.6-Setup.exe. This can be included with your application deployment package, and it will install only the runtime and the necessary drivers.

Connecting the sensor with the system

Now that we have installed the SDK, we can plug the Kinect device into the PC. The very first time you plug the device into your system, you will notice the LED indicator of the Kinect sensor turn solid red, and the system will start installing the drivers automatically. The default location of the drivers is %ProgramFiles%\Microsoft Kinect Drivers\Drivers. The drivers will be loaded only after the installation of the SDK is complete, and this is a one-time job. This process also checks for the latest Windows updates to the USB drivers, so it is good to be connected to the Internet if you don't have the latest Windows updates. The check marks in the dialog box shown in the next screenshot indicate successful driver software installation:

When the drivers have finished loading properly, the LED light on your Kinect sensor will turn solid green. This indicates that the device is functioning properly and can communicate with the PC.

Verifying the installed drivers

This is typically a troubleshooting procedure in case you encounter any problems. The verification procedure will also help you understand how the device drivers are installed within your system. In order to verify that the drivers are installed correctly, open Control Panel and select Device Manager, then look for the Kinect for Windows node.
You will find the Kinect for Windows Device option listed, as shown in the next screenshot:

Not able to view all the device components

At some point, it may happen that you can see only the Kinect for Windows Device node (refer to the following screenshot). At this point, it looks as if the device is ready; however, a careful examination reveals a small hitch. Let's see whether you can figure it out or not! The device is connected to the PC using the USB port, and the system prompt shows the device installed successfully, so where is the problem? The standard USB port the device is plugged into cannot by itself supply the power required by the camera, sensor, and motor. If, at this point, you plug it into the external power supply and turn the power on, you will find all the driver nodes in Device Manager loaded automatically. This is one of the most common mistakes made by developers. While working with the Kinect SDK, make sure your Kinect device is connected to the computer using the USB port and that the external power adapter is plugged in and turned on. The next picture shows the Kinect sensor with the USB connector and power adapter, and how they are used:

With the aid of the external power supply, the system will start searching for Windows updates for the USB components. Once everything is installed properly, the system will prompt you as shown in the next screenshot:

All the check marks in the screenshot indicate that the corresponding components are ready to be used, and the same components are also reflected in Device Manager. The messages prompting for the loading of drivers, and the installation prompts displayed while the drivers are loading, may vary depending upon the operating system you are using. You might also not receive any of them if the drivers are being loaded in the background.

Detecting the loaded drivers in Device Manager

Navigate to Control Panel | Device Manager, look for the Kinect for Windows node, and you will find the list of components detected; refer to the next screenshot:

The Kinect for Windows Audio Array Control option indicates the driver for the Kinect audio system, whereas the Kinect for Windows Camera option controls the camera sensor. The Kinect for Windows Security Control option is used to check whether the device being used is a genuine Microsoft Kinect for Windows device or not. In addition to appearing under the Kinect for Windows node, the Kinect for Windows USB Audio option should also appear under the Sound, video and game controllers node, as shown in the next screenshot:

Once the Kinect sensor is connected, you can identify the Kinect microphone like any other microphone connected to your PC in the audio device manager section. Look at the next screenshot:
Modal Close icon