The NetBeans Developer's Life Cycle

Packt
08 Sep 2015
30 min read
In this article by David Salter, the author of Mastering NetBeans, we'll cover the following topics: Running applications Debugging applications Profiling applications Testing applications On a day-to-day basis, developers spend much of their time writing and running applications. While writing applications, they typically debug, test, and profile them to ensure that they provide the best possible application to customers. Running, debugging, profiling, and testing are all integral parts of the development life cycle, and NetBeans provides excellent tooling to help us in all these areas. (For more resources related to this topic, see here.) Running applications Executing applications from within NetBeans is as simple as either pressing the F6 button on the keyboard or selecting the Run menu item or Project Context menu item. Choosing either of these options will launch your application without specifying any additional Java command-line parameters using the default platform JDK that NetBeans is currently using. Sometimes we want to change the options that are used for launching applications. NetBeans allows these options to be easily specified by a project's properties. Right-clicking on a project in the Projects window and selecting the Properties menu option opens the Project Properties dialog. Selecting the Run category allows the configuration options to be defined for launching an application. From this dialog, we can define and select multiple run configurations for the project via the Configuration dropdown. Selecting the New… button to the right of the Configuration dropdown allows us to enter a name for a new configuration. Once a new configuration is created, it is automatically selected as the active configuration. The Delete button can be used for removing any unwanted configurations. The preceding screenshot shows the Project Properties dialog for a standard Java project. Different project types (for example, web or mobile projects) have different options in the Project Properties window. As can be seen from the preceding Project Properties dialog, several pieces of information can be defined for a standard Java project, which together make up the launch configuration for a project: Runtime Platform: This option allows us to define which Java platform we will use when launching the application. From here, we can select from all the Java platforms that are configured within NetBeans. Selecting the Manage Platforms… button opens the Java Platform Manager dialog, allowing full configuration of the different Java platforms available (both Java Standard Edition and Remote Java Standard Edition). Selecting this button has the same effect as selecting the Tools and then Java Platforms menu options. Main Class: This option defines the main class that is used to launch the application. If the project has more than one main class, selecting the Browse… button will cause the Browse Main Classes dialog to be displayed, listing all the main classes defined in the project. Arguments: Different command-line arguments can be passed to the main class as defined in this option. Working Directory: This option allows the working directory for the application to be specified. VM Options: If different VM options (such as heap size) require setting, they can be specified by this option. Selecting the Customize button displays a dialog listing the different standard VM options available which can be selected (ticked) as required. Custom VM properties can also be defined in the dialog. 
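
To make the relationship between these Run configuration fields and a plain Java launch more concrete, here is a minimal sketch (the class, package, and the app.mode property are purely illustrative and are not part of the book's example project) showing where each configured value surfaces at runtime:

    package com.example;

    // Illustrative sketch only: a minimal main class showing where the Run
    // configuration values end up. "Arguments" arrive in args[], "VM Options"
    // such as -Xmx512m or -Dapp.mode=test are passed to the JVM (system
    // properties can then be read back), and "Working Directory" is visible
    // through the user.dir system property.
    public class Launcher {

        public static void main(String[] args) {
            for (String arg : args) {
                System.out.println("Argument: " + arg);
            }

            String mode = System.getProperty("app.mode", "production");
            System.out.println("Running in " + mode + " mode");

            System.out.println("Working directory: " + System.getProperty("user.dir"));
        }
    }

Launching the same class outside NetBeans with, say, java -Dapp.mode=test -Xmx512m com.example.Launcher first second mirrors what the Run configuration assembles for you.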
For more information on the different VM properties for Java, check out http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html. From here, the VM properties for Java 7 (and earlier versions) and Java 8 for Windows, Solaris, Linux, and Mac OS X can be referenced. Run with Java Web Start: Selecting this option allows the application to be executed using Java Web Start technologies. This option is only available if Web Start is enabled in the Application | Web Start category. When running a web application, the project properties are different from those of a standalone Java application. In fact, the project properties for a Maven web application are different from those of a standard NetBeans web application. The following screenshot shows the properties for a Maven-based web application; as discussed previously, Maven is the standard project management tool for Java applications, and the recommended tool for creating and managing web applications: Debugging applications In the previous section, we saw how NetBeans provides the easy-to-use features to allow developers to launch their applications, but then it also provides more powerful additional features. The same is true for debugging applications. For simple debugging, NetBeans provides the standard facilities you would expect, such as stepping into or over methods, setting line breakpoints, and monitoring the values of variables. When debugging applications, NetBeans provides several different windows, enabling different types of information to be displayed and manipulated by the developer: Breakpoints Variables Call stack Loaded classes Sessions Threads Sources Debugging Analyze stack All of these windows are accessible from the Window and then Debugging main menu within NetBeans. Breakpoints NetBeans provides a simple approach to set breakpoints and a more comprehensive approach that provides many more useful features. Breakpoints can be easily added into Java source code by clicking on the gutter on the left-hand side of a line of Java source code. When a breakpoint is set, a small pink square is shown in the gutter and the entire line of source code is also highlighted in the same color. Clicking on the breakpoint square in the gutter toggles the breakpoint on and off. Once a breakpoint has been created, instead of removing it altogether, it can be disabled by right-clicking on the bookmark in the gutter and selecting the Breakpoint and then Enabled menu options. This has the effect of keeping the breakpoint within your codebase, but execution of the application does not stop when the breakpoint is hit. Creating a simple breakpoint like this can be a very powerful way of debugging applications. It allows you to stop the execution of an application when a line of code is hit. If we want to add a bit more control onto a simple breakpoint, we can edit the breakpoint's properties by right-clicking on the breakpoint in the gutter and selecting the Breakpoint and then Properties menu options. This causes the Breakpoint Properties dialog to be displayed: In this dialog, we can see the line number and the file that the breakpoint belongs to. The line number can be edited to move the breakpoint if it has been created on the wrong line. However, what's more interesting is the conditions that we can apply to the breakpoint. The Condition entry allows us to define a condition that has to be met for the breakpoint to stop the code execution. 
For example, we can stop the code when the variable i is equal to 20 by adding a condition, i==20. When we add conditions to a breakpoint, the breakpoint becomes known as a conditional breakpoint, and the icon in the gutter changes to a square with the lower-right quadrant removed. We can also cause the execution of the application to halt at a breakpoint when the breakpoint has been hit a certain number of times. The Break when hit count is condition can be set to Equal to, Greater than, or Multiple of to halt the execution of the application when the breakpoint has been hit the requisite number of times. Finally, we can specify what actions occur when a breakpoint is hit. The Suspend dropdown allows us to define what threads are suspended when a breakpoint is hit. NetBeans can suspend All threads, Breakpoint thread, or no threads at all. The text that is displayed in the Output window can be defined via the Print Text edit box and different breakpoint groups can be enabled or disabled via the Enable Group and Disable Group drop-down boxes. But what exactly is a breakpoint group? Simply put, a breakpoint group is a collection of breakpoints that can all be set or unset at the same time. It is a way of categorizing breakpoints into similar collections, for example, all the breakpoints in a particular file, or all the breakpoints relating to exceptions or unit tests. Breakpoint groups are created in the Breakpoints window. This is accessible by selecting the Debugging and then Breakpoints menu options from within the main NetBeans Window menu. To create a new breakpoint group, simply right-click on an existing breakpoint in the Breakpoints window and select the Move Into Group… and then New… menu options. The Set the Name of Breakpoints Group dialog is displayed in which the name of the new breakpoint group can be entered. After creating a breakpoint group and assigning one or more breakpoints into it, the entire group of breakpoints can be enabled or disabled, or even deleted by right-clicking on the group in the Breakpoints window and selecting the appropriate option. Any newly created breakpoint groups will also be available in the Breakpoint Properties window. So far, we've seen how to create breakpoints that stop on a single line of code, and also how to create conditional breakpoints so that we can cause an application to stop when certain conditions occur for a breakpoint. These are excellent techniques to help debug applications. NetBeans, however, also provides the ability to create more advanced breakpoints so that we can get even more control of when the execution of applications is halted by breakpoints. So, how do we create these breakpoints? These different types of breakpoints are all created from in the Breakpoints window by right-clicking and selecting the New Breakpoint… menu option. In the New Breakpoint dialog, we can create different types of breakpoints by selecting the appropriate entry from the Breakpoint Type drop-down list. The preceding screenshot shows an example of creating a Class breakpoint. The following types of breakpoints can be created: Class: This creates a breakpoint that halts execution when a class is loaded, unloaded, or either event occurs. Exception: This stops execution when the specified exception is caught, uncaught, or either event occurs. Field: This creates a breakpoint that halts execution when a field on a class is accessed, modified, or either event occurs. Line: This stops execution when the specified line of code is executed. 
It acts the same way as creating a breakpoint by clicking on the gutter of the Java source code editor window. Method: This creates a breakpoint that halts execution when a method is entered, exited, or when either event occurs. Optionally, the breakpoint can be created for all methods inside a specified class rather than a single method. Thread: This creates a breakpoint that stops execution when a thread is started, finished, or either event occurs. AWT/Swing Component: This creates a breakpoint that stops execution when a GUI component is accessed. For each of these different types of breakpoints, conditions and actions can be specified in the same way as on simple line-based breakpoints. The Variables debug window The Variables debug window lists all the variables that are currently within  the scope of execution of the application. This is therefore thread-specific, so if multiple threads are running at one time, the Variables window will only display variables in scope for the currently selected thread. In the Variables window, we can see the variables currently in scope for the selected thread, their type, and value. To display variables for a different thread to that currently selected, we must select an alternative thread via the Debugging window. Using the triangle button to the left of each variable, we can expand variables and drill down into the properties within them. When a variable is a simple primitive (for example, integers or strings), we can modify it or any property within it by altering the value in the Value column in the Variables window. The variable's value will then be changed within the running application to the newly entered value. By default, the Variables window shows three columns (Name, Type, and Value). We can modify which columns are visible by pressing the selection icon () at the top-right of the window. Selecting this displays the Change Visible Columns dialog, from which we can select from the Name, String value, Type, and Value columns: The Watches window The Watches window allows us to see the contents of variables and expressions during a debugging session, as can be seen in the following screenshot: In this screenshot, we can see that the variable i is being displayed along with the expressions 10+10 and i+20. New expressions can be watched by clicking on the <Enter new watch> option or by right-clicking on the Java source code editor and selecting the New Watch… menu option. Evaluating expressions In addition to watching variables in a debugging session, NetBeans also provides the facility to evaluate expressions. Expressions can contain any Java code that is valid for the running scope of the application. So, for example, local variables, class variables, or new instances of classes can be evaluated. To evaluate variables, open the Evaluate Expression window by selecting the Debug and then Evaluate Expression menu options. Enter an expression to be evaluated in this window and press the Evaluate Code Fragment button at the bottom-right corner of the window. As a shortcut, pressing the Ctrl + Enter keys will also evaluate the code fragment. Once an expression has been evaluated, it is shown in the Evaluation Result window. The Evaluation Result window shows a history of each expression that has previously been evaluated. Expressions can be added to the list of watched variables by right-clicking on the expression and selecting the Create Fixed Watch expression. 
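
To tie the breakpoint and watch examples above together (the condition i == 20, and the watches 10+10 and i+20), here is a small, purely illustrative class they could be applied to; the class and method names are not taken from the book:

    package com.example;

    // Illustrative sketch: a loop to which the conditional breakpoint described
    // in the text can be applied. Placing a breakpoint on the accumulate() call
    // with the condition i==20 halts execution only on that iteration; watches
    // such as i+20 can then be inspected, and expressions such as total * 2 can
    // be run through Debug > Evaluate Expression.
    public class BreakpointDemo {

        public static void main(String[] args) {
            int total = 0;
            for (int i = 0; i < 100; i++) {
                total = accumulate(total, i);   // set the conditional breakpoint here
            }
            System.out.println("Total: " + total);
        }

        private static int accumulate(int total, int i) {
            return total + i;
        }
    }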

The Call Stack window

The Call Stack window displays the call stack for the currently executing thread. The call stack is displayed from top to bottom with the currently executing frame at the top of the list. Double-clicking on any entry in the call stack opens up the corresponding source code in the Java editor within NetBeans. Right-clicking on an entry in the call stack displays a pop-up menu with the choice to:

- Make Current: This makes the selected thread the current thread
- Pop To Here: This pops the execution of the call stack to the selected location
- Go To Source: This displays the selected code within the Java source editor
- Copy Stack: This copies the stack trace to the clipboard for use elsewhere

When debugging, it can be useful to change the stack frame of the currently executing thread by selecting the Pop To Here option from within the stack trace window. Imagine the following code:

    // Get some magic
    int magic = getSomeMagicNumber();

    // Perform calculation
    performCalculation(magic);

During a debugging session, if after stepping over the getSomeMagicNumber() method we decided that the method has not worked as expected, our course of action would probably be to debug into the getSomeMagicNumber() method. But, we've just stepped over the method, so what can we do? Well, we can stop the debugging session and start again, or repeat the operation that called this section of code and hope there are no changes to the application state that affect the method we want to debug. A better solution, however, would be to select the line of code that calls the getSomeMagicNumber() method and pop the stack frame using the Pop To Here option. This would have the effect of rewinding the code execution so that we can then step into the method and see what is happening inside it.

As well as using the Pop To Here functionality, NetBeans also offers several menu options for manipulating the stack frame, namely:

- Make Callee Current: This makes the callee of the current method the currently executing stack frame
- Make Caller Current: This makes the caller of the current method the currently executing stack frame
- Pop Topmost Call: This pops one stack frame, making the calling method the currently executing stack frame

When moving around the call stack using these techniques, any operations performed by the currently executing method are not undone. So, for example, strange results may be seen if global or class-based variables are altered within a method and then an entry is popped from the call stack. Popping entries in the call stack is safest when no state changes are made within a method. The call stack displayed in the Debugging window for each thread behaves in the same way as in the Call Stack window itself.

The Loaded Classes window

The Loaded Classes window displays a list of all the classes that are currently loaded, showing how many instances there are of each class as a number and as a percentage of the total number of classes loaded. Depending upon the number of external libraries (including the standard Java runtime libraries) being used, you may find it difficult to locate instances of your own classes in this window. Fortunately, the filter at the bottom of the window allows the list of classes to be filtered, based upon an entered string. So, for example, entering the filter String will show all the classes with String in the fully qualified class name that are currently loaded, including java.lang.String and java.lang.StringBuffer.
Since the filter works on the fully qualified name of a class, entering a package name will show all the classes listed in that package and subpackages. So, for example, entering a filter value as com.davidsalter.multithread would show only the classes listed in that package and subpackages. The Sessions window Within NetBeans, it is possible to perform multiple debugging sessions where either one project is being debugged multiple times, or more commonly, multiple projects are being debugged at the same time, where one is acting as a client application and the other is acting as a server application. The Sessions window displays a list of the currently running debug sessions, allowing the developer control over which one is the current session. Right-clicking on any of the sessions listed in the window provides the following options: Make Current: This makes the selected session the currently active debugging session Scope: This debugs the current thread or all the threads in the selected session Language: This options shows the language of the application being debugged—Java Finish: This finishes the selected debugging session Finish All: This finishes all the debugging sessions The Sessions window shows the name of the debug session (for example the main class being executed), its state (whether the application is Stopped or Running) and language being debugged. Clicking the selection icon () at the top-right of the window allows the user to choose which columns are displayed in the window. The default choice is to display all columns except for the Host Name column, which displays the name of the computer the session is running on. The Threads window The Threads window displays a hierarchical list of threads in use by the application currently being debugged. The current thread is displayed in bold. Double-clicking on any of the threads in the hierarchy makes the thread current. Similar to the Debugging window, threads can be made current, suspended, or interrupted by right-clicking on the thread and selecting the appropriate option. The default display for the Threads window is to show the thread's name and its state (Running, Waiting, or Sleeping). Clicking the selection icon () at the top-right of the window allows the user to choose which columns are displayed in the window. The Sources window The Sources window simply lists all of the source roots that NetBeans considers for the selected project. These are the only locations that NetBeans will search when looking for source code while debugging an application. If you find that you are debugging an application, and you cannot step into code, the most likely scenario is that the source root for the code you wish to debug is not included in the Sources window. To add a new source root, right-click in the Sources window and select the Add Source Root option. The Debugging window The Debugging window allows us to see which threads are running while debugging our application. This window is, therefore, particularly useful when debugging multithreaded applications. In this window, we can see the different threads that are running within our application. For each thread, we can see the name of the thread and the call stack leading to the breakpoint. The current thread is highlighted with a green band along the left-hand side edge of the window. Other threads created within our application are denoted with a yellow band along the left-hand side edge of the window. System threads are denoted with a gray band. 
We can make any of the threads the current thread by right-clicking on it and selecting the Make Current menu option. When we do this, the Variables and Call Stack windows are updated to show new information for the selected thread. The current thread can also be selected by clicking on the Debug and then Set Current Thread… menu options. Upon selecting this, a list of running threads is shown from which the current thread can be selected. Right-clicking on a thread and selecting the Resume option will cause the selected thread to continue execution until it hits another breakpoint. For each thread that is running, we can also Suspend, Interrupt, and Resume the thread by right-clicking on the thread and choosing the appropriate action. In each thread listing, the current methods call stack is displayed for each thread. This can be manipulated in the same way as from the Call Stack window. When debugging multithreaded applications, new breakpoints can be hit within different threads at any time. NetBeans helps us with multithreaded debugging by not automatically switching the user interface to a different thread when a breakpoint is hit on the non-current thread. When a breakpoint is hit on any thread other than the current thread, an indication is displayed at the bottom of the Debugging window, stating New Breakpoint Hit (an example of this can be seen in the previous window). Clicking on the icon to the right of the message shows all the breakpoints that have been hit together with the thread name in which they occur. Selecting the alternate thread will cause the relevant breakpoint to be opened within NetBeans and highlighted in the appropriate Java source code file. NetBeans provides several filters on the Debugging window so that we can show more/less information as appropriate. From left to right, these images allow us to: Show less (suspended and current threads only) Show thread groups Show suspend/resume table Show system threads Show monitors Show qualified names Sort by suspended/resumed state Sort by name Sort by default Debugging multithreaded applications can be a lot easier if you give your threads names. The thread's name is displayed in the Debugging window, and it's a lot easier to understand what a thread with a proper name is doing as opposed to a thread called Thread-1. Deadlock detection When debugging multithreaded applications, one of the problems that we can see is that a deadlock occurs between executing threads. A deadlock occurs when two or more threads become blocked forever because they are both waiting for a shared resource to become available. In Java, this typically occurs when the synchronized keyword is used. NetBeans allows us to easily check for deadlocks using the Check for Deadlock tool available on the Debug menu. When a deadlock is detected using this tool, the state of the deadlocked threads is set to On Monitor in the Threads window. Additionally, the threads are marked as deadlocked in the Debugging window. Each deadlocked thread is displayed with a red band on the left-hand side border and the Deadlock detected warning message is displayed. Analyze Stack Window When running an application within NetBeans, if an exception is thrown and not caught, the stack trace will be displayed in the Output window, allowing the developer to see exactly where errors have occurred. From the following screenshot, we can easily see that a NullPointerException was thrown from within the FaultyImplementation class in the doUntestedOperation() method at line 16. 
Looking before this in the stack trace (that is, the entry underneath), we can see that the doUntestedOperation() method was called from within the main() method of the Main class at line 21. In the preceding example, the FaultyImplementation class is defined as follows:

    public class FaultyImplementation {
        public void doUntestedOperation() {
            throw new NullPointerException();
        }
    }

Java provides an invaluable feature to developers, allowing us to easily see where exceptions are thrown and what the sequence of events was that led to the exception being thrown. NetBeans, however, enhances the display of stack traces by making the class and line numbers clickable hyperlinks which, when clicked on, will navigate to the appropriate line in the code. This allows us to easily delve into a stack trace and view the code at all the levels of the stack trace. In the previous screenshot, we can click on the hyperlinks FaultyImplementation.java:16 and Main.java:21 to take us to the appropriate line in the appropriate Java file.

This is an excellent time-saving feature when developing applications, but what do we do when someone e-mails us a stack trace to look at an error in a production system? How do we manage stack traces that are stored in log files? Fortunately, NetBeans provides an easy way to turn a stack trace into clickable hyperlinks so that we can browse through the stack trace without running the application.

To load and manage stack traces in NetBeans, the first step is to copy the stack trace onto the system clipboard. Once the stack trace has been copied onto the clipboard, Analyze Stack Window can be opened within NetBeans by selecting the Window and then Debugging and then Analyze Stack menu options (the default installation for NetBeans has no keyboard shortcut assigned to this operation). Analyze Stack Window will default to showing the stack trace that is currently in the system clipboard. If no stack trace is in the clipboard, or any other data is in the system's clipboard, Analyze Stack Window will be displayed with no contents. To populate the window, copy a stack trace into the system's clipboard and select the Insert StackTrace From Clipboard button. Once a stack trace has been displayed in Analyze Stack Window, clicking on the hyperlinks in it will navigate to the appropriate location in the Java source files, just as it does from the Output window when an exception is thrown from a running application. You can only navigate to source code from a stack trace if the project containing the relevant source code is open in the selected project group.

Variable formatters

When debugging an application, the NetBeans debugger can display the values of simple primitives in the Variables window. As we saw previously, we can also display the toString() representation of a variable if we select the appropriate columns to display in the window. Sometimes when debugging, however, the toString() representation is not the best way to display formatted information in the Variables window. In this window, we are showing the value of a complex number class that we have used in high school math.
The ComplexNumber class being debugged in this example is defined as:

    public class ComplexNumber {

        private double realPart;
        private double imaginaryPart;

        public ComplexNumber(double realPart, double imaginaryPart) {
            this.realPart = realPart;
            this.imaginaryPart = imaginaryPart;
        }

        @Override
        public String toString() {
            return "ComplexNumber{" + "realPart=" + realPart +
                    ", imaginaryPart=" + imaginaryPart + '}';
        }

        // Getters and Setters omitted for brevity…
    }

Looking at this class, we can see that it essentially holds two members—realPart and imaginaryPart. The toString() method outputs a string detailing the name of the object and its parameters, which would be very useful when writing ComplexNumbers to log files, for example:

    ComplexNumber{realPart=1.0, imaginaryPart=2.0}

When debugging, however, this is a fairly complicated string to look at and comprehend—particularly when there is a lot of debugging information being displayed. NetBeans, however, allows us to define custom formatters for classes that detail how an object will be displayed in the Variables window when being debugged.

To define a custom formatter, select the Java option from the NetBeans Options dialog and then select the Java Debugger tab. From this tab, select the Variable Formatters category. On this screen, all the variable formatters that are defined within NetBeans are shown. To create a new variable formatter, select the Add… button to display the Add Variable Formatter dialog.

In the Add Variable Formatter dialog, we need to enter Formatter Name and a list of Class types that NetBeans will apply the formatting to when displaying values in the debugger. To apply the formatter to multiple classes, enter the different classes, separated by commas. The value that is to be formatted is entered in the Value formatted as a result of code snippet field. This field takes the scope of the object being debugged. So, for example, to output the ComplexNumber class, we can enter the custom formatter as:

    "("+realPart+", "+imaginaryPart+"i)"

We can see that the formatter is built up by concatenating static strings and the values of the members realPart and imaginaryPart. We can see the results of debugging variables using custom formatters in the following screenshot.

Debugging remote applications

The NetBeans debugger provides rapid access for debugging local applications that are executing within the same JVM as NetBeans. What happens, though, when we want to debug a remote application? A remote application isn't necessarily hosted on a separate server to your development machine, but is defined as any application running outside of the local JVM (that is, the one that is running NetBeans). To debug a remote application, the NetBeans debugger can be "attached" to the remote application. Then, to all intents and purposes, the application can be debugged in exactly the same way as a local application is debugged, as described in the previous sections of this article.

To attach to a remote application, select the Debug and then Attach Debugger… menu options. On the Attach dialog, the connector (SocketAttach, ProcessAttach, or SocketListen) must be specified to connect to the remote application. The appropriate connection details must then be entered to attach the debugger. For example, the process ID must be entered for the ProcessAttach connector, and the host and port must be specified for the SocketAttach connector.

Profiling applications

Learning how to debug applications is an essential technique in software development.
Another essential technique that is often overlooked is profiling applications. Profiling applications involves measuring various metrics such as the amount of heap memory used or the number of loaded classes or running threads. By profiling applications, we can gain an understanding of what our applications are actually doing and as such we can optimize them and make them function better. NetBeans provides first class profiling tools that are easy to use and provide results that are easy to interpret. The NetBeans profiler allows us to profile three specific areas: Application monitoring Performance monitoring Memory monitoring Each of these monitoring tools is accessible from the Profile menu within NetBeans. To commence profiling, select the Profile and then Profile Project menu options. After instructing NetBeans to profile a project, the profiler starts providing the choice of the type of profiling to perform. Testing applications Writing tests for applications is probably one of the most important aspects of modern software development. NetBeans provides the facility to write and run both JUnit and TestNG tests and test suites. In this section, we'll provide details on how NetBeans allows us to write and run these types of tests, but we'll assume that you have some knowledge of either JUnit or TestNG. TestNG support is provided by default with NetBeans, however, due to license concerns, JUnit may not have been installed when you installed NetBeans. If JUnit support is not installed, it can easily be added through the NetBeans Plugins system. In a project, NetBeans creates two separate source roots: one for application sources and the other for test sources. This allows us to keep tests separate from application source code so that when we ship applications, we do not need to ship tests with them. This separation of application source code and test source code enables us to write better tests and have less coupling between tests and applications. The best situation is for the test source root to have a dependency on application classes and the application classes to have no dependency on the tests that we have written. To write a test, we must first have a project. Any type of Java project can have tests added into it. To add tests into a project, we can use the New File wizard. In the Unit Tests category, there are templates for: JUnit Tests Tests for Existing Class (this is for JUnit tests) Test Suite (this is for JUnit tests) TestNG Test Case TestNG Test Suite When creating classes for these types of tests, NetBeans provides the option to automatically generate code; this is usually a good starting point for writing classes. When executing tests, NetBeans iterates through the test packages in a project looking for the classes that are suffixed with the word Test. It is therefore essential to properly name tests to ensure they are executed correctly. Once tests have been created, NetBeans provides several methods for running the tests. The first method is to run all the tests that we have defined for an application. Selecting the Run and then Test Project menu options runs all of the tests defined for a project. The type of the project doesn't matter (Java SE or Java EE), nor whether a project uses Maven or the NetBeans project build system (Ant projects are even supported if they have a valid test activity), all tests for the project will be run when selecting this option. 
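
For reference, a minimal JUnit 4 test class of the kind described here might look like the following sketch (the helper method is a stand-in for real application code, and JUnit 4 is assumed to be on the test classpath); note the Test suffix that NetBeans relies on when discovering tests:

    package com.example;

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    // Illustrative sketch of a JUnit 4 test class. The "Test" suffix on the
    // class name is what NetBeans looks for when running Test Project,
    // Test Package, or Test File, so naming the class correctly matters.
    public class StringUtilsTest {

        // A trivial helper standing in for real application code under test.
        static String reverse(String input) {
            return new StringBuilder(input).reverse().toString();
        }

        @Test
        public void reverseReturnsCharactersInOppositeOrder() {
            assertEquals("cba", reverse("abc"));
        }

        @Test
        public void reverseOfEmptyStringIsEmpty() {
            assertTrue(reverse("").isEmpty());
        }
    }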
After running the tests, the Test Results window will be displayed, highlighting successful tests in green and failed tests in red. In the Test Results window, we have several options to help categorize and manage the tests: Rerun all of the tests Rerun the failed tests Show only the passed tests Show only the failed tests Show errors Show aborted tests Show skipped tests Locate previous failure Locate next failure Always open test result window Always open test results in a new tab The second option within NetBeans for running tests it to run all the tests in a package or class. To perform these operations, simply right-click on a package in the Projects window and select Test Package or right-click on a Java class in the Projects window and select Test File. The final option for running tests it to execute a single test in a class. To perform this operation, right-click on a test in the Java source code editor and select the Run Focussed Test Method menu option. After creating tests, how do we keep them up to date when we add new methods to application code? We can keep tests suites up to date by manually editing them and adding new methods corresponding to new application code or we can use the Create/Update Tests menu. Selecting the Tools and then Create/Update Tests menu options displays the Create Tests dialog that allows us to edit the existing test classes and add new methods into them, based upon the existing application classes. Summary In this article, we looked at the typical tasks that a developer does on a day-to-day basis when writing applications. We saw how NetBeans can help us to run and debug applications and how to profile applications and write tests for them. Finally, we took a brief look at TDD, and saw how the Red-Green-Refactor cycle can be used to help us develop more stable applications. Resources for Article: Further resources on this subject: Contexts and Dependency Injection in NetBeans [article] Creating a JSF composite component [article] Getting to know NetBeans [article]

The Symfony Framework – Installation and Configuration

Packt
08 Sep 2015
17 min read
 In this article by Wojciech Bancer, author of the book, Symfony2 Essentials, we will learn the basics of Symfony, its installation, configuration, and use. The Symfony framework is currently one of the most popular PHP frameworks existing within the PHP developer's environment. Version 2, which was released a few years ago, has been a great improvement, and in my opinion was one of the key elements for making the PHP ecosystem suitable for larger enterprise projects. The framework version 2.0 not only required the modern PHP version (minimal version required for Symfony is PHP 5.3.8), but also uses state-of-the-art technology — namespaces and anonymous functions. Authors also put a lot of efforts to provide long term support and to minimize changes, which break the compatibility between versions. Also, Symfony forced developers to use a few useful design concepts. The key one, introduced in Symfony, was DependencyInjection. (For more resources related to this topic, see here.) In most cases, the article will refer to the framework as Symfony2. If you want to look over the Internet or Google about this framework, apart from using Symfony keyword you may also try to use the Symfony2 keyword. This was the way recommended some time ago by one of the creators to make searching or referencing to the specific framework version easier in future. Key reasons to choose Symfony2 Symfony2 is recognized in the PHP ecosystem as a very well-written and well-maintained framework. Design patterns that are recommended and forced within the framework allow work to be more efficient in the group, this allows better tests and the creation of reusable code. Symfony's knowledge can also be verified through a certificate system, and this allows its developers to be easily found and be more recognized on the market. Last but not least, the Symfony2 components are used as parts of other projects, for example, look at the following: Drupal phpBB Laravel eZ Publish and more Over time, there is a good chance that you will find the parts of the Symfony2 components within other open source solutions. Bundles and extendable architecture are also some of the key Symfony2 features. They not only allow you to make your work easier through the easy development of reusable code, but also allows you to find smaller or larger pieces of code that you can embed and use within your project to speed up and make your work faster. The standards of Symfony2 also make it easier to catch errors and to write high-quality code; its community is growing every year. The history of Symfony There are many Symfony versions around, and it's good to know the differences between them to learn how the framework was evolving during these years. The first stable Symfony version — 1.0 — was released in the beginning of 2007 and was supported for three years. In mid-2008, version 1.1 was presented, which wasn't compatible with the previous release, and it was difficult to upgrade any old project to this. Symfony 1.2 version was released shortly after this, at the end of 2008. Migrating between these versions was much easier, and there were no dramatic changes in the structure. The final versions of Symfony 1's legacy family was released nearly one year later. Simultaneously, there were two version releases, 1.3 and 1.4. Both were identical, but Symfony 1.4 did not have deprecated features, and it was recommended to start new projects with it. Version 1.4 had 3 years of support. If you look into the code, version 1.x was very different from version 2. 
The company that was behind Symfony (the French company, SensioLabs) made a bold move and decided to rewrite the whole framework from scratch. The first release of Symfony2 wasn't perfect, but it was very promising. It relied on Git submodules (the composer did not exist back then). The 2.1 and 2.2 versions were closer to the one we use now, although it required a lot of effort to migrate to the upper level. Finally, Symfony 2.3 was released — the first long-term support version within the 2.x branch. After this version, the changes provided within the next major versions (2.4, 2.5, and 2.6) are not so drastic and usually they do not break compatibility. This article was written based on the latest stable Symfony 2.7.4 version and was tested with PHP 5.5. This Symfony version is marked as the so-called long-term support version, and updates for it will be released for 3 years after the first 2.7 version release.

Installation

Prior to installing Symfony2, you don't need to have a configured web server. If you have at least PHP version 5.4, you can use the standalone server provided by Symfony2. This server is suitable for development purposes and should not be used for production.

It is strongly recommended to work with a Linux/UNIX system for both development and production deployment of Symfony2 framework applications. While it is possible to install and operate on a Windows box, due to its different nature, working with Windows can sometimes force you to maintain a separate fragment of code for this system. Even if your primary OS is Windows, it is strongly recommended to configure a Linux system in a virtual environment. Also, there are solutions that will help you in automating the whole process. As an example, see the https://www.vagrantup.com/ website.

To install Symfony2, you can use a few methods as follows:

- Use the new Symfony2 installer script (currently, the only officially recommended method). Please note that the installer requires at least PHP 5.4.
- Use the composer dependency manager to install a Symfony project.
- Download a zip or tgz package and unpack it.

It does not really matter which method you choose, as they all give you similar results.

Installing Symfony2 by using an installer

To install Symfony2 through an installer, go to the Symfony website at http://symfony.com/download, and install the Symfony2 installer by issuing the following commands:

    $ sudo curl -LsS http://symfony.com/installer -o /usr/local/bin/symfony
    $ sudo chmod +x /usr/local/bin/symfony

After this, you can install Symfony by just typing the following command:

    $ symfony new <new_project_folder>

To install the Symfony2 framework for our to-do application, execute the following command:

    $ symfony new todoapp

This command installs the latest Symfony2 stable version in the newly created todoapp folder, creates the Symfony2 application, and prepares some basic structure for you to work with. After the app creation, you can verify that your local PHP is properly configured for Symfony2 by typing the following command:

    $ php app/check.php

If everything goes fine, the script should complete with the following message:

    [OK] Your system is ready to run Symfony projects

Symfony2 is equipped with a standalone server. It makes development easier. If you want to run this, type the following command:

    $ php app/console server:run

If everything went alright, you will see a message that your server is working on the IP 127.0.0.1 and port 8000.
If there is an error, make sure you are not running anything else that is listening on port 8000. It is also possible to run the server on a different port or IP, if you have such a requirement, by adding the address and port as a parameter, that is:

    $ php app/console server:run 127.0.0.1:8080

If everything works, you can now type the following:

    http://127.0.0.1:8000/

Now, you will visit Symfony's welcome page. This page presents you with a nice welcome message and useful documentation links.

The Symfony2 directory structure

Let's dive into the initial directory structure within the typical Symfony application. Here it is:

- app
- bin
- src
- vendor
- web

While Symfony2 is very flexible in terms of directory structure, it is recommended to keep the basic structure mentioned earlier. The following list describes the purpose of each directory:

- app: This holds information about general configuration, routing, security configuration, database parameters, and many others. It is also the recommended place for putting new view files. This directory is a starting point.
- bin: This holds some helper executables. It is not really important during the development process, and is rarely modified.
- src: This directory holds the project PHP code (usually your bundles).
- vendor: These are third-party libraries used within the project. Usually, this directory contains all the open source third-party bundles, libraries, and other resources. It's worth mentioning that it's recommended to keep the files within this directory outside the versioning system. It also means that you should not modify them under any circumstances. Fortunately, there are ways to modify the code if it suits your needs more. This will be demonstrated when we implement user management within our to-do application.
- web: This is the directory that is accessible through the web server. It holds the main entry point to the application (usually the app.php and app_dev.php files), CSS files, JavaScript files, and all the files that need to be available through the web server (user uploadable files).

So, in most cases, you will usually be modifying and creating the PHP files within the src/ directory, the view and configuration files within the app/ directory, and the JS/CSS files within the web/ directory. The main directory also holds a few files as follows:

- .gitignore
- README.md
- composer.json
- composer.lock

The .gitignore file's purpose is to provide some preconfigured settings for the Git repository, while the composer.json and composer.lock files are the files used by the composer dependency manager.

What is a bundle?

Within the Symfony2 application, you will come across the "bundle" term quite often. A bundle is something similar to a plugin, so it can literally hold any code: controllers, views, models, and services. A bundle can integrate other non-Symfony2 libraries and hold some JavaScript/CSS code as well. We can say that almost everything is a bundle in Symfony2; even some of the core framework features together form a bundle. A bundle usually implements a single feature or functionality. The code you are writing when you write a Symfony2 application is also a bundle.

There are two types of bundles. The first kind of bundle is the one you write within the application, which is project-specific and not reusable. For this purpose, there is a special bundle called AppBundle created for you when you install the Symfony2 project.
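
As a rough sketch (assuming the standard Symfony 2.7 directory layout and naming conventions), the PHP class that defines such a bundle is usually no more than an empty subclass:

    <?php
    // src/AppBundle/AppBundle.php
    // Minimal sketch of a bundle class. The class name must end with the
    // "Bundle" suffix; Symfony loads it because it is registered in the
    // registerBundles() method of app/AppKernel.php.

    namespace AppBundle;

    use Symfony\Component\HttpKernel\Bundle\Bundle;

    class AppBundle extends Bundle
    {
    }

The controllers, services, and views belonging to the bundle then live under the same src/AppBundle/ directory.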
Also, there are reusable bundles that are shared across the various projects either written by you, your team, or provided by a third-party vendors. Your own bundles are usually stored within the src/ directory, while the third-party bundles sit within the vendor/ directory. The vendor directory is used to store third-party libraries and is managed by the composer. As such, it should never be modified by you. There are many reusable open source bundles, which help you to implement various features within the application. You can find many of them to help you with User Management, writing RESTful APIs, making better documentation, connecting to Facebook and AWS, and even generating a whole admin panel. There are tons of bundles, and everyday brings new ones. If you want to explore open source bundles, and want to look around what's available, I recommend you to start with the http://knpbundles.com/ website. The bundle name is correlated with the PHP namespace. As such, it needs to follow some technical rules, and it needs to end with the Bundle suffix. A few examples of correct names are AppBundle and AcmeDemoBundle, CompanyBlogBundle or CompanySocialForumBundle, and so on. Composer Symfony2 is built based on components, and it would be very difficult to manage the dependencies between them and the framework without a dependency manager. To make installing and managing these components easier, Symfony2 uses a manager called composer. You can get it from the https://getcomposer.org/ website. The composer makes it easy to install and check all dependencies, download them, and integrate them to your work. If you want to find additional packages that can be installed with the composer, you should visit https://packagist.org/. This site is the main composer repository, and contains information about most of the packages that are installable with the composer. To install the composer, go to https://getcomposer.org/download/ and see the download instruction. The download instruction should be similar to the following: $ curl -sS https://getcomposer.org/installer | php If the download was successful, you should see the composer.phar file in your directory. Move this to the project location in the same place where you have the composer.json and composer.lock files. You can also install it globally, if you prefer to, with these two commands: $ curl -sS https://getcomposer.org/installer | php $ sudo mv composer.phar /usr/local/bin/composer You will usually need to use only three composer commands: require, install, and update. The require command is executed when you need to add a new dependency. The install command is used to install the package. The update command is used when you need to fetch the latest version of your dependencies as specified within the JSON file. The difference between install and update is subtle, but very important. If you are executing the update command, your composer.lock file gets updated with the version of the code you just fetched and downloaded. The install command uses the information stored in the composer.lock file and the fetch version stored in this file. When to use install? For example, if you deploy the code to the server, you should use install rather than update, as it will deploy the version of the code stored in composer.lock, rather than download the latest version (which may be untested by you). Also, if you work in a team and you just got an update through Git, you should use install to fetch the vendor code updated by other developers. 
You should use the update command when you want to fetch updated versions of the packages you have installed; for instance, when a new minor version of Symfony2 is released, the update command will fetch it.

As an example, let's install one extra package for user management called FOSUserBundle (FOS is a shortcut of Friends of Symfony). We will only install it here; we will not configure it. To install FOSUserBundle, we need to know the correct package name and version. The easiest way is to look on the packagist site at https://packagist.org/ and search for the package there. If you type fosuserbundle, the search should return a package called friendsofsymfony/user-bundle as one of the top results. The download counts visible on the right-hand side might also be helpful in determining how popular the bundle is. If you click on this, you will end up on a page with detailed information about that bundle, such as the homepage, versions, and requirements of the package. Type the following command:

    $ php composer.phar require friendsofsymfony/user-bundle ^1.3
    Using version ^1.3 for friendsofsymfony/user-bundle
    ./composer.json has been updated
    Loading composer repositories with package information
    Updating dependencies (including require-dev)
      - Installing friendsofsymfony/user-bundle (v1.3.6)
        Loading from cache
    friendsofsymfony/user-bundle suggests installing willdurand/propel-typehintable-behavior (Needed when using the propel implementation)
    Writing lock file
    Generating autoload files
    ...

Which version of the package you choose is up to you. If you are interested in package versioning standards, see the composer website at https://getcomposer.org/doc/01-basic-usage.md#package-versions to get more information on it.

The composer holds all the configurable information about dependencies and where to install them in a special JSON file called composer.json. Let's take a look at this:

    {
        "name": "wbancer/todoapp",
        "license": "proprietary",
        "type": "project",
        "autoload": {
            "psr-0": {
                "": "src/",
                "SymfonyStandard": "app/SymfonyStandard/"
            }
        },
        "require": {
            "php": ">=5.3.9",
            "symfony/symfony": "2.7.*",
            "doctrine/orm": "~2.2,>=2.2.3,<2.5",
            // [...]
            "incenteev/composer-parameter-handler": "~2.0",
            "friendsofsymfony/user-bundle": "^1.3"
        },
        "require-dev": {
            "sensio/generator-bundle": "~2.3"
        },
        "scripts": {
            "post-root-package-install": [
                "SymfonyStandard\\Composer::hookRootPackageInstall"
            ],
            "post-install-cmd": [
                // post installation steps
            ],
            "post-update-cmd": [
                // post update steps
            ]
        },
        "config": {
            "bin-dir": "bin"
        },
        "extra": {
            // [...]
        }
    }

The most important section is the one with the require key. It holds all the information about the packages we want to use within the project. The scripts key contains a set of instructions to run post-install and post-update. The extra key in this case contains some settings specific to the Symfony2 framework. Note that one of the values in here points to the parameters.yml file. This file is the main file holding the custom machine-specific parameters. The meaning of the other keys is rather obvious. If you look into the vendor/ directory, you will notice that our package has been installed in the vendor/friendsofsymfony/user-bundle directory.

The configuration files

Each application needs to hold some global and machine-specific parameters and configurations.
Symfony2 holds configuration within the app/config directory, and it is split into a few files as follows:

- config.yml
- config_dev.yml
- config_prod.yml
- config_test.yml
- parameters.yml
- parameters.yml.dist
- routing.yml
- routing_dev.yml
- security.yml
- services.yml

All the files except the parameters.yml* files contain global configuration, while the parameters.yml file holds machine-specific information such as the database host, database name, user, password, and SMTP configuration. The default configuration file generated by the new Symfony command will be similar to the following one. This file is auto-generated during the composer install:

    parameters:
        database_driver: pdo_mysql
        database_host: 127.0.0.1
        database_port: null
        database_name: symfony
        database_user: root
        database_password: null
        mailer_transport: smtp
        mailer_host: 127.0.0.1
        mailer_user: null
        mailer_password: null
        secret: 93b0eebeffd9e229701f74597e10f8ecf4d94d7f

As you can see, it mostly holds the parameters related to the database, SMTP, locale settings, and the secret key that is used internally by Symfony2. Here, you can add your custom parameters using the same syntax. It is good practice to keep machine-specific data such as passwords, tokens, API keys, and access keys within this file only. Putting passwords in the general config.yml file is considered a security risk.

Besides the global configuration file (config.yml), there are the routing*.yml files, which contain the routing information for the development and production configurations. The file called security.yml holds information related to authentication and securing access to the application. Note that some files contain information for the development, production, or test mode. You can define your mode when you run Symfony through the command-line console and when you run it through the web server. In most cases, while developing, you will be using the dev mode.

The Symfony2 console

To finish, let's take a look at the Symfony console script. We used it before to fire up the development server, but it offers more. Execute the following:

    $ php app/console

You will see a list of supported commands. Each command has a short description. Each of the standard commands comes with help, so I will not be describing each of them here, but it is worth mentioning a few commonly used ones:

- app/console cache:clear: Symfony in production uses a lot of caching. Therefore, if you need to change values within a template (Twig) or within configuration files while in production mode, you will need to clear the cache. Cache is also one of the reasons why it's worth working in the development mode.
- app/console container:debug: Displays all configured public services.
- app/console router:debug: Displays all routing configuration along with method, scheme, host, and path.
- app/console security:check: Checks your composer and package versions against known security vulnerabilities. You should run this command regularly.

Summary

In this article, we have demonstrated how to use the Symfony2 installer, test the configuration, run the development server, and play around with the Symfony2 command line. We have also installed the composer and learned how to install a package using it. To demonstrate how Symfony2 enables you to make web applications faster, we will try to learn through examples that can be found in real life. To make this task easier, we will try to produce a real to-do web application with a modern look and a few working features.
In case you are interested in knowing other Symfony books that Packt has in store for you, here is the link: Symfony 1.3 Web Application Development, Tim Bowler, Wojciech Bancer Extending Symfony2 Web Application Framework, Sébastien Armand Resources for Article: Further resources on this subject: A Command-line Companion Called Artisan[article] Creating and Using Composer Packages[article] Services [article]
Working with PowerShell

Packt
07 Sep 2015
17 min read
In this article, you will cover: Retrieving system information – Configuration Service cmdlets Administering hosts and machines – Host and MachineCreation cmdlets Managing additional components – StoreFront Admin and Logging cmdlets (For more resources related to this topic, see here.) Introduction With hundreds or thousands of hosts to configure and machines to deploy, configuring all the components manually could be difficult. As for the previous XenDesktop releases, and also with the XenDesktop 7.6 version, you can find an integrated set of PowerShell modules. With its use, IT technicians are able to reduce the time required to perform management tasks by the creation of PowerShell scripts, which will be used to deploy, manage, and troubleshoot at scale the greatest part of the XenDesktop components. Working with PowerShell instead of the XenDesktop GUI will give you more flexibility in terms of what kind of operations to execute, having a set of additional features to use during the infrastructure creation and configuration phases. Retrieving system information – Configuration Service cmdlets In this recipe, we will use and explain a general-purpose PowerShell cmdlet: the Configuration Service category. This is used to retrieve general configuration parameters, and to obtain information about the implementation of the XenDesktop Configuration Service. Getting ready No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be able to run a PowerShell script (.ps1 format), you have to enable the script execution from the PowerShell prompt in the following way, using its application: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force How to do it… In this section, we will explain and execute the commands associated with the XenDesktop System and Services configuration area: Connect to one of the Desktop Broker servers, by using a remote Desktop connection, for instance. Right-click on the PowerShell icon installed on the Windows taskbar and select the Run as Administrator option. Load the Citrix PowerShell modules by typing the following command and then press the Enter key: Asnp Citrix* As an alternative to the Asnp command, you can use the Add-PSSnapin command. Retrieve the active and configured Desktop Controller features by running the following command: Get-ConfigEnabledFeature To retrieve the current status of the Config Service, run the following command. The output result will be OK in the absence of configuration issues: Get-ConfigServiceStatus To get the connection string used by the Configuration Service and to connect to the XenDesktop database, run the following command: Get-ConfigDBConnection Starting from the previously received output, it's possible to configure the connection string to let the Configuration Service use the system DB. For this command, you have to specify the Server, Initial Catalog, and Integrated Security parameters: Set-ConfigDBConnection –DBConnection"Server=<ServernameInstanceName>; Initial Catalog=<DatabaseName>; Integrated Security=<True | False>" Starting from an existing Citrix database, you can generate a SQL procedure file to use as a backup to recreate the database. 
Run the following command to complete this task, specifying the DatabaseName and ServiceGroupName parameters: Get-ConfigDBSchema -DatabaseName<DatabaseName> -ServiceGroupName<ServiceGroupName>> Path:FileName.sql You need to configure a destination database with the same name as that of the source DB, otherwise the script will fail! To retrieve information about the active Configuration Service objects (Instance, Service, and Service Group), run the following three commands respectively: Get-ConfigRegisteredServiceInstance Get-ConfigService Get-ConfigServiceGroup To test a set of operations to check the status of the Configuration Service, run the following script: #------------ Script - Configuration Service #------------ Define Variables $Server_Conn="SqlDatabaseServer.xdseven.localCITRIX,1434" $Catalog_Conn="CitrixXD7-Site-First" #------------ write-Host"XenDesktop - Configuration Service CmdLets" #---------- Clear the existing Configuration Service DB connection $Clear = Set-ConfigDBConnection -DBConnection $null Write-Host "Clearing any previous DB connection - Status: " $Clear #---------- Set the Configuration Service DB connection string $New_Conn = Set-ConfigDBConnection -DBConnection"Server=$Server_Conn; Initial Catalog=$Catalog_Conn; Integrated Security=$true" Write-Host "Configuring the DB string connection - Status: " $New_Conn $Configured_String = Get-configDBConnection Write-Host "The new configured DB connection string is: " $Configured_String You have to save this script with the .ps1 extension, in order to invoke it with PowerShell. Be sure to change the specific parameters related to your infrastructure, in order to be able to run the script. This is shown in the following screenshot: How it works... The Configuration Service cmdlets of XenDesktop PowerShell permit the managing of the Configuration Service and its related information: the Metadata for the entire XenDesktop infrastructure, the Service instances registered within the VDI architecture, and the collections of these services, called Service Groups. This set of commands offers the ability to retrieve and check the DB connection string to contact the configured XenDesktop SQL Server database. These operations are permitted by the Get-ConfigDBConnection command (to retrieve the current configuration) and the Set-ConfigDBConnection command (to configure the DB connection string); both the commands use the DB Server Name with the Instance name, DB name, and Integrated Security as information fields. In the attached script, we have regenerated a database connection string. To be sure to be able to recreate it, first of all we have cleared any existing connection, setting it to null (verify the command associated with the $Clear variable), then we have defined the $New_Conn variable, using the Set-ConfigDBConnection command; all the parameters are defined at the top of the script, in the form of variables. Use the Write-Host command to echo results on the standard output. There's more... In some cases, you may need to retrieve the state of the registered services, in order to verify their availability. You can use the Test-ConfigServiceInstanceAvailability cmdlet, retrieving whether the service is responding or not and its response time. Run the following example to test the use of this command: Get-ConfigRegisteredServiceInstance | Test-ConfigServiceInstanceAvailability | more Use the –ForceWaitForOneOfEachType parameter to stop the check for a service category, when one of its services responds. 
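As a quick recap of this recipe, the following short sketch strings a few of the cmdlets above together into a basic health check. It is only an illustration built from the commands already discussed; it assumes that the Citrix snap-ins have been loaded with Asnp Citrix* and that it is run on a Desktop Controller:

# Check the overall status of the Configuration Service
Get-ConfigServiceStatus

# Show the connection string currently used to reach the site database
Get-ConfigDBConnection

# Test every registered service instance and list the results
Get-ConfigRegisteredServiceInstance | Test-ConfigServiceInstanceAvailability | Format-Table -AutoSize

If the first command returns anything other than OK, the database connection string reported by the second command is usually the first thing to verify.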
Administering hosts and machines – Host and MachineCreation cmdlets In this recipe, we will describe how to create the connection between the Hypervisor and the XenDesktop servers, and the way to generate machines to assign to the end users, all by using Citrix PowerShell. Getting ready No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be sure to be able to run a PowerShell script (the.ps1 format), you have to enable the script execution from the PowerShell prompt in this way: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force How to do it… In this section, we will discuss the PowerShell commands used to connect XenDesktop with the supported hypervisors plus the creation of the machines from the command line: Connect to one of the Desktop Broker servers. Click on the PowerShell icon installed on the Windows taskbar. Load the Citrix PowerShell modules by typing the following command, and then press the Enter key: Asnp Citrix* To list the available Hypervisor types, execute this task: Get-HypHypervisorPlugin –AdminAddress<BrokerAddress> To list the configured properties for the XenDesktop root-level location (XDHyp:), execute the following command: Get-ChildItemXDHyp:HostingUnits Please refer to the PSPath, Storage, and PersonalvDiskStorage output fields to retrieve information on the storage configuration. Execute the following cmdlet to add a storage resource to the XenDesktop Controller host: Add-HypHostingUnitStorage –LiteralPath<HostPathLocation> -StoragePath<StoragePath> -StorageType<OSStorage|PersonalvDiskStorage> - AdminAddress<BrokerAddress> To generate a snapshot for an existing VM, perform the following task: New-HypVMSnapshot –LiteralPath<HostPathLocation> -SnapshotDescription<Description> Use the Get-HypVMMacAddress -LiteralPath<HostPathLocation> command to list the MAC address of specified desktop VMs. To provision machine instances starting from the Desktop base image template, run the following command: New-ProvScheme –ProvisioningSchemeName<SchemeName> -HostingUnitName<HypervisorServer> -IdentityPoolName<PoolName> -MasterImageVM<BaseImageTemplatePath> -VMMemoryMB<MemoryAssigned> -VMCpuCount<NumberofCPU> To specify the creation of instances with the Personal vDisk technology, use the following option: -UsePersonalVDiskStorage. After the creation process, retrieve the provisioning scheme information by running the following command: Get-ProvScheme –ProvisioningSchemeName<SchemeName> To modify the resources assigned to desktop instances in a provisioning scheme, use the Set-ProvScheme cmdlet. The permitted parameters are –ProvisioningSchemeName, -VMCpuCount, and –VMMemoryMB. To update the desktop instances to the latest version of the Desktop base image template, run the following cmdlet: Publish-ProvMasterVmImage –ProvisioningSchemeName<SchemeName> -MasterImageVM<BaseImageTemplatePath> If you do not want to maintain the pre-update instance version to use as a restore checkpoint, use the –DoNotStoreOldImage option. To create machine instances, based on the previously configured provisioning scheme for an MCS architecture, run this command: New-ProvVM –ProvisioningSchemeName<SchemeName> -ADAccountName"DomainMachineAccount" Use the -FastBuild option to make the machine creation process faster. On the other hand, you cannot start up the machines until the process has been completed. 
Retrieve the configured desktop instances by using the next cmdlet: Get-ProvVM –ProvisioningSchemeName<SchemeName> -VMName<MachineName> To remove an existing virtual desktop, use the following command: Remove-ProvVM –ProvisioningSchemeName<SchemeName> -VMName<MachineName> -AdminAddress<BrokerAddress> The next script will combine the use of part of the commands listed in this recipe: #------------ Script - Hosting + MCS #----------------------------------- #------------ Define Variables $LitPath = "XDHyp:HostingUnitsVMware01" $StorPath = "XDHyp:HostingUnitsVMware01datastore1.storage" $Controller_Address="192.168.110.30" $HostUnitName = "Vmware01" $IDPool = $(Get-AcctIdentityPool -IdentityPoolName VDI-DESKTOP) $BaseVMPath = "XDHyp:HostingUnitsVMware01VMXD7-W8MCS-01.vm" #------------ Creating a storage location Add-HypHostingUnitStorage –LiteralPath $LitPath -StoragePath $StorPath -StorageTypeOSStorage -AdminAddress $Controller_Address #---------- Creating a Provisioning Scheme New-ProvScheme –ProvisioningSchemeName Deploy_01 -HostingUnitName $HostUnitName -IdentityPoolName $IDPool.IdentityPoolName -MasterImageVM $BaseVMPathT0-Post.snapshot -VMMemoryMB 4096 -VMCpuCount 2 -CleanOnBoot #---------- List the VM configured on the Hypervisor Host dir $LitPath*.vm exit How it works... The Host and MachineCreation cmdlet groups manage the interfacing with the Hypervisor hosts, in terms of machines and storage resources. This allows you to create the desktop instances to assign to the end user, starting from an existing and mapped Desktop virtual machine. The Get-HypHypervisorPlugin command retrieves and lists the available hypervisors to use to deploy virtual desktops and to configure the storage types. You need to configure an operating system storage area or a Personal vDisk storage zone. The way to map an existing storage location from the Hypervisor to the XenDesktop controller is by running the Add-HypHostingUnitStorage cmdlet. In this case you have to specify the destination path on which the storage object will be created (LiteralPath), the source storage path on the Hypervisor machine(s) (StoragePath), and the StorageType previously discussed. The storage types are in the form of XDHyp:HostingUnits<UnitName>. To list all the configured storage objects, execute the following command: dirXDHyp:HostingUnits<UnitName> *.storage After configuring the storage area, we have discussed the Machine Creation Service (MCS) architecture. In this cmdlets collection, we have the availability of commands to generate VM snapshots from which we can deploy desktop instances (New-HypVMSnapshot), and specify a name and a description for the generated disk snapshot. Starting from the available disk image, the New-ProvScheme command permits you to create a resource provisioning scheme, on which to specify the desktop base image, and the resources to assign to the desktop instances (in terms of CPU and RAM -VMCpuCount and –VMMemoryMB), and if generating these virtual desktops in a non-persistent mode (-CleanOnBoot option), with or without the use of the Personal vDisk technology (-UsePersonalVDiskStorage). It's possible to update the deployed instances to the latest base image update through the use of the Publish-ProvMasterVmImage command. 
In the generated script, we have located all the main storage locations (the LitPath and StorPath variables) useful to realize a provisioning scheme, then we have implemented a provisioning procedure for a desktop based on an existing base image snapshot, with two vCPUs and 4GB of RAM for the delivered instances, which will be cleaned every time they stop and start (by using the -CleanOnBoot option). You can navigate the local and remote storage paths configured with the XenDesktop Broker machine; to list an object category (such as VM or Snapshot) you can execute this command: dirXDHyp:HostingUnits<UnitName>*.<category> There's more... The discussed cmdlets also offer you the technique to preserve a virtual desktop from an accidental deletion or unauthorized use. With the Machine Creation cmdlets group, you have the ability to use a particular command, which allows you to lock critical desktops: Lock-ProvVM. This cmdlet requires as parameters the name of the scheme to which they refer (-ProvisioningSchemeName) and the ID of the virtual desktop to lock (-VMID). You can retrieve the Virtual Machine ID by running the Get-ProvVM command discussed previously. To revert the machine lock, and free the desktop instance from accidental deletion or improper use, you have to execute the Unlock-ProvVM cmdlet, using the same parameter showed for the lock procedure. Managing additional components – StoreFrontÔ admin and logging cmdlets In this recipe, we will use and explain how to manage and configure the StoreFront component, by using the available Citrix PowerShell cmdlets. Moreover, we will explain how to manage and check the configurations for the system logging activities. Getting ready No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be able to run a PowerShell script (in the.ps1 format), you have to enable the script execution from the PowerShell prompt in this way: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force How to do it… In this section, we will explain and execute the commands associated with the Citrix Storefront system: Connect to one of the Desktop Broker servers. Click on the PowerShell icon installed on the Windows taskbar. Load the Citrix PowerShell modules by typing the following command, and then press the Enter key: Asnp Citrix* To execute a command, you have to press the Enter button after completing the right command syntax. Retrieve the currently existing StoreFront service instances, by running the following command: Get-SfService To limit the number of rows as output result, you can add the –MaxRecordCount<value> parameter. To list the detailed information about the StoreFront service(s) currently configured, execute the following command: Get-SfServiceInstance –AdminAddress<ControllerAddress> The status of the currently active StoreFront instances can be retrieved by using the Get-SfServiceStatus command. The OK output will confirm the correct service execution. To list the task history associated with the configured StoreFront instances, you have to run the following command: Get-SfTask You can filter the desired information for the ID of the researched task (-taskid) and sort the results by the use of the –sortby parameter. 
To retrieve the installed database schema versions, you can execute the following command: Get-SfInstalledDBVersion By applying the –Upgrade and –Downgrade filters, you will receive respectively the schemas for which the database version can be updated or reverted to a previous compatible one. To modify the StoreFront configurations to register its state on a different database, you can use the following command: Set-SfDBConnection-DBConnection<DBConnectionString> -AdminAddress<ControllerAddress> Be careful when you specify the database connection string; if not specified, the existing database connections and configurations will be cleared! To check that the database connection has been correctly configured, the following command is available: Test-SfDBConnection-DBConnection<DBConnectionString>-AdminAddress<ControllerAddress> The second discussed cmdlets allows the logging group to retrieve information about the current status of the logging service and run the following command: Get-LogServiceStatus To verify the language used and whether the logging service has been enabled, run the following command: Get-LogSite The available configurable locales are en, ja, zh-CN, de, es, and fr. The available states are Enabled, Disabled, NotSupported, and Mandatory. The NotSupported state will show you an incorrect configuration for the listed parameters. To retrieve detailed information about the running logging service, you have to use the following command: Get-LogService As discussed earlier for the StoreFront commands, you can filter the output by applying the –MaxRecordCount<value> parameter. In order to get all the operations logged within a specified time range, run the following command; this will return the global operations count: Get-LogSummary –StartDateRange<StartDate>-EndDateRange<EndDate> The date format must be the following: AAAA-MM-GGHH:MM:SS. To list the collected operations per day in the specified time period, run the previous command in the following way: Get-LogSummary –StartDateRange<StartDate> -EndDateRange<EndDate>-intervalSeconds 86400 The value 86400 is the number of seconds that are present in a day. To retrieve the connection string information about the database on which logging data is stored, execute the following command: Get-LogDataStore To retrieve detailed information about the high level operations performed on the XenDesktop infrastructure, you have to run the following command: Get-LogHighLevelOperation –Text <TextincludedintheOperation> -StartTime<FormattedDateandTime> -EndTime<FormattedDateandTime>-IsSuccessful<true | false>-User <DomainUserName>-OperationType<AdminActivity | ConfigurationChange> The indicated filters are not mandatory. If you do not apply any filters, all the logged operations will be returned. This could be a very long output. The same information can be retrieved for the low level system operations in the following way: Get-LogLowLevelOperation-StartTime<FormattedDateandTime> -EndTime<FormattedDateandTime> -IsSuccessful<true | false>-User <DomainUserName> -OperationType<AdminActivity | ConfigurationChange> In the How it works section we will explain the difference between the high and low level operations. 
To log when a high level operation starts and stops respectively, use the following two commands: Start-LogHighLevelOperation –Text <OperationDescriptionText>- Source <OperationSource> -StartTime<FormattedDateandTime> -OperationType<AdminActivity | ConfigurationChange> Stop-LogHighLevelOperation –HighLevelOperationId<OperationID> -IsSuccessful<true | false> The Stop-LogHighLevelOperation must be related to an existing start high level operation, because they are related tasks. How it works... Here, we have discussed two new PowerShell command collections for the XenDesktop 7 versions: the cmdlet related to the StoreFront platform; and the activities Logging set of commands. The first collection is quite limited in terms of operations, despite the other discussed cmdlets. In fact, the only actions permitted with the StoreFront PowerShell set of commands are retrieving configurations and settings about the configured stores and the linked database. More activities can be performed regarding the modification of existing StoreFront clusters, by using the Get-SfCluster, Add-SfServerToCluster, New-SfCluster, and Set-SfCluster set of operations. More interesting is the PowerShell Logging collection. In this case, you can retrieve all the system-logged data, putting it into two principal categories: High-level operations: These tasks group all the system configuration changes that are executed by using the Desktop Studio, the Desktop Director, or Citrix PowerShell. Low-level operations: This category is related to all the system configuration changes that are executed by a service and not by using the system software's consoles. With the low level operations command, you can filter for a specific high level operation to which the low level refers, by specifying the -HighLevelOperationId parameter. This cmdlet category also gives you the ability to track the start and stop of a high level operation, by the use of Start-LogHighLevelOperation and Stop-LogHighLevelOperation. In this second case, you have to specify the previously started high level operation. There's more... In case of too much information in the log store, you have the ability to clear all of it. To refresh all the log entries, we use the following command: Remove-LogOperation -UserName<DBAdministrativeCredentials> -Password <DBUserPassword>-StartDateRange <StartDate> -EndDateRange <EndDate> The not encrypted –Password parameter can be substituted by –SecurePassword, the password indicated in secure string form. The credentials must be database administrative credentials, with deleting permissions on the destination database. This is a not reversible operation, so ensure that you want to delete the logs in the specified time range, or verify that you have some form of data backup. Resources for Article: Further resources on this subject: Working with Virtual Machines [article] Virtualization [article] Upgrading VMware Virtual Infrastructure Setups [article]
Reusable Grid System With SASS

Brian Hough
07 Sep 2015
6 min read
Grid systems have become an essential part of front-end web development. Whether you are building a web app or a marketing landing page, a grid system is the core of your layout. The problem I kept coming across is that grid systems are not one size fits all, so often I would have to go find a new system for each project. This led me to look for a way to avoid this search, which lead me to this solution. Thanks to some of SASS's functionality, we can actually create a grid system we can reuse and customize quickly for every project. Getting Started We're going to start out by setting up some variables that we will need to generate our grid system: $columnCount: 12; $columnPadding: 20px; $gridWidth: 100%; $baseColumnWidth: $gridWidth / $columnCount; $columnCount will do just what it says on the tin, set the number of columns for our grid layout. $columnPadding sets the spacing between each column as well as our outside gutters.$gridwidth sets how wide we want our layout to be. This can be set to a percentage for a fluid layout or another unit (such as px) for a fixed layout. Finally, we have $baseColumnWidth, which is a helper variable that determines the width of a single column based on the total layout width and the number of columns. We are going to finish our initial setup by adding some high-level styles: *, *:before, *:after { box-sizing: border-box; } img, picture { max-width: 100%; } This will set everything on our page to use box-sizing: border-box, making it much easier to calculate our layout since the browser will handle the math of deducting padding from our widths for us. The other thing we did was make all the images in our layout responsive, so we set a max-width: 100% on our image and picture tags. Rows Now that we have our basic setup done, let's start crafting our actual grid system. Our first task is to create our row wrapper: .row { width: $gridWidth; padding: 0 ( $gutterWidth / 2 ); &:after { content: ""; display: table; clear: both; } } Here we set the width of our row to the $gridWidth value from earlier. For this example, we are using a fully fluid width of 100%, but you could also add a max-width here in order to constrain the layout on larger screens. Next, we apply our outside gutters by taking $gutterWidth and dividing it in half. We do this because each column will have 10px of padding on either side of it, so that 10px plus the 10px we are adding to the outside of the row will give us our desired 20px gutter. Lastly, since we will be using floats to layout our columns, we will clear them after we close out each row. One of the features I always require out of a grid-system is the ability to create nested columns. This is the ability to start a new row of columns that is nested within another column. Let's modify our row styling to accommodate nesting: .row { width: $gridWidth; padding: 0 ( $gutterWidth / 2 ); &:after { content: ""; display: table; clear: both; } .row { width: auto; padding: 0 ( $gutterWidth / -2 ); } } This second .row class will handle our nesting. We set width: auto so that our nested row will fill its parent column, and to override a possible fix width that could be inherited from the original unnested .row class. Since this row is nested, we are not going to remove those outside gutters. We achieve this by taking our $gutterWidth value and divide it by -2, which will pull the edges of the row out to compensate for the parent column's padding. This will now let us nest till our heart's content. 
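One housekeeping note before moving on: the row rules above reference a $gutterWidth variable, while the setup block defines $columnPadding. If you reuse this code as written, $gutterWidth needs to exist; a minimal fix, assuming the two are meant to be the same 20px gutter, is to alias one to the other right after the setup variables:

// The .row rules expect $gutterWidth; reuse the padding value from the setup block
$gutterWidth: $columnPadding;

Alternatively, you could rename $gutterWidth to $columnPadding inside the .row block so that a single variable controls all of the gutters.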
Columns Columns are the meat of our grid system. Most of the styles for our columns will be shared, so let's create that block first: [class*="column-"] { float: left; padding: 0 ( $columnPadding / 2 ); } Using a wildcard attribute selector, we target all of our column classes, floating them left and applying our familiar padding formula. If [browser compatibility] is a concern for you, you can also make this block a placeholder and @extend it from your individual column classes. Now that we have that out of the way, it's time for the real magic. To generate our individual column styles, we will use a SASS loop that iterates over $columnCount: @for $i from 1 through $columnCount { .column-#{$i} { width: ( $baseColumnWidth * $i) ; } } If you are familiar with JavaScript loops, then this code shouldn't be too foreign to you. For every column, we create a .column-x block that will span X number of columns. The #{$i} at the end of the class name prints out i to create each columns class name. We then set its width to $baseColumnWidth times the number of columns we want to span (represented by i). This will loop for the number of columns we set $columnCount. This is the core of what makes this pattern so powerful, as no matter how many or few columns we need this loop will generate all the necessary styles. This same pattern can be extended to make our grid-system even more flexible. Let's add the ability to offset columns to the left or right by making the following modifications: @for $i from 1 through $columnCount { .column-#{$i} { width: ( $baseColumnWidth * $i) ; } .prepend-#{$i} { margin-left: ( $baseColumnWidth * $i); } .append-#{$i} { margin-right: ( $baseColumnWidth * $i ); } } This creates two new blocks on each iteration that can be used to offset a row by X number of columns to the left or right. Plus, because you can have multiple loops, you can also use this pattern to create styles for different breakpoints: @media only screen and (min-width: 768px) { @for $i from 1 through $columnCount { .tablet-column-#{$i} { width: ( $baseColumnWidth * $i) ; } .tablet-prepend-#{$i} { margin-left: ( $baseColumnWidth * $i); } .tablet-append-#{$i} { margin-right: ( $baseColumnWidth * $i ); } } } Conclusion We now have a full-featured grid system that we can customize for individual use cases by adjusting just a few variables. As technologies and browser support changes, we can continue to modify this base file including support for things like flex-box and continue to use it for years to come. This has been a great addition to my toolbox, and I hope it is to yours as well. About The Author Brian is a Front-End Architect, Designer, and Product Manager at Piqora. By day, he is working to prove that the days of bad Enterprise User Experiences are a thing of the past. By night, he obsesses about ways to bring designers and developers together using technology. He blogs about his early stage startup experience at lostinpixelation.com, or you can read his general musings on twitter @b_hough.
Storage Policy-based Management

Packt
07 Sep 2015
10 min read
In this article by Jeffery Taylor, the author of the book VMware Virtual SAN Cookbook, we will see that, once we have a functional VSAN cluster, we can leverage the power of Storage Policy-based Management (SPBM) to control how we deploy our virtual machines (VMs). We will discuss the following topics, with a recipe for each:

Creating storage policies
Applying storage policies to a new VM or a VM deployed from a template
Applying storage policies to an existing VM migrating to VSAN
Viewing a VM's storage policies and object distribution
Changing storage policies on a VM already residing in VSAN
Modifying existing storage policies

(For more resources related to this topic, see here.)

Introduction

SPBM is where the administrative power of converged infrastructure becomes apparent. You can define VM thick provisioning on a sliding scale, define how fault tolerant the VM's storage should be, make distribution and performance decisions, and more. RAID-type decisions for VMs resident on VSAN are also driven through the use of SPBM. VSAN can provide RAID-1 (mirrored) and RAID-0 (striped) configurations, or a combination of the two in the form of RAID-10 (a mirrored set of stripes). All of this is done on a per-VM basis. As the storage and compute infrastructures are now converged, you can define how you want a VM to run in the most logical place: at the VM level or its disks. The need for datastore-centric configuration, storage tiering, and so on is obviated and made redundant through the power of SPBM. Technically, the configuration of storage policies is optional. If you choose not to define any storage policies, VSAN will create VMs and disks according to its default cluster-wide storage policy. While this will provide for generic levels of fault tolerance and performance, it is strongly recommended to create and apply storage policies according to your administrative needs. Much of the power given to you through a converged infrastructure and VSAN is in the policy-driven and VM-centric nature of policy-based management. While some of these options will be discussed throughout the following recipes, it is strongly recommended that you review the storage-policy appendix to familiarize yourself with all the storage-policy options prior to continuing.

Creating VM storage policies

Before a storage policy can be applied, it must be created. Once created, the storage policy can be applied to any part of any VM resident on VSAN-connected storage. You will probably want to create a number of storage policies to suit your production needs. Once created, all storage policies are tracked by vCenter and enforced/maintained by VSAN itself. Because of this, your policy selections remain valid and production continues even in the event of a vCenter outage. In the example policy that we will create in this recipe, the VM policy will be defined as tolerating the failure of a single VSAN host. The VM will not be required to stripe across multiple disks and it will be 30 percent thick-provisioned.

Getting ready

Your VSAN should be deployed and functional as per the previous article. You should be logged in to the vSphere Web Client as an administrator or as a user with rights to create, modify, apply, and delete storage policies.

How to do it…

From the vSphere 5.5 Web Client, navigate to Home | VM Storage Policies. From the vSphere 6.0 Web Client, navigate to Home | Policies and Profiles | VM Storage Policies. Initially, there will be no storage policies defined unless you have already created storage policies for other solutions.
This is normal. In VSAN 6.0, you will have the VSAN default policy defined here prior to the creation of your own policies. Click the Create a new VM storage policy button:   A wizard will launch to guide you through the process. If you have multiple vCenter Server systems in linked-mode, ensure that you have selected the appropriate vCenter Server system from the drop-down menu. Give your storage policy a name that will be useful to you and a description of what the policy does. Then, click Next:   The next page describes the concept of rule sets and requires no interaction. Click the Next button to proceed. When creating the rule set, ensure that you select VSAN from the Rules based on vendor-specific capabilities drop-down menu. This will expose the <Add capability> button. Select Number of failures to tolerate from the drop-down menu and specify a value of 1:   Add other capabilities as desired. For this example, we will specify a single stripe with 30% space reservation. Once all required policy definitions have been applied, click Next:   The next page will tell you which datastores are compatible with the storage policy you have created. As this storage policy is based on specific capabilities exposed by VSAN, only your VSAN datastore will appear as a matching resource. Verify that the VSAN datastore appears, and then click Next. Review the summary page and ensure that the policy is being created on the basis of your specifications. When finished, click Finish. The policy will be created. Depending on the speed of your system, this operation should be nearly instantaneous but may take several seconds to finish. How it works… The VSAN-specific policy definitions are presented through VMware Profile-Driven Storage service, which runs with vCenter Server. Profile-Driven Storage Service determines which policy definitions are available by communicating with the ESXi hosts that are enabled for VSAN. Once VSAN is enabled, each host activates a VASA provider daemon, which is responsible for communicating policy options to and receiving policy instructions from Profile-Driven Storage Service. There's more… The nature of the storage policy definitions enabled by VSAN is additive. No policy option mutually excludes any other, and they can be combined in any way that your policy requirements demand. For example, specifying a number of failures to tolerate will not preclude the specification cache reservation. See also For a full explanation of all policy options and when you might want to use them Applying storage policies to a new VM or a VM deployed from a template When creating a new VM on VSAN, you will want to apply a storage policy to that VM according to your administrative needs. As VSAN is fully integrated into vSphere and vCenter, this is a straightforward option during the normal VM deployment wizard. The workflow described in this recipe is for creating a new VM on VSAN. If deployed from a template, the wizard process is functionally identical from step 4 of the How to do it… section in this recipe. Getting ready You should be logged into vSphere Web Client as an administrator or a user authorized to create virtual machines. You should have at least one storage policy defined (see previous recipe). How to do it… Navigate to Home | Hosts and Clusters | Datacenter | Cluster. Right-click the cluster, and then select New Virtual Machine…:   In the subsequent screen, select Create a new virtual machine. Proceed through the wizard through Step 2b. 
For the compute resource, ensure that you select your VSAN cluster or one of its hosts:   On the next step, select one of the VM storage policies that you created in the previous recipe. Once you select a VSAN storage policy, only the VSAN datastore will appear as compatible. Any other datastores that you have present will be ineligible for selection:   Complete the rest of the VM-deployment wizard as you normally would to select the guest OS, resources, and so on. Once completed, the VM will deploy and it will populate in the inventory tree on the left side. The VM summary will reflect that the VM resides on the VSAN storage:   How it works… All VMs resident on the VSAN storage will have a storage policy applied. Selecting the appropriate policy during VM creation means that the VM will be how you want it to be from the beginning of the VM's life. While policies can be changed later, this could involve a reconfiguration of the object, which can take time to complete and can result in increased disk and network traffic once it is initiated. Careful decision making during deployment can help you save time later. Applying storage policies to an existing VM migrating to VSAN When introducing VSAN into an existing infrastructure, you may have existing VMs that reside on the external storage, such as NFS, iSCSI, or Fibre Channel (FC). When the time comes to move these VMs into your converged infrastructure and VSAN, we will have to make policy decisions about how these VMs should be handled. Getting ready You should be logged into vSphere Web Client as an administrator or a user authorized to create, migrate, and modify VMs. How to do it… Navigate to Home | Hosts and Clusters | Datacenter | Cluster. Identify the VM that you wish to migrate to VSAN. For the example used in this recipe, we will migrate the VM called linux-vm02 that resides on NFS Datastore. Right-click the VM and select Migrate… from the context menu:   In the resulting page, select Change datastore or Change both host and datastore as applicable, and then click Next. If the VM does not already reside on one of your VSAN-enabled hosts, you must choose the Change both host and datastore option for your migration. In the next step, select one of the VM storage policies that you created in the previous recipe. Once you select a VSAN storage policy, only the VSAN datastore will appear as compatible. Any other datastores that you have present will be ineligible for selection:   You can apply different storage policies to different VM disks. This can be done by performing the following steps: Click on the Advanced >> button to reveal various parts of the VM: Once clicked, the Advanced >> button will change to << Basic.   In the Storage column, click the existing datastore to reveal a drop-down menu. Click Browse. In the subsequent window, select the desired policy from the VM Storage Policy drop-down menu. You will find that the only compatible datastore is your VSAN datastore. Click OK:   Repeat the preceding step as needed for other disks and the VM configuration file. After performing the preceding steps, click on Next. Review your selection on the final page, and then click Finish. Migrations can potentially take a long time, depending on how large the VM is, the speed of the network, and other considerations. Please monitor the progress of your VM relocation tasks using the Recent Tasks pane:   Once the migration task finishes, the VM's Summary tab will reflect that the datastore is now the VSAN datastore. 
For the example of this VM, the VM moved from NFS Datastore to vsanDatastore:   How it works… Much like the new VM workflow, we select the storage policy that we want to use during the migration of the VM to VSAN. However, unlike the deploy-from-template or VM-creation workflows, this process requires none of the VM configuration steps. We only have to select the storage policy, and then SPBM instructs VSAN how to place and distribute the objects. All object-distribution activities are completely transparent and automatic. This process can be used to change the storage policy of a VM already resident in the VSAN cluster, but it is more cumbersome than modifying the policies by other means. Summary In this article, we learned that storage policies give you granular control over how the data for any given VM or VM disk is handled. Storage policies allow you to define how many mirrors (RAID-1) and how many stripes (RAID-0) are associated with any given VM or VM disk. Resources for Article: Further resources on this subject: Working with Virtual Machines [article] Virtualization [article] Upgrading VMware Virtual Infrastructure Setups [article]
Leveraging Python in the World of Big Data

Packt
07 Sep 2015
26 min read
We are generating more and more data day by day. We have generated more data this century than in the previous century, and we are currently only 15 years into this century. Big data is the new buzzword and everyone is talking about it. It brings new possibilities. Google Translate is able to translate any language, thanks to big data. We are able to decode the human genome because of it. We can predict the failure of a turbine and do the required maintenance on it because of big data. There are three Vs of big data and they are defined as follows:

Volume: This defines the size of the data. Facebook has petabytes of data on its users.
Velocity: This is the rate at which data is generated.
Variety: Data is not only in a tabular form. We can get data from text, images, and sound. Data comes in the form of JSON, XML, and other types as well.

Let's take a look at the following screenshot:

In this article by Samir Madhavan, author of Mastering Python for Data Science, we'll learn how to use Python in the world of big data by doing the following:

Understanding Hadoop
Writing a MapReduce program in Python
Using a Hadoop library

(For more resources related to this topic, see here.)

What is Hadoop?

According to the Apache Hadoop website, Hadoop stores data in a distributed manner and helps in computing it. It has been designed to scale easily to any number of machines with the help of computing power and storage. Hadoop was created by Doug Cutting and Mike Cafarella in the year 2005. It was named after Doug Cutting's son's toy elephant.

The programming model

MapReduce, Hadoop's programming model, expresses a large distributed computation as a sequence of distributed operations on large datasets of key-value pairs. The MapReduce framework makes use of a cluster of machines and executes MapReduce jobs across these machines. There are two phases in MapReduce: a mapping phase and a reduce phase. The input data to MapReduce consists of key-value pairs. During the mapping phase, Hadoop splits the data into smaller pieces, which are then fed to the mappers. These mappers are distributed across machines within the cluster. Each mapper takes the input key-value pairs and generates intermediate key-value pairs by invoking a user-defined function within them. After the mapper phase, Hadoop sorts the intermediate dataset by key and generates a set of key-value tuples so that all the values belonging to a particular key are together. During the reduce phase, the reducer takes in the intermediate key-value pairs and invokes a user-defined function, which then generates an output key-value pair. Hadoop distributes the reducers across the machines and assigns a set of key-value pairs to each of the reducers.

Data processing through MapReduce

The MapReduce architecture

MapReduce has a master-slave architecture, where the master is the JobTracker and the TaskTracker is the slave. When a MapReduce program is submitted to Hadoop, the JobTracker assigns the mapping/reducing tasks to the TaskTrackers, which then take over the task of executing the program.

The Hadoop DFS

Hadoop's distributed filesystem has been designed to store very large datasets in a distributed manner. It has been inspired by the Google File System, which is a proprietary distributed filesystem designed by Google. The data in HDFS is stored in a sequence of blocks, and all blocks are of the same size except for the last block. The block sizes are configurable in Hadoop.
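As a small illustration of that configurability, the HDFS block size is set in the hdfs-site.xml configuration file. The snippet below is only a sketch: it assumes a Hadoop 2.x-style installation where the property is called dfs.blocksize (older releases used dfs.block.size), and 134217728 is simply 128 MB expressed in bytes:

<configuration>
  <!-- Default block size for newly created files, in bytes -->
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>

Larger blocks mean less metadata for the NameNode to track, at the cost of fewer map tasks (and therefore less parallelism) per file.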
Hadoop's DFS architecture It also has a master/slave architecture where NameNode is the master machine and DataNode is the slave machine. The actual data is stored in the data node. The NameNode keeps a tab on where certain kinds of data are stored and whether it has the required replication. It also helps in managing a filesystem by creating, deleting, and moving directories and files in the filesystem. Python MapReduce Hadoop can be downloaded and installed from https://hadoop.apache.org/. We'll be using the Hadoop streaming API to execute our Python MapReduce program in Hadoop. The Hadoop Streaming API helps in using any program that has a standard input and output as a MapReduce program. We'll be writing three MapReduce programs using Python, they are as follows: A basic word count Getting the sentiment Score of each review Getting the overall sentiment score from all the reviews The basic word count We'll start with the word count MapReduce. Save the following code in a word_mapper.py file: import sys for l in sys.stdin: # Trailing and Leading white space is removed l = l.strip() # words in the line is split word_tokens = l.split() # Key Value pair is outputted for w in word_tokens: print '%st%s' % (w, 1) In the preceding mapper code, each line of the file is stripped of the leading and trailing white spaces. The line is then divided into tokens of words and then these tokens of words are outputted as a key value pair of 1. Save the following code in a word_reducer.py file: from operator import itemgetter import sys current_word_token = None counter = 0 word = None # STDIN Input for l in sys.stdin: # Trailing and Leading white space is removed l = l.strip() # input from the mapper is parsed word_token, counter = l.split('t', 1) # count is converted to int try: counter = int(counter) except ValueError: # if count is not a number then ignore the line continue #Since Hadoop sorts the mapper output by key, the following # if else statement works if current_word_token == word_token: current_counter += counter else: if current_word_token: print '%st%s' % (current_word_token, current_counter) current_counter = counter current_word_token = word_token # The last word is outputed if current_word_token == word_token: print '%st%s' % (current_word_token, current_counter) In the preceding code, we use the current_word_token parameter to keep track of the current word that is being counted. In the for loop, we use the word_token parameter and a counter to get the value out of the key-value pair. We then convert the counter to an int type. In the if/else statement, if the word_token value is same as the previous instance, which is current_word_token, then we keep counting else statement's value. If it's a new word that has come as the output, then we output the word and its count. The last if statement is to output the last word. 
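One detail worth calling out before testing: Hadoop Streaming treats a tab character as the default separator between the key and the value, so the mapper has to emit a real tab (the \t escape) and the reducer has to split on the same character. In other words, the two critical lines should look like this:

# In word_mapper.py: emit the word and the count separated by a tab
print '%s\t%s' % (w, 1)

# In word_reducer.py: split each incoming line on the tab delimiter
word_token, counter = l.split('\t', 1)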
We can check out if the mapper is working fine by using the following command: $ echo 'dolly dolly max max jack tim max' | ./BigData/word_mapper.py The output of the preceding command is shown as follows: dolly1 dolly1 max1 max1 jack1 tim1 max1 Now, we can check if the reducer is also working fine by piping the reducer to the sorted list of the mapper output: $ echo "dolly dolly max max jack tim max" | ./BigData/word_mapper.py | sort -k1,1 | ./BigData/word_reducer.py The output of the preceding command is shown as follows: dolly2 jack1 max3 tim1 Now, let's try to apply the same code on a local file containing the summary of mobydick: $ cat ./Data/mobydick_summary.txt | ./BigData/word_mapper.py | sort -k1,1 | ./BigData/word_reducer.py The output of the preceding command is shown as follows: a28 A2 abilities1 aboard3 about2 A sentiment score for each review We'll extend this to write a MapReduce program to determine the sentiment score for each review. Write the following code in the senti_mapper.py file: import sys import re positive_words = open('positive-words.txt').read().split('n') negative_words = open('negative-words.txt').read().split('n') def sentiment_score(text, pos_list, neg_list): positive_score = 0 negative_score = 0 for w in text.split(''): if w in pos_list: positive_score+=1 if w in neg_list: negative_score+=1 return positive_score - negative_score for l in sys.stdin: # Trailing and Leading white space is removed l = l.strip() #Convert to lower case l = l.lower() #Getting the sentiment score score = sentiment_score(l, positive_words, negative_words) # Key Value pair is outputted print '%st%s' % (l, score) In the preceding code, we used the sentiment_score function, which was designed to give the sentiment score as output. For each line, we strip the leading and trailing white spaces and then get the sentiment score for a review. Finally, we output a sentence and the score. For this program, we don't require a reducer as we can calculate the sentiment in the mapper itself and we just have to output the sentiment score. Let's test whether the mapper is working fine locally with a file containing the reviews for Jurassic World: $ cat ./Data/jurassic_world_review.txt | ./BigData/senti_mapper.py there is plenty here to divert, but little to leave you enraptored. such is the fate of the sequel: bigger. louder. fewer teeth.0 if you limit your expectations for jurassic world to "more teeth," it will deliver on that promise. if you dare to hope for anything more-relatable characters, narrative coherence-you'll only set yourself up for disappointment.-1 there's a problem when the most complex character in a film is the dinosaur-2 not so much another bloated sequel as it is the fruition of dreams deferred in the previous films. too bad the genre dictates that those dreams are once again destined for disaster.-2 We can see that our program is able to calculate the sentiment score well. The overall sentiment score To calculate the overall sentiment score, we would require the reducer and we'll use the same mapper but with slight modifications. 
Here is the mapper code that we'll use stored in the overall_senti_mapper.py file: import sys import hashlib positive_words = open('./Data/positive-words.txt').read().split('n') negative_words = open('./Data/negative-words.txt').read().split('n') def sentiment_score(text, pos_list, neg_list): positive_score = 0 negative_score = 0 for w in text.split(''): if w in pos_list: positive_score+=1 if w in neg_list: negative_score+=1 return positive_score - negative_score for l in sys.stdin: # Trailing and Leading white space is removed l = l.strip() #Convert to lower case l = l.lower() #Getting the sentiment score score = sentiment_score(l, positive_words, negative_words) #Hashing the review to use it as a string hash_object = hashlib.md5(l) # Key Value pair is outputted print '%st%s' % (hash_object.hexdigest(), score) This mapper code is similar to the previous mapper code, but here we use the MD5 hash library to review and then to get the output as the key. Here is the reducer code that is utilized to determine the overall sentiments score of the movie. Store the following code in the overall_senti_reducer.py file: from operator import itemgetter import sys total_score = 0 # STDIN Input for l in sys.stdin: # input from the mapper is parsed key, score = l.split('t', 1) # count is converted to int try: score = int(score) except ValueError: # if score is not a number then ignore the line continue #Updating the total score total_score += score print '%s' % (total_score,) In the preceding code, we strip the value containing the score and we then keep adding to the total_score variable. Finally, we output the total_score variable, which shows the sentiment of the movie. Let's locally test the overall sentiment on Jurassic World, which is a good movie, and then test the sentiment for the movie, Unfinished Business, which was critically deemed poor: $ cat ./Data/jurassic_world_review.txt | ./BigData/overall_senti_mapper.py | sort -k1,1 | ./BigData/overall_senti_reducer.py 19 $ cat ./Data/unfinished_business_review.txt | ./BigData/overall_senti_mapper.py | sort -k1,1 | ./BigData/overall_senti_reducer.py -8 We can see that our code is working well and we also see that Jurassic World has a more positive score, which means that people have liked it a lot. On the contrary, Unfinished Business has a negative value, which shows that people haven't liked it much. Deploying the MapReduce code on Hadoop We'll create a directory for data on Moby Dick, Jurassic World, and Unfinished Business in the HDFS tmp folder: $ Hadoop fs -mkdir /tmp/moby_dick $ Hadoop fs -mkdir /tmp/jurassic_world $ Hadoop fs -mkdir /tmp/unfinished_business Let's check if the folders are created: $ Hadoop fs -ls /tmp/ Found 6 items drwxrwxrwx - mapred Hadoop 0 2014-11-14 15:42 /tmp/Hadoop-mapred drwxr-xr-x - samzer Hadoop 0 2015-06-18 18:31 /tmp/jurassic_world drwxrwxrwx - hdfs Hadoop 0 2014-11-14 15:41 /tmp/mapred drwxr-xr-x - samzer Hadoop 0 2015-06-18 18:31 /tmp/moby_dick drwxr-xr-x - samzer Hadoop 0 2015-06-16 18:17 /tmp/temp635459726 drwxr-xr-x - samzer Hadoop 0 2015-06-18 18:31 /tmp/unfinished_business Once the folders are created, let's copy the data files to the respective folders. 
$ Hadoop fs -copyFromLocal ./Data/mobydick_summary.txt /tmp/moby_dick $ Hadoop fs -copyFromLocal ./Data/jurassic_world_review.txt /tmp/jurassic_world $ Hadoop fs -copyFromLocal ./Data/unfinished_business_review.txt /tmp/unfinished_business Let's verify that the file is copied: $ Hadoop fs -ls /tmp/moby_dick $ Hadoop fs -ls /tmp/jurassic_world $ Hadoop fs -ls /tmp/unfinished_business Found 1 items -rw-r--r-- 3 samzer Hadoop 5973 2015-06-18 18:34 /tmp/moby_dick/mobydick_summary.txt Found 1 items -rw-r--r-- 3 samzer Hadoop 3185 2015-06-18 18:34 /tmp/jurassic_world/jurassic_world_review.txt Found 1 items -rw-r--r-- 3 samzer Hadoop 2294 2015-06-18 18:34 /tmp/unfinished_business/unfinished_business_review.txt We can see that files have been copied successfully. With the following command, we'll execute our mapper and reducer's script in Hadoop. In this command, we define the mapper, reducer, input, and output file locations, and then use Hadoop streaming to execute our scripts. Let's execute the word count program first: $ Hadoop jar /usr/lib/Hadoop-0.20-mapreduce/contrib/streaming/Hadoop-*streaming*.jar -file ./BigData/word_mapper.py -mapper word_mapper.py -file ./BigData/word_reducer.py -reducer word_reducer.py -input /tmp/moby_dick/* -output /tmp/moby_output Let's verify that the word count MapReduce program is working successfully: $ Hadoop fs -cat /tmp/moby_output/* The output of the preceding command is shown as follows: (Queequeg1 A2 Africa1 Africa,1 After1 Ahab13 Ahab,1 Ahab's6 All1 American1 As1 At1 Bedford,1 Bildad1 Bildad,1 Boomer,2 Captain1 Christmas1 Day1 Delight,1 Dick6 Dick,2 The program is working as intended. Now, we'll deploy the program that calculates the sentiment score for each of the reviews. Note that we can add the positive and negative dictionary files to the Hadoop streaming: $ Hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-*streaming*.jar -file ./BigData/word_mapper.py -mapper word_mapper.py -file ./BigData/word_reducer.py -reducer word_reducer.py -input /tmp/moby_dick/* -output /tmp/moby_output In the preceding code, we use the Hadoop command with the Hadoop streaming JAR file and then define the mapper and reducer files, and finally, the input and output directories in Hadoop. Let's check the sentiments score of the movies review: $ Hadoop fs -cat /tmp/jurassic_output/* The output of the preceding command is shown as follows: "jurassic world," like its predecessors, fills up the screen with roaring, slathering, earth-shaking dinosaurs, then fills in mere humans around the edges. it's a formula that works as well in 2015 as it did in 1993.3 a perfectly fine movie and entertaining enough to keep you watching until the closing credits.4 an angry movie with a tragic moral ... meta-adoration and criticism ends with a genetically modified dinosaur fighting off waves of dinosaurs.-3 if you limit your expectations for jurassic world to "more teeth," it will deliver on that promise. if you dare to hope for anything more-relatable characters, narrative coherence-you'll only set yourself up for disappointment.-1 This program is also working as intended. Now, we'll try out the overall sentiment of a movie: $ Hadoop jar /usr/lib/Hadoop-0.20-mapreduce/contrib/streaming/Hadoop-*streaming*.jar -file ./BigData/overall_senti_mapper.py -mapper Let's verify the result: $ Hadoop fs -cat /tmp/unfinished_business_output/* The output of the preceding command is shown as follows: -8 We can see that the overall sentiment score comes out correctly from MapReduce. 
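If you keep iterating on these scripts, it can be handy to wrap the local cat | mapper | sort | reducer check into a small helper so that every mapper/reducer pair is exercised the same way before being submitted to the cluster. The following is just a convenience sketch for local testing; the file paths match the examples used in this article, and it assumes the scripts are executable:

import subprocess

def run_local_mapreduce(input_path, mapper, reducer=None):
    # Simulate a Hadoop Streaming job locally: cat | mapper | sort | reducer
    pipeline = 'cat %s | %s | sort -k1,1' % (input_path, mapper)
    if reducer:
        pipeline += ' | %s' % reducer
    return subprocess.check_output(pipeline, shell=True)

# Word count on the Moby Dick summary
print run_local_mapreduce('./Data/mobydick_summary.txt',
                          './BigData/word_mapper.py',
                          './BigData/word_reducer.py')

# Overall sentiment score for the Jurassic World reviews
print run_local_mapreduce('./Data/jurassic_world_review.txt',
                          './BigData/overall_senti_mapper.py',
                          './BigData/overall_senti_reducer.py')

Once a job is actually submitted to the cluster, its progress can be followed in the JobTracker web UI, shown next.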
Here is a screenshot of the JobTracker status page:

The preceding image shows the portal where the jobs submitted to the JobTracker can be viewed along with their status. It can be reached on port 50030 of the master system. From the image, we can see the submitted job, and its status shows that it has completed successfully.

File handling with Hadoopy

Hadoopy is a library in Python, which provides an API to interact with Hadoop to manage files and perform MapReduce on it. Hadoopy can be downloaded from http://www.hadoopy.com/en/latest/tutorial.html#installing-hadoopy. Let's try to put a few files into Hadoop through Hadoopy, in a directory created within HDFS called data:

$ hadoop fs -mkdir data

Here is the code that puts the data into HDFS:

import hadoopy
import os

hdfs_path = ''

def read_local_dir(local_path):
    for fn in os.listdir(local_path):
        path = os.path.join(local_path, fn)
        if os.path.isfile(path):
            yield path

def main():
    local_path = './BigData/dummy_data'
    for file in read_local_dir(local_path):
        hadoopy.put(file, 'data')
        print "The file %s has been put into hdfs" % (file,)

if __name__ == '__main__':
    main()

When the script is run, it prints a line for each file:

The file ./BigData/dummy_data/test9 has been put into hdfs
The file ./BigData/dummy_data/test7 has been put into hdfs
The file ./BigData/dummy_data/test1 has been put into hdfs
The file ./BigData/dummy_data/test8 has been put into hdfs
The file ./BigData/dummy_data/test6 has been put into hdfs
The file ./BigData/dummy_data/test5 has been put into hdfs
The file ./BigData/dummy_data/test3 has been put into hdfs
The file ./BigData/dummy_data/test4 has been put into hdfs
The file ./BigData/dummy_data/test2 has been put into hdfs

In the preceding code, we list all the files in a directory and then put each of the files into Hadoop using the put() method of hadoopy. Let's check whether all the files have been put into HDFS:

$ hadoop fs -ls data

The output of the preceding command is shown as follows:

Found 9 items
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test1
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test2
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test3
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test4
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test5
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test6
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test7
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test8
-rw-r--r--   3 samzer hadoop          0 2015-06-23 00:19 data/test9

So, we have successfully been able to put files into HDFS.
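The put() call used above uploads a flat directory. If your local data is organized into nested folders, the same approach extends with os.walk; the following is a rough sketch (the extension filter and the error handling are illustrative additions, not from the original text) that still relies only on hadoopy's put() method:

import os
import hadoopy

def put_tree(local_root, hdfs_dir, extension='.txt'):
    # Walk the local tree and upload every matching file into a single HDFS directory
    for dirpath, dirnames, filenames in os.walk(local_root):
        for fn in filenames:
            if not fn.endswith(extension):
                continue
            local_path = os.path.join(dirpath, fn)
            try:
                hadoopy.put(local_path, hdfs_dir)
                print "Uploaded %s" % (local_path,)
            except Exception as e:
                # hadoopy surfaces HDFS failures as exceptions; just report and continue
                print "Failed to upload %s: %s" % (local_path, e)

put_tree('./BigData/dummy_data', 'data')

Note that every file still lands in the single data directory, so files that share a name in different subfolders would collide; handling that is beyond this sketch.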
Pig

Pig is a platform that has a very expressive language to perform data transformations and querying. Code written in Pig is done in a scripting manner and gets compiled to MapReduce programs, which execute on Hadoop. The following image is the Pig logo:

The Pig logo

Pig helps in reducing the complexity of raw-level MapReduce programs, and enables the user to perform fast transformations. Pig Latin is the textual language, and it can be learned from http://pig.apache.org/docs/r0.7.0/piglatin_ref2.html. We'll cover how to find the top 10 most occurring words with Pig, and then we'll see how you can create a function in Python that can be used in Pig. Let's start with the word count. Here is the Pig Latin code, which you can save in the pig_wordcount.pig file:

data = load '/tmp/moby_dick/';
word_token = foreach data generate flatten(TOKENIZE((chararray)$0)) as word;
group_word_token = group word_token by word;
count_word_token = foreach group_word_token generate COUNT(word_token) as cnt, group;
sort_word_token = ORDER count_word_token by cnt DESC;
top10_word_count = LIMIT sort_word_token 10;
DUMP top10_word_count;

In the preceding code, we load the summary of Moby Dick, which is then tokenized line by line, basically splitting each line into individual elements. The flatten function converts the collection of word tokens in a line to a row-by-row form. We then group by word and take a count of occurrences for each word. Finally, we sort the counts in descending order and limit the output to the first 10 rows to get the top 10 most occurring words. Let's execute the preceding Pig script:

$ pig ./BigData/pig_wordcount.pig

The output of the preceding command is shown as follows:

(83,the)
(36,and)
(28,a)
(25,of)
(24,to)
(15,his)
(14,Ahab)
(14,Moby)
(14,is)
(14,in)

We are able to get our top 10 words. Let's now create a user-defined function with Python, which will be used in Pig. We'll define two user-defined functions to score the positive and negative sentiments of a sentence. The following code is the UDF used to score the positive sentiment and it's available in the positive_sentiment.py file:

positive_words = [
    'a+', 'abound', 'abounds', 'abundance', 'abundant', 'accessable',
    'accessible', 'acclaim', 'acclaimed', 'acclamation', 'acco$
]

@outputSchema("pnum:int")
def sentiment_score(text):
    positive_score = 0
    for w in text.split():
        if w in positive_words:
            positive_score += 1
    return positive_score

In the preceding code, we define the positive word list, which is used by the sentiment_score() function. The function checks for the positive words in a sentence and outputs their total count. There is an outputSchema() decorator that tells Pig what type of data is being output, which in our case is int. Here is the code to score the negative sentiment; it's available in the negative_sentiment.py file and is almost identical to the positive one:

negative_words = [
    '2-faced', '2-faces', 'abnormal', 'abolish', 'abominable', 'abominably',
    'abominate', 'abomination', 'abort', 'aborted', 'ab$....
]

@outputSchema("nnum:int")
def sentiment_score(text):
    negative_score = 0
    for w in text.split():
        if w in negative_words:
            negative_score -= 1
    return negative_score

The following code is used by Pig to score the sentiments of the Jurassic World reviews, and it's available in the pig_sentiment.pig file:

register 'positive_sentiment.py' using org.apache.pig.scripting.jython.JythonScriptEngine as positive;
register 'negative_sentiment.py' using org.apache.pig.scripting.jython.JythonScriptEngine as negative;

data = load '/tmp/jurassic_world/*';
feedback_sentiments = foreach data generate LOWER((chararray)$0) as feedback, positive.sentiment_score(LOWER((chararray)$0)) as psenti, negative.sentiment_score(LOWER((chararray)$0)) as nsenti;
average_sentiments = foreach feedback_sentiments generate feedback, psenti + nsenti;
dump average_sentiments;

In the preceding Pig script, we first register the Python UDF scripts using the register command and give them an appropriate name. We then load our Jurassic World reviews, convert each review to lowercase, and score its positive and negative sentiments. Finally, we add the two scores together to get the overall sentiment of each review.
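Because these UDFs are plain Python functions, you can unit-test the scoring logic locally before registering them with Pig; the tiny word lists and the sample review below are made up purely for illustration:

# Stand-ins for the full word lists used in positive_sentiment.py / negative_sentiment.py
positive_words = ['entertaining', 'fine', 'perfectly']
negative_words = ['disappointment', 'problem']

def positive_score(text):
    # Count how many words of the sentence appear in the positive list
    return sum(1 for w in text.split() if w in positive_words)

def negative_score(text):
    # Mirror the UDF: each negative word subtracts one from the score
    return -sum(1 for w in text.split() if w in negative_words)

review = "a perfectly fine movie and entertaining enough"
print positive_score(review) + negative_score(review)   # expected: 3

If the numbers look right here, the same functions should behave identically inside Pig, since Jython only changes how they are invoked.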
Let's execute the Pig script and see the results:

$ pig ./BigData/pig_sentiment.pig

The output of the preceding command is shown as follows:

(there is plenty here to divert, but little to leave you enraptured. such is the fate of the sequel: bigger. louder. fewer teeth.,0)
(if you limit your expectations for jurassic world to "more teeth," it will deliver on that promise. if you dare to hope for anything more-relatable characters, narrative coherence-you'll only set yourself up for disappointment.,-1)
(there's a problem when the most complex character in a film is the dinosaur,-2)
(not so much another bloated sequel as it is the fruition of dreams deferred in the previous films. too bad the genre dictates that those dreams are once again destined for disaster.,-2)
(a perfectly fine movie and entertaining enough to keep you watching until the closing credits.,4)
(this fourth installment of the jurassic park film series shows some wear and tear, but there is still some gas left in the tank. time is spent to set up the next film in the series. they will keep making more of these until we stop watching.,0)

We have successfully scored the sentiments of the Jurassic World reviews using the Python UDFs in Pig.

Python with Apache Spark

Apache Spark is a computing framework that works on top of HDFS and provides an alternative way of computing that is similar to MapReduce. It was developed by AMPLab at UC Berkeley. Spark does most of its computation in memory, which makes it much faster than MapReduce and well suited for machine learning, as it handles iterative workloads really well. Spark uses the programming abstraction of RDDs (Resilient Distributed Datasets), in which data is logically distributed into partitions and transformations can be performed on top of this data. Python is one of the languages used to interact with Apache Spark, and we'll create a program to perform the sentiment scoring for each review of Jurassic World as well as the overall sentiment. You can install Apache Spark by following the instructions at https://spark.apache.org/docs/1.0.1/spark-standalone.html.

Scoring the sentiment

Here is the Python code to score the sentiment:

from __future__ import print_function
import sys
from operator import add
from pyspark import SparkContext

positive_words = open('positive-words.txt').read().split('\n')
negative_words = open('negative-words.txt').read().split('\n')

def sentiment_score(text, pos_list, neg_list):
    positive_score = 0
    negative_score = 0
    for w in text.split():
        if w in pos_list:
            positive_score += 1
        if w in neg_list:
            negative_score += 1
    return positive_score - negative_score

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: sentiment <file>", file=sys.stderr)
        exit(-1)
    sc = SparkContext(appName="PythonSentiment")
    lines = sc.textFile(sys.argv[1], 1)
    scores = lines.map(lambda x: (x, sentiment_score(x.lower(), positive_words, negative_words)))
    output = scores.collect()
    for (key, score) in output:
        print("%s: %i" % (key, score))
    sc.stop()

In the preceding code, we define our standard sentiment_score() function, which we'll be reusing. The if statement checks whether both the Python script and the text file are given on the command line. The sc variable is a SparkContext object with the PythonSentiment app name. The filename given as the argument is passed into Spark through the textFile() method of the sc variable. In the map() function of Spark, we define a lambda function to which each line of the text file is passed, and we obtain the line and its respective sentiment score. The output variable gets the collected result, and finally, we print each result on the screen.
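Before submitting the script to a cluster, the transformation itself can be sanity-checked in plain Python, because lines.map() simply applies the lambda to every line of the input. The following sketch assumes the sentiment_score() function and the word lists from the script above are already defined in the session, and uses made-up sample lines:

# Emulate what lines.map(...) does, using an ordinary list instead of an RDD
sample_lines = [
    "A perfectly fine movie and entertaining enough",
    "You'll only set yourself up for disappointment",
]

scores = [(x, sentiment_score(x.lower(), positive_words, negative_words)) for x in sample_lines]
for key, score in scores:
    print("%s: %i" % (key, score))

Only once the logic checks out locally is it worth paying for a full cluster run.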
Let's score the sentiment of each of the reviews of Jurassic World. Replace <hostname> with the hostname of your Spark master:

$ ~/spark-1.3.0-bin-cdh4/bin/spark-submit --master spark://<hostname>:7077 ./BigData/spark_sentiment.py hdfs://localhost:8020/tmp/jurassic_world/*

We'll get the following output for the preceding command:

There is plenty here to divert but little to leave you enraptured. Such is the fate of the sequel: Bigger, Louder, Fewer teeth: 0
If you limit your expectations for Jurassic World to more teeth, it will deliver on this promise. If you dare to hope for anything more-relatable characters or narrative coherence-you'll only set yourself up for disappointment: -1

We can see that our Spark program was able to score the sentiment for each of the reviews. The number at the end of each line is the sentiment score: the higher the score, the more positive the review, and the more negative the score, the more negative the review. We use the spark-submit command with the following parameters:

A master node of the Spark system
A Python script containing the transformation commands
An argument to the Python script

The overall sentiment

Here is a Spark program to score the overall sentiment of all the reviews:

from __future__ import print_function
import sys
from operator import add
from pyspark import SparkContext

positive_words = open('positive-words.txt').read().split('\n')
negative_words = open('negative-words.txt').read().split('\n')

def sentiment_score(text, pos_list, neg_list):
    positive_score = 0
    negative_score = 0
    for w in text.split():
        if w in pos_list:
            positive_score += 1
        if w in neg_list:
            negative_score += 1
    return positive_score - negative_score

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: Overall Sentiment <file>", file=sys.stderr)
        exit(-1)
    sc = SparkContext(appName="PythonOverallSentiment")
    lines = sc.textFile(sys.argv[1], 1)
    scores = lines.map(lambda x: ("Total", sentiment_score(x.lower(), positive_words, negative_words))) \
                  .reduceByKey(add)
    output = scores.collect()
    for (key, score) in output:
        print("%s: %i" % (key, score))
    sc.stop()

In the preceding code, we have added a reduceByKey() method, which combines the values by adding them together, and we have also defined the key as Total, so that all the scores are reduced into a single key. Let's try out the preceding code to get the overall sentiment of Jurassic World. Again, replace <hostname> with the hostname of your Spark master:

$ ~/spark-1.3.0-bin-cdh4/bin/spark-submit --master spark://<hostname>:7077 ./BigData/spark_overall_sentiment.py hdfs://localhost:8020/tmp/jurassic_world/*

The output of the preceding command is shown as follows:

Total: 19

We can see that Spark has given an overall sentiment score of 19. The applications that get executed on Spark can be viewed in the browser on port 8080 of the Spark master. Here is a screenshot of it:

From it, we can see the number of Spark worker nodes, the applications that are currently executing, and the applications that have already been executed.

Summary

In this article, you were introduced to big data, learned about how the Hadoop software works, and the architecture associated with it.
You then learned how to create a mapper and a reducer for a MapReduce program, how to test it locally, and then put it into Hadoop and deploy it. You were then introduced to the Hadoopy library and using this library, you were able to put files into Hadoop. You also learned about Pig and how to create a user-defined function with it. Finally, you learned about Apache Spark, which is an alternative to MapReduce and how to use it to perform distributed computing. With this article, we have come to an end in our journey, and you should be in a state to perform data science tasks with Python. From here on, you can participate in Kaggle Competitions at https://www.kaggle.com/ to improve your data science skills with real-world problems. This will fine-tune your skills and help understand how to solve analytical problems. Also, you can sign up for the Andrew NG course on Machine Learning at https://www.coursera.org/learn/machine-learning to understand the nuances behind machine learning algorithms. Resources for Article: Further resources on this subject: Bizarre Python[article] Predicting Sports Winners with Decision Trees and pandas[article] Optimization in Python [article]
Working with Tables

Packt
07 Sep 2015
13 min read
 In this article by the author, Chukri Soueidi, of the book, Microsoft Azure Storage Essentials, we learn about service offered by Microsoft Azure. The topic of this article will be Table storage. Tables provide scalable, structured data stores that allow you to store huge amounts of data. With the growing needs of modern high-performance applications, choosing the right data store is a major factor in the success of any application. (For more resources related to this topic, see here.) The Table storage basics Azure Table storage is a key/value NoSQL database that allows you to store non-relational, structured, and semi-structured data that does not have to conform to a specific schema. It is ideal for storing huge amounts of data that is ready for simple and quick read/write operations, and is interfaced by the Table REST API using the OData protocol. A table can contain one or many entities (rows), each up to 1 MB in size. The whole table cannot exceed a 100 TB limit. Each table entity can hold up to 252 columns. Rows in the same table can have different schemas, as opposed to relational database rows that should all comply under one strict schema. Table storage is structured in a hierarchical relationship between storage accounts, tables, and entities. A storage account can have a zero or more non-related tables, each containing one or more entities. The components of the service are: Storage account: Governs access to the underlying service Table: A collection of entities Entity: A set of properties, a row in the table Properties: Key/value pairs that can hold scalar data such as string, byte array, guid, dateTime, integer, double, boolean Table address: http://<account-name>.table.core.windows.net/<table-name > The address differs when using the Storage Emulator, which emulates the Table storage service locally without incurring any cost. The default address will be the local machine loopback address followed by a predefined default storage account name. It would look as follows: http://127.0.0.1:10002/devstorageaccount1/<table-name> For more on the Storage Emulator, you can check out the Use the Azure Storage Emulator for Development and Testing article on the Azure documentation. Entities Entities, in tables, are associated with two key properties: the PartitionKey and the RowKey, which together form the entity's primary key. The PartitionKey must be a string with a maximum allowed size of 1,024 characters. As its name suggests, it divides the table logically into partitions. Entities with the same partition key will be always stored in the same physical node, promoting performance and scalability. When a table scales up, the Azure platform may automatically move different partitions of a table into separate physical storage nodes for load balancing purposes. Entities with the same partition key stay on the same node no matter how much their table scales; hence, the key selection is essential in determining the general performance of the table. The second column is the RowKey, which is also of the string type, with a maximum allowed size of 1,024 characters. It has to be unique inside a partition, since the partition key and row key combine to uniquely identify an entity within the table. In addition to that, each entity has a read-only mandatory Timestamp property, which is essential for managing concurrent updates on records. 
Naming rules

When naming tables and properties, there are some naming rules to follow:

Tables: A table name should be unique per storage account, ranging from 3 to 63 characters long. It can only contain alphanumeric characters and cannot begin with a number. Table names are case-insensitive, but will reflect the case that was used upon creation. (Mytable and MyTable are logically the same name, but if you create a table called Mytable then Mytable—not MyTable—will be displayed when you retrieve table names from the account.)

Properties: A property name is case-sensitive with a size of up to 255 characters. It should follow C# identifier naming rules. When passed within a URL, certain characters must be percent-encoded, which is done automatically when using the .NET library. Some characters are disallowed in the values of the row and partition keys, including the backslash (\), the forward slash (/), the number sign (#), and the question mark (?).

Using the Azure Storage Client Library

The Azure Storage Client Library for .NET has classes to manage and maintain Table storage objects. After creating a storage account and getting the appropriate storage access keys, we now have everything it takes to use the storage service. For the simplicity of this demo's setup, we will be using a C# Console Application template. To create a console application, open Visual Studio and navigate to New Project | Templates | Visual C# | Console Application. By default, the application does not have a reference to the storage client library, so we need to add it using the Package Manager by typing Install-Package WindowsAzure.Storage. This will download and install the library along with all the dependencies required. You can navigate through the library by expanding References in Solution Explorer in Visual Studio, right-clicking on Microsoft.WindowsAzure.Storage, and selecting View in Object Browser. This will show you all the library classes and methods. After getting the storage library, we need to create the connection string by providing the storage account name and access key. The connection string would look like this:

<appSettings>
  <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=account name;AccountKey=access key" />
</appSettings>

In program.cs we need to reference the following libraries:

using System.Configuration; // a reference to System.Configuration should be added to the project
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

Starting with a table

Using the connection string defined in the App.config file, we will proceed to create a CloudStorageAccount object, which represents the storage account where the tables will be created. After that, we need to use the CloudTableClient class, which is defined under the Microsoft.WindowsAzure.Storage.Table namespace, to create an object that serves as a front end for dealing with Table storage specifically and directly. After creating the table object, we will call its CreateIfNotExists() method, which will actually call the Table REST API to create a table. The method is idempotent: if the table already exists, the method will do nothing no matter how many times you call it.
The following is the code to do create a table: string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"]; CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString); CloudTableClient tableClient = storageAccount.CreateCloudTableClient(); CloudTable table = tableClient.GetTableReference("Weather"); table.CreateIfNotExists(); A table can be deleted using the following code: table.DeleteIfExists(); We created a Weather table for the purpose of demonstrating a broader overview of Table storage features. Our sample is a table for saving the weather history of all the cities around the world. For each city, the table holds a weather entry per day and time. Assuming there are more than 200,000 cities and each city has 24 entries per day, the table will contain millions of records in no time. Adding entities to a table After creating the table, Weather, we need to populate it with entities. In the .NET library, entities are mapped to C# objects. The library defines the ITableEntity interface which should be implemented by all classes representing entities. The ITableEntity interface defines the following properties: PartitionKey: It is a string for the partition key of the an entity RowKey: It is a string for the row key Timestamp: It is a DateTime offset for the last update on the entity ETag: It is an entity tag for managing concurrency The library also contains two classes that implement ITableEntity. The TableEntity is a base class that can be inherited by custom classes. The DynamicTableEntity is a class that holds an IDictionary<String,EntityProperty> property named Properties that is used to store properties inside it. Now, let's define our custom entity by inheriting from the TableEntity base class: public class WeatherEntity : TableEntity { public WeatherEntity(string cityId, string daytime) { this.PartitionKey = cityId; this.RowKey = daytime; } public WeatherEntity() { } public string CityName { get; set; } public string CountryCode { get; set; } public string Description { get; set; } public string Temperature { get; set; } public string Humidity { get; set; } public string Wind { get; set; } } In the preceding code, we have defined the Weather entity class with a constructor that takes the arguments of cityId and a string representing the day and time, which we chose to declare as PartitionKey and RowKey, respectively; since the entity should inherit from a TableEntity class, we assign the keys to the base class properties using this. The partition and row key selection was based on the table's querying needs. The PartitionKey is selected to be the cityId, and the RowKey is a string representation of the date and time in the format "yyyyMMddHHmm". Now, after defining the entity as an object, we will proceed to add the entity to the WeatherHistory table. First we have to create an instance of this object and populate its properties, then hand it to a TableOperation object to insert it in to our table. The following is the code for this: WeatherEntity weather = new WeatherEntity("5809844", "201503151200") { CityName = "Seattle", CountryCode = "US", Description = "Clear", Temperature = "25.5", Humidity = "44", Wind = "11 km/h", };   TableOperation insertOperation = TableOperation.Insert (weather);   table.Execute (insertOperation); The TableOperation class represents a single table operation. In addition to the insert operation, this class defines the static merge, replace, and delete, and other methods that we will see later on. 
After creating the operation, we pass it to the Execute method of the CloudTable table entity, which represents the Weather table. The Timestamp property of the entity is required, but we did not set it in our code. This is because this property will be filled by the server automatically; even if we set it the server will rewrite its own timestamp. Entity Group Transactions In case we have several entities to insert into the table, we have the option to perform a batch operation that can insert many records in one group transaction. We can use the TableBatchOperation and add the entities to it, then perform the CloudTable.ExecuteBatch. The following is some sample code to add two weathers at the same time: TableBatchOperation batchOperation = new TableBatchOperation(); WeatherEntity weather1 = new WeatherEntity("5809844", "201503151300") { CityName = "Seattle", CountryCode = "US", Description = "Light Rain", Temperature = "23", Humidity = "82", Wind = "16 km/h", }; WeatherEntity weather2 = new WeatherEntity("5809844", "201503151400") { CityName = "Seattle", CountryCode = "US", Description = "Heavy Rain", Temperature = "20", Humidity = "95", Wind = "16 km/h", }; batchOperation.Insert(weather1); batchOperation.Insert(weather2); table.ExecuteBatch(batchOperation); The following is a sample of how the table will look when it contains records for several days and cities; we omitted the timestamp for the clarity of the sample: PartitionKey RowKey City Country Description   Humidity Wind … … … … … … … … 276781 201503151300 Beirut LB Clear 25 65 5 km/h 276781 201503151400 Beirut LB Clear 26 65 5 km/h 5128638 201503151300 New York US Sunny 25 46 11 km/h 5128638 201503151400 New York US Few clouds 25 46 11 km/h 5809844 201503151300 Seattle US Light rain 23 82 16 km/h 5809844 201503151400 Seattle US Heavy rain 24 95 16 km/h 6173331 201503151300 Vancouver CA Scattered clouds 6 84 12 km/h 6173331 201503151400 Vancouver CA Haze rain 5 84 12 km/h 7284824 201503151300 Budapest HU Light rain 4 80 14 km/h 7284824 201503151400 Budapest HU Light rain 4 81 14 km/h … … … … … … … … Batch operations are quite an interesting feature of Tables, as they provide quasi-ACID transactions in the Table storage. Batch operations require all the entities included to have the same partition key or else an exception will be thrown. Batch operations can perform inserts, updates, and deletes in the same operation, and can include up to 100 entities with a payload size of 4 MB only. Updating entities In order to update an entity, we need to retrieve it first, then modify its properties to save it back again. TableOperation supports two types of updates: Replace and Merge. If the entity was modified between the retrieval and saving, an error will be generated and the execution of the operation will fail. The Replace method will replace the values of the properties of an existing item with the object we provide. 
Have a look at the following code that changes an existing record for Seattle city for a specific day and hour:

TableOperation retrieveOperation = TableOperation.Retrieve<WeatherEntity>("5809844", "201503151300");
TableResult result = table.Execute(retrieveOperation);
WeatherEntity updateEntity = (WeatherEntity)result.Result;
if (updateEntity != null)
{
    updateEntity.Temperature = "26";
    TableOperation updateOperation = TableOperation.Merge(updateEntity);
    table.Execute(updateOperation);
}

Notice in the preceding code that the TableOperation.Retrieve method takes the partition key and the row key as arguments, and retrieves the entity into a TableResult object whose Result property can be cast to a WeatherEntity object. But that's not all; changing an existing property in an entity is not the only type of update that may be required. As we have discussed previously, tables are schema-less, allowing entities in the same table to have different properties, which also means that you can add or remove columns even after the creation of a record. Suppose you want to add more properties to a weather record; you can achieve this by modifying the WeatherEntity class and adding the new properties to it. But what if you don't want to change the class since it's a rare collection of data that you are adding? This is where Merge steps in: it allows you to merge changes with an existing entity without overwriting old data.

TableOperation retrieveOperation = TableOperation.Retrieve<WeatherEntity>("5809844", "201503151300"); // the record for Seattle on March 15, 2015 at 13:00
TableResult result = table.Execute(retrieveOperation);
WeatherEntity originalEntity = (WeatherEntity)result.Result;
if (originalEntity != null)
{
    DynamicTableEntity dynamicEntity = new DynamicTableEntity()
    {
        PartitionKey = originalEntity.PartitionKey,
        RowKey = originalEntity.RowKey,
        ETag = originalEntity.ETag,
        Properties = new Dictionary<string, EntityProperty>
        {
            {"Visibility", new EntityProperty("16093")},
            {"Pressure", new EntityProperty("1015")}
        }
    };
    TableResult results = table.Execute(TableOperation.Merge(dynamicEntity));
}

The preceding code gets an existing weather entity and assigns it to the originalEntity object. Then it creates a new DynamicTableEntity object, copies the PartitionKey, RowKey, and ETag to the new object, and adds the two properties, Visibility and Pressure, as defined previously. TableOperation.Merge was executed over the newly created object. What this code does is add two properties to the weather record without changing the other existing properties. If we had used Replace instead, the original WeatherEntity properties CityName, CountryCode, Description, Temperature, Humidity, and Wind would have lost their values. Copying the three properties PartitionKey, RowKey, and ETag (for optimistic concurrency) to the newly created DynamicTableEntity is mandatory; without it the operation would have failed. Our Weather table now looks like this:

PartitionKey  RowKey        City       Country  Description       Temperature  Humidity  Wind     Visibility  Pressure
...           ...           ...        ...      ...               ...          ...       ...
5128638       201503151400  New York   US       Few clouds        25           46        11 km/h
5809844       201503151300  Seattle    US       Light rain        23           82        16 km/h  16093       1015
5809844       201503151400  Seattle    US       Heavy rain        24           95        16 km/h
6173331       201503151300  Vancouver  CA       Scattered clouds  6            84        12 km/h
...           ...           ...        ...      ...               ...          ...       ...

As mentioned earlier, operations will fail if the entity has changed between the retrieval and the update.
Sometimes you are not sure if the object already exists on the server or not. You might need to overwrite or insert your values regardless; to do that you can use the InsertOrReplace operation. This will overwrite your values even if they were changed, and if the record doesn't exist it will be inserted. Deleting an entity can be achieved after retrieving it, as follows: TableOperation deleteOperation = TableOperation.Delete(originalEntity); Summary In this article we have discussed the basics of Table storage, which is the key/value NoSQL storage option offered by Azure platform. We have performed basic operations on entities and tables. Resources for Article: Further resources on this subject: Windows Azure Service Bus: Key Features[article] Digging into Windows Azure Diagnostics[article] Windows Azure Diagnostics: Initializing the Configuration and Using a Configuration File [article]
Modules and Templates

Packt
07 Sep 2015
20 min read
 In this article by Thomas Uphill, author of the book Troubleshooting Puppet, we will look at how the various parts of a module may cause issues. As a Puppet developer or a system administrator, modules are how you deliver your code to the nodes. Modules are great for organizing your code into manageable chunks, but modules are also where you'll see most of your problems when troubleshooting. Most modules contain classes in a manifests directory, but modules can also include custom facts, functions, types, providers, as well as files and templates. Each of these components can be a source of error. We will address each of these components in the following sections, starting with classes. (For more resources related to this topic, see here.) In Puppet, the namespace of classes is referred to as the scope. Classes can have multiple nested levels of subclasses. Each class and subclass defines a scope. Each scope is separate. To refer to variables in a different scope, you must refer to the fully scoped variable name. For instance, in the following example, we have a class and two subclasses with similar names defined within each of the classes: class leader { notify {'Leader-1': } } class autobots { include leader } class autobots::leader { notify {'Optimus Prime': } } class decepticons { include leader } class decepticons::leader { notify {'Megatron': } } We then include the leader, autobots, and decepticons classes in our node, as follows: include leader include autobots include decepticons When we run Puppet, we see the following output: t@mylaptop ~ $ puppet apply leaders.pp Notice: Compiled catalog for mylaptop.example.net in environment production in 0.03 seconds Notice: Optimus Prime Notice: /Stage[main]/Autobots::Leader/Notify[Optimus Prime]/message: defined 'message' as 'Optimus Prime' Notice: Leader-1 Notice: /Stage[main]/Leader/Notify[Leader-1]/message: defined 'message' as 'Leader-1' Notice: Megatron Notice: /Stage[main]/Decepticons::Leader/Notify[Megatron]/message: defined 'message' as 'Megatron' Notice: Finished catalog run in 0.06 seconds If this is the output that you expected, you can safely move on. If you are a little surprised, then read on. The problem here is the scope. Although we have a top scope class named leader, when we include leader from within the autobots and decepticons classes, the local scope is searched first. In both cases, a local match is found first and used. Instead of the three 'Leader-1' notifications, we see only one 'Leader-1', one 'Megatron', and one 'Optimus Prime'. If your normal procedure is to have the leader class defined and you forgot to do so, then you can end up being slightly confused. Consider the following modified example: class leader { notify {'Leader-1': } } class autobots { include leader } include autobots Now, when we apply this manifest, we see the following output: t@mylaptop ~ $ puppet apply leaders2.pp Notice: Compiled catalog for mylaptop.example.net in environment production in 0.02 seconds Notice: Leader-1 Notice: /Stage[main]/Leader/Notify[Leader-1]/message: defined 'message' as 'Leader-1' Notice: Finished catalog run in 0.04 seconds Since the leader class was not available in the scope within the autobot class, the top scope leader class was used. Knowing how Puppet evaluates scope can save you time when your issues turn out to be namespace-related. This example is contrived. The usual situation where people run into this problem is when they have multiple modules organized in the same way. 
The problem manifests itself when you have many different modules with subclasses in different modules with the same names. For example, two modules named myappa and myappb with config subclasses, myappa::config and myappb::config. This problem occurs when the developer forgets to write the myappc::config subclass and there is a top scope config module available. Metaparameters Metaparameters are parameters that are used by Puppet to compile the catalog but are not used when modifying the target system. Some metaparameters, such as tag, are used to specify or mark resources. Other metaparameters, such as before, require, notify, and subscribe, are used to specify the order in which the resources should be applied to a node. When the catalog is compiled, the resources are evaluated based on their dependencies as opposed to how they are defined in the manifests. The order in which the resources are evaluated can be a little confusing for a person who is new to Puppet. A common paradigm when creating files is to create the containing directory before creating the resource. Consider the following code: class apps { file {'/apps': ensure => 'directory', mode => '0755', } } class myapp { file {'/apps/myapp/config': content => 'on = true', mode => '0644', } file {'/apps/myapp': ensure => 'directory', mode => '0755', } } include myapp include apps When we apply this manifest, even though the order of the resources is not correct in the manifest, the catalog applies correctly, as follows: [root@trouble ~]# puppet apply order.pp Notice: Compiled catalog for trouble.example.com in environment production in 0.13 seconds Notice: /Stage[main]/Apps/File[/apps]/ensure: created Notice: /Stage[main]/Myapp/File[/apps/myapp]/ensure: created Notice: /Stage[main]/Myapp/File[/apps/myapp/config]/ensure: defined content as '{md5}1090eb22d3caa1a3efae39cdfbce5155' Notice: Finished catalog run in 0.05 seconds Recent versions of Puppet will automatically use the require metaparameter for certain resources. In the case of the preceding code, the '/apps/myapp' file has an implied require of the '/apps' file because directories autorequire their parents. We can safely rely on this autorequire mechanism but, when debugging, it is useful to know how to specify the resource order precisely. To ensure that the /apps directory exists before we try to create the /apps/myapp directory, we can use the require metaparameter to have the myapp directory require the /apps directory, as follows: classmyapp { file {'/apps/myapp/config': content => 'on = true', mode => '0644', require => File['/apps/myapp'], } file {'/apps/myapp': ensure => 'directory', mode => '0755', require => File['/apps'], } } The preceding require lines specify that each of the file resources requires its parent directory. Autorequires Certain resource relationships are ubiquitous. When the relationship is implied, a mechanism was developed to reduce resource ordering errors. This mechanism is called autorequire. A list of autorequire relationships is given in the type reference documentation at https://docs.puppetlabs.com/references/latest/type.html. When troubleshooting, you should know that the following autorequire relationships exist: A cron resource will autorequire the specified user. An exec resource will autorequire both the working directory of the exec as a file resource and the user as which the exec runs. A file resource will autorequire its owner and group. 
A mount will autorequire the mounts that it depends on (a mount resource of /apps/myapp will autorequire a mount resource of /apps). A user resource will autorequire its primary group. Autorequire relationships only work when the resources within the relationship are specified within the catalog. If your catalog does not specify the required resources, then your catalog will fail if those resources are not found on the node. For instance, if you have a mount resource of /apps/myapp but the /apps directory or mount does not exist, then the mount resource will fail. If the /apps mount is specified, then the autorequire mechanism will ensure that the /apps mount is mounted before the /apps/myapp mount. Explicit ordering When you are trying to determine an error in the evaluation of your class, it can be helpful to use the chaining arrow syntax to force your resources to evaluate in the order that you specified. For instance, if you have an exec resource that is failing, you can create another exec resource that outputs the information used within your failing exec. For example, we have the following exec code: file {'arrow': path => '/tmp/arrow', ensure => 'directory', } exec {'arrow_debug_before': command => 'echo debug_before', path => '/usr/bin:/bin', } exec {'arrow_example': command => 'echo arrow', path => '/usr/bin:/bin', require => File['arrow'], } exec {'arrow_debug_after': command => 'echo debug_after', path => '/usr/bin:/bin', } Now, when you apply this catalog, you will see that the arrow_before and arrow_after resources are not applied in the order that we were expecting: [root@trouble ~]# puppet agent -t Info: Retrieving pluginfacts Info: Retrieving plugin Info: Loading facts Info: Caching catalog for trouble.example.com Info: Applying configuration version '1431872398' Notice: /Stage[main]/Main/Node[default]/Exec[arrow_debug_before]/returns: executed successfully Notice: /Stage[main]/Main/Node[default]/Exec[arrow_debug_after]/returns: executed successfully Notice: /Stage[main]/Main/Node[default]/File[arrow]/ensure: created Notice: /Stage[main]/Main/Node[default]/Exec[arrow_example]/returns: executed successfully Notice: Finished catalog run in 0.23 seconds To enforce the sequence that we were expecting, you can use the chaining arrow syntax, as follows: exec {'arrow_debug_before': command => 'echo debug_before', path => '/usr/bin:/bin', }-> exec {'arrow_example': command => 'echo arrow', path => '/usr/bin:/bin', require => File['arrow'], }-> exec {'arrow_debug_after': command => 'echo debug_after', path => '/usr/bin:/bin', } Now, when we apply the agent this time, the order is what we expected: [root@trouble ~]# puppet agent -t Info: Retrieving pluginfacts Info: Retrieving plugin Info: Loading facts Info: Caching catalog for trouble.example.com Info: Applying configuration version '1431872778' Notice: /Stage[main]/Main/Node[default]/Exec[arrow_debug_before]/returns: executed successfully Notice: /Stage[main]/Main/Node[default]/Exec[arrow_example]/returns: executed successfully Notice: /Stage[main]/Main/Node[default]/Exec[arrow_debug_after]/returns: executed successfully Notice: Finished catalog run in 0.23 seconds A good way to use this sort of arrangement is to create an exec resource that outputs the environment information before your failing resource is applied. For example, you can create a class that runs a debug script and then use chaining arrows to have it applied before your failing resource. 
If your resource uses variables, then creating a notify that outputs the values of the variables can also help with debugging. Defined types Defined types are great for reducing the complexity and improving the readability of your code. However, they can lead to some interesting problems that may be difficult to diagnose. In the following code, we create a defined type that creates a host entry: define myhost ($short,$ip) { host {"$short": ip => $ip, host_aliases => [ "$title.example.com", "$title.example.net", "$short" ], } } In this define, the namevar for the host resource is an argument of the define, the $short variable. In Puppet, there are two important attributes of any resource—the namevar and the title. The confusion lies in the fact that, sometimes, both of these attributes have the same value. Both values must be unique, but they are used differently. The title is used to uniquely identify the resource to the compiler and need not be related to the actual resource. The namevar uniquely identifies the resource to the agent after the catalog is compiled. The namevar is specific to each resource. For example, the namevar for a package is the package name and the namevar for a file is the full path to the file. The problem with the preceding defined type is that you can end up with a duplicate resource that is difficult to find. The resource is defined within the defined type. So, when Puppet reports the duplicate definition, it will report it as though it were defined on the same line. Let's create the following node definition with two myhost resources: node default { $short = "trb" myhost {'trouble': short => 'trb',ip => '192.168.50.1' } myhost {'tribble': short => "$short",ip => '192.168.50.2' } } Even though the two myhost resources have different titles, when we run Puppet, we see a duplicate definition, as follows: [root@trouble~]# puppet agent -t Info: Retrieving pluginfacts Info: Retrieving plugin Info: Loading facts Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: Host[trb] is already declared in file /etc/puppet/environments/production/modules/myhost/manifests/init.pp:5; cannot redeclare at /etc/puppet/environments/production/modules/myhost/manifests/init.pp:5 on node trouble.example.com Warning: Not using cache on failed catalog Error: Could not retrieve catalog; skipping run Tracking down this issue can be difficult if we have several myhost definitions throughout the node definition. To make this problem a lot easier to solve, we should use the title attribute of the defined type as the title attribute of the resources within the define method. The following rewrite shows this difference: define myhost ($short,$ip) { host {"$title": ip => $ip, host_aliases => [ "$title.example.com", "$title.example.net", "$short" ], } } Custom facts When you define custom facts within your modules (in the lib/facter directory), they are automatically transferred to your node via the pluginsync method. The issue here is that the facts are synced to the same directory. So, if you created two facts with the same filename, then it can be difficult to determine which fact will be synced down to your node. Facter is run at the beginning of a Puppet agent run. The results of Facter are used to compile the catalog. If any of your facts take longer than the configured timeout (config_timeout in the [agent] section of puppet.conf) in Puppet, then the agent run will fail. 
Instead of increasing this timeout, when designing your custom facts keep them simple enough so that they will take no longer than a few seconds to run. You can debug Facter from the command line using the -d switch. To load custom facts that are synced from Puppet, add the -p option as well. If you are having trouble with the output of your fact, then you can also have the output formatted as a JSON document by adding the -j option. Combining all of these options, the following is a good starting point for the debugging of your Facter output: [root@puppet ~]# facter -p -d -j |more Found no suitable resolves of 1 for ec2_metadata value for ec2_metadata is still nil Found no suitable resolves of 1 for gce value for gce is still nil ... { "lsbminordistrelease": "6", "puppetversion": "3.7.5", "blockdevice_sda_vendor": "ATA", "ipaddress_lo": "127.0.0.1", ... Having Facter output to a JSON file is helpful because the returned values are wrapped in quotes. So, any trailing spaces or control characters will be visible. The easiest way to debug custom facts is to run them through Ruby directly. To run a custom fact through Ruby, start with the custom fact in the directory and use the irb command to run interactive Ruby, as follows: [root@puppetfacter]# irb -r facter -r iptables_version.rb irb(main):001:0> puts Facter.value("iptables_version") 1.4.7 =>nil This displays the value of the iptables_version fact. From within IRB, you can check the code line-by-line to figure out your problem. The preceding command was executed on a Linux host. Doing this on a Windows host is not so easy, but it is possible. Locate the irb executable on your system. For the Puppet Enterprise installation, this should be in C:Program Files (x86)/Puppet Labs/Puppet Enterprise/sys/ruby/bin. Run irb and then alter the $LOAD_PATH variable to add the path to facter.rb (the Facter library), as follows: irb(main):001:0>$LOAD_PATH.push("C:/Program Files (x86)/Puppet Labs/Puppet Enterprise/facter/lib") Now require the Facter library, as follows: irb(main):002:0> require 'facter' =>true Finally, run Facter.value with the name of a fact, which is similar to what we did in the previous example: irb(main):003:0>Facter.value("uptime") => "0:08 hours" Pry When debugging any Ruby code, using the Pry library will allow you to inspect the Ruby environment that is running at any breakpoint that you define. In the earlier iptables_version example, we could use the Pry library to inspect the calculation of the fact. To do so, modify the fact definition and comment out the setcode section (the breakpoint definition will not work within a setcode block). Then define a breakpoint by adding binding.pry to the fact at the point that you wish to inspect, as follows: Facter.add(:iptables_version) do confine :kernel => :linux #setcode do version = Facter::Util::Resolution.exec('iptables --version') if version version.match(/d+.d+.d+/).to_s else nil end binding.pry #end end Now run Ruby with the Pry and Facter libraries on the iptables_version fact definition, as follows: root@mylaptop # ruby -r pry -r facteriptables_version.rb From: /var/lib/puppet/lib/facter/iptables_version.rb @ line 10 : 5: if version 6: version.match(/d+.d+.d+/).to_s 7: else 8: nil 9: end => 10: binding.pry 11: #end 12: end This will cause the evaluation of the iptables_version fact to halt at the binding.pry line. 
We can then inspect the value of the version variable and execute the regular expression matching ourselves to verify that it is working correctly, as follows:

[1] pry(#<Facter::Util::Resolution>)> version
=> "iptables v1.4.21"
[2] pry(#<Facter::Util::Resolution>)> version.match(/\d+\.\d+\.\d+/).to_s
=> "1.4.21"

Environment

When developing custom facts, it is useful to make your Ruby fact file executable and run the Ruby script from the command line. When you run custom facts from the command line, the environment variables defined in your current shell can affect how the fact is calculated. This can result in different values being returned for the fact when it is run through the Puppet agent. One of the most common variables that causes this sort of problem is JAVA_HOME. This can also be a problem when testing exec resources. Environment variables and shell aliases will be available to an exec when it is run interactively. When run through the agent, these customizations will not be available, which has the potential to cause inconsistency.

Files

Files are transferred between the master and the node via Puppet's internal fileserver. When working with files, it is important to remember that all the files served via Puppet are read into memory by the Puppet Server. Transferring large files via Puppet is inefficient, so you should avoid transferring large and/or binary files. Most of the problems with files are related to path and URL syntax errors. The source parameter contains a URL with the following syntax:

source => "puppet:///path/to/file"

In the preceding syntax, the three slashes specify the beginning of the URL location and the Puppet Server that should be contacted. The following is also valid:

source => "puppet://myserver/path/to/file"

The path from which we download a file depends on the context of the manifest. If the manifest is found within the manifest directory or the manifest is the site.pp manifest, then the path to the file is relative to this location, starting at the files subdirectory. If the manifest is found within a module, then the path should start with the module's path; the files will then be found within the files directory of the module.

Templates

ERB templates are written in Ruby. The current releases of Puppet also support EPP Puppet templates, which are written in Puppet. The debugging of ERB templates can be done by running the templates through Ruby. To simply check the syntax, use the following code:

$ erb -P -x -T '-' template.erb | ruby -c
Syntax OK

If your template does not pass the preceding test, then you know that your syntax is incorrect. The usual error type that you will see is as follows:

-:8: syntax error, unexpected end-of-input, expecting keyword_end

The problem with the preceding command is that the line number refers to the evaluated code returned by the erb script, not the original file. When checking for the syntax error, you will have to inspect the intermediate code that is generated by the erb command. Unfortunately, doing anything more than checking simple syntax is a problem. Although ERB templates can be evaluated using the ERB library, the <%= block markers that are used in the Puppet ERB templates break the normal evaluation. The simplest way to evaluate Ruby templates is by creating a simple manifest with a file resource that applies the template.
As an example, the resolv.conf template is shown in the following code:

# resolv.conf built by Puppet
domain <%= @domain %>
search <% searchdomains.each do |domain| -%><%= domain %> <% end -%><%= @domain %>
<% nameservers.each do |server| -%>
nameserver <%= server %>
<% end -%>

This template is then saved into a file named template.erb. We then create a file resource using this template.erb file, as shown in the following code:

$searchdomains = ['trouble.example.com','packt.example.com']
$nameservers = ['8.8.8.8','8.8.4.4']
$domains = 'example.com'
file {'/tmp/test':
  content => template('/tmp/template.erb')
}

We then use puppet apply to apply this template and create the /tmp/test file, as follows:

$ puppet apply file.pp
Notice: Compiled catalog for mylaptop.example.net in environment production in 0.20 seconds
Notice: /Stage[main]/Main/File[/tmp/test]/ensure: defined content as '{md5}4d1c547c40a27c06726ecaf784b99e84'
Notice: Finished catalog run in 0.04 seconds

The following are the contents of the /tmp/test file:

# resolv.conf built by Puppet
domain example.net
search trouble.example.com packt.example.com example.net
nameserver 8.8.8.8
nameserver 8.8.4.4

Debugging templates

Templates can also be used in debugging. You can create a file resource that uses a template to output all the defined variables and their values. You can include the following resource in your node definition:

file { "/tmp/puppet-debug.txt":
  content => inline_template("<% vars = scope.to_hash.reject { |k,v| !( k.is_a?(String) && v.is_a?(String) ) }; vars.sort.each do |k,v| %><%= k %>=<%= v %>\n<% end %>"),
}

This uses an inline template, which may make it slightly hard to read. The template loops through the output of the scope function and prints each value if it is a string. Focusing only on the inner loop, it builds a hash of all string-valued variables in scope and then renders one k=v line (followed by a newline) for each of them:

vars = scope.to_hash.reject { |k,v| !( k.is_a?(String) && v.is_a?(String) ) }
vars.sort.each do |k,v|
  # each pair is rendered by the surrounding template as "k=v\n"
end

Summary

In this article, we examined metaparameters and how to deal with resource ordering issues. We built custom facts and defines and discussed the issues that may arise when using them. We then moved on to templates and showed how to use templates as an aid in debugging. Resources for Article: Further resources on this subject: My First Puppet Module[article] Puppet Language and Style[article] Puppet and OS Security Tools [article]
Working with Charts

Packt
07 Sep 2015
14 min read
In this article by Anand Dayalan, the author of the book Ext JS 6 By Example, he explores the different types of chart components in Ext JS and ends with a sample project called expense analyzer. The following topics will be covered: Charts types Bar and column charts Area and line charts Pie charts 3D charts The expense analyzer – a sample project (For more resources related to this topic, see here.) Charts Ext JS is almost like a one-stop shop for all your JavaScript framework needs. Yes, Ext JS also includes charts with all other rich components you learned so far. Chart types There are three types of charts: cartesian, polar, and spacefilling. The cartesian chart Ext.chart.CartesianChart (xtype: cartesian or chart) A cartesian chart has two directions: X and Y. By default, X is horizontal and Y is vertical. Charts that use the cartesian coordinates are column, bar, area, line, and scatter. The polar chart Ext.chart.PolarChart (xtype: polar) These charts have two axes: angular and radial. Charts that plot values using the polar coordinates are pie and radar. The spacefilling chart Ext.chart.SpaceFillingChart (xtype: spacefilling) These charts fill the complete area of the chart. Bar and column charts For bar and column charts, at the minimum, you need to provide a store, axes, and series. The basic column chart Let's start with a simple basic column chart. First, let's create a simple tree store with the inline hardcoded data as follows: Ext.define('MyApp.model.Population', { extend: 'Ext.data.Model', fields: ['year', 'population'] }); Ext.define('MyApp.store.Population', { extend: 'Ext.data.Store', storeId: 'population', model: 'MyApp.model.Population', data: [ { "year": "1610","population": 350 }, { "year": "1650","population": 50368 }, { "year": "1700", "population": 250888 }, { "year": "1750","population": 1170760 }, { "year": "1800","population": 5308483 }, { "year": "1900","population": 76212168 }, { "year": "1950","population": 151325798 }, { "year": "2000","population": 281421906 }, { "year": "2010","population": 308745538 }, ] }); var store = Ext.create("MyApp.store.Population"); Now, let's create the chart using Ext.chart.CartesianChart (xtype: cartesian or chart ) and use the store created above. Ext.create('Ext.Container', { renderTo: Ext.getBody(), width: 500, height: 500, layout: 'fit', items: [{ xtype: 'chart', insetPadding: { top: 60, bottom: 20, left: 20, right: 40 }, store: store, axes: [{ type: 'numeric', position: 'left', grid: true, title: { text: 'Population in Millions', fontSize: 16 }, }, { type: 'category', title: { text: 'Year', fontSize: 16 }, position: 'bottom', } ], series: [{ type: 'bar', xField: 'year', yField: ['population'] }], sprites: { type: 'text', text: 'United States Population', font: '25px Helvetica', width: 120, height: 35, x: 100, y: 40 } } ] }); Important things to note in the preceding code are axes, series, and sprite. Axes can be of one of the three types: numeric, time, and category. In series, you can see that the type is set to bar. In Ext JS, to render the column or bar chart, you have to specify the type as bar, but if you want a bar chart, you have to set flipXY to true in the chart config. The sprites config used here is quite straightforward. Sprites is optional, not a must. The grid property can be specified for both the axes, although we have specified it only for one axis here. The insetPadding is used to specify the padding for the chart to render other information, such as the title. 
If we don't specify insetPadding, the title and other information may overlap with the chart. The output of the preceding code is shown here:

The bar chart

As mentioned before, in order to get the bar chart, you can use the same code, but set flipXY to true and change the positions of the axes accordingly, as shown in the following code:

Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'chart',
        flipXY: true,
        insetPadding: { top: 60, bottom: 20, left: 20, right: 40 },
        store: store,
        axes: [{
            type: 'numeric',
            position: 'bottom',
            grid: true,
            title: { text: 'Population in Millions', fontSize: 16 }
        }, {
            type: 'category',
            position: 'left',
            title: { text: 'Year', fontSize: 16 }
        }],
        series: [{
            type: 'bar',
            xField: 'year',
            yField: ['population']
        }],
        sprites: {
            type: 'text',
            text: 'United States Population',
            font: '25px Helvetica',
            width: 120, height: 35,
            x: 100, y: 40
        }
    }]
});

The output of the preceding code is shown in the following screenshot:

The stacked chart

Now, let's say you want to plot two values in each category of the column chart. You can either stack them or have two bar columns for each category. Let's modify our column chart example to render a stacked column chart. For this, we need an additional numeric field in the store, and we need to specify two fields for yField in the series. You can stack more than two fields, but for this example, we will stack only two. Take a look at the following code:

Ext.define('MyApp.model.Population', {
    extend: 'Ext.data.Model',
    fields: ['year', 'total', 'slaves']
});

Ext.define('MyApp.store.Population', {
    extend: 'Ext.data.Store',
    storeId: 'population',
    model: 'MyApp.model.Population',
    data: [
        { "year": "1790", "total": 3.9, "slaves": 0.7 },
        { "year": "1800", "total": 5.3, "slaves": 0.9 },
        { "year": "1810", "total": 7.2, "slaves": 1.2 },
        { "year": "1820", "total": 9.6, "slaves": 1.5 },
        { "year": "1830", "total": 12.9, "slaves": 2 },
        { "year": "1840", "total": 17, "slaves": 2.5 },
        { "year": "1850", "total": 23.2, "slaves": 3.2 },
        { "year": "1860", "total": 31.4, "slaves": 4 }
    ]
});

var store = Ext.create("MyApp.store.Population");

Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'cartesian',
        store: store,
        insetPadding: { top: 60, bottom: 20, left: 20, right: 40 },
        axes: [{
            type: 'numeric',
            position: 'left',
            grid: true,
            title: { text: 'Population in Millions', fontSize: 16 }
        }, {
            type: 'category',
            position: 'bottom',
            title: { text: 'Year', fontSize: 16 }
        }],
        series: [{
            type: 'bar',
            xField: 'year',
            yField: ['total', 'slaves']
        }],
        sprites: {
            type: 'text',
            text: 'United States Slaves Distribution 1790 to 1860',
            font: '20px Helvetica',
            width: 120, height: 35,
            x: 60, y: 40
        }
    }]
});

The output of the stacked column chart is shown here:

If you want to render multiple fields without stacking, you can simply set the stacked property of the series to false to get the following output:

There are so many options available in the chart.
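Before we look at those options, note that the only change needed to switch between the stacked and the grouped rendering above is the series block; here is a minimal sketch of just that part (the rest of the chart config stays exactly as in the previous example):

series: [{
    type: 'bar',
    xField: 'year',
    yField: ['total', 'slaves'],
    // false renders side-by-side (grouped) columns;
    // leaving it out or setting it to true stacks the two fields, as described above
    stacked: false
}]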
Let's take a look at some of the commonly used options: tooltip: This can be added easily by setting a tooltip property in the series legend: This can be rendered to any of the four sides of the chart by specifying the legend config sprites: This can be an array if you want to specify multiple informations, such as header, footer, and so on Here is the code for the same store configured with some advanced options: Ext.create('Ext.Container', { renderTo: Ext.getBody(), width: 500, height: 500, layout: 'fit', items: [{ xtype: 'chart', legend: { docked: 'bottom' }, insetPadding: { top: 60, bottom: 20, left: 20, right: 40 }, store: store, axes: [{ type: 'numeric', position: 'left', grid: true, title: { text: 'Population in Millions', fontSize: 16 }, minimum: 0, }, { type: 'category', title: { text: 'Year', fontSize: 16 }, position: 'bottom', } ], series: [{ type: 'bar', xField: 'year', stacked: false, title: ['Total', 'Slaves'], yField: ['total', 'slaves'], tooltip: { trackMouse: true, style: 'background: #fff', renderer: function (storeItem, item) { this.setHtml('In ' + storeItem.get('year') + ' ' + item.field + ' population was ' + storeItem.get(item.field) + ' m'); } } ], sprites: [{ type: 'text', text: 'United States Slaves Distribution 1790 to 1860', font: '20px Helvetica', width: 120, height: 35, x: 60, y: 40 }, { type: 'text', text: 'Source: http://www.wikipedia.org', fontSize: 10, x: 12, y: 440 }] }] }); The output with tooltip, legend, and footer is shown here: The 3D bar chart If you simply change the type of the series to 3D bar instead of bar, you'll get the 3D column chart, as show in the following screenshot: Area and line charts Area and line charts are also cartesian charts. The area chart To render an area chart, simply replace the series in the previous example with the following code: series: [{ type: 'area', xField: 'year', stacked: false, title: ['Total','slaves'], yField: ['total', 'slaves'], style: { stroke: "#94ae0a", fillOpacity: 0.6, } }] The output of the preceding code is shown here: Similar to the stacked column chart, you can have the stacked area chart as well by setting stacked to true in the series. If you set stacked to true in the preceding example, you'll get the following output:  Figure 7.1 The line chart To get the line chart shown in Figure 7.1, use the following series config in the preceding example instead: series: [{ type: 'line', xField: 'year', title: ['Total'], yField: ['total'] }, { type: 'line', xField: 'year', title: ['Slaves'], yField: ['slaves'] }], The pie chart This is one of the frequently used charts in many applications and reporting tools. Ext.chart.PolarChart (xtype: polar) should be used to render a pie chart. 
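As a short aside before the pie examples: radar, the other series type that plots values in polar coordinates, is configured in much the same way as pie. The following is a minimal, untested sketch only — it assumes a store with cat and spent fields (the same shape as the expense store used next), and the style values are arbitrary:

Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'polar',
        store: store, // any store with 'cat' and 'spent' fields
        axes: [
            { type: 'numeric', position: 'radial', grid: true },
            { type: 'category', position: 'angular', grid: true }
        ],
        series: [{
            type: 'radar',
            angleField: 'cat',
            radiusField: 'spent',
            style: { fillOpacity: 0.4 }
        }]
    }]
});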
The basic pie chart Specify the type as pie, and specify the angleField and label to render a basic pie chart, as as shown in the following code: Ext.define('MyApp.store.Expense', { extend: 'Ext.data.Store', alias: 'store.expense', fields: [ 'cat', 'spent'], data: [ { "cat": "Restaurant", "spent": 100}, { "cat": "Travel", "spent": 150}, { "cat": "Insurance", "spent": 500}, { "cat": "Rent", "spent": 1000}, { "cat": "Groceries", "spent": 400}, { "cat": "Utilities", "spent": 300}, ] }); var store = Ext.create("MyApp.store.Expense"); Ext.create('Ext.Container', { renderTo: Ext.getBody(), width: 600, height: 500, layout: 'fit', items: [{ xtype: 'polar', legend: { docked: 'bottom' }, insetPadding: { top: 100, bottom: 20, left: 20, right: 40 }, store: store, series: [{ type: 'pie', angleField: 'spent', label: { field: 'cat', }, tooltip: { trackMouse: true, renderer: function (storeItem, item) { var value = ((parseFloat(storeItem.get('spent') / storeItem.store.sum('spent')) * 100.0).toFixed(2)); this.setHtml(storeItem.get('cat') + ': ' + value + '%'); } } }] }] }); The donut chart Just by setting the donut property of the series in the preceding example to 40, you'll get the following chart. Here, donut is the percentage of the radius of the hole compared to the entire disk: The 3D pie chart In Ext JS 6, there were some improvements made to the 3D pie chart. The 3D pie chart in Ext JS 6 now supports the label and configurable 3D aspects, such as thickness, distortion, and so on. Let's use the same model and store that was used in the pie chart example and create a 3D pie chart as follows: Ext.create('Ext.Container', { renderTo: Ext.getBody(), width: 600, height: 500, layout: 'fit', items: [{ xtype: 'polar', legend: { docked: 'bottom' }, insetPadding: { top: 100, bottom: 20, left: 80, right: 80 }, store: store, series: [{ type: 'pie3d', donut: 50, thickness: 70, distortion: 0.5, angleField: 'spent', label: { field: 'cat' }, tooltip: { trackMouse: true, renderer: function (storeItem, item) { var value = ((parseFloat(storeItem.get('spent') / storeItem.store.sum('spent')) * 100.0).toFixed(2)); this.setHtml(storeItem.get('cat') + ': ' + value + '%'); } } }] }] }); The following image shows the output of the preceding code: The expense analyzer – a sample project Now that you have learned the different kinds of charts available in Ext JS, let's use them to create a sample project called Expense Analyzer. The following screenshot shows the design of this sample project: Let's use Sencha Cmd to scaffold our application. Run the following command in the terminal or command window: sencha -sdk <path to SDK>/ext-6.0.0.415/ generate app EA ./expense-analyzer Now, let's remove all the unwanted files and code and add some additional files to create this project. The final folder structure and some of the important files are shown in the following Figure 7.2: The complete source code is not given in this article. Here, only some of the important files are shown. In between, some less important code has been truncated. The complete source is available at https://github.com/ananddayalan/extjs-by-example-expense-analyzer.  Figure 7.2 Now, let's create the grid shown in the design. The following code is used to create the grid. 
This List view extends from Ext.grid.Panel, uses the expense store for the data, and has three columns: Ext.define('EA.view.main.List', { extend: 'Ext.grid.Panel', xtype: 'mainlist', maxHeight: 400, requires: [ 'EA.store.Expense'], title: 'Year to date expense by category', store: { type: 'expense' }, columns: { defaults: { flex:1 }, items: [{ text: 'Category', dataIndex: 'cat' }, { formatter: "date('F')", text: 'Month', dataIndex: 'date' }, { text: 'Spent', dataIndex: 'spent' }] } }); Here, I have not used the pagination. The maxHeight is used to limit the height of the grid, and this enables the scroll bar as well because we have more records that won't fit the given maximum height of the grid. The following code creates the expense store used in the preceding example. This is a simple store with the inline data. Here, we have not created a separate model and added fields directly in the store: Ext.define('EA.store.Expense', { extend: 'Ext.data.Store', alias: 'store.expense', storeId: 'expense', fields: [{ name:'date', type: 'date' }, 'cat', 'spent' ], data: { items: [ { "date": "1/1/2015", "cat": "Restaurant", "spent": 100 }, { "date": "1/1/2015", "cat": "Travel", "spent": 22 }, { "date": "1/1/2015", "cat": "Insurance", "spent": 343 }, // Truncated code ]}, proxy: { type: 'memory', reader: { type: 'json', rootProperty: 'items' } } }); Next, let's create the bar chart shown in the design. In the bar chart, we will use another store called expensebyMonthStore, in which we'll populate data from the expense data store. The following 3D bar chart has two types of axis: numeric and category. We have used the month part of the date field as a category. A renderer is used to render the month part of the date field: Ext.define('EA.view.main.Bar', { extend: 'Ext.chart.CartesianChart', requires: ['Ext.chart.axis.Category', 'Ext.chart.series.Bar3D', 'Ext.chart.axis.Numeric', 'Ext.chart.interactions.ItemHighlight'], xtype: 'mainbar', height: 500, padding: { top: 50, bottom: 20, left: 100, right: 100 }, legend: { docked: 'bottom' }, insetPadding: { top: 100, bottom: 20, left: 20, right: 40 }, store: { type: 'expensebyMonthStore' }, axes: [{ type: 'numeric', position: 'left', grid: true, minimum: 0, title: { text: 'Spendings in $', fontSize: 16 }, }, { type: 'category', position: 'bottom', title: { text: 'Month', fontSize: 16 }, label: { font: 'bold Arial', rotate: { degrees: 300 } }, renderer: function (date) { return ["Jan", "Feb", "Mar", "Apr", "May"][date.getMonth()]; } } ], series: [{ type: 'bar3d', xField: 'date', stacked: false, title: ['Total'], yField: ['total'] }], sprites: [{ type: 'text', text: 'Expense by Month', font: '20px Helvetica', width: 120, height: 35, x: 60, y: 40 }] }); Now, let's create the MyApp.model.ExpensebyMonth store used in the preceding bar chart view. This store will display the total amount spent in each month. This data is populated by grouping the expense store with the date field. 
Take a look at how the data property is configured to populate the data: Ext.define('MyApp.model.ExpensebyMonth', { extend: 'Ext.data.Model', fields: [{name:'date', type: 'date'}, 'total'] }); Ext.define('MyApp.store.ExpensebyMonth', { extend: 'Ext.data.Store', alias: 'store.expensebyMonthStore', model: 'MyApp.model.ExpensebyMonth', data: (function () { var data = []; var expense = Ext.createByAlias('store.expense'); expense.group('date'); var groups = expense.getGroups(); groups.each(function (group) { data.push({ date: group.config.groupKey, total: group.sum('spent') }); }); return data; })() }); Then, the following code is used to generate the pie chart. This chart uses the expense store, but only shows one selected month of data at a time. A drop-down box is added to the main view to select the month. The beforerender is used to filter the expense store to show the data only for the month of January on the load: Ext.define('EA.view.main.Pie', { extend: 'Ext.chart.PolarChart', requires: ['Ext.chart.series.Pie3D'], xtype: 'mainpie', height: 800, legend: { docked: 'bottom' }, insetPadding: { top: 100, bottom: 20, left: 80, right: 80 }, listeners: { beforerender: function () { var dateFiter = new Ext.util.Filter({ filterFn: function(item) { return item.data.date.getMonth() ==0; } }); Ext.getStore('expense').addFilter(dateFiter); } }, store: { type: 'expense' }, series: [{ type: 'pie3d', donut: 50, thickness: 70, distortion: 0.5, angleField: 'spent', label: { field: 'cat', } }] }); So far, we have created the grid, the bar chart, the pie chart, and the stores required for this sample application. Now, we need to link them together in the main view. The following code shows the main view from the classic toolkit. The main view is simply a tab control and specifies what view to render for each tab: Ext.define('EA.view.main.Main', { extend: 'Ext.tab.Panel', xtype: 'app-main', requires: [ 'Ext.plugin.Viewport', 'Ext.window.MessageBox', 'EA.view.main.MainController', 'EA.view.main.List', 'EA.view.main.Bar', 'EA.view.main.Pie' ], controller: 'main', autoScroll: true, ui: 'navigation', // Truncated code items: [{ title: 'Year to Date', iconCls: 'fa-bar-chart', items: [ { html: '<h3>Your average expense per month is: ' + Ext.createByAlias('store.expensebyMonthStore').average('total') + '</h3>', height: 70, }, { xtype: 'mainlist'}, { xtype: 'mainbar' } ] }, { title: 'By Month', iconCls: 'fa-pie-chart', items: [{ xtype: 'combo', value: 'Jan', fieldLabel: 'Select Month', store: ['Jan', 'Feb', 'Mar', 'Apr', 'May'], listeners: { select: 'onMonthSelect' } }, { xtype: 'mainpie' }] }] }); Summary In this article, we looked at the different kinds of charts available in Ext JS. We also created a simple sample project called Expense Analyzer and used some of the concepts you learned in this article. Resources for Article: Further resources on this subject: Ext JS 5 – an Introduction[article] Constructing Common UI Widgets[article] Static Data Management [article]
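One loose end from the Expense Analyzer walkthrough: the combo in the By Month tab wires its select event to an onMonthSelect handler that is not shown in this excerpt (it is part of the full source linked above). A hypothetical sketch of what such a handler could look like — the filtering approach mirrors the beforerender filter used in the pie view, but the method body here is an assumption, not the author's exact code:

Ext.define('EA.view.main.MainController', {
    extend: 'Ext.app.ViewController',
    alias: 'controller.main',

    onMonthSelect: function (combo) {
        // map the selected label back to a zero-based month index
        var month = ['Jan', 'Feb', 'Mar', 'Apr', 'May'].indexOf(combo.getValue());
        var store = Ext.getStore('expense');

        // replace the previous month filter with one for the selected month
        store.clearFilter();
        store.addFilter(new Ext.util.Filter({
            filterFn: function (item) {
                return item.data.date.getMonth() === month;
            }
        }));
    }
});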

Storage Configurations

Packt
07 Sep 2015
21 min read
In this article by Wasim Ahmed, author of the book Proxmox Cookbook, we will cover topics such as local storage, shared storage, Ceph storage, and a recipe which shows you how to configure the Ceph RBD storage. (For more resources related to this topic, see here.) A storage is where virtual disk images of virtual machines reside. There are many different types of storage systems with many different features, performances, and use case scenarios. Whether it is a local storage configured with direct attached disks or a shared storage with hundreds of disks, the main responsibility of a storage is to hold virtual disk images, templates, backups, and so on. Proxmox supports different types of storages, such as NFS, Ceph, GlusterFS, and ZFS. Different storage types can hold different types of data. For example, a local storage can hold any type of data, such as disk images, ISO/container templates, backup files and so on. A Ceph storage, on the other hand, can only hold a .raw format disk image. In order to provide the right type of storage for the right scenario, it is vital to have a proper understanding of different types of storages. The full details of each storage is beyond the scope of this article, but we will look at how to connect them to Proxmox and maintain a storage system for VMs. Storages can be configured into two main categories: Local storage Shared storage Local storage Any storage that resides in the node itself by using directly attached disks is known as a local storage. This type of storage has no redundancy other than a RAID controller that manages an array. If the node itself fails, the storage becomes completely inaccessible. The live migration of a VM is impossible when VMs are stored on a local storage because during migration, the virtual disk of the VM has to be copied entirely to another node. A VM can only be live-migrated when there are several Proxmox nodes in a cluster and the virtual disk is stored on a shared storage accessed by all the nodes in the cluster. Shared storage A shared storage is one that is available to all the nodes in a cluster through some form of network media. In a virtual environment with shared storage, the actual virtual disk of the VM may be stored on a shared storage, while the VM actually runs on another Proxmox host node. With shared storage, the live migration of a VM becomes possible without powering down the VM. Multiple Proxmox nodes can share one shared storage, and VMs can be moved around since the virtual disk is stored on different shared storages. Usually, a few dedicated nodes are used to configure a shared storage with their own resources apart from sharing the resources of a Proxmox node, which could be used to host VMs. In recent releases, Proxmox has added some new storage plugins that allow users to take advantage of some great storage systems and integrating them with the Proxmox environment. Most of the storage configurations can be performed through the Proxmox GUI. Ceph storage Ceph is a powerful distributed storage system, which provides RADOS Block Device (RBD) object storage, Ceph filesystem (CephFS), and Ceph Object Storage. Ceph is built with a very high-level of reliability, scalability, and performance in mind. A Ceph cluster can be expanded to several petabytes without compromising data integrity, and can be configured using commodity hardware. Any data written to the storage gets replicated across a Ceph cluster. Ceph was originally designed with big data in mind. 
Unlike other types of storages, the bigger a Ceph cluster becomes, the higher the performance. However, it can also be used in small environments just as easily for data redundancy. A lower performance can be mitigated using SSD to store Ceph journals. Refer to the OSD Journal subsection in this section for information on journals. The built-in self-healing features of Ceph provide unprecedented resilience without a single point of failure. In a multinode Ceph cluster, the storage can tolerate not just hard drive failure, but also an entire node failure without losing data. Currently, only an RBD block device is supported in Proxmox. Ceph comprises a few components that are crucial for you to understand in order to configure and operate the storage. The following components are what Ceph is made of: Monitor daemon (MON) Object Storage Daemon (OSD) OSD Journal Metadata Server (MSD) Controlled Replication Under Scalable Hashing map (CRUSH map) Placement Group (PG) Pool MON Monitor daemons form quorums for a Ceph distributed cluster. There must be a minimum of three monitor daemons configured on separate nodes for each cluster. Monitor daemons can also be configured as virtual machines instead of using physical nodes. Monitors require a very small amount of resources to function, so allocated resources can be very small. A monitor can be set up through the Proxmox GUI after the initial cluster creation. OSD Object Storage Daemons (OSDs) are responsible for the storage and retrieval of actual cluster data. Usually, each physical storage device, such as HDD or SSD, is configured as a single OSD. Although several OSDs can be configured on a single physical disc, it is not recommended for any production environment at all. Each OSD requires a journal device where data first gets written and later gets transferred to an actual OSD. By storing journals on fast-performing SSDs, we can increase the Ceph I/O performance significantly. Thanks to the Ceph architecture, as more and more OSDs are added into the cluster, the I/O performance also increases. An SSD journal works very well on small clusters with about eight OSDs per node. OSDs can be set up through the Proxmox GUI after the initial MON creation. OSD Journal Every single piece of data that is destined to be a Ceph OSD first gets written in a journal. A journal allows OSD daemons to write smaller chunks to allow the actual drives to commit writes that give more time. In simpler terms, all data gets written to journals first, then the journal filesystem sends data to an actual drive for permanent writes. So, if the journal is kept on a fast-performing drive, such as SSD, incoming data will be written at a much higher speed, while behind the scenes, slower performing SATA drives can commit the writes at a slower speed. Journals on SSD can really improve the performance of a Ceph cluster, especially if the cluster is small, with only a few terabytes of data. It should also be noted that if there is a journal failure, it will take down all the OSDs that the journal is kept on the journal drive. In some environments, it may be necessary to put two SSDs to mirror RAIDs and use them as journaling. In a large environment with more than 12 OSDs per node, performance can actually be gained by collocating a journal on the same OSD drive instead of using SSD for a journal. MDS The Metadata Server (MDS) daemon is responsible for providing the Ceph filesystem (CephFS) in a Ceph distributed storage system. 
MDS can be configured on separate nodes or coexist with already configured monitor nodes or virtual machines. Although CephFS has come a long way, it is still not fully recommended for use in a production environment. It is worth mentioning here that there are many virtual environments actively running MDS and CephFS without any issues. Currently, it is not recommended to configure more than two MDSs in a Ceph cluster. CephFS is not currently supported by a Proxmox storage plugin. However, it can be configured as a local mount and then connected to a Proxmox cluster through the Directory storage. MDS cannot be set up through the Proxmox GUI as of version 3.4.

CRUSH map

A CRUSH map is the heart of the Ceph distributed storage. The algorithm for storing and retrieving user data in Ceph clusters is laid out in the CRUSH map. CRUSH allows a Ceph client to directly access an OSD. This eliminates a single point of failure and any physical limitations of scalability, since there are no centralized servers or controllers to manage data in and out. Throughout Ceph clusters, CRUSH maintains a map of all MONs and OSDs. CRUSH determines how data should be chunked and replicated among OSDs spread across several local nodes or even nodes located remotely. A default CRUSH map is created on a freshly installed Ceph cluster. This can be further customized based on user requirements. For smaller Ceph clusters, this map should work just fine. However, when Ceph is deployed with very big data in mind, this map should be customized. A customized map will allow better control of a massive Ceph cluster. To operate Ceph clusters of any size successfully, a clear understanding of the CRUSH map is mandatory. For more details on the Ceph CRUSH map, visit http://ceph.com/docs/master/rados/operations/crush-map/ and http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map. As of Proxmox VE 3.4, we cannot customize the CRUSH map through the Proxmox GUI. It can only be viewed through the GUI and edited through a CLI.

PG

In a Ceph storage, data objects are aggregated in groups determined by CRUSH algorithms. This is known as a Placement Group (PG), since CRUSH places this group in various OSDs depending on the replication level set in the CRUSH map and the number of OSDs and nodes. By tracking a group of objects instead of each object itself, a massive amount of hardware resources can be saved. It would be impossible to track millions of individual objects in a cluster. The following diagram shows how objects are aggregated in groups and how PG relates to OSD:

To balance available hardware resources, it is necessary to assign the right number of PGs. The number of PGs should vary depending on the number of OSDs in a cluster. The following is a table of PG suggestions made by Ceph developers:

Number of OSDs        Number of PGs
Less than 5 OSDs      128
Between 5-10 OSDs     512
Between 10-50 OSDs    4096

Selecting the proper number of PGs is crucial, since each PG will consume node resources. Too many PGs for the wrong number of OSDs will actually penalize the resource usage of an OSD node, while too few assigned PGs in a large cluster will put data at risk. A rule of thumb is to start with the lowest number of PGs possible, then increase them as the number of OSDs increases. For details on Placement Groups, visit http://ceph.com/docs/master/rados/operations/placement-groups/.
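As a rough worked example of where such numbers come from, the guideline usually quoted in the Ceph documentation is (number of OSDs x 100) / replica count, rounded up to the nearest power of two — treat the constant and the rounding as an approximation rather than a hard rule:

6 OSDs, 2 replicas:  (6 x 100) / 2 = 300  ->  next power of two = 512 PGs
9 OSDs, 3 replicas:  (9 x 100) / 3 = 300  ->  next power of two = 512 PGs

This lines up with the 512 PGs suggested above for clusters with between 5 and 10 OSDs.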
There's a great PG calculator created by Ceph developers to calculate the recommended number of PGs for various sizes of Ceph clusters at http://ceph.com/pgcalc/. Pools Pools in Ceph are like partitions on a hard drive. We can create multiple pools on a Ceph cluster to separate stored data. For example, a pool named accounting can hold all the accounting department data, while another pool can store the human resources data of a company. When creating a pool, assigning the number of PGs is necessary. During the initial Ceph configuration, three default pools are created. They are data, metadata, and rbd. Deleting a pool will delete all stored objects permanently. For details on Ceph and its components, visit http://ceph.com/docs/master/. The following diagram shows a basic Proxmox+Ceph cluster: The preceding diagram shows four Proxmox nodes, three Monitor nodes, three OSD nodes, and two MDS nodes comprising a Proxmox+Ceph cluster. Note that Ceph is on a different network than the Proxmox public network. Depending on the set replication number, each incoming data object needs to be written more than once. This causes high bandwidth usage. By separating Ceph on a dedicated network, we can ensure that a Ceph network can fully utilize the bandwidth. On advanced clusters, a third network is created only between Ceph nodes for cluster replication, thus improving network performance even further. As of Proxmox VE 3.4, the same node can be used for both Proxmox and Ceph. This provides a great way to manage all the nodes from the same Proxmox GUI. It is not advisable to put Proxmox VMs on a node that is also configured as Ceph. During day-to-day operations, Ceph nodes do not consume large amounts of resources, such as CPU or memory. However, when Ceph goes into rebalancing mode due to OSD or node failure, a large amount of data replication occurs, which takes up lots of resources. Performance will degrade significantly if resources are shared by both VMs and Ceph. Ceph RBD storage can only store .raw virtual disk image files. Ceph itself does not come with a GUI to manage, so having the option to manage Ceph nodes through the Proxmox GUI makes administrative tasks mush easier. Refer to the Monitoring the Ceph storage subsection under the How to do it... section of the Connecting the Ceph RBD storage recipe later in this article to learn how to install a great read-only GUI to monitor Ceph clusters. Connecting the Ceph RBD storage In this recipe, we are going to see how to configure a Ceph block storage with a Proxmox cluster. Getting ready The initial Ceph configuration on a Proxmox cluster must be accomplished through a CLI. After the Ceph installation, initial configurations and one monitor creation for all other tasks can be accomplished through the Proxmox GUI. How to do it... We will now see how to configure the Ceph block storage with Proxmox. Installing Ceph on Proxmox Ceph is not installed by default. Prior to configuring a Proxmox node for the Ceph role, Ceph needs to be installed and the initial configuration must be created through a CLI. The following steps need to be performed on all Proxmox nodes that will be part of the Ceph cluster: Log in to each node through SSH or a console. Configure a second network interface to create a separate Ceph network with a different subnet. Reboot the nodes to initialize the network configuration. 
Using the following command, install the Ceph package on each node:

# pveceph install --version giant

Initializing the Ceph configuration

Before Ceph is usable, we have to create the initial Ceph configuration file on one Proxmox+Ceph node. The following steps need to be performed only on one Proxmox node that will be part of the Ceph cluster:

Log in to the node using SSH or a console.
Run the following command to create the initial Ceph configuration:
# pveceph init --network <ceph_subnet>/CIDR
Run the following command to create the first monitor:
# pveceph createmon

Configuring Ceph through the Proxmox GUI

After the initial Ceph configuration and the creation of the first monitor, we can continue with further Ceph configurations through the Proxmox GUI or simply run the Ceph Monitor creation command on other nodes. The following steps show how to create Ceph Monitors and OSDs from the Proxmox GUI:

Log in to the Proxmox GUI as root or with any other administrative privilege.
Select the node where the initial monitor was created in the previous steps, and then click on Ceph from the tabbed menu. The following screenshot shows a Ceph cluster as it appears after the initial Ceph configuration. Since no OSDs have been created yet, it is normal for a new Ceph cluster to show a PGs stuck and unclean error.
Click on Disks on the bottom tabbed menu under Ceph to display the disks attached to the node, as shown in the following screenshot.
Select an available attached disk, then click on the Create: OSD button to open the OSD dialog box, as shown in the following screenshot.
Click on the Journal Disk drop-down menu to select a different device, or collocate the journal on the same OSD by keeping it as the default.
Click on Create to finish the OSD creation. Create additional OSDs on Ceph nodes as needed. The following screenshot shows a Proxmox node with three OSDs configured.

By default, Proxmox creates OSDs with an ext3 partition. However, sometimes it may be necessary to create OSDs with a different partition type due to a requirement or for performance improvement. Enter the following command format through the CLI to create an OSD with a different partition type:

# pveceph createosd --fstype ext4 /dev/sdX

The following steps show how to create Monitors through the Proxmox GUI:

Click on Monitor from the tabbed menu under the Ceph feature. The following screenshot shows the Monitor status with the initial Ceph Monitor we created earlier in this recipe.
Click on Create to open the Monitor dialog box.
Select a Proxmox node from the drop-down menu.
Click on the Create button to start the monitor creation process. Create a total of three Ceph monitors to establish a Ceph quorum. The following screenshot shows the Ceph status with three monitors and OSDs added.

Note that even with three OSDs added, the PGs are still stuck with errors. This is because, by default, the Ceph CRUSH map is set up for two replicas. So far, we've only created OSDs on one node. For a successful replication, we need to add some OSDs on the second node so that data objects can be replicated twice. Follow the steps described earlier to create three additional OSDs on the second node. After creating three more OSDs, the Ceph status should look like the following screenshot:

Managing Ceph pools

It is possible to perform basic tasks, such as creating and removing Ceph pools, through the Proxmox GUI. Besides these, we can check the list, status, number of PGs, and usage of the Ceph pools.
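The same pool housekeeping can also be done from the CLI, which is useful when scripting; these are standard Ceph commands rather than anything Proxmox-specific, and the pool name and PG count below are only examples. The GUI steps follow:

# ceph osd lspools
# ceph osd pool create testpool 128
# ceph df
# ceph osd pool delete testpool testpool --yes-i-really-really-mean-it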
The following steps show how to check, create, and remove Ceph pools through the Proxmox GUI: Click on the Pools tabbed menu under Ceph in the Proxmox GUI. The following screenshot shows the status of the default rbd pool, which has replica 1, 256 PG, and 0% usage: Click on Create to open the pool creation dialog box. Fill in the required information, such as the name of the pool, replica size, and number of PGs. Unless the CRUSH map has been fully customized, the ruleset should be left at the default value 0. Click on OK to create the pool. To remove a pool, select the pool and click on Remove. Remember that once a Ceph pool is removed, all the data stored in this pool is deleted permanently. To increase the number of PGs, run the following command through the CLI: #ceph osd pool set <pool_name> pg_num <value> #ceph osd pool set <pool_name> pgp_num <value> It is only possible to increase the PG value. Once increased, the PG value can never be decreased. Connecting RBD to Proxmox Once a Ceph cluster is fully configured, we can proceed to attach it to the Proxmox cluster. During the initial configuration file creation, Ceph also creates an authentication keyring in the /etc/ceph/ceph.client.admin.keyring directory path. This keyring needs to be copied and renamed to match the name of the storage ID to be created in Proxmox. Run the following commands to create a directory and copy the keyring: # mkdir /etc/pve/priv/ceph # cd /etc/ceph/ # cp ceph.client.admin.keyring /etc/pve/priv/ceph/<storage>.keyring For our storage, we are naming it rbd.keyring. After the keyring is copied, we can attach the Ceph RBD storage with Proxmox using the GUI: Click on Datacenter, then click on Storage from the tabbed menu. Click on the Add drop-down menu and select the RBD storage plugin. Enter the information as described in the following table: Item Type of value Entered value ID The name of the storage. rbd Pool The name of the Ceph pool. rbd Monitor Host The IP address and port number of the Ceph MONs. We can enter multiple MON hosts for redundancy. 172.16.0.71:6789;172.16.0.72:6789; 172.16.0.73:6789 User name The default Ceph administrator. Admin Nodes The Proxmox nodes that will be able to use the storage. All Enable The checkbox for enabling/disabling the storage. Enabled Click on Add to attach the RBD storage. The following screenshot shows the RBD storage under Summary: Monitoring the Ceph storage Ceph itself does not come with any GUI to manage or monitor the cluster. We can view the cluster status and perform various Ceph-related tasks through the Proxmox GUI. There are several third-party software that allow Ceph-only GUI to manage and monitor the cluster. Some software provide management features, while others provide read-only features for Ceph monitoring. Ceph Dash is such a software that provides an appealing read-only GUI to monitor the entire Ceph cluster without logging on to the Proxmox GUI. Ceph Dash is freely available through GitHub. There are other heavyweight Ceph GUI dashboards, such as Kraken, Calamari, and others. In this section, we are only going to see how to set up the Ceph Dash cluster monitoring GUI. The following steps can be used to download and start Ceph Dash to monitor a Ceph cluster using any browser: Log in to any Proxmox node, which is also a Ceph MON. 
Run the following commands to download and start the dashboard:

# mkdir /home/tools
# apt-get install git
# cd /home/tools
# git clone https://github.com/Crapworks/ceph-dash
# cd /home/tools/ceph-dash
# ./ceph_dash.py

Ceph Dash will now start listening on port 5000 of the node. If the node is behind a firewall, open port 5000 or set up port forwarding in the firewall. Open any browser and enter <node_ip>:5000 to open the dashboard. The following screenshot shows the dashboard of the Ceph cluster we have created:

We can also monitor the status of the Ceph cluster through a CLI using the following commands:

To check the Ceph status:
# ceph -s
To view OSDs in different nodes:
# ceph osd tree
To display real-time Ceph logs:
# ceph -w
To display a list of Ceph pools:
# rados lspools
To change the number of replicas of a pool:
# ceph osd pool set <pool_name> size <value>

Besides the preceding commands, there are many more CLI commands to manage Ceph and perform advanced tasks. The official Ceph documentation has a wealth of information and how-to guides, along with the CLI commands to perform them. The documentation can be found at http://ceph.com/docs/master/.

How it works…

At this point, we have successfully integrated a Ceph cluster with a Proxmox cluster, which comprises six OSDs, three MONs, and three nodes. By viewing the Ceph Status page, we can get a lot of information about a Ceph cluster at a quick glance. From the previous figure, we can see that there are 256 PGs in the cluster and the total cluster storage space is 1.47 TB. A healthy cluster will have the PG status as active+clean. Based on the nature of the issue, the PGs can have various states, such as active+unclean, inactive+degraded, active+stale, and so on. To learn the details of all the states, visit http://ceph.com/docs/master/rados/operations/pg-states/.

By configuring a second network interface, we can separate the Ceph network from the main network. The pveceph init command creates a Ceph configuration file at /etc/pve/ceph.conf. A newly configured Ceph configuration file looks similar to the following screenshot:

Since the ceph.conf configuration file is stored in pmxcfs, any changes made to it are immediately replicated to all the Proxmox nodes in the cluster.

As of Proxmox VE 3.4, Ceph RBD can only store the .raw image format. No templates, containers, or backup files can be stored on the RBD block storage. Here is the content of the storage configuration file after adding the Ceph RBD storage:

rbd: rbd
    monhost 172.16.0.71:6789;172.16.0.72:6789;172.16.0.73:6789
    pool rbd
    content images
    username admin

If a situation dictates an IP address change for any node, we can simply edit this content in the configuration file to manually change the IP addresses of the Ceph MON nodes.

See also

To learn about Ceph in greater detail, visit http://ceph.com/docs/master/ for the official Ceph documentation. Also, visit https://indico.cern.ch/event/214784/session/6/contribution/68/material/slides/0.pdf to find out why Ceph is being used at CERN to store the massive data generated by the Large Hadron Collider (LHC).

Summary

In this article, we looked at different configurations for a variety of storage categories and got hands-on practice with the various stages of configuring the Ceph RBD storage.

Resources for Article:

Further resources on this subject: Deploying New Hosts with vCenter [article], Let's Get Started with Active Directory [article], Basic Concepts of Proxmox Virtual Environment [article]

My First Puppet Module

Packt
07 Sep 2015
15 min read
In this article by Jussi Heinonen, the author of Learning Puppet, we will get started with creating a Puppet module and the various aspects associated with it. Together with all the manifest files that we created so far, there are several of them already, and we haven't yet started to develop Puppet manifests. As the number of manifests expand, one may start wondering how files can be distributed and applied efficiently across multiple systems. This article will introduce you to Puppet modules and show you how to prepare a simple web server environment with Puppet. (For more resources related to this topic, see here.) Introducing the Puppet module The Puppet module is a collection of code and data that usually solves a particular problem, such as the installation and configuration of a web server. A module is packaged and distributed in the TAR (tape archive) format. When a module is installed, Puppet extracts the archive file on the disk, and the output of the installation process is a module directory that contains Puppet manifests (code), static files (data), and template files (code and data). Static files are typically some kind of configuration files that we want to distribute across all the nodes in the cluster. For example, if we want to ensure that all the nodes in the cluster are using the same DNS server configuration, we can include the /etc/resolv.conf file in the module and tell Puppet to apply it across all the nodes. This is just an example of how static files are used in Puppet and not a recommendation for how to configure DNS servers. Like static files, template files can also be used to provide configuration. The difference between a static and template file is that a static file will always have the same static content when applied across multiple nodes, whereas the template file can be customized based on the unique characteristics of a node. A good example of a unique characteristic is an IP address. Each node (or a host) in the network must have a unique IP address. Using the template file, we can easily customize the configuration on every node, wherever the template is applied. It's a good practice to keep the manifest files short and clean to make them easy to read and quick to debug. When I write manifests, I aim to keep the length of the manifest file in less than a hundred lines. If the manifest length exceeds 100 lines, then this means that I may have over-engineered the process a little bit. If I can't simplify the manifest to reduce the number of lines, then I have to split the manifest into multiple smaller manifest files and store these files within a Puppet module. The Puppet module structure The easiest way to get familiar with a module structure is to create an empty module with the puppet module generate command. As we are in the process of building a web server that runs a web application, we should give our module a meaningful name, such as learning-webapp. The Puppet module name format Before we create our first module, let's take a quick look at the Puppet module naming convention. The Puppet module name is typically in the format of <author>-<modulename>. A module name must contain one hyphen character (no more, no less) that separates the <author> and the <modulename> names. In the case of our learning-webapp module that we will soon create, the author is called learning and the module name is webapp, thus the module name learning-webapp. 
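To make the <author>-<modulename> format concrete, both parts end up in the generated module's metadata.json file. Here is a rough sketch of the relevant fields for our module — the values are illustrative, and the exact set of fields depends on the Puppet version in use:

{
  "name": "learning-webapp",
  "version": "0.1.0",
  "author": "learning",
  "summary": "Manifests for a simple web application environment",
  "license": "Apache-2.0",
  "dependencies": []
}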
Generating a Puppet module Let's take a look at the following steps to create the learning-webapp Puppet module: Start the puppet-agent virtual machine. Using the cd command, navigate to the directory that is shared via the shared folder. On my virtual machine, my shared folder appears as /media/sf_learning, and I can move to the directory by running the following command: # cd /media/sf_learning Then, I'll create an empty puppet module with the command puppet module generate learning-webapp --skip-interview and the command returns a list of files and directories that the module contains: # puppet module generate learning-webapp --skip-interview Notice: Generating module at /media/sf_learning/learning-webapp Notice: Populating templates... Finished; module generated in learning-webapp. learning-webapp/metadata.json learning-webapp/Rakefile learning-webapp/manifests learning-webapp/manifests/init.pp learning-webapp/spec learning-webapp/spec/spec_helper.rb learning-webapp/spec/classes learning-webapp/spec/classes/init_spec.rb learning-webapp/Gemfile learning-webapp/tests learning-webapp/tests/init.pp learning-webapp/README.md To get a better view of how the files in the directories are organized in the learning-webapp module, you can run the tree learning-webapp command, and this command will produce the following tree structure of the files:   Here, we have a very simple Puppet module structure. Let's take a look at the files and directories inside the module in more detail: Gemfile: A file used for describing the Ruby package dependencies that are used for unit testing. For more information on Gemfile, visit http://bundler.io/v1.3/man/gemfile.5.html. manifests: A directory for all the Puppet manifest files in the module. manifests/init.pp: A default manifest file that declares the main Puppet class called webapp. metadata.json: A file that contains the module metadata, such as the name, version, and module dependencies. README.md: A file that contains information about the usage of the module. Spec: An optional directory for automated tests. Tests: A directory that contains examples that show how to call classes that are stored in the manifests directory. tests/init.pp: A file containing an example how to call the main class webapp in file manifests/init.pp. A Puppet class A Puppet class is a container for Puppet resources. A class typically includes references to multiple different types of resources and can also reference other Puppet classes. The syntax for declaring a Puppet class is not that different from declaring Puppet resources. A class definition begins with the keyword class, followed by the name of the class (unquoted) and an opening curly brace ({). A class definition ends with a closing curly brace (}). Here is a generic syntax of the Puppet class: class classname { } Let's take a look at the manifests/init.pp file that you just created with the puppet module generate command. Inside the file, you will find an empty Puppet class called webapp. You can view the contents of the manifests/init.pp file using the following command: # cat /media/sf_learning/learning-webapp/manifests/init.pp The init.pp file mostly contains the comment lines, which are prefixed with the # sign, and these lines can be ignored. At the end of the file, you can find the following declaration for the webapp class: class webapp { } The webapp class is a Puppet class that does nothing as it has no resources declared inside it. 
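Before we put anything into the webapp class, here is a hypothetical example of what a class that actually contains resources can look like — a package and a service managed together. The ntp/ntpd names are purely illustrative (the service name varies by platform) and are not part of the learning-webapp module:

class timesync {
  # install the package first...
  package { 'ntp':
    ensure => installed,
  }

  # ...then keep the service running; require expresses the ordering
  service { 'ntpd':
    ensure  => running,
    enable  => true,
    require => Package['ntp'],
  }
}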
Resources inside the Puppet class Let's add a notify resource to the webapp class in the manifests/init.pp file before we go ahead and apply the class. The notify resource does not manage any operating system resources, such as files or users, but instead, it allows Puppet to report a message when a resource is processed. As the webapp module was created inside shared folders, you no longer have to use the Nano editor inside the virtual machine to edit manifests. Instead, you can use a graphical text editor, such as a Notepad on Windows or Gedit on the Linux host. This should make the process of editing manifests a bit easier and more user friendly. The directory that I shared on the host computer is /home/jussi/learning. When I take a look inside this directory, I can find a subdirectory called learning-webapp, which is the Puppet module directory that we created a moment ago. Inside this, there is a directory called manifests, which contains the init.pp file. Open the init.pp file in the text editor on the host computer and scroll down the file until you find the webapp class code block that looks like the following: class webapp { } If you prefer to carry on using the Nano editor to edit manifest files (I salute you!), you can open the init.pp file inside the virtual machine with the nano /media/sf_learning/learning-webapp/manifests/init.pp command. The notify resource that we are adding must be added inside the curly braces that begins and ends the class statement; otherwise, the resource will not be processed when we apply the class. Now we can add a simple notify resource that makes the webapp class look like the following when completed: class webapp { notify { 'Applying class webapp': } } Let's take a look at the preceding lines one by one: Line 1 begins with the webapp class, followed by the opening curly brace. Line 2 declares a notify resource and a new opening curly brace, followed by the resource name. The name of the notify resource will become the message that Puppet prints on the screen when the resource from a class is processed. Line 3 closes the notify resource statement. Line 4 indicates that the webapp class finishes here. Once you have added the notify resource to the webapp class, save the init.pp file. Rename the module directory Before we can apply our webapp class, we must rename our module directory. It is unclear to me as to why the puppet module generate command creates a directory name that contains a hyphen character (as in learning-webapp). The hyphen character is not allowed to be present in the Puppet module directory name. For this reason, we must rename the learning-webapp directory before we can apply the webapp class inside it. As the learning-webapp module directory lives in the shared folders, you can either use your preferred file manager program to rename the directory, or you can run the following two commands inside the Puppet Learning VM to change the directory name from learning-webapp to webapp: # cd /media/sf_learning # mv learning-webapp webapp Your module directory name should now be webapp, and we can move on to apply the webapp class inside the module and see what happens. Applying a Puppet class You can try running the puppet apply webapp/manifests/init.pp command but don't be disappointed when nothing happens. Why is that? The reason is because there is nothing inside the init.pp file that references the webapp class. 
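Before applying the class in the next section, it can be worth syntax-checking the manifest. Puppet ships with a parser validation subcommand that reports syntax errors without applying anything; run it from the /media/sf_learning directory:

# puppet parser validate webapp/manifests/init.pp

If the command prints nothing, the manifest parsed cleanly.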
If you are familiar with object-oriented programming, you may know that a class must be instantiated in order to get services from it. In this case, Puppet behaves in a similar way to object-oriented programming languages, as you must make a reference to the class in order to tell Puppet to process the class. Puppet has an include keyword that is used to reference a class. The include keyword in Puppet is only available for class resources, and it cannot be used in conjunction with any other type of Puppet resources. To apply the webapp class, we can make use of the init.pp file under the tests directory that was created when the module was generated. If you take a look inside the tests/init.pp file, you will find a line include webapp. The tests/init.pp file is the one that we should use to apply the webapp class. Here are the steps on how to apply the webapp class inside the Puppet Learning VM: Go to the parent directory of the webapp module: # cd /media/sf_learning Apply the webapp class that is included in the tests/init.pp file: # puppet apply --modulepath=./ webapp/tests/init.pp When the class is applied successfully, you should see the notify resource that was added to the webapp class that appears on lines 2 and 3 in the following Puppet report: Notice: Compiled catalog for web.development.vm in environment production in 0.05 seconds Notice: Applying class webapp Notice: /Stage[main]/Webapp/Notify[Applying class webapp]/message: defined 'message' as 'Applying class webapp' Notice: Finished catalog run in 0.81 seconds Let's take a step back and look again at the command that we used to apply to the webapp class: # puppet apply --modulepath=./ webapp/tests/init.pp The command can be broken down into three elements: puppet apply: The puppet apply command is used when applying a manifest from the command line. modulepath=./: This option is used to tell Puppet what filesystem path to use to look for the webapp module. The ./ (dot forward slash) notation means that we want our current /media/sf_learning working directory to be used as the modulepath value. webapp/tests/init.pp: This is the file that the puppet apply command should read. Installing a module from Puppet Forge Puppet Forge is a public Puppet module repository (https://forge.puppetlabs.com) for modules that are created by the community around Puppet. Making use of the modules in Puppet Forge is a great way to build a software stack quickly, without having to write all the manifests yourself from scratch. The web server that we are going to install is a highly popular Apache HTTP Server (http://httpd.apache.org/), and there is a module in Puppet Forge called puppetlabs-apache that we can install. The Puppetlabs-apache module provides all the necessary Puppet resources for the Apache HTTP Server installation. Note that the puppet module installation requires an Internet connection. To test whether the Puppet Learning VM can connect to the Internet, run the following command on the command line: # host www.google.com On successful completion, the command will return the following output: www.google.com has address 216.58.211.164www.google.com has IPv6 address 2a00:1450:400b:801::2004 Note that the reported IP address may vary. As long as the host command returns www.google.com has address …, the Internet connection works. Now that the Internet connection has been tested, you can now proceed with the module installation. 
Before we install the puppetlabs-apache module, let's do a quick search to confirm that the module is available in Puppet Forge. The following command will search for the puppetlabs-apache module: # puppet module search puppetlabs-apache When the search is successful, it returns the following results:   Then, we can install the module. Follow these steps to install the puppetlabs-apache module: In the Puppet Learning VM, go to the shared folders /media/sf_learning directory by running the cd /media/sf_learning command. Then, run the following command: # puppet module install --modulepath=./ puppetlabs-apache The --modulepath=./ option specifies that the module should be installed in the current /media/sf_learning working directory The installation will take a couple of minutes to complete, and once it is complete, you will see the following lines appear on the screen: Notice: Preparing to install into /media/sf_learning ... Notice: Preparing to install into /media/sf_learning ... Notice: Downloading from https://forgeapi.puppetlabs.com ... Notice: Installing -- do not interrupt ... /media/sf_learning └─┬ puppetlabs-apache (v1.2.0) ├── puppetlabs-concat (v1.1.2) └── puppetlabs-stdlib (v4.8.0) Let's take a look at the output line by line to fully understand what happened during the installation process: Line 1 tells us that the module is going to be installed in the /media/sf_learning directory, which is our current working directory. This directory was specified with the --modulepath=./ option in the puppet module install command. Line 2 says that the module is going to be installed from https://forgeapi.puppetlabs.com/, which is the address for Puppet Forge. Line 3 is fairly self-explanatory and indicates that the installation process is running. Lines 4 and 5 tell us that the puppetlabs-apache module was installed in the current /media/sf_learning working directory. Line 6 indicates that as part of the puppetlabs-apache module installation, a puppetlabs-concat dependency module was also installed. Line 7 lists another dependency module called puppetlabs-stdlib that got installed in the process. Now you can run the tree -L 1 command to see what new directories got created in /media/sf_learning as a result of the puppet module install command: # tree -L 1 ├── apache ├── concat ├── stdlib └── webapp 4 directories, 0 files The argument -L 1 in the tree command specifies that it should only traverse one level of directory hierarchy. Installing Apache HTTP Server Now that the puppetlabs-apache module is installed in the filesystem, we can proceed with the Apache HTTP Server installation. Earlier, we talked about how a Puppet class can be referenced with the include keyword. Let's see how this works in practice by adding the include apache statement to our webapp class, and then applying the webapp class from the command line. Open the webapp/manifests/init.pp file in your preferred text editor, and add the include apache statement inside the webapp class. I like to place the include statements at the beginning of the class before any resource statement. In my text editor, the webapp class looks like the following after the include statement has been added to it:   Once you have saved the webapp/manifests/init.pp file, you can apply the webapp class with the following command: # puppet apply --modulepath=./ webapp/tests/init.pp This time, the command output is much longer compared to what it was when we applied the webapp class for the first time. 
In fact, the output is too long to be included in full, so I'm only going to show you the last two lines of the Puppet report, which shows you the step where the state of the Service[httpd] resource has changed from stopped to running: Notice: /Stage[main]/Apache::Service/Service[httpd]/ensure: ensure changed 'stopped' to 'running'Notice: Finished catalog run in 65.20 seconds Summary So we have now come to the end of this article. I hope you found the content useful and not too challenging. One of the key deliverables of this article was to experiment with Puppet modules and learn how to create your own module

article-image-using-mannequin-editor
Packt
07 Sep 2015
14 min read
Save for later

Using the Mannequin editor

Packt
07 Sep 2015
14 min read
In this article, Richard Marcoux, Chris Goodswen, Riham Toulan, and Sam Howels, the authors of the book CRYENGINE Game Development Blueprints, will take us through animation in CRYENGINE. In the past, animation states were handled by a tool called Animation Graph. This is akin to Flow Graph but handled animations and transitions for all animated entities, and unfortunately reduced any transitions or variation in the animations to a spaghetti graph. Thankfully, we now have Mannequin! This is an animation system where the methods by which animation states are handled is all dealt with behind the scenes—all we need to take care of are the animations themselves. In Mannequin, an animation and its associated data is known as a fragment. Any extra detail that we might want to add (such as animation variation, styles, or effects) can be very simply layered on top of the fragment in the Mannequin editor. While complex and detailed results can be achieved with all manner of first and third person animation in Mannequin, for level design we're only really interested in basic fragments we want our NPCs to play as part of flavor and readability within level scripting. Before we look at generating some new fragments, we'll start off with looking at how we can add detail to an existing fragment—triggering a flare particle as part of our flare firing animation. (For more resources related to this topic, see here.) Getting familiar with the interface First things first, let's open Mannequin! Go to View | Open View Pane | Mannequin Editor. This is initially quite a busy view pane so let's get our bearings on what's important to our work. You may want to drag and adjust the sizes of the windows to better see the information displayed. In the top left, we have the Fragments window. This lists all the fragments in the game that pertain to the currently loaded preview. Let's look at what this means for us when editing fragment entries. The preview workflow A preview is a complete list of fragments that pertains to a certain type of animation. For example, the default preview loaded is sdk_playerpreview1p.xml, which contains all the first person fragments used in the SDK. You can browse the list of fragments in this window to get an idea of what this means—everything from climbing ladders to sprinting is defined as a fragment. However, we're interested in the NPC animations. To change the currently loaded preview, go to File | Load Preview Setup and pick sdk_humanpreview.xml. This is the XML file that contains all the third person animations for human characters in the SDK. Once this is loaded, your fragment list should update to display a larger list of available fragments usable by AI. This is shown in the following screenshot:   If you don't want to perform this step every time you load Mannequin, you are able to change the default preview setup for the editor in the preferences. Go to Tools | Preferences | Mannequin | General and change the Default Preview File setting to the XML of your choice. Working with fragments Now we have the correct preview populating our fragment list, let's find our flare fragment. In the box with <FragmentID Filter> in it, type flare and press Enter. This will filter down the list, leaving you with the fireFlare fragment we used earlier. You'll see that the fragment is comprised of a tree. Expanding this tree one level brings us to the tag. A tag in mannequin is a method of choosing animations within a fragment based on a game condition. 
For example, in the player preview we were in earlier, the begin_reload fragment has two tags: one for SDKRifle and one for SDKShotgun. Depending on the weapon selected by the player, it applies a different tag and consequently picks a different animation. This allows animators to group together animations of the same type that are required in different situations. For our fireFlare fragment, as there are no differing scenarios of this type, it simply has a <default> tag. This is shown in the following screenshot:   Inside this tag, we can see there's one fragment entry: Option 1. These are the possible variations that Mannequin will choose from when the fragment is chosen and the required tags are applied. We only have one variation within fireFlare, but other fragments in the human preview (for example, IA_talkFunny) offer extra entries to add variety to AI actions. To load this entry for further editing, double-click Option 1. Let's get to adding that flare! Adding effects to fragments After loading the fragment entry, the Fragment Editor window has now updated. This is the main window in the center of Mannequin and comprises of a preview window to view the animation and a list of all the available layers and details we can add. The main piece of information currently visible here is the animation itself, shown in AnimLayer under FullBody3P: At the bottom of the Fragment Editor window, some buttons are available that are useful for editing and previewing the fragment. These include a play/pause toggle (along with a playspeed dropdown) and a jump to start button. You are also able to zoom in and out of the timeline with the mouse wheel, and scrub the timeline by click-dragging the red timeline marker around the fragment. These controls are similar to the Track View cinematics tool and should be familiar if you've utilized this in the past. Procedural layers Here, we are able to add our particle effect to the animation fragment. To do this, we need to add ProcLayer (procedural layer) to the FullBody3P section. The ProcLayer runs parallel to AnimLayer and is where any extra layers of detail that fragments can contain are specified, from removing character collision to attaching props. For our purposes, we need to add a particle effect clip. To do this, double-click on the timeline within ProcLayer. This will spawn a blank proc clip for us to categorize. Select this clip and Procedural Clip Properties on the right-hand side of the Fragment Editor window will be populated with a list of parameters. All we need to do now is change the type of this clip from None to ParticleEffect. This is editable in the dropdown Type list. This should present us with a ParticleEffect proc clip visible in the ProcLayer alongside our animation, as shown in the following screenshot:   Now that we have our proc clip loaded with the correct type, we need to specify the effect. The SDK has a couple of flare effects in the particle libraries (searchable by going to RollupBar | Objects Tab | Particle Entity); I'm going to pick explosions.flare.a. To apply this, select the proc clip and paste your chosen effect name into the Effect parameter. If you now scrub through fragment, you should see the particle effect trigger! However, currently the effect fires from the base of the character in the wrong direction. We need to align the effect to the weapon of the enemy. Thankfully, the ParticleEffect proc clip already has support for this in its properties. In the Reference Bone parameter, enter weapon_bone and hit Enter. 
The weapon_bone is the generic bone name that characters' weapons are attached to, and as such it is a good bet for any cases where we require effects or objects to be placed in a character's weapon position. Scrubbing through the fragment again, the effect will now fire from the weapon hand of the character. If we ever need to find out bone names, there are a few ways to access this information within the editor. Hovering over the character in the Mannequin previewer will display the bone name. Alternatively, in the Character Editor (we'll go into the details later), you can scroll down in the Rollup window on the right-hand side, expand Debug Options, and tick ShowJointNames. This will display the names of all bones over the character in the previewer. With the particle attached, we can now ensure that the timing of the particle effect matches the animation. To do this, you can click and drag the proc clip around the timeline; around 1.5 seconds seems to match the timing for this animation. With the effect timed correctly, we now have a fully functioning fireFlare fragment! Try testing out the setup we made earlier with this change. We should now have a far more polished-looking event. The previewer in Mannequin shares the same viewport controls as the perspective view in Sandbox. You can use this to zoom in and look around to gain a better view of the animation preview. The final thing we need to do is save our changes to the Mannequin databases! To do this, go to File | Save Changes. When the list of changed files is displayed, press Save. Mannequin will then tell you that you're editing data from the .pak files. Click Yes to this prompt and your data will be saved to your project. The resulting changed database files will appear in GameSDK\Animations\Mannequin\ADB, and they should be distributed with your project if you package it for release.

Adding a new fragment

Now that we know how to add some effects feedback to existing fragments, let's look at making a new fragment to use as part of our scripting. This is useful to know if you have animators on your project and you want to get their assets into the game quickly to hook up to your content. In our humble SDK project, we can effectively simulate this, as there are a few animations that ship with the SDK that have no corresponding fragment. Now, we'll see how to browse the raw animation assets themselves, before adding them to a brand new Mannequin fragment.

The Character Editor window

Let's open the Character Editor. Apart from being used for editing characters and their attachments in the engine, this is a really handy way to browse the library of available animation assets and preview them in a viewport. To open the Character Editor, go to View | Open View Pane | Character Editor. On some machines, the expense of rendering two scenes at once (that is, the main viewport and the viewports in the Character Editor or Mannequin Editor) can cause both to drop to a fairly sluggish frame rate. If you experience this, either close one of the other view panes you have on the screen or, if you have it tabbed to other panes, simply select another tab. You can also open the Mannequin Editor or the Character Editor without a level loaded, which allows for better performance and minimal load times to edit content. Similar to Mannequin, the Character Editor will initially look quite overwhelming. The primary aspects to focus on are the Animations window in the top-left corner and the Preview viewport in the middle.
In the Filter option in the Animations window, we can search for search terms to narrow down the list of animations. An example of an animation that hasn't yet been turned into a Mannequin fragment is the stand_tac_callreinforcements_nw_3p_01 animation. You can find this by entering reinforcements into the search filter:   Selecting this animation will update the debug character in the Character Editor viewport so that they start to play the chosen animation. You can see this specific animation is a oneshot wave and might be useful as another trigger for enemy reinforcements further in our scripting. Let's turn this into a fragment! We need to make sure we don't forget this animation though; right-click on the animation and click Copy. This will copy the name to the clipboard for future reference in Mannequin. The animation can also be dragged and dropped into Mannequin manually to achieve the same result. Creating fragment entries With our animation located, let's get back to Mannequin and set up our fragment. Ensuring that we're still in the sdk_humanpreview.xml preview setup, take another look at the Fragments window in the top left of Mannequin. You'll see there are two rows of buttons: the top row controls creation and editing of fragment entries (the animation options we looked at earlier). The second row covers adding and editing of fragment IDs themselves: the top level fragment name. This is where we need to start. Press the New ID button on the second row of buttons to bring up the New FragmentID Name dialog. Here, we need to add a name that conforms to the prefixes we discussed earlier. As this is an action, make sure you add IA_ (interest action) as the prefix for the name you choose; otherwise, it won't appear in the fragment browser in the Flow Graph. Once our fragment is named, we'll be presented with Mannequin FragmentID Editor. For the most part, we won't need to worry about these options. But it's useful to be aware of how they might be useful (and don't worry, these can be edited after creation). The main parameters to note are the Scope options. These control which elements of the character are controlled by the fragment. By default, all these boxes are ticked, which means that our fragment will take control of each ticked aspect of the character. An example of where we might want to change this would be the character LookAt control. If we want to get an NPC to look at another entity in the world as part of a scripted sequence (using the AI:LookAt Flow Graph node), it would not be possible with the current settings. This is because the LookPose and Looking scopes are controlled by the fragment. If we were to want to control this via Flow Graph, these would need to be unticked, freeing up the look scopes for scripted control. With scopes covered, press OK at the bottom of the dialog box to continue adding our callReinforcements animation! We now have a fragment ID created in our Fragments window, but it has no entries! With our new fragment selected, press the New button on the first row of buttons to add an entry. This will automatically add itself under the <default> tag, which is the desired behavior as our fragment will be tag-agnostic for the moment. This has now created a blank fragment in the Fragment Editor. Adding the AnimLayer This is where our animation from earlier comes in. Right-click on the FullBody3P track in the editor and go to Add Track | AnimLayer. As we did previously with our effect on ProcLayer, double-click on AnimLayer to add a new clip. 
This will create our new Anim Clip, with some red None markup to signify the lack of animation. Now, all we need to do is select the clip, go to the Anim Clip Properties, and paste in our animation name by double-clicking the Animation parameter. The Animation parameter has a helpful browser that will allow you to search for animations—simply click on the browse icon in the parameter entry section. It lacks the previewer found in the Character Editor but can be a quick way to find animation candidates by name within Mannequin. With our animation finally loaded into a fragment, we should now have a fragment setup that displays a valid animation name on the AnimLayer. Clicking on Play will now play our reinforcements wave animation!   Once we save our changes, all we need to do now is load our fragment in an AISequence:Animation node in Flow Graph. This can be done by repeating the steps outlined earlier. This time, our new fragment should appear in the fragment dialog. Summary Mannequin is a very powerful tool to help with animations in CRYENGINE. We have looked at how to get started with it. Resources for Article: Further resources on this subject: Making an entity multiplayer-ready[article] Creating and Utilizing Custom Entities[article] CryENGINE 3: Breaking Ground with Sandbox [article]

article-image-introduction-watchkit
Packt
04 Sep 2015
7 min read
Save for later

Introduction to WatchKit

Packt
04 Sep 2015
7 min read
In this article by Hossam Ghareeb, author of the book Application Development with Swift, we will talk about a new technology, WatchKit, and a new era of wearable technologies. Now technology is a part of all aspects of our lives, even wearable objects. You can see smart watches such as the new Apple watch or glasses such as Google glass. We will go through the new WatchKit framework to learn how to extend your iPhone app functionalities to your wrist. (For more resources related to this topic, see here.) Apple watch Apple watch is a new device on your wrist that can be used to extend your iPhone app functionality; you can access the most important information and respond in easy ways using the watch. The watch is now available in most countries in different styles and models so that everyone can find a watch that suits them. When you get your Apple watch, you can pair it with your iPhone. The watch can't be paired with the iPad; it can only be paired with your iPhone. To run third-party apps on your watch, iPhone should be paired with the watch. Once paired, when you install any app on your iPhone that has an extension for the watch, the app will be installed on the watch automatically and wirelessly. WatchKit WatchKit is a new framework to build apps for Apple watch. To run third-party apps on Apple watch, you need the watch to be connected to the iPhone. WatchKit helps you create apps for Apple watch by creating two targets in Xcode: The WatchKit app: The WatchKit app is an executable app to be installed on your watch, and you can install or uninstall it from your iPhone. The WatchKit app contains the storyboard file and resources files. It doesn't contain any source code, just the interface and resource files. The WatchKit extension: This extension runs on the iPhone and has the InterfaceControllers file for your storyboard. This extension just contains the model and controller classes. The actions and outlets from the previous WatchKit app will be linked to these controller files in the WatchKit extension. These bundles—the WatchKit extension and WatchKit app—are put together and packed inside the iPhone application. When the user installs the iPhone app, the system will prompt the user to install the WatchKit app if there is a paired watch. Using WatchKit, you can extend your iOS app in three different ways: The WatchKit app As we mentioned earlier, the WatchKit app is an app installed on Apple watch and the user can find it in the list of Watch apps. The user can launch, control, and interact with the app. Once the app is launched, the WatchKit extension on the iPhone app will run in the background to update a user interface, perform any logic required, and respond to user actions. Note that the iPhone app can't launch or wake up the WatchKit extension or the WatchKit app. However, the WatchKit extension can ask the system to launch the iPhone app and this will be performed in the background. Glances Glances are single interfaces that the user can navigate between. The glance view is just read-only information, which means that you can't add any interactive UI controls such as buttons and switches. Apps should use glances to display very important and timely information. The glance view is a nonscrolling view, so your glance view should fit the watch screen. Avoid using tables and maps in interface controllers and focus on delivering the most important information in a nice way. Once the user clicks on the glance view, the watch app will be launched. 
The glance view is optional in your app. The glance interface and its interface controller files are a part of your WatchKit extension and WatchKit app. The glance interface resides in a storyboard, which resides in the WatchKit app. The interface controller that is responsible for filling the view with the timely important information is located in the WatchKit extension, which runs in the background in the iPhone app, as we said before. Actionable notifications For sure, you can handle and respond to local and remote notifications in an easy and fast way using Apple watch. WatchKit helps you build user interfaces for the notification that you want to handle in your WatchKit app. WatchKit helps you add actionable buttons so that the user can take action based on the notification. For example, if a notification for an invitation is sent to you, you can take action to accept or reject the notification from your wrist. You can respond to these actions easily in interface controllers in WatchKit extension. Working with WatchKit Enough talking about theory, lets see some action. Go to our lovely Xcode and create a new single-view application and name it WatchKitDemo. Don't forget to select Swift as the app language. Then navigate to File | New | Target to create a new target for the WatchKit app: After you select the target, in the pop-up window, from the left side under iOS choose Apple Watch and select WatchKit App. Check the following screenshot: After you click on Next, it will ask you which application to embed the target in and which scenes to include. Please check the Include Notification Scene and Include Glance Scene options, as shown in the following screenshot: Click on Finish, and now you have an iPhone app with the built-in WatchKit extension and WatchKit app. Xcode targets Now your project should be divided into three parts. Check the following screenshot and let's explain these parts: As you see in this screenshot, the project files are divided into three sections. In section 1, you can see the iPhone app source files, interface files or storyboard, and resources files. In section 2, you can find the WatchKit extension, which contains only interface controllers and model files. Again, as we said before, this extension also runs in iPhone in the background. In section 3, you can see the WatchKit app, which runs in Apple watch itself. As we see, it contains the storyboard and resources files. No source code can be added in this target. Interface controllers In the WatchKit extension of your Xcode project, open InterfaceController.swift. You will find the interface controller file for the scene that exists in Interface.storyboard in the WatchKit app. The InterfaceController file extends from WKInterfaceController, which is the base class for interface controllers. Forget the UI classes that you were using in the iOS apps from the UIKit framework, as it has different interface controller classes in WatchKit and they are very limited in configuration and customization. In the InterfaceController file, you can find three important methods that explain the lifecycle of your controller: awakeWithContext, willActivate, and didDeactivate. Another important method that can be overridden for the lifecycle is called init, but it's not implemented in the controller file. Let's now explain the four lifecycle methods: init: You can consider this as your first chance to update your interface elements. 
awakeWithContext: This is called after the init method and receives context data that can be used to update your interface elements or to perform some logical operations on that data. Context data is passed between interface controllers when you push or present another controller and you want to pass some data along.
willActivate: Here, your scene is about to become visible onscreen, and it's your last chance to update your UI. Keep the UI changes in this method simple so that you don't cause the UI to freeze.
didDeactivate: Your scene is about to become invisible; if you need to clean up, this is the time to stop animations or timers. A minimal code sketch showing how these callbacks fit together appears at the end of this article.

Summary

In this article, we covered a very important topic: how to develop apps for the new wearable technology, Apple watch. We first gave a quick introduction to the new device and how it communicates with a paired iPhone. We then talked about WatchKit, the new framework that enables you to develop apps for Apple watch and design their interface. Apple has designed the watch to contain only the storyboard and resource files; all logic and operations are performed in the iPhone app in the background. Resources for Article: Further resources on this subject: Flappy Swift [article] Playing with Swift [article] Using OpenStack Swift [article]
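To tie the lifecycle discussion together, here is a minimal sketch of a WKInterfaceController subclass in Swift, roughly as the Xcode template of this era generates it. The class name matches the default InterfaceController file mentioned above; any outlets or extra logic are up to your app and are not shown.

import WatchKit
import Foundation

class InterfaceController: WKInterfaceController {

    // Called after init; the context passed from a presenting or pushing
    // controller (if any) arrives here, so configure interface objects here.
    override func awakeWithContext(context: AnyObject?) {
        super.awakeWithContext(context)
    }

    // The scene is about to become visible; keep any UI updates lightweight.
    override func willActivate() {
        super.willActivate()
    }

    // The scene is about to become invisible; stop timers or animations here.
    override func didDeactivate() {
        super.didDeactivate()
    }
}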
article-image-javascript-execution-selenium
Packt
04 Sep 2015
23 min read
Save for later

JavaScript Execution with Selenium

Packt
04 Sep 2015
23 min read
In this article, by Mark Collin, the author of the book, Mastering Selenium WebDriver, we will look at how we can directly execute JavaScript snippets in Selenium. We will explore the sort of things that you can do and how they can help you work around some of the limitations that you will come across while writing your scripts. We will also have a look at some examples of things that you should avoid doing. (For more resources related to this topic, see here.) Introducing the JavaScript executor Selenium has a mature API that caters to the majority of automation tasks that you may want to throw at it. That being said, you will occasionally come across problems that the API doesn't really seem to support. This was very much on the development team's mind when Selenium was written. So, they provided a way for you to easily inject and execute arbitrary blocks of JavaScript. Let's have a look at a basic example of using a JavaScript executor in Selenium: JavascriptExecutor js = (JavascriptExecutor) driver; js.executeScript("console.log('I logged something to the Javascript console');"); Note that the first thing we do is cast a WebDriver object into a JavascriptExecutor object. The JavascriptExecutor interface is implemented through the RemoteWebDriver class. So, it's not a part of the core set of API functions. Since we normally pass around a WebDriver object, the executeScript functions will not be available unless we perform this cast. If you are directly using an instance of RemoteWebDriver or something that extends it (most driver implementations now do this), you will have direct access to the .executeScript() function. Here's an example: FirefoxDriver driver = new FirefoxDriver(new FirefoxProfile()); driver.executeScript("console.log('I logged something to the Javascript console');"); The second line (in both the preceding examples) is just telling Selenium to execute an arbitrary piece of JavaScript. In this case, we are just going to print something to the JavaScript console in the browser. We can also get the .executeScript() function to return things to us. For example, if we tweak the script of JavaScript in the first example, we can get Selenium to tell us whether it managed to write to the JavaScript console or not, as follows: JavascriptExecutor js = (JavascriptExecutor) driver; Object response = js.executeScript("return console.log('I logged something to the Javascript console');"); In the preceding example, we will get a result of true coming back from the JavaScript executor. Why does our JavaScript start with return? Well, the JavaScript executed by Selenium is executed as a body of an anonymous function. This means that if we did not add a return statement to the start of our JavaScript snippet, we would actually be running this JavaScript function using Selenium: var anonymous = function () { console.log('I logged something to the Javascript console'); }; This function does log to the console, but it does not return anything. So, we can't access the result of the JavaScript snippet. If we prefix it with a return, it will execute this anonymous function: var anonymous = function () { return console.log('I logged something to the Javascript console'); }; This does return something for us to work with. In this case, it will be the result of our attempt to write some text to the console. If we succeeded in writing some text to the console, we will get back a true value. If we failed, we will get back a false value. 
Note that in our example, we saved the response as an object—not a string or a Boolean. This is because the JavaScript executor can return lots of different types of objects. What we get as a response can be one of the following: If the result is null or there is no return value, a null will be returned If the result is an HTML element, a WebElement will be returned If the result is a decimal, a double will be returned If the result is a nondecimal number, a long will be returned If the result is a Boolean, a Boolean will be returned If the result is an array, a List object with each object that it contains, along with all of these rules, will be returned (nested lists are supported) For all other cases, a string will be returned It is an impressive list, and it makes you realize just how powerful this method is. There is more as well. You can also pass arguments into the .executeScript() function. The arguments that you pass in can be any one of the following: Number Boolean String WebElement List They are then put into a magic variable called arguments, which can be accessed by the JavaScript. Let's extend our example a little bit to pass in some arguments, as follows: String animal = "Lion"; int seen = 5; JavascriptExecutor js = (JavascriptExecutor) driver; js.executeScript("console.log('I have seen a ' + arguments[0] + ' ' + arguments[1] + ' times(s)');", animal, seen); This time, you will see that we managed to print the following text into the console: I have seen a Lion 5 times(s) As you can see, there is a huge amount of flexibility with the JavaScript executor. You can write some complex bits of JavaScript code and pass in lots of different types of arguments from your Java code. Think of all the things that you could do! Let's not get carried away We now know the basics of how one can execute JavaScript snippets in Selenium. This is where some people can start to get a bit carried away. If you go through the mailing list of the users of Selenium, you will see many instances of people asking why they can't click on an element. Most of the time, this is due to the element that they are trying to interact with not being visible, which is blocking a click action. The real solution to this problem is to perform an action (the same one that they would perform if they were manually using the website) to make the element visible so that they can interact with it. However, there is a shortcut offered by many, which is a very bad practice. You can use a JavaScript executor to trigger a click event on this element. Doing this will probably make your test pass. So why is it a bad solution? The Selenium development team has spent quite a lot of time writing code that works out if a user can interact with an element. It's pretty reliable. So, if Selenium says that you cannot currently interact with an element, it's highly unlikely that it's wrong. When figuring out whether you can interact with an element, lots of things are taken into account, including the z-index of an element. For example, you may have a transparent element that is covering the element that you want to click on and blocking the click action so that you can't reach it. Visually, it will be visible to you, but Selenium will correctly see it as not visible. If you now invoke a JavaScript executor to trigger a click event on this element, your test will pass, but users will not be able to interact with it when they try to manually use your website. 
However, what if Selenium got it wrong and I can interact with the element that I want to click manually? Well, that's great, but there are two things that you need to think about. First of all, does it work in all browsers? If Selenium thinks that it is something that you cannot interact with, it's probably for a good reason. Is the markup, or the CSS, overly complicated? Can it be simplified? Secondly, if you invoke a JavaScript executor, you will never know whether the element that you want to interact with really does get blocked at some point in the future. Your test may as well keep passing when your application is broken. Tests that can't fail when something goes wrong are worse than no test at all! If you think of Selenium as a toolbox, a JavaScript executor is a very powerful tool that is present in it. However, it really should be seen as a last resort when all other avenues have failed you. Too many people use it as a solution to any slightly sticky problem that they come across. If you are writing JavaScript code that attempts to mirror existing Selenium functions but are removing the restrictions, you are probably doing it wrong! Your code is unlikely to be better. The Selenium development team have been doing this for a long time with a lot of input from a lot of people, many of them being experts in their field. If you are thinking of writing methods to find elements on a page, don't! Use the .findElement() method provided by Selenium. Occasionally, you may find a bug in Selenium that prevents you from interacting with an element in the way you would expect to. Many people first respond by reaching for the JavascriptExecutor to code around the problem in Selenium. Hang on for just one moment though. Have you upgraded to the latest version of Selenium to see if that fixes your problem? Alternatively, did you just upgrade to the latest version of Selenium when you didn't need to? Using a slightly older version of Selenium that works correctly is perfectly acceptable. Don't feel forced to upgrade for no reason, especially if it means that you have to write your own hacks around problems that didn't exist before. The correct thing to do is to use a stable version of Selenium that works for you. You can always raise bugs for functionality that doesn't work, or even code a fix and submit a pull request. Don't give yourself the additional work of writing a workaround that's probably not the ideal solution, unless you need to. So what should we do with it? Let's have a look at some examples of the things that we can do with the JavaScript executor that aren't really possible using the base Selenium API. First of all, we will start off by getting the element text. Wait a minute, element text? But, that’s easy! You can use the existing Selenium API with the following code: WebElement myElement = driver.findElement(By.id("foo")); String elementText = myElement.getText(); So why would we want to use a JavaScript executor to find the text of an element? Getting text is easy using the Selenium API, but only under certain conditions. The element that you are collecting the text from needs to be displayed. If Selenium thinks that the element from which you are collecting the text is not displayed, it will return an empty string. If you want to collect some text from a hidden element, you are out of luck. You will need to implement a way to do it with a JavaScript executor. Why would you want to do this? 
Well, maybe you have a responsive website that shows different elements based on different resolutions. You may want to check whether these two different elements are displaying the same text to the user. To do this, you will need to get the text of the visible and invisible elements so that you can compare them. Let's create a method to collect some hidden text for us: private String getHiddenText(WebElement element) { JavascriptExecutor js = (JavascriptExecutor) ((RemoteWebElement) element).getWrappedDriver(); return (String) js.executeScript("return arguments[0].text", element); } There is some cleverness in this method. First of all, we took the element that we wanted to interact with and then extracted the driver object associated with it. We did this by casting the WebElement into a RemoteWebElement, which allowed us to use the getWrappedDriver() method. This removes the need to pass a driver object around the place all the time (this is something that happens a lot in some code bases). We then took the driver object and cast it into a JavascriptExecutor so that we would have the ability to invoke the executeScript() method. Next, we executed the JavaScript snippet and passed in the original element as an argument. Finally, we took the response of the executeScript() call and cast it into a string that we can return as a result of the method. Generally, getting text is a code smell. Your tests should not rely on specific text being displayed on a website because content always changes. Maintaining tests that check the content of a site is a lot of work, and it makes your functional tests brittle. The best thing to do is test the mechanism that injects the content into the website. If you use a CMS that injects text into a specific template key, you can test whether each element has the correct template key associated with it. I want to see a more complex example! So you want to see something more complicated. The Advanced User Interactions API cannot deal with HTML5 drag and drop. So, what happens if we come across an HTML5 drag-and-drop implementation that we want to automate? Well, we can use the JavascriptExecutor. 
Let's have a look at the markup for the HTML5 drag-and-drop page: <!DOCTYPE html> <html lang="en"> <head> <meta charset=utf-8> <title>Drag and drop</title> <style type="text/css"> li { list-style: none; } li a { text-decoration: none; color: #000; margin: 10px; width: 150px; border-width: 2px; border-color: black; border-style: groove; background: #eee; padding: 10px; display: block; } *[draggable=true] { cursor: move; } ul { margin-left: 200px; min-height: 300px; } #obliterate { background-color: green; height: 250px; width: 166px; float: left; border: 5px solid #000; position: relative; margin-top: 0; } #obliterate.over { background-color: red; } </style> </head> <body> <header> <h1>Drag and drop</h1> </header> <article> <p>Drag items over to the green square to remove them</p> <div id="obliterate"></div> <ul> <li><a id="one" href="#" draggable="true">one</a></li> <li><a id="two" href="#" draggable="true">two</a></li> <li><a id="three" href="#" draggable="true">three</a></li> <li><a id="four" href="#" draggable="true">four</a></li> <li><a id="five" href="#" draggable="true">five</a></li> </ul> </article> </body> <script> var draggableElements = document.querySelectorAll('li > a'), obliterator = document.getElementById('obliterate'); for (var i = 0; i < draggableElements.length; i++) { element = draggableElements[i]; element.addEventListener('dragstart', function (event) { event.dataTransfer.effectAllowed = 'copy'; event.dataTransfer.setData('being-dragged', this.id); }); } obliterator.addEventListener('dragover', function (event) { if (event.preventDefault) event.preventDefault(); obliterator.className = 'over'; event.dataTransfer.dropEffect = 'copy'; return false; }); obliterator.addEventListener('dragleave', function () { obliterator.className = ''; return false; }); obliterator.addEventListener('drop', function (event) { var elementToDelete = document.getElementById( event.dataTransfer.getData('being-dragged')); elementToDelete.parentNode.removeChild(elementToDelete); obliterator.className = ''; return false; }); </script> </html> Note that you need a browser that supports HTML5/CSS3 for this page to work. The latest versions of Google Chrome, Opera Blink, Safari, and Firefox will work. You may have issues with Internet Explorer (depending on the version that you are using). For an up-to-date list of HTML5/CSS3 support, have a look at http://caniuse.com. If you try to use the Advanced User Interactions API to automate this page, you will find that it just doesn't work. It looks like it's time to reach for JavascriptExecutor. First of all, we need to write some JavaScript that can simulate the events that we need to trigger to perform the drag-and-drop action. To do this, we are going to create three JavaScript functions. The first function is going to create a JavaScript event: function createEvent(typeOfEvent) { var event = document.createEvent("CustomEvent"); event.initCustomEvent(typeOfEvent, true, true, null); event.dataTransfer = { data: {}, setData: function (key, value) { this.data[key] = value; }, getData: function (key) { return this.data[key]; } }; return event; } We then need to write a function that will fire events that we have created. This also allows you to pass in the dataTransfer value set on an element. 
We need this to keep track of the element that we are dragging: function dispatchEvent(element, event, transferData) { if (transferData !== undefined) { event.dataTransfer = transferData; } if (element.dispatchEvent) { element.dispatchEvent(event); } else if (element.fireEvent) { element.fireEvent("on" + event.type, event); } } Finally, we need something that will use these two functions to simulate the drag-and-drop action: function simulateHTML5DragAndDrop(element, target) { var dragStartEvent = createEvent('dragstart'); dispatchEvent(element, dragStartEvent); var dropEvent = createEvent('drop'); dispatchEvent(target, dropEvent, dragStartEvent.dataTransfer); var dragEndEvent = createEvent('dragend'); dispatchEvent(element, dragEndEvent, dropEvent.dataTransfer); } Note that the simulateHTML5DragAndDrop function needs us to pass in two elements—the element that we want to drag, and the element that we want to drag it to. It's always a good idea to try out your JavaScript in a browser first. You can copy the preceding functions into the JavaScript console in a modern browser and then try using them to make sure that they work as expected. If things go wrong in your Selenium test, you then know that it is most likely an error invoking it via the JavascriptExecutor rather than a bad piece of JavaScript. We now need to take these scripts and put them into a JavascriptExecutor along with something that will call the simulateHTML5DragAndDrop function: private void simulateDragAndDrop(WebElement elementToDrag, WebElement target) throws Exception { WebDriver driver = getDriver(); JavascriptExecutor js = (JavascriptExecutor) driver; js.executeScript("function createEvent(typeOfEvent) {n" + "var event = document.createEvent("CustomEvent");n" + "event.initCustomEvent(typeOfEvent, true, true, null);n" + " event.dataTransfer = {n" + " data: {},n" + " setData: function (key, value) {n" + " this.data[key] = value;n" + " },n" + " getData: function (key) {n" + " return this.data[key];n" + " }n" + " };n" + " return event;n" + "}n" + "n" + "function dispatchEvent(element, event, transferData) {n" + " if (transferData !== undefined) {n" + " event.dataTransfer = transferData;n" + " }n" + " if (element.dispatchEvent) {n" + " element.dispatchEvent(event);n" + " } else if (element.fireEvent) {n" + " element.fireEvent("on" + event.type, event);n" + " }n" + "}n" + "n" + "function simulateHTML5DragAndDrop(element, target) {n" + " var dragStartEvent = createEvent('dragstart');n" + " dispatchEvent(element, dragStartEvent);n" + " var dropEvent = createEvent('drop');n" + " dispatchEvent(target, dropEvent, dragStartEvent.dataTransfer);n" + " var dragEndEvent = createEvent('dragend'); n" + " dispatchEvent(element, dragEndEvent, dropEvent.dataTransfer);n" + "}n" + "n" + "var elementToDrag = arguments[0];n" + "var target = arguments[1];n" + "simulateHTML5DragAndDrop(elementToDrag, target);", elementToDrag, target); } This method is really just a wrapper around the JavaScript code. We take a driver object and cast it into a JavascriptExecutor. We then pass the JavaScript code into the executor as a string. We have made a couple of additions to the JavaScript functions that we previously wrote. Firstly, we set a couple of variables (mainly for code clarity; they can quite easily be inlined) that take the WebElements that we have passed in as arguments. Finally, we invoke the simulateHTML5DragAndDrop function using these elements. 
The final piece of the puzzle is to write a test that utilizes the simulateDragAndDrop method, as follows: @Test public void dragAndDropHTML5() throws Exception { WebDriver driver = getDriver(); driver.get("http://ch6.masteringselenium.com/ dragAndDrop.html"); final By destroyableBoxes = By.cssSelector("ul > li > a"); WebElement obliterator = driver.findElement(By.id("obliterate")); WebElement firstBox = driver.findElement(By.id("one")); WebElement secondBox = driver.findElement(By.id("two")); assertThat(driver.findElements(destroyableBoxes).size(), is(equalTo(5))); simulateDragAndDrop(firstBox, obliterator); assertThat(driver.findElements(destroyableBoxes). size(), is(equalTo(4))); simulateDragAndDrop(secondBox, obliterator); assertThat(driver.findElements(destroyableBoxes). size(), is(equalTo(3))); } This test finds a couple of boxes and destroys them one by one using the simulated drag and drop. As you can see, the JavascriptExcutor is extremely powerful. Can I use JavaScript libraries? The logical progression is, of course, to write your own JavaScript libraries that you can import instead of sending everything over as a string. Alternatively, maybe you would just like to import an existing library. Let's write some code that allows you to import a JavaScript library of your choice. It's not a particularly complex JavaScript. All that we are going to do is create a new <script> element in a page and then load our library into it, as follows: public void injectScript(String scriptURL) throws Exception { WebDriver driver = getDriver(); JavascriptExecutor js = (JavascriptExecutor) driver; js.executeScript("function injectScript(url) {n" + " var script = document.createElement ('script');n" + " script.src = url;n" + " var head = document.getElementsByTagName( 'head')[0];n" + " head.appendChild(script);n" + "}n" + "n" + "var scriptURL = arguments[0];n" + "injectScript(scriptURL);" , scriptURL); } We have again set arguments[0] to a variable before injecting it for clarity, but you can inline this part if you want to. All that remains now is to inject this into a page and check whether it works. Let's write a test! We are going to use this function to inject jQuery into the Google website. The first thing that we need to do is write a method that can tell us whether jQuery has been loaded or not, as follows: public Boolean isjQueryLoaded() throws Exception { WebDriver driver = getDriver(); JavascriptExecutor js = (JavascriptExecutor) driver; return (Boolean) js.executeScript("return typeof jQuery != 'undefined';"); } Now, we need to put all of this together in a test, as follows: @Test public void injectjQueryIntoGoogle() throws Exception { WebDriver driver = DriverFactory.getDriver(); driver.get("http://www.google.com"); assertThat(isjQueryLoaded(), is(equalTo(false))); injectScript("https://code.jquery.com/jquery-latest.min.js"); assertThat(isjQueryLoaded(), is(equalTo(true))); } It's a very simple test. We loaded the Google website. Then, we checked whether jQuery existed. Once we were sure that it didn't exist, we injected jQuery into the page. Finally, we again checked whether jQuery existed. We have used jQuery in our example, but you don't have to use jQuery. You can inject any script that you desire. Should I inject JavaScript libraries? It's very easy to inject JavaScript into a page, but stop and think before you do it. Adding lots of different JavaScript libraries may affect the existing functionality of the site. 
You may have functions in your JavaScript that overwrite existing functions that are already on the page and break the core functionality. If you are testing a site, it may make all of your tests invalid. Failures may arise because there is a clash between the scripts that you inject and the existing scripts used on the site. The flip side is also true—injecting a script may make the functionality that is broken, work. If you are going to inject scripts into an existing site, be sure that you know what the consequences are. If you are going to regularly inject a script, it may be a good idea to add some assertions to ensure that the functions that you are injecting do not already exist before you inject the script. This way, your tests will fail if the developers add a JavaScript function with the same name at some point in the future without your knowledge. What about asynchronous scripts? Everything that we have looked at so far has been a synchronous piece of JavaScript. However, what if we wanted to perform some asynchronous JavaScript calls as a part of our test? Well, we can do this. The JavascriptExecutor also has a method called executeAsyncScript(). This will allow you to run some JavaScript that does not respond instantly. Let's have a look at some examples. First of all, we are going to write a very simple bit of JavaScript that will wait for 25 seconds before triggering a callback, as follows: @Test private void javascriptExample() throws Exception { WebDriver driver = DriverFactory.getDriver(); driver.manage().timeouts().setScriptTimeout(60, TimeUnit.SECONDS); JavascriptExecutor js = (JavascriptExecutor) driver; js.executeAsyncScript("var callback = arguments[ arguments.length - 1]; window.setTimeout(callback, 25000);"); driver.get("http://www.google.com"); } Note that we defined a JavaScript variable named callback, which uses a script argument that we have not set. For asynchronous scripts, Selenium needs to have a callback defined, which is used to detect when the JavaScript that you are executing has finished. This callback object is automatically added to the end of your arguments array. This is what we have defined as the callback variable. If we now run the script, it will load our browser and then sit there for 25 seconds as it waits for the JavaScript snippet to complete and call the callback. It will then load the Google website and finish. We have also set a script timeout on the driver object that will wait for up to 60 seconds for our piece of JavaScript to execute. Let's see what happens if our script takes longer to execute than the script timeout: @Test private void javascriptExample() throws Exception { WebDriver driver = DriverFactory.getDriver(); driver.manage().timeouts().setScriptTimeout(5, TimeUnit.SECONDS); JavascriptExecutor js = (JavascriptExecutor) driver; js.executeAsyncScript("var callback = arguments[ arguments.length - 1]; window.setTimeout(callback, 25000);"); driver.get("http://www.google.com"); } This time, when we run our test, it waits for 5 seconds and then throws a TimoutException. It is important to set a script timeout on the driver object when running asynchronous scripts, to give them enough time to execute. What do you think will happen if we execute this as a normal script? 
@Test private void javascriptExample() throws Exception { WebDriver driver = DriverFactory.getDriver(); driver.manage().timeouts().setScriptTimeout( 5, TimeUnit.SECONDS); JavascriptExecutor js = (JavascriptExecutor) driver; js.executeScript("var callback = arguments[arguments. length - 1]; window.setTimeout(callback, 25000);"); driver.get("http://www.google.com"); } You may have been expecting an error, but that's not what you got. The script got executed as normal because Selenium was not waiting for a callback; it didn't wait for it to complete. Since Selenium did not wait for the script to complete, it didn't hit the script timeout. Hence, no error was thrown. Wait a minute. What about the callback definition? There was no argument that was used to set the callback variable. Why didn't it blow up? Well, JavaScript isn't as strict as Java. What it has done is try and work out what arguments[arguments.length - 1] would resolve and realized that it is not defined. Since it is not defined, it has set the callback variable to null. Our test then completed before setTimeout() had a chance to complete its call. So, you won't see any console errors. As you can see, it's very easy to make a small error that stops things from working when working with asynchronous JavaScript. It's also very hard to find these errors because there can be very little user feedback. Always take extra care when using the JavascriptExecutor to execute asynchronous bits of JavaScript. Summary In this article, we: Learned how to use a JavaScript executor to execute JavaScript snippets in the browser through Selenium Learned about passing arguments into a JavaScript executor and the sort of arguments that are supported Learned what the possible return types are for a JavaScript executor Gained a good understanding of when we shouldn't use a JavaScript executor Worked through a series of examples that showed ways in which we can use a JavaScript executor to enhance our tests Resources for Article: Further resources on this subject: JavaScript tech page Cross-browser Tests using Selenium WebDriver Selenium Testing Tools Learning Selenium Testing Tools with Python

article-image-creating-poi-listview-layout
Packt
04 Sep 2015
27 min read
Save for later

Creating the POI ListView layout

Packt
04 Sep 2015
27 min read
In this article by Nilanchala Panigrahy, author of the book Xamarin Mobile Application Development for Android Second Edition, will walk you through the activities related to creating and populating a ListView, which includes the following topics: Creating the POIApp activity layout Creating a custom list row item layout The ListView and ListAdapter classes (For more resources related to this topic, see here.) It is technically "possible to create and attach the user interface elements to your activity using C# code. However, it is a bit of a mess. We will go with the most common approach by declaring the XML-based layout. Rather than deleting these files, let's give them more appropriate names and remove unnecessary content as follows: Select the Main.axml file in Resources | Layout and rename it to "POIList.axml. Double-click on the POIList.axml file to open it in a layout "designer window.Currently, the POIList.axml file contains the layout that was created as part of the default Xamarin Studio template. As per our requirement, we need to add a ListView widget that takes the complete screen width and a ProgressBar in the middle of the screen. The indeterminate progress "bar will be displayed to the user while the data is being downloaded "from the server. Once the download is complete and the data is ready, "the indeterminate progress bar will be hidden before the POI data is rendered on the list view. Now, open the "Document Outline tab in the designer window and delete both the button and LinearLayout. Now, in the designer Toolbox, search for RelativeLayout and drag it onto the designer layout preview window. Search for ListView in the Toolbox search field and drag it over the layout designer preview window. Alternatively, you can drag and drop it over RelativeLayout in the Document Outline tab. We have just added a ListView widget to POIList.axml. Let's now open the Properties pad view in the designer window and edit some of its attributes: There are five buttons at the top of the pad that switch the set of properties being edited. The @+id notation notifies the compiler that a new resource ID needs to be created to identify the widget in API calls, and listView1 identifies the name of the constant. Now, perform the following steps: Change the ID name to poiListView and save the changes. Switch back to the Document Outline pad and notice that the ListView ID is updated. Again, switch back to the Properties pad and click on the Layout button. Under the View Group section of the layout properties, set both the Width and Height properties to match_parent. The match_parent value "for the Height and Width properties tells us that the ListView can use the entire content area provided by the parent, excluding any margins specified. In our case, the parent would be the top-level RelativeLayout. Prior to API level 8, fill_parent was used instead of match_parent to accomplish the same effect. In API level 8, fill_parent was deprecated and replaced with match_parent for clarity. Currently, both the constants are defined as the same value, so they have exactly the same effect. However, fill_ parent may be removed from the future releases of the API; so, going forward, match_parent should be used. So far, we have added a ListView to RelativeLayout, let's now add a Progress Bar to the center of the screen. Search for Progress Bar in the Toolbox search field. You will notice that several types of progress bars will be listed, including horizontal, large, normal, and small. 
Drag the normal progress bar onto RelativeLayout.By default, the Progress Bar widget is aligned to the top left of its parent layout. To align it to the center of the screen, select the progress bar in the Document Outline tab, switch to the Properties view, and click on the Layout tab. Now select the Center In Parent checkbox, and you will notice that the progress bar is aligned to the center of the screen and will appear at the top of the list view. Currently, the progress bar is visible in the center of the screen. By default, this could be hidden in the layout and will be made visible only while the data is being downloaded. Change the Progress Bar ID to progressBar and save the changes. To hide the Progress Bar from the layout, click on the Behavior tab in the Properties view. From Visibility, select Box, and then select gone.This behavior can also be controlled by calling setVisibility() on any view by passing any of the following behaviors. The View.Visibility property" allows you to control whether a view is visible or not. It is based on the ViewStates enum, which defines the following values: Value Description Gone This value tells the parent ViewGroup to treat the View as though it does not exist, so no space will be allocated in the layout Invisible This value tells the parent ViewGroup to hide the content for the View; however, it occupies the layout space Visible This value tells the parent ViewGroup to display the content of the View Click on the Source tab to switch the IDE context from visual designer to code, and see what we have built so far. Notice that the following code is generated for the POIList.axml layout: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout p1_layout_width="match_parent" p1_layout_height="match_parent" p1_id="@+id/relativeLayout1"> <ListView p1_minWidth="25px" p1_minHeight="25px" p1_layout_width="match_parent" p1_layout_height="match_parent" p1_id="@+id/poiListView" /> <ProgressBar p1_layout_width="wrap_content" p1_layout_height="wrap_content" p1_id="@+id/progressBar" p1_layout_centerInParent="true" p1_visibility="gone" /> </RelativeLayout> Creating POIListActivity When we created the" POIApp solution, along with the default layout, a default activity (MainActivity.cs) was created. Let's rename the MainActivity.cs file "to POIListActivity.cs: Select the MainActivity.cs file from Solution Explorer and rename to POIListActivity.cs. Open the POIListActivity.cs file in the code editor and rename the "class to POIListActivity. The POIListActivity class currently contains the code that was "created automatically while creating the solution using Xamarin Studio. We will write our own activity code, so let's remove all the code from the POIListActivity class. Override the OnCreate() activity life cycle callback method. This method will be used to attach the activity layout, instantiate the views, and write other activity initialization logic. Add the following code blocks to the POIListActivity class: namespace POIApp { [Activity (Label = "POIApp", MainLauncher = true, Icon = "@ drawable/icon")] public class POIListActivity : Activity { protected override void OnCreate (Bundle savedInstanceState) { base.OnCreate (savedInstanceState); } } } Now let's set the activity content layout by calling the SetContentView(layoutId) method. This method places the layout content directly into the activity's view hierarchy. Let's provide the reference to the POIList layout created in previous steps. 
Creating POIListActivity

When we created the POIApp solution, a default activity (MainActivity.cs) was created along with the default layout. Let's rename the MainActivity.cs file to POIListActivity.cs:

1. Select the MainActivity.cs file in the Solution pad and rename it to POIListActivity.cs.
2. Open the POIListActivity.cs file in the code editor and rename the class to POIListActivity.
3. The POIListActivity class currently contains the code that was generated automatically when the solution was created with Xamarin Studio. We will write our own activity code, so remove all the code from the POIListActivity class.
4. Override the OnCreate() activity life cycle callback method. This method will be used to attach the activity layout, instantiate the views, and write any other activity initialization logic. Add the following code to the POIListActivity class:

    using Android.App;
    using Android.OS;

    namespace POIApp
    {
        [Activity (Label = "POIApp", MainLauncher = true, Icon = "@drawable/icon")]
        public class POIListActivity : Activity
        {
            protected override void OnCreate (Bundle savedInstanceState)
            {
                base.OnCreate (savedInstanceState);
            }
        }
    }

5. Now set the activity content layout by calling the SetContentView(layoutId) method. This method places the layout content directly into the activity's view hierarchy. Pass it a reference to the POIList layout created in the previous steps.

At this point, the POIListActivity class looks as follows:

    using Android.App;
    using Android.OS;

    namespace POIApp
    {
        [Activity (Label = "POIApp", MainLauncher = true, Icon = "@drawable/icon")]
        public class POIListActivity : Activity
        {
            protected override void OnCreate (Bundle savedInstanceState)
            {
                base.OnCreate (savedInstanceState);
                SetContentView (Resource.Layout.POIList);
            }
        }
    }

Notice that in the preceding code snippet, the POIListActivity class uses some of the [Activity] attribute properties, such as Label, MainLauncher, and Icon. During the build process, Xamarin.Android uses these attributes to create an entry in the AndroidManifest.xml file. Xamarin makes this easier by allowing the manifest properties to be set using attributes, so you rarely have to modify AndroidManifest.xml by hand.

So far, we have declared an activity and attached the layout to it. If you run the app on your Android device or emulator at this point, you will notice that a blank screen is displayed.

Creating the POI list row layout

We now turn our attention to the layout for each row in the ListView widget. The Android platform provides a number of default row layouts out of the box that can be used with a ListView widget (a short sketch showing one of them in use appears at the end of this section):

SimpleListItem1: A single line with a single caption field
SimpleListItem2: A two-line layout with a larger font and a brighter text color for the first field
TwoLineListItem: A two-line layout with an equal-sized font for both lines and a brighter text color for the first line
ActivityListItem: A single line of text with an image view

All of the preceding layouts provide a fairly standard design, but for more control over content layout, a custom layout can also be created, which is what is needed for poiListView. To create a new layout, perform the following steps:

1. In the Solution pad, navigate to Resources | Layout, right-click on it, and navigate to Add | New File.
2. Select Android from the list on the left-hand side, select Android Layout from the template list, enter POIListItem in the name field, and click on New.

Before we lay out the design for each of the row items in the list, it helps to sketch it on paper and analyze how the UI should look. In our example, the POI data will be organized as shown in the following diagram. There are a number of ways to achieve this layout, but we will use RelativeLayout to achieve the result. There is a lot going on in this diagram; let's break it down as follows:

A RelativeLayout view group is used as the top-level container; it provides a number of flexible options for positioning content relative to its edges or to other content.
An ImageView widget is used to display a photo of the POI, and it is anchored to the left-hand side of the RelativeLayout.
Two TextView widgets are used to display the POI name and address information. They need to be anchored to the right-hand side of the ImageView widget and centered within the parent RelativeLayout. The easiest way to accomplish this is to place both TextView widgets inside another layout; in this case, a LinearLayout widget with the orientation set to vertical.
An additional TextView widget is used to display the distance, and it is anchored on the right-hand side of the RelativeLayout view group and centered vertically.

Now, our task is to get this definition into POIListItem.axml. The next few sections describe how to accomplish this using the Content view of the designer when feasible and the Source view when required.
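For comparison, here is a minimal sketch of how one of the built-in row layouts could be wired up without any custom layout file; the string array is invented purely for illustration:

    // Inside an Activity; requires using Android.Widget;
    var poiListView = FindViewById<ListView> (Resource.Id.poiListView);
    var names = new string[] { "Golden Gate Bridge", "Alcatraz Island" }; // sample data only
    // SimpleListItem1 is a built-in one-line row layout shipped with Android.
    poiListView.Adapter = new ArrayAdapter<string> (this, Android.Resource.Layout.SimpleListItem1, names);

Because our rows need an image, two text lines, and a distance field, we move on to a custom row layout instead.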
Adding a RelativeLayout view group

The RelativeLayout layout manager allows its child views to be positioned relative to each other or relative to the parent container. In our case, for building the row layout shown in the preceding diagram, we can use RelativeLayout as the top-level view group.

When the POIListItem.axml layout file was created, a top-level LinearLayout was added by default. First, we need to change the top-level view group to a RelativeLayout. The following steps take you through completing the layout design for the POI list row:

1. With POIListItem.axml opened in the Content mode, select the entire layout by clicking on the content area. You should see a blue outline around the edge. Press Delete. The LinearLayout view group will be deleted, and you will see a message indicating that the layout is empty. Alternatively, you can select the LinearLayout view group from the Document Outline tab and press Delete.
2. Locate the RelativeLayout view group in the Toolbox and drag it onto the layout.
3. Select the RelativeLayout view group from the Document Outline, open the Properties pad, and change the following properties: the Padding option to 5dp, the Layout Height option to wrap_content, and the Layout Width option to match_parent. The padding property controls how much space is placed around each item as a margin, and the height determines the height of each list row. Setting the Layout Width option to match_parent causes the POIListItem content to consume the entire width of the screen, while setting the Layout Height option to wrap_content causes each row's height to match that of its tallest control.
4. Switch to the Code view to see what has been added to the layout. Notice that the following lines of code have been added for the RelativeLayout:

    <RelativeLayout
        p1:minWidth="25px"
        p1:minHeight="25px"
        p1:layout_width="match_parent"
        p1:layout_height="wrap_content"
        p1:id="@+id/relativeLayout1"
        p1:padding="5dp" />

Android runs on a variety of devices that offer different screen sizes and densities. When specifying dimensions, you can use a number of different units, including pixels (px), inches (in), and density-independent pixels (dp). Density-independent pixels are abstract units based on 1dp being 1 pixel on a 160 dpi screen. At runtime, Android scales the actual size up or down based on the actual screen density. It is a best practice to specify dimensions using density-independent pixels.
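If you ever need the same kind of dimension from code, a dp value can be converted to raw pixels at runtime. This is only a small sketch, assuming it runs inside an Activity; the 5dp value simply mirrors the padding we just set:

    // Requires using Android.Util; converts 5dp into physical pixels for the current screen density.
    int paddingPx = (int)TypedValue.ApplyDimension (ComplexUnitType.Dip, 5f, Resources.DisplayMetrics);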
Adding an ImageView widget

The ImageView widget in Android is used to display arbitrary images from different sources. In our case, we will download the images from the server and display them in the list. Let's add an ImageView widget to the left-hand side of the layout and configure it as follows:

1. Locate the ImageView widget in the Toolbox and drag it onto the RelativeLayout.
2. With the ImageView widget selected, use the Properties pad to set the ID to poiImageView.
3. Click on the Layout tab in the Properties pad and set the Height and Width values to 65dp. In the property grouping named RelativeLayout, set Center Vertical to true. Simply clicking on the checkbox does not always work, but you can click on the small edit-box icon on the right-hand side and enter true. If everything else fails, switch to the Source view and enter the following code: p1:layout_centerVertical="true"
4. In the property grouping named ViewGroup, set Margin Right to 5dp. This adds some space between the POI image and the POI name.
5. Switch to the Code view to see what has been added to the layout. Notice the following lines of code added for the ImageView:

    <ImageView
        p1:src="@android:drawable/ic_menu_gallery"
        p1:layout_width="65dp"
        p1:layout_height="65dp"
        p1:layout_marginRight="5dp"
        p1:id="@+id/poiImageView" />

Adding a LinearLayout widget

LinearLayout is one of the most basic layout managers; it organizes its child views either horizontally or vertically, based on the value of its orientation property. Let's add a LinearLayout view group that will be used to lay out the POI name and address data:

1. Locate the LinearLayout (vertical) view group in the Toolbox. Adding this widget is a little trickier because we want it anchored to the right-hand side of the ImageView widget. Drag the LinearLayout view group to the right-hand side of the ImageView widget until the edge turns into a blue dashed line, and then drop it. It will be aligned with the right-hand side of the ImageView widget.
2. In the property grouping named RelativeLayout of the Layout section, set Center Vertical to true. As before, you may need to enter true in the edit box or add it manually in the Source view.
3. Switch to the Code view to see what has been added to the layout. Notice the following lines of code added for the LinearLayout:

    <LinearLayout
        p1:orientation="vertical"
        p1:minWidth="25px"
        p1:minHeight="25px"
        p1:layout_width="wrap_content"
        p1:layout_height="wrap_content"
        p1:layout_toRightOf="@id/poiImageView"
        p1:id="@+id/linearLayout1"
        p1:layout_centerVertical="true" />

Adding the name and address TextView classes

Add the TextView classes to display the POI name and address:

1. Locate TextView in the Toolbox and add a TextView to the layout. This TextView needs to be added within the LinearLayout view group we just added, so drag the TextView over the LinearLayout view group until it turns blue and then drop it.
2. Set the TextView ID to nameTextView and set the text size to 20sp. The text size can be set in the Style section of the Properties pad; you will need to expand the Text Appearance group by clicking on the ellipsis (...) button on the right-hand side. Scale-independent pixels (sp) are like dp units, but they are also scaled by the user's font size preference. Android allows users to select a font size in the Accessibility section of Settings. When font sizes are specified using sp, Android will not only take the screen density into account when scaling text, but will also consider the user's accessibility settings. It is recommended that you specify font sizes using sp.
3. Add another TextView to the LinearLayout view group using the same technique, except drag the new widget to the bottom edge of nameTextView until it changes to a blue dashed line and then drop it. This causes the second TextView to be added below nameTextView. Set its font size to 14sp.
4. Change the ID of the newly added TextView to addrTextView.
5. Now change the sample text of nameTextView and addrTextView to POI Name and City, State, Postal Code, respectively. To edit the text shown in a TextView, just double-tap the widget on the content panel. This enables a small editor that allows you to enter the text directly. Alternatively, you can change the text by entering a value for the Text property in the Widget section of the Properties pad.

It is good design practice to declare all of your static strings in the Resources/values/string.xml file.
By declaring the strings in the strings.xml file, you can easily translate your whole app to support other languages. Let's add the following strings to string.xml (a brief sketch of reading these strings from code appears after the completed layout source below):

    <string name="poi_name_hint">POI Name</string>
    <string name="address_hint">City, State, Postal Code.</string>

You can now change the Text property of both nameTextView and addrTextView by selecting the ellipsis (...) button next to the Text property in the Widget section of the Properties pad. Notice that this opens a dialog window that lists all the strings declared in the string.xml file. Select the appropriate strings for both TextView objects.

Now let's switch to the Code view to see what has been added to the layout. Notice the following lines of code added inside the LinearLayout:

    <TextView
        p1:layout_width="match_parent"
        p1:layout_height="wrap_content"
        p1:id="@+id/nameTextView"
        p1:textSize="20sp"
        p1:text="@string/poi_name_hint" />
    <TextView
        p1:text="@string/address_hint"
        p1:layout_width="match_parent"
        p1:layout_height="wrap_content"
        p1:id="@+id/addrTextView"
        p1:textSize="14sp" />

Adding the distance TextView

Add a TextView to show the distance from the POI:

1. Locate the TextView in the Toolbox and add a TextView to the layout. This TextView needs to be anchored to the right-hand side of the RelativeLayout view group, but there is no way to accomplish this visually, so we will use a multistep process. Initially, align the TextView with the right-hand edge of the LinearLayout view group by dragging it to the left-hand side until the edge changes to a dashed blue line, and drop it.
2. In the Widget section of the Properties pad, set the ID to distanceTextView and set the font size to 14sp.
3. In the Layout section of the Properties pad, set Align Parent Right to true, set Center Vertical to true, and clear out the linearLayout1 view group name in the To Right Of layout property.
4. Change the sample text to 204 miles. To do this, add a new string entry named distance_hint to string.xml and set the Text property from the Properties pad.

The following screenshot depicts what should be seen in the Content view at this point:

Switch back to the Source tab in the layout designer, and notice the following code generated for the POIListItem.axml layout:

    <?xml version="1.0" encoding="utf-8"?>
    <RelativeLayout xmlns:p1="http://schemas.android.com/apk/res/android"
        p1:minWidth="25px"
        p1:minHeight="25px"
        p1:layout_width="match_parent"
        p1:layout_height="wrap_content"
        p1:id="@+id/relativeLayout1"
        p1:padding="5dp">
      <ImageView
          p1:src="@android:drawable/ic_menu_gallery"
          p1:layout_width="65dp"
          p1:layout_height="65dp"
          p1:layout_marginRight="5dp"
          p1:id="@+id/poiImageView" />
      <LinearLayout
          p1:orientation="vertical"
          p1:layout_width="wrap_content"
          p1:layout_height="wrap_content"
          p1:layout_toRightOf="@id/poiImageView"
          p1:id="@+id/linearLayout1"
          p1:layout_centerVertical="true">
        <TextView
            p1:layout_width="match_parent"
            p1:layout_height="wrap_content"
            p1:id="@+id/nameTextView"
            p1:textSize="20sp"
            p1:text="@string/poi_name_hint" />
        <TextView
            p1:text="@string/address_hint"
            p1:layout_width="match_parent"
            p1:layout_height="wrap_content"
            p1:id="@+id/addrTextView"
            p1:textSize="14sp" />
      </LinearLayout>
      <TextView
          p1:text="@string/distance_hint"
          p1:layout_width="wrap_content"
          p1:layout_height="wrap_content"
          p1:id="@+id/distanceTextView"
          p1:textSize="14sp"
          p1:layout_centerVertical="true"
          p1:layout_alignParentRight="true" />
    </RelativeLayout>
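As mentioned earlier, the string resources we declared can also be read from code. This is only a small illustration of why centralizing strings pays off, assuming the calls run inside an Activity or another Context:

    // Resource.String.* constants are generated from Resources/values/string.xml.
    string nameHint = GetString (Resource.String.poi_name_hint);
    string addressHint = GetString (Resource.String.address_hint);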
Creating the PointOfInterest entity class

The first class that is needed is the one that represents the primary focus of the application: a PointOfInterest class. POIApp will allow the following attributes to be captured for each point of interest:

Id
Name
Description
Address
Latitude
Longitude
Image

The POI entity class can be nothing more than a simple .NET class that houses these attributes. To create the POI entity class, perform the following steps:

1. Select the POIApp project from the Solution pad in Xamarin Studio. Be sure to select the POIApp project and not the solution, which is the top-level node in the Solution pad.
2. Right-click on it and select New File.
3. On the left-hand side of the New File dialog box, select General.
4. At the top of the template list, in the middle of the dialog box, select Empty Class (C#).
5. Enter the name PointOfInterest and click on OK. The class will be created in the POIApp project folder.
6. Change the visibility of the class to public and fill in the attributes based on the list previously identified. The following code snippet is from POIApp/POIApp/PointOfInterest.cs in the code bundle available for this article:

    public class PointOfInterest
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public string Address { get; set; }
        public string Image { get; set; }
        public double? Latitude { get; set; }
        public double? Longitude { get; set; }
    }

Note that the Latitude and Longitude attributes are marked as nullable. In the case of latitude and longitude, (0, 0) is actually a valid location, so a null value indicates that the attribute has never been set.

Populating the ListView item

All the adapter views, such as ListView and GridView, use an Adapter that acts as a bridge between the data and the views. The Adapter iterates through the content and generates a View for each data item in the list. The Android SDK provides three adapter implementations: ArrayAdapter, CursorAdapter, and SimpleAdapter. An ArrayAdapter expects an array or a list as input, a CursorAdapter accepts an instance of a Cursor, and a SimpleAdapter maps static data defined in the resources. The type of adapter that suits your app's needs is based purely on the input data type.

BaseAdapter is the common base implementation behind these adapter types, and it implements the IListAdapter, ISpinnerAdapter, and IDisposable interfaces. This means that BaseAdapter can be used for a ListView, a GridView, or a Spinner. For POIApp, we will create a subtype of BaseAdapter<T>, as it meets our specific needs, works well in many scenarios, and allows the use of our custom layout.

Creating POIListViewAdapter

To create POIListViewAdapter, we will start by creating a custom adapter class as follows:

1. Create a new class named POIListViewAdapter.
2. Open the POIListViewAdapter class file, make the class public, and specify that it inherits from BaseAdapter<PointOfInterest>.

Now that the adapter class has been created, we need to provide a constructor and implement four abstract methods.
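Before filling in those members, here is a minimal sketch of the kind of data the adapter will receive; the values are invented purely for illustration and are not part of the book's code bundle:

    // Requires using System.Collections.Generic;
    var poiListData = new List<PointOfInterest> ();
    poiListData.Add (new PointOfInterest {
        Id = 1,
        Name = "Golden Gate Bridge",          // sample value only
        Address = "San Francisco, CA, 94129", // sample value only
        Latitude = 37.8199,
        Longitude = -122.4783
    });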
Implementing a constructor

Let's implement a constructor that accepts all the information we will need to work with to populate the list. Typically, you need to pass at least two parameters: an instance of an activity, because we need the activity context when accessing common resources, and an input data list that can be enumerated to populate the ListView. The following code shows the constructor from the code bundle:

    private readonly Activity context;
    private List<PointOfInterest> poiListData;

    public POIListViewAdapter (Activity _context, List<PointOfInterest> _poiListData)
        : base()
    {
        this.context = _context;
        this.poiListData = _poiListData;
    }

Implementing Count { get }

The BaseAdapter<T> class provides an abstract definition for a read-only Count property. In our case, we simply need to return the count of POIs in poiListData. The following code demonstrates the implementation from the code bundle:

    public override int Count
    {
        get { return poiListData.Count; }
    }

Implementing GetItemId()

The BaseAdapter<T> class provides an abstract definition for a method that returns a long ID for a row in the data source. We can use the position parameter to access a POI object in the list and return the corresponding ID. The following code demonstrates the implementation from the code bundle:

    public override long GetItemId (int position)
    {
        return position;
    }

Implementing the index getter method

The BaseAdapter<T> class provides an abstract definition for an index getter method that returns a typed object based on a position parameter passed in as an index. We can use the position parameter to access the POI object in poiListData and return the instance. The following code demonstrates the implementation from the code bundle:

    public override PointOfInterest this [int index]
    {
        get { return poiListData [index]; }
    }

Implementing GetView()

The BaseAdapter<T> class provides an abstract definition for GetView(), which returns a view instance that represents a single row in the ListView. As in other scenarios, you can choose to construct the view entirely in code or to inflate it from a layout file. We will use the layout file we previously created. The following code demonstrates inflating a view from a layout file:

    view = context.LayoutInflater.Inflate (Resource.Layout.POIListItem, null, false);

The first parameter of Inflate() is a resource ID and the second is a root ViewGroup, which in this case can be left null since the view will be added to the ListView when it is returned.

Reusing row Views

The GetView() method is called for each row in the source dataset. For datasets with large numbers of rows, hundreds or even thousands, it would require a great deal of resources to create a separate view for each row, and it would be wasteful since only a few rows are visible at any given time. The AdapterView architecture addresses this need by placing row Views into a queue so that they can be reused as they scroll out of the user's view. The GetView() method accepts a parameter named convertView, which is of type View. When a view is available for reuse, convertView contains a reference to it; otherwise, it is null and a new view should be created. The following code depicts the use of convertView to facilitate the reuse of row Views:

    var view = convertView;
    if (view == null) {
        view = context.LayoutInflater.Inflate (Resource.Layout.POIListItem, null);
    }
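A common companion to convertView reuse, not used in POIApp's code but worth knowing, is the view-holder pattern: the row's child widgets are looked up once and cached on the view's Tag property, so FindViewById() only runs when a row view is first created. A minimal sketch, with all names being illustrative:

    // In Xamarin.Android, View.Tag holds a Java.Lang.Object, so the holder derives from it.
    class POIViewHolder : Java.Lang.Object
    {
        public TextView Name;
        public TextView Address;
        public ImageView Image;
    }

    // Inside GetView(), after inflating or reusing the row view:
    // var holder = view.Tag as POIViewHolder;
    // if (holder == null) {
    //     holder = new POIViewHolder {
    //         Name = view.FindViewById<TextView> (Resource.Id.nameTextView),
    //         Address = view.FindViewById<TextView> (Resource.Id.addrTextView),
    //         Image = view.FindViewById<ImageView> (Resource.Id.poiImageView)
    //     };
    //     view.Tag = holder;
    // }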
Populating row Views

Now that we have an instance of the view, we need to populate its fields. The View class defines a generic FindViewById<T> method, which returns a typed instance of a widget contained in the view. You pass in the resource ID defined in the layout file to specify the control you wish to access. The following code returns access to nameTextView and sets its Text property:

    PointOfInterest poi = this [position];
    view.FindViewById<TextView> (Resource.Id.nameTextView).Text = poi.Name;

Populating addrTextView is slightly more complicated because we only want to use the portions of the address we have, and we want to hide the TextView if none of the address components are present. The View.Visibility property allows you to control the visibility of a view. In our case, we want to use the ViewStates.Gone value if none of the components of the address are present. The following code shows the logic in GetView():

    if (String.IsNullOrEmpty (poi.Address)) {
        view.FindViewById<TextView> (Resource.Id.addrTextView).Visibility = ViewStates.Gone;
    } else {
        view.FindViewById<TextView> (Resource.Id.addrTextView).Text = poi.Address;
    }

Populating the distance text view requires an understanding of location services: we need to calculate the distance between the user's current location and the POI's latitude and longitude.

Populating the list thumbnail image

Image downloading and processing is a complex task. You need to consider various aspects, such as the network logic to download images from the server, caching the downloaded images for performance, and resizing images to avoid out-of-memory conditions. Instead of writing our own logic for all of these tasks, we can use UrlImageViewHelper, a free component available in the Xamarin Component Store. The Xamarin Component Store provides a set of reusable components, both free and premium, that can easily be plugged into any Xamarin-based application.

Using UrlImageViewHelper

The following steps walk you through the process of adding a component from the Xamarin Component Store:

1. To include the UrlImageViewHelper component in POIApp, either double-click on the Components folder in the Solution pad, or right-click on it and select Edit Components. Notice that the component manager is loaded with the already downloaded components and a Get More Components button that allows you to open the component store from the window. Note that to access the component manager, you need to log in to your Xamarin account.
2. Search for UrlImageViewHelper in the components search box available in the left-hand side pane.
3. Click on the download button to add the component to your Xamarin Studio solution.

Now that we have added the UrlImageViewHelper component, let's go back to the GetView() method in the POIListViewAdapter class and take a look at the following section of the code:

    var imageView = view.FindViewById<ImageView> (Resource.Id.poiImageView);
    if (!String.IsNullOrEmpty (poi.Image)) {
        Koush.UrlImageViewHelper.SetUrlDrawable (imageView, poi.Image, Resource.Drawable.ic_placeholder);
    }

Let's examine how the preceding code snippet works: the SetUrlDrawable() method defined in the UrlImageViewHelper component provides the logic to download an image using a single line of code. It accepts three parameters: the instance of imageView in which the image is to be displayed after the download, the image source URL, and a placeholder image. Add a new image, ic_placeholder.png, to the drawable Resources directory. While the image is being downloaded, the placeholder image will be displayed on imageView.
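With the adapter complete, here is a hedged sketch of how it might be hooked up back in POIListActivity; the empty list stands in for data that would normally be downloaded from the server, and the member names are illustrative rather than taken from the book's code bundle:

    // Requires using System.Collections.Generic; using Android.App; using Android.OS; using Android.Widget;
    protected override void OnCreate (Bundle savedInstanceState)
    {
        base.OnCreate (savedInstanceState);
        SetContentView (Resource.Layout.POIList);

        var poiListView = FindViewById<ListView> (Resource.Id.poiListView);

        // Placeholder data; in the finished app this list is populated from the server.
        var poiListData = new List<PointOfInterest> ();

        poiListView.Adapter = new POIListViewAdapter (this, poiListData);
    }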
Downloading images over the network requires Internet permission. The following section walks you through the steps involved in defining permissions in your AndroidManifest.xml file.

Adding Internet permissions

Android apps must be granted permissions to access certain features, such as downloading data from the Internet, saving an image to storage, and so on. You must specify the permissions that an app requires in the AndroidManifest.xml file. This allows the installer to show potential users the set of permissions an app requires at the time of installation. To set the appropriate permissions, perform the following steps:

1. Double-click on AndroidManifest.xml in the Properties directory in the Solution pad. The file opens in the manifest editor. There are two tabs, Application and Source, at the bottom of the screen that can be used to toggle between a form for editing the file and the raw XML, as shown in the following screenshot.
2. In the Required permissions list, check Internet and navigate to File | Save.
3. Switch to the Source view to see the resulting XML; the editor adds an entry similar to the following inside the manifest element:

    <uses-permission android:name="android.permission.INTERNET" />

Summary

In this article, we covered how to create user interface elements using different layout managers and widgets such as TextView, ImageView, ProgressBar, and ListView, and how to populate a ListView using a custom adapter.

Resources for Article:

Further resources on this subject:

Code Sharing Between iOS and Android [article]
XamChat – a Cross-platform App [article]
Heads up to MvvmCross [article]