How-To Tutorials - Programming

Scribus: Managing Colors

Packt
10 Dec 2010
7 min read
Scribus 1.3.5: Beginner's Guide helps you create optimum page layouts for your documents using the productive tools of Scribus:

- Master desktop publishing with Scribus
- Create professional-looking documents with ease
- Enhance the readability of your documents using the powerful layout tools of Scribus
- Packed with interesting examples and screenshots that show you the most important Scribus tools for creating and publishing your documents

Time for action – managing new colors

To define your own color set, you'll need to go to Edit | Colors. Here you will have several options. The most important is the New button, which displays a window that gives you everything you need to define your color precisely.

1. Give a unique and meaningful name to your color; it will help you recognize it in the color lists later.
2. For the color model, you'll need to choose between CMYK, RGB, or Web safe RGB. If you intend to print the document, choose CMYK. If you need to put it on a website, you can choose the RGB model. Web safe RGB is more restricted, but you can be sure that the chosen colors will render similarly on every kind of monitor.
3. Old and New show an overview of the previous state of a color (when editing an existing color) and the state of the currently chosen color. It is very practical for comparison.
4. To choose your color, everything is placed on the right-hand side. You can click in the color spectrum area, drag the primary sliders, or enter the value of each primary in its field if you already know exactly which color you want. The HSV Color Map at the top is the setting that gives you the spectrum. If you choose another, you'll see predefined swatches. Most of them are RGB and should not be used directly for printed documents.
5. Click on OK to validate the color in the Edit Color window, and then in the Colors window too.
6. If no document is open, the Colors window will have some more buttons that will be very helpful. The Current Color Set should be set to Scribus Basic, which is the simplest color set. You can choose any other set, but they contain RGB colors only.
7. Then you can add your own colors, if you haven't already done so. Click on Save Color Set and give it a name. Your set will now appear in the list and will be available for every new document.

What just happened?

Creating colors is very simple and can be done in a few steps. In fact, creating some colors is much faster than having to choose the same color from a long, default color list. My advice would be: don't waste your time looking for a color in a predefined swatch unless you really need that particular color (such as a Pantone or any other spot color). Consider the following points:

- You should know the approximate color you need before looking for it
- It will take some time to look through all the available colors
- The color might not be in a predefined swatch
- Not using the set everybody uses will help make your document recognizable

If no document is open, the color will be added to the default swatch unless you save your own color set. If a Scribus document is open, even an empty one, the color will be saved in the document. Let's see how to reuse it if needed.

Reusing colors from other files

If you already have the colors somewhere, there might be a way to pick them up without having to create them again. If a color is defined in an imported vector file (mainly EPS or SVG), the colors will automatically be added to the color list with a name beginning with FromEPS or FromSVG, followed by the hexadecimal values of the color.
In an EPS file, colors can be CMYK or spot colors, but in an SVG they will be RGB.

CMYK between Inkscape and Scribus

Inkscape colors are RGB, but the software is color managed, so you can have accurate on-screen rendering, and you can add a 5-digit color-profile value to the color style property. Currently, no software adds this automatically, and doing it manually in Inkscape through the XML editor requires some knowledge of SVG and CSS. It is easier to simply keep your RGB colors and then, after import, go to the Edit | Colors window and refine the colors by clicking on the Edit button.

If your color is in an imported picture or is placed somewhere else, you can use the Eye Dropper tool (the last icon of the toolbar). When you click on a color, you will be asked for a name, and the color will be added as RGB to the color list. If you want to use it in CMYK, just edit the color and change the color model.

The last important use case is internal to Scribus. The color swatch defined in a document is available only in that document and is saved within it. The downside is that those colors won't automatically be available for future documents. The upside is that you can send your file to anyone and your colors will still be there. You have several ways of reusing them.

Time for action – importing from a Scribus document

We have already seen how to import styles and master pages from other existing Scribus documents; importing colors is very similar. The simplest way to reuse colors that are already defined elsewhere is as follows:

1. Go to Edit | Colors.
2. Click on the Import button.
3. Browse your directories to find the Scribus file that contains the colors you want, and select it.
4. All the colors of this document will be added to your new document's swatch.
5. If you don't need some of the colors, just select them in the Edit | Colors list and click on the Delete button. Scribus will ask you which color should replace the deleted color. If the color is unused in your new document, it doesn't matter.

What just happened?

The Edit Colors window provides a simple way to import the colors from another Scribus document: if the colors are already set in it, you just have to choose them. But there are many other ways to do it, especially because colors are considered frame options and can be imported with them.

In fact, if you really need the same colors, you certainly won't like importing them each time you create a new document. The best you can do is create a file with your master pages, styles, and colors defined, and save it as a model. Each new document will be created from this model, so you'll get them easily each time. The same will happen if you use a scrapbook. Performing those steps can help you get, in a few seconds, everything you have already defined in another, similar document.

Finally, you may need to reuse those colors, but not in the same kind of document. You can create a swatch in the GIMP .gpl format, or use any EPS or AI file. The GIMP .gpl format is very simple but can only hold RGB colors. Give the value of each RGB primary, press the Tab key, and write the name of the color (for example, a medium grey would be: 127 127 127 grey50). Each color has to be alone on its line. GPL, EPS, and AI files have to be placed in the Scribus swatch install directory (on Linux /usr/lib/scribus/swatches, on Macs Applications/Scribus/Contents/lib/scribus/swatches, and on Microsoft Windows Programs/scribus/lib/scribus/swatches). When using an EPS file, you might get too many colors.
To deal with that, create as many sample shapes as needed on a page and apply each color that you want to keep to one of them. Then go to Edit | Colors and click on Remove Unused. Finally, close this window and delete the shapes.

The best way will be the one you prefer. Test them all, and maybe find your own.
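As an illustration only (this example is not part of the original article), a small GIMP palette file following the format just described might look like the following. The file name and color entries are made up, the "GIMP Palette" and Name header lines are what GIMP itself normally writes at the top of a .gpl file, and the gap before each color name should be a Tab character:

    GIMP Palette
    Name: MyProjectColors
    #
    127 127 127    grey50
    204 0 0        brand red
    0 68 136       brand blue
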

Dart with JavaScript

Packt
18 Nov 2014
12 min read
In this article by Sergey Akopkokhyants, author of Mastering Dart, we will combine the simplicity of jQuery and the power of Dart in a real example.

Integrating Dart with jQuery

For demonstration purposes, we have created the js_proxy package to help the Dart code communicate with jQuery. It is available on the pub manager at https://pub.dartlang.org/packages/js_proxy. This package is layered on dart:js and has a library of the same name and a sole class, JProxy. An instance of the JProxy class can be created via the generative constructor, where we can specify an optional reference to the proxied JsObject:

    JProxy([this._object]);

We can create an instance of JProxy with a named constructor and provide the name of the JavaScript object accessible through the dart:js context as follows:

    JProxy.fromContext(String name) {
      _object = js.context[name];
    }

The JProxy instance keeps the reference to the proxied JsObject and performs all the manipulation on it, as shown in the following code:

    js.JsObject _object;

    js.JsObject get object => _object;

How to create a shortcut to jQuery?

We can use JProxy to create a reference to jQuery via the context from the dart:js library as follows:

    var jquery = new JProxy.fromContext('jQuery');

Another very popular way is to use the dollar sign as a shortcut to the jQuery variable, as shown in the following code:

    var $ = new JProxy.fromContext('jQuery');

Bear in mind that the original jQuery and $ variables from JavaScript are functions, so our variables reference the JsFunction class. From now on, jQuery lovers who have moved to Dart have a chance to use both syntaxes to work with selectors via parentheses.

Why does JProxy need a call method?

Usually, jQuery selects HTML elements based on IDs, classes, types, attributes, values of their attributes, or a combination of these, and then performs some action on the results. We can use the basic syntax to pass the search criteria to the jQuery or $ function to select the HTML elements:

    $(selector)

Dart has a syntactic-sugar method, call, that helps us emulate a function, so we can use the call method to mimic the jQuery syntax. Dart knows nothing about the number of arguments passed to the function, so we use a fixed number of optional arguments in the call method. Through this method, we invoke the proxied function (because jquery and $ are functions) and return the results wrapped in JProxy:

    dynamic call([arg0 = null, arg1 = null, arg2 = null,
        arg3 = null, arg4 = null, arg5 = null, arg6 = null,
        arg7 = null, arg8 = null, arg9 = null]) {
      var args = [];
      if (arg0 != null) args.add(arg0);
      if (arg1 != null) args.add(arg1);
      if (arg2 != null) args.add(arg2);
      if (arg3 != null) args.add(arg3);
      if (arg4 != null) args.add(arg4);
      if (arg5 != null) args.add(arg5);
      if (arg6 != null) args.add(arg6);
      if (arg7 != null) args.add(arg7);
      if (arg8 != null) args.add(arg8);
      if (arg9 != null) args.add(arg9);
      return _proxify((_object as js.JsFunction).apply(args));
    }

How does JProxy invoke jQuery?

The JProxy class is a proxy to other classes, so it is marked with the @proxy annotation. We override noSuchMethod intentionally to call the proxied methods and properties of jQuery when the methods or properties of the proxy are invoked. The logic flow in noSuchMethod is pretty straightforward. It invokes callMethod of the proxied JsObject when we invoke a method on the proxy, or returns the value of a property of the proxied object if we call the corresponding operation on the proxy.
The code is as follows:

    @override
    dynamic noSuchMethod(Invocation invocation) {
      if (invocation.isMethod) {
        return _proxify(_object.callMethod(
            symbolAsString(invocation.memberName),
            _jsify(invocation.positionalArguments)));
      } else if (invocation.isGetter) {
        return _proxify(_object[symbolAsString(invocation.memberName)]);
      } else if (invocation.isSetter) {
        throw new Exception('The setter feature was not implemented yet.');
      }
      return super.noSuchMethod(invocation);
    }

As you might remember, all Map or Iterable arguments must be converted to JsObject with the help of the jsify method. In our case, we call the _jsify method to check and convert the arguments passed to the called function, as shown in the following code:

    List _jsify(List params) {
      List res = [];
      params.forEach((item) {
        if (item is Map || item is List) {
          res.add(new js.JsObject.jsify(item));
        } else {
          res.add(item);
        }
      });
      return res;
    }

Before being returned, the result must be passed through the _proxify function as follows:

    dynamic _proxify(value) {
      return value is js.JsObject ? new JProxy(value) : value;
    }

This function wraps every JsObject within a JProxy class and passes other values through as they are.

An example project

Now create the jquery project, open the pubspec.yaml file, and add js_proxy to the dependencies. Open the jquery.html file and make the following changes:

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <title>jQuery</title>
      <link rel="stylesheet" href="jquery.css">
    </head>
    <body>
      <h1>Jquery</h1>
      <p>I'm a paragraph</p>
      <p>Click on me to hide</p>
      <button>Click me</button>
      <div class="container">
        <div class="box"></div>
      </div>
    </body>
    <script src="//code.jquery.com/jquery-1.11.0.min.js"></script>
    <script type="application/dart" src="jquery.dart"></script>
    <script src="packages/browser/dart.js"></script>
    </html>

This project aims to demonstrate that:

- Communication is easy between Dart and JavaScript
- The syntax of the Dart code can be similar to the jQuery code

In general, you may copy the JavaScript code, paste it into the Dart code, and probably make only slight changes.

How to get the jQuery version?

It's time to add js_proxy to our code. Open jquery.dart and make the following changes:

    import 'dart:html';
    import 'package:js_proxy/js_proxy.dart';

    /**
     * Shortcut for jQuery.
     */
    var $ = new JProxy.fromContext('jQuery');

    /**
     * Shortcut for browser console object.
     */
    var console = window.console;

    main() {
      printVersion();
    }

    /**
     * jQuery code:
     *
     *   var ver = $().jquery;
     *   console.log("jQuery version is " + ver);
     *
     * JS_Proxy based analog:
     */
    printVersion() {
      var ver = $().jquery;
      console.log("jQuery version is " + ver);
    }

You should be familiar with the jQuery and console shortcuts by now. The call to jQuery with empty parentheses returns a JProxy instance that contains a JsObject with a reference to jQuery from JavaScript. The jQuery object has a jquery property that contains the current version number, so we reach it via noSuchMethod of JProxy. Run the application, and you will see the following result in the console:

    jQuery version is 1.11.1

Let's move on and perform some actions on the selected HTML elements.

How to perform actions in jQuery?

The syntax of jQuery is based on selecting the HTML elements and then performing some action on them:

    $(selector).action();

Let's select a button on the HTML page and fire the click event, as shown in the following code:

    /**
     * jQuery code:
     *
     *   $("button").click(function(){
     *     alert('You click on button');
     *   });
     *
     * JS_Proxy based analog:
     */
    events() {
      // We remove 'function' and add 'event' here
      $("button").click((event) {
        // Call method 'alert' of 'window'
        window.alert('You click on button');
      });
    }

All we need to do here is remove the function keyword, because anonymous functions in Dart do not use it, and add the event parameter, because this argument is required in the Dart version of the event listener. The code calls jQuery to find all the HTML button elements and adds the click event listener to each of them. So when we click on any button, the specified alert message is displayed. On running the application, you will see the following message:

How to use effects in jQuery?

jQuery supports animation out of the box, so it sounds very tempting to use it from Dart. Let's take the following code snippet as an example:

    /**
     * jQuery code:
     *
     *   $("p").click(function() {
     *     this.hide("slow",function(){
     *       alert("The paragraph is now hidden");
     *     });
     *   });
     *   $(".box").click(function(){
     *     var box = this;
     *     startAnimation();
     *     function startAnimation(){
     *       box.animate({height:300},"slow");
     *       box.animate({width:300},"slow");
     *       box.css("background-color","blue");
     *       box.animate({height:100},"slow");
     *       box.animate({width:100},"slow",startAnimation);
     *     }
     *   });
     *
     * JS_Proxy based analog:
     */
    effects() {
      $("p").click((event) {
        $(event['target']).hide("slow", () {
          window.alert("The paragraph is now hidden");
        });
      });
      $(".box").click((event) {
        var box = $(event['target']);
        startAnimation() {
          box.animate({'height':300},"slow");
          box.animate({'width':300},"slow");
          box.css("background-color","blue");
          box.animate({'height':100},"slow");
          box.animate({'width':100},"slow",startAnimation);
        };
        startAnimation();
      });
    }

This code finds all the paragraphs on the web page and adds a click event listener to each one. The JavaScript code uses the this keyword as a reference to the selected paragraph to start the hiding animation. The this keyword has a different meaning in JavaScript and Dart, so we cannot use it directly in anonymous functions in Dart. The target property of the event keeps the reference to the clicked element and presents a JsObject in Dart. We wrap the clicked element to return a JProxy instance and use it to call the hide method. jQuery is big enough, and we have no space in this article to cover all its features, but you can find more examples at https://github.com/akserg/js_proxy.

What are the performance impacts?

Now, we should talk about the performance impacts of using the different approaches across several modern web browsers.
The algorithm must perform all the following actions:

- It should create 10000 DIV elements
- Each element should be added to the same DIV container
- Each element should be updated with one style
- All elements must be removed one by one

This algorithm must be implemented in the following solutions:

- The pure jQuery solution in JavaScript
- The jQuery solution called via JProxy and dart:js from Dart
- The pure Dart solution based on dart:html

We implemented this algorithm in all of them, so we have a chance to compare the results and choose the champion. The following HTML code has three buttons to run the independent tests, three paragraph elements to show the results of the tests, and one DIV element used as a container. The code is as follows:

    <div>
      <button id="run_js" onclick="run_js_test()">Run JS</button>
      <button id="run_jproxy">Run JProxy</button>
      <button id="run_dart">Run Dart</button>
    </div>
    <p id="result_js"></p>
    <p id="result_jproxy"></p>
    <p id="result_dart"></p>
    <div id="container"></div>

The JavaScript code based on jQuery is as follows:

    function run_js_test() {
      var startTime = new Date();
      process_js();
      var diff = new Date(new Date().getTime() -
          startTime.getTime()).getTime();
      $('#result_js').text('jQuery tooks ' + diff +
          ' ms to process 10000 HTML elements.');
    }

    function process_js() {
      var container = $('#container');
      // Create 10000 DIV elements
      for (var i = 0; i < 10000; i++) {
        $('<div>Test</div>').appendTo(container);
      }
      // Find and update classes of all DIV elements
      $('#container > div').css("color","red");
      // Remove all DIV elements
      $('#container > div').remove();
    }

The main code registers the click event listeners and calls the run_dart_js_test function. The first parameter of this function is the function under test. The second and third parameters are used to pass the selector of the result element and the test title:

    void main() {
      querySelector('#run_jproxy').onClick.listen((event) {
        run_dart_js_test(process_jproxy, '#result_jproxy', 'JProxy');
      });
      querySelector('#run_dart').onClick.listen((event) {
        run_dart_js_test(process_dart, '#result_dart', 'Dart');
      });
    }

    run_dart_js_test(Function fun, String el, String title) {
      var startTime = new DateTime.now();
      fun();
      var diff = new DateTime.now().difference(startTime);
      querySelector(el).text = '$title tooks ${diff.inMilliseconds} ms to process 10000 HTML elements.';
    }

Here is the Dart solution based on JProxy and dart:js:

    process_jproxy() {
      var container = $('#container');
      // Create 10000 DIV elements
      for (var i = 0; i < 10000; i++) {
        $('<div>Test</div>').appendTo(container.object);
      }
      // Find and update classes of all DIV elements
      $('#container > div').css("color","red");
      // Remove all DIV elements
      $('#container > div').remove();
    }

Finally, the pure Dart solution based on dart:html is as follows:

    process_dart() {
      // Create 10000 DIV elements
      var container = querySelector('#container');
      for (var i = 0; i < 10000; i++) {
        container.appendHtml('<div>Test</div>');
      }
      // Find and update classes of all DIV elements
      querySelectorAll('#container > div').forEach((Element el) {
        el.style.color = 'red';
      });
      // Remove all DIV elements
      querySelectorAll('#container > div').forEach((Element el) {
        el.remove();
      });
    }

All the results are in milliseconds. Run the application and wait until the web page is fully loaded. Run each test by clicking on the appropriate button.
My results of the tests on Dartium, Chrome, Firefox, and Internet Explorer are shown in the following table (all times in milliseconds):

    Web browser         jQuery framework   jQuery via JProxy   Library dart:html
    Dartium             2173               3156                714
    Chrome              2935               6512                795
    Firefox             2485               5787                582
    Internet Explorer   12262              17748               2956

Now, we have an absolute champion: the Dart-based solution. Even when the Dart code is compiled into JavaScript to be executed in Chrome, Firefox, and Internet Explorer, it works quicker than jQuery (four to five times) and much quicker than the dart:js and JProxy class-based solution (four to ten times).

Summary

This article showed you how to use Dart and JavaScript together to build web applications. It listed problems and solutions you can use to communicate between Dart, JavaScript, and an existing JavaScript program. We compared the jQuery, JProxy with dart:js, and pure dart:html based solutions to identify which is quicker than the others.

Resources for Article:

Further resources on this subject:

- Handling the DOM in Dart [article]
- Dart Server with Dartling and MongoDB [article]
- Handle Web Applications [article]

Creating the First Python Script

Packt
09 Aug 2017
27 min read
In this article by Silas Toms, the author of the book ArcPy and ArcGIS - Second Edition, we will demonstrate how to use ModelBuilder, which ArcGIS professionals are already familiar with, to model a first analysis and then export it as a script. With the Python environment configured to fit our needs, we can now create and execute ArcPy scripts. To ease into the creation of Python scripts, this article will use ArcGIS ModelBuilder to model a simple analysis and export it as a Python script.

ModelBuilder is very useful for creating Python scripts. It has an operational and a visual component, and all models can be exported as Python scripts, where they can be further customized.

In this article, we will cover the following topics:

- Modeling a simple analysis using ModelBuilder
- Exporting the model to a Python script
- Windows file paths versus Pythonic file paths
- String formatting methods

Prerequisites

The following are the prerequisites for this article: ArcGIS 10x and Python 2.7, with arcpy available as a module.

For this article, the accompanying data and scripts should be downloaded from Packt Publishing's website. The completed scripts are available for comparison purposes, and the data will be used for this article's analysis. To run and test the code examples, use your favorite IDE, or open the IDLE (Python GUI) program from the Start Menu/ArcGIS/Python2.7 folder after installing ArcGIS for Desktop. Use the built-in interpreter, or code entry interface, indicated by the triple chevron >>> and a blinking cursor.

ModelBuilder

ArcGIS has been in development since the 1970s. Since that time, it has included a variety of programming languages and tools to help GIS users automate analysis and map production. These include the Avenue scripting language in the ArcGIS 3x series, the ARC Macro Language (AML) in the ARCInfo Workstation days, as well as VBScript up until ArcGIS 10x, when Python was introduced. Another useful tool, introduced in ArcGIS 9x, was ModelBuilder, a visual programming environment used both for modeling analysis and for creating tools that can be used repeatedly with different input feature classes.

A useful feature of ModelBuilder is its export function, which allows modelers to create Python scripts directly from a model. This makes it easier to compare how parameters in a ModelBuilder tool are accepted with how a Python script calls the same tool and supplies its parameters, and how generated feature classes are named and placed within the file structure. ModelBuilder is a helpful tool on its own, and its Python export functionality makes it easy for a GIS analyst to generate and customize ArcPy scripts.

Creating a model and exporting to Python

This article and the associated scripts depend on the downloadable SanFrancisco.gdb geodatabase, available from Packt. SanFrancisco.gdb contains data downloaded from https://datasf.org/ and the US Census' American Factfinder website at https://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml. All census and geographic data included in the geodatabase is from the 2010 census. The data is contained within a feature dataset called SanFrancisco. The data in this feature dataset is in NAD 83 California State Plane Zone 3, and the linear unit of measure is the US foot. This corresponds to SRID 2227 in the European Petroleum Survey Group (EPSG) format.
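As a quick aside (this sketch is not part of the original text), the same spatial reference can be constructed directly in ArcPy from its well-known ID, which is a handy way to confirm what the code 2227 refers to. The values shown in the comments are what I would expect to see, not output taken from the article:

    # Minimal sketch, assuming an ArcGIS 10.x / Python 2.7 session with arcpy available
    import arcpy

    # Build a spatial reference object from the EPSG/WKID code mentioned above
    sr = arcpy.SpatialReference(2227)
    print sr.name            # expected: NAD_1983_StatePlane_California_III_FIPS_0403_Feet
    print sr.linearUnitName  # expected: Foot_US
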
The analysis, which we will create with the model and eventually export to Python for further refinement, will use bus stops along a specific line in San Francisco. These bus stops will be buffered to create a representative region around each bus stop. The buffered areas will then be intersected with census blocks to find out how many people live within each representative region around the bus stops.

Modeling the Select and Buffer tools

Using ModelBuilder, we will model the basic bus stop analysis. Once it has been modeled, it will be exported as an automatically generated Python script. Follow these steps to begin the analysis:

1. Open up ArcCatalog, and create a folder connection to the folder containing SanFrancisco.gdb. I have put the geodatabase in a C drive folder called "Projects", for a resulting file path of C:\Projects\SanFrancisco.gdb.
2. Right-click on the geodatabase, and add a new toolbox called Chapter2Tools.
3. Right-click on the geodatabase; select New, and then Feature Dataset, from the menu. A dialog will appear that asks for a name; call it Chapter2Results, and push Next. It will ask for a spatial reference system; enter 2227 into the search bar, and push the magnifying glass icon. This will locate the correct spatial reference system: NAD 1983 StatePlane California III FIPS 0403 Feet. Don't select a vertical reference system, as we are not doing any Z value analysis. Push Next, select the default tolerances, and push Finish.
4. Next, open ModelBuilder using the ModelBuilder icon or by right-clicking on the toolbox, and create a new model. Save the model in the Chapter2Tools toolbox as Chapter2Model1.
5. Drag in the Bus_Stops feature class and the Select tool from the Analysis/Extract toolset in ArcToolbox.
6. Open up the Select tool, and name the output feature class Inbound71. Make sure that the feature class is written to the Chapter2Results feature dataset.
7. Open up the Expression SQL Query Builder, and create the following SQL expression: NAME = '71 IB' AND BUS_SIGNAG = 'Ferry Plaza'.
8. The next step is to add a Buffer tool from the Analysis/Proximity toolset. The Buffer tool will be used to create buffers around each bus stop. The buffered bus stops allow us to intersect with census data in the form of census blocks, creating the representative regions around each bus stop.
9. Connect the output of the Select tool (Inbound71) to the Buffer tool. Open up the Buffer tool, add 400 to the Distance field, and change the units to Feet. Leave the rest of the options blank. Click on OK, and return to the model.

Adding in the Intersect tool

Now that we have selected the bus line of interest and buffered the stops to create representative regions, we will need to intersect the regions with the census blocks to find the population of each representative region. This can be done as follows:

1. First, add the CensusBlocks2010 feature class from the SanFrancisco feature dataset to the model.
2. Next, add in the Intersect tool, located in the Analysis/Overlay toolset in ArcToolbox. While we could use a Spatial Join to achieve a similar result, I have used the Intersect tool to capture the area of intersection for use later in the model and script.

At this point, our model should look like this:

Tallying the analysis results

After we have created this simple analysis, the next step is to determine the results for each bus stop.
Finding the number of people who live in census blocks touched by the 400-foot buffer of each bus stop involves examining each row of data in the final feature class and selecting the rows that correspond to the bus stop. Once these are selected, a sum of the selected rows would be calculated using either the Field Calculator or the Summarize tool. All of these methods will work, and yet none are perfect. They take too long and, worse, are not repeatable automatically if an assumption in the model is adjusted (if the buffer is adjusted from 400 feet to 500 feet, for instance).

This is where the traditional uses of ModelBuilder begin to fail analysts. It should be easy to instruct the model to select all rows associated with each bus stop, and then generate a summed population figure for each bus stop's representative region. It would be even better to have the model create a spreadsheet to contain the final results of the analysis. It's time to use Python to take this analysis to the next level.

Exporting the model and adjusting the script

While modeling analysis in ModelBuilder has its drawbacks, there is one fantastic option built into ModelBuilder: the ability to create a model and then export the model to Python. Along with the ArcGIS Help Documentation, it is the best way to discover the correct Python syntax to use when writing ArcPy scripts.

Create a folder that can hold the exported scripts next to the SanFrancisco geodatabase (for example, C:\Projects\Scripts). This will hold both the exported scripts that ArcGIS automatically generates and the versions that we will build from those generated scripts. Now, perform the following steps:

1. Open up the model called Chapter2Model1.
2. Click on the Model menu in the upper-left side of the screen.
3. Select Export from the menu.
4. Select To Python Script.
5. Save the script as Chapter2Model1.py.

Note that there is also the option to export the model as a graphic. Creating a graphic of the model is a good way to share what the model is doing with other analysts without the need to share the model and the data, and it can also be useful when sharing Python scripts.

The automatically generated script

Open the automatically generated script in an IDE. It should look like this:

    # -*- coding: utf-8 -*-
    # ---------------------------------------------------------------------------
    # Chapter2Model1.py
    # Created on: 2017-01-26 04:26:31.00000
    # (generated by ArcGIS/ModelBuilder)
    # Description:
    # ---------------------------------------------------------------------------

    # Import arcpy module
    import arcpy

    # Local variables:
    Bus_Stops = "C:\\Projects\\SanFrancisco.gdb\\SanFrancisco\\Bus_Stops"
    Inbound71 = "C:\\Projects\\SanFrancisco.gdb\\Chapter2Results\\Inbound71"
    Inbound71_400ft_buffer = "C:\\Projects\\SanFrancisco.gdb\\Chapter2Results\\Inbound71_400ft_buffer"
    CensusBlocks2010 = "C:\\Projects\\SanFrancisco.gdb\\SanFrancisco\\CensusBlocks2010"
    Intersect71Census = "C:\\Projects\\SanFrancisco.gdb\\Chapter2Results\\Intersect71Census"

    # Process: Select
    arcpy.Select_analysis(Bus_Stops, Inbound71, "NAME = '71 IB' AND BUS_SIGNAG = 'Ferry Plaza'")

    # Process: Buffer
    arcpy.Buffer_analysis(Inbound71, Inbound71_400ft_buffer, "400 Feet", "FULL", "ROUND", "NONE", "")

    # Process: Intersect
    arcpy.Intersect_analysis("C:\\Projects\\SanFrancisco.gdb\\Chapter2Results\\Inbound71_400ft_buffer #;C:\\Projects\\SanFrancisco.gdb\\SanFrancisco\\CensusBlocks2010 #", Intersect71Census, "ALL", "", "INPUT")

Let's examine this script line by line.
The first line is preceded by a pound sign (#), which normally means that the line is a comment; however, this particular line is not ignored by the Python interpreter when the script is executed. Instead, it is used to help Python interpret the encoding of the script, as described at http://legacy.python.org/dev/peps/pep-0263.

The second commented line and the third line are included for decorative purposes. The next four lines, all commented, provide readers with information about the script: what it is called and when it was created, along with a description, which is pulled from the model's properties. Another decorative line is included to visually separate the informative header from the body of the script. While the commented information section is nice to include in a script for other users of the script, it is not necessary.

The body of the script, or the executable portion of the script, starts with the import arcpy line. Import statements are, by convention, included at the top of the body of the script. In this instance, the only module being imported is ArcPy.

ModelBuilder's export function creates not only an executable script, but also comments on each section to help mark the different parts of the script. The comments let users know where the variables are located and where the ArcToolbox tools are being executed.

After the import statements come the variables. In this case, the variables represent the file paths to the input and output feature classes. The variable names are derived from the names of the feature classes (the base names of the file paths). The file paths are assigned to the variables using the assignment operator (=), and the parts of the file paths are separated by two backslashes.

File paths in Python

To store and retrieve data, it is important to understand how file paths are used in Python as compared to how they are represented in Windows. In Python, file paths are strings, and strings in Python have special characters used to represent tabs ("\t"), newlines ("\n"), or carriage returns ("\r"), among many others. These special characters all incorporate single backslashes, making it very hard to create a file path that uses single backslashes. File paths in Windows Explorer all use single backslashes:

    Windows Explorer: C:\Projects\SanFrancisco.gdb\Chapter2Results\Intersect71Census

Python was developed within the Linux environment, where file paths have forward slashes. There are a number of methods used to avoid this issue. The first is using file paths with forward slashes. The Python interpreter will understand file paths with forward slashes, as seen in this code:

    Python version: "C:/Projects/SanFrancisco.gdb/Chapter2Results/Intersect71Census"

Within a Python script, the file path with forward slashes will definitely work, while the Windows Explorer version might cause the script to throw an exception, as Python strings can contain special characters like the newline character \n or the tab \t that will cause the string file path to be read incorrectly by the Python interpreter.

Another method used to avoid the issue with special characters is the one employed by ModelBuilder when it automatically creates the Python scripts from a model. In this case, the backslashes are "escaped" using a second backslash.
The preceding script uses this second method to produce the following results:

    Python escaped version: "C:\\Projects\\SanFrancisco.gdb\\Chapter2Results\\Intersect71Census"

The third method, which I use when copying file paths from ArcCatalog or Windows Explorer into scripts, is to create what is known as a "raw" string. This is the same as a regular string, but it includes an "r" before the string begins. This "r" alerts the Python interpreter that the following string does not contain any special characters or escape characters. Here is an example of how it is used:

    Python raw string: r"C:\Projects\SanFrancisco.gdb\SanFrancisco\Bus_Stops"

Using raw strings makes it easier to grab a file path from Windows Explorer and add it to a string inside a script. It also makes it easier to avoid accidentally forgetting to include a set of double backslashes in a file path, which happens all the time and is the cause of many script bugs.

String manipulation

There are three major methods for inserting variables into strings. Each has different technical advantages and disadvantages. It's good to know about all three, as they have uses beyond our needs here, so let's review them.

String manipulation method 1: string addition

String addition seems like an odd concept at first, as it would not seem possible to "add" strings together, unlike integers or floats, which are numbers. However, within Python and other programming languages, this is a normal step. Using the plus sign (+), strings are "added" together to make longer strings, or to allow variables to be added into the middle of existing strings. Here are some examples of this process:

    >>> aString = "This is a string"
    >>> bString = " and this is another string"
    >>> cString = aString + bString
    >>> cString

The output is as follows:

    'This is a string and this is another string'

Two or more strings can be "added" together, and the result can be assigned to a third variable for use later in the script. This process can be useful for data processing and formatting.

Another similar offshoot of string addition is string multiplication, where strings are multiplied by an integer to produce repeating versions of the string, like this:

    >>> "string" * 3
    'stringstringstring'

String manipulation method 2: string formatting #1

The second method of string manipulation, known as string formatting, involves adding placeholders into the string that accept specific kinds of data. This means that these special strings can accept other strings as well as integers and float values. These placeholders use the modulo (%) and a key letter to indicate the type of data to expect. Strings are represented using %s, floats using %f, and integers using %d. Floats can also be adjusted to limit the number of digits included by adding a modifying number after the modulo. If there is more than one placeholder in a string, the values are passed to the string in a tuple. This method has become less popular since the third method, discussed next, was introduced in Python 2.6, but it is still valuable to know, as many older scripts use it.
Here is an example of this method:

    >>> origString = "This string has as a placeholder %s"
    >>> newString = origString % "and this text was added"
    >>> print newString

The output is as follows:

    This string has as a placeholder and this text was added

Here is an example when using a float placeholder:

    >>> floatString1 = "This string has a float here: %f"
    >>> newString = floatString1 % 1.0
    >>> print newString

The output is as follows:

    This string has a float here: 1.000000

Here is another example when using a float placeholder:

    >>> floatString2 = "This string has a float here: %.1f"
    >>> newString2 = floatString2 % 1.0
    >>> print newString2

The output is as follows:

    This string has a float here: 1.0

Here is an example using an integer placeholder:

    >>> intString = "Here is an integer: %d"
    >>> newString = intString % 1
    >>> print newString

The output is as follows:

    Here is an integer: 1

String manipulation method 3: string formatting #2

The final method is also known as string formatting. It is similar to string formatting #1, with the added benefit of not requiring a specific data type for each placeholder. The placeholders, or tokens as they are also known, are only required to be in order to be accepted. The format function is built into strings; by adding .format to the string and passing in parameters, the string accepts the values, as seen in the following example:

    >>> formatString = "This string has 3 tokens: {0}, {1}, {2}"
    >>> newString = formatString.format("String", 2.5, 4)
    >>> print newString
    This string has 3 tokens: String, 2.5, 4

The tokens don't have to be in order within the string, and can even be repeated by adding a token wherever it is needed within the template. The order of the values applied to the template is derived from the parameters supplied to the .format function, which passes the values to the string.

The third method has become my go-to method for string manipulation because of the ability to add values repeatedly, and because it makes it possible to avoid supplying the wrong type of data to a specific placeholder, unlike the second method.

The ArcPy tools

After the import statements and the variable definitions, the next section of the script is where the analysis is executed. The same tools that we created in the model (the Select, Buffer, and Intersect tools) are included in this section. The same parameters that we supplied in the model are also included here: the inputs and outputs, plus the SQL statement in the Select tool and the buffer distance in the Buffer tool. The tool parameters are supplied to the tools in the script in the same order as they appear in the tool interfaces in the model. Here is the Select tool in the script:

    arcpy.Select_analysis(Bus_Stops, Inbound71, "NAME = '71 IB' AND BUS_SIGNAG = 'Ferry Plaza'")

It works like this: the arcpy module has a "method", or tool, called Select_analysis. This method, when called, requires three parameters: the input feature class (or shapefile), the output feature class, and the SQL statement. In this example, the input is represented by the variable Bus_Stops and the output feature class is represented by the variable Inbound71, both of which are defined in the variable section. The SQL statement is included as the third parameter.
Note that it could also be represented by a variable, if the variable were defined before this line; the SQL statement, as a string, could be assigned to a variable, and the variable could replace the SQL statement as the third parameter. Here is an example of parameter replacement using a variable:

    sqlStatement = "NAME = '71 IB' AND BUS_SIGNAG = 'Ferry Plaza'"
    arcpy.Select_analysis(Bus_Stops, Inbound71, sqlStatement)

While ModelBuilder is good at assigning input and output feature classes to variables, it does not assign variables to every portion of the parameters. This will be an important thing to correct when we adjust and build our own scripts.

The Buffer tool accepts a similar set of parameters to the Select tool. There is an input feature class represented by a variable, an output feature class variable, and the distance that we provided (400 feet in this case), along with a series of parameters that were supplied by default. Note that the parameters rely on keywords, and these keywords can be adjusted within the text of the script to change the resulting buffer output. For instance, "Feet" could be adjusted to "Meters", and the buffer would be much larger. Check the help section of the tool to better understand how the other parameters will affect the buffer, and to find the keyword arguments that are accepted by the Buffer tool in ArcPy. Also, as noted earlier, all of the parameters could be assigned to variables, which can save time if the same parameters are used repeatedly throughout a script.

Sometimes, the supplied parameter is merely an empty string, as is the case here with the last parameter:

    arcpy.Buffer_analysis(Inbound71, Inbound71_400ft_buffer, "400 Feet", "FULL", "ROUND", "NONE", "")

The empty string for the last parameter, which in this case signifies that there is no dissolve field for this buffer, is found quite frequently within ArcPy. It could also be represented by two single quotes, but ModelBuilder has been built to use double quotes to enclose strings.

The Intersect tool

The last tool, the Intersect tool, uses a different method to represent the files that need to be intersected together when the tool is executed. Because the tool accepts multiple files in the input section (meaning there is no limit to the number of files that can be intersected together in one operation), it stores all of the file paths within one string. This string can be manipulated using one of the string manipulation methods discussed earlier, or it can be reorganized to accept, as the first parameter, a Python list that contains the file paths, or variables representing file paths, in any order. The Intersect tool will find the intersection of all of the inputs.

Adjusting the script

Now is the time to take the automatically generated script and adjust it to fit our needs. We want the script both to produce the output data and to analyze the data and tally the results into a spreadsheet. This spreadsheet will hold an averaged population value for each bus stop. The average will be derived from each census block that the buffered representative region surrounding the stops intersected. Save the original script as Chapter2Model1Modified.py.

Adding the CSV module to the script

For this script, we will use the csv module, a useful module for creating comma-separated value spreadsheets. Its simple syntax will make it a useful tool for creating script outputs.
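As a standalone illustration (not part of the original script), this is roughly how the csv module writes rows on Python 2.7; the file path, field names, and values here are hypothetical:

    # Minimal sketch of csv.writer usage; the path and rows are made up
    import csv

    # 'wb' mode avoids extra blank lines on Windows under Python 2
    with open(r'C:\Temp\example.csv', 'wb') as csvfile:
        writer = csv.writer(csvfile, delimiter=',')
        writer.writerow(['STOPID', 'AVERAGE_POP'])  # header row
        writer.writerow([1234, 56.7])               # one data row
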
ArcGIS for Desktop also installs the xlrd and xlwt modules, used to read and generate Excel spreadsheets respectively. These modules are also great for data analysis output.

After the import arcpy line, add import csv. This will allow us to use the csv module for creating the spreadsheet:

    # Import arcpy module
    import arcpy
    import csv

The next adjustment is made to the Intersect tool. Notice that the two paths included in the input string are also defined as variables in the variable section. Remove the file paths from the input string, and replace them with a list containing the variable names of the input datasets, as follows:

    # Process: Intersect
    arcpy.Intersect_analysis([Inbound71_400ft_buffer, CensusBlocks2010], Intersect71Census, "ALL", "", "INPUT")

Accessing the data: using a cursor

Now that the script is in place to generate the raw data we need, we need a way to access the data held in the output feature class from the Intersect tool. This access will allow us to aggregate the rows of data representing each bus stop. We also need a data container to hold the aggregated data in memory before it is written to the spreadsheet. To accomplish the second part, we will use a Python dictionary. To accomplish the first part, we will use a method built into the ArcPy module: the Data Access SearchCursor.

The Python dictionary will be added after the Intersect tool. A dictionary in Python is created using curly brackets {}. Add the following line to the script, below the analysis section:

    dataDictionary = {}

This script will use the bus stop IDs as keys for the dictionary. The values will be lists, which will hold all of the population values associated with each busStopID. Add the following lines to generate a data cursor:

    with arcpy.da.SearchCursor(Intersect71Census, ["STOPID", "POP10"]) as cursor:
        for row in cursor:
            busStopID = row[0]
            pop10 = row[1]
            if busStopID not in dataDictionary.keys():
                dataDictionary[busStopID] = [pop10]
            else:
                dataDictionary[busStopID].append(pop10)

This iteration combines a few ideas in Python and ArcPy. The with...as statement is used to create a variable (cursor) that represents the arcpy.da.SearchCursor object. It could also be written like this:

    cursor = arcpy.da.SearchCursor(Intersect71Census, ["STOPID", "POP10"])

The advantage of the with...as structure is that the cursor object is erased from memory when the iteration is completed, which eliminates locks on the feature classes being evaluated. The arcpy.da.SearchCursor function requires an input feature class and a list of fields to be returned. Optionally, an SQL statement can limit the number of rows returned.

The next line, for row in cursor:, is the iteration through the data. It is not a normal Pythonic iteration, a distinction that will have ramifications in certain instances. For instance, one cannot pass index parameters to the cursor object to evaluate only specific rows within it, as one can do with a list. When using a SearchCursor, each row of data is returned as a tuple, which cannot be modified. The data can be accessed using indexes.

The if...else condition allows the data to be sorted. As noted earlier, the bus stop ID, which is the first member of the data included in the tuple, will be used as a key. The conditional evaluates whether the bus stop ID is included in the dictionary's existing keys (which are contained in a list and accessed using the dictionary.keys() method).
If it is not, it is added to the keys and assigned a value that is a list containing (at first) one piece of data: the population value contained in that row. If it does exist in the keys, the list is appended with the next population value associated with that bus stop ID. With this code, we have now sorted each census block population according to the bus stop with which it is associated.

Next, we need to add code to create the spreadsheet. This code will use the same with...as structure, and will generate an average population value by using two built-in Python functions: sum, which creates a sum from a list of numbers, and len, which gets the length of a list, tuple, or string.

    with open(r'C:\Projects\Averages.csv', 'wb') as csvfile:
        csvwriter = csv.writer(csvfile, delimiter=',')
        for busStopID in dataDictionary.keys():
            popList = dataDictionary[busStopID]
            averagePop = sum(popList)/len(popList)
            data = [busStopID, averagePop]
            csvwriter.writerow(data)

The list of population values is retrieved from the dictionary using the busStopID key, and the computed average is assigned to the variable averagePop. The two data pieces, the busStopID and the averagePop variable, are then added to a list. This list is supplied to a csvwriter object, which knows how to accept the data and write it out to a file located at the file path supplied to the built-in Python function open, used to create simple files.

The script is complete, although it is nice to add one more line at the end to give us visual confirmation that the script has run:

    print "Data Analysis Complete"

This last line will create an output indicating that the script has run. Once it is done, go to the location of the output CSV file and open it using Excel or Notepad to see the results of the analysis. Our first script is complete!

Exceptions and tracebacks

During the process of writing and testing scripts, there will be errors that cause the code to break and throw exceptions. In Python, these are reported as a "traceback", which shows the last few lines of code executed before an exception occurred. To best understand the message, read it from the last line up. It will tell you the type of exception that occurred, and before that will be the code that failed, with a line number, which should allow you to find and fix the code. It's not perfect, but it works.

Overwriting files

One common issue is that ArcGIS for Desktop does not allow you to overwrite files without turning on an environment setting. To avoid this issue, you can add a line after the import statements that will make overwriting files possible. Be aware that the original data will be unrecoverable once it is overwritten.
It uses the env module to access the ArcGIS environment:

    import arcpy
    arcpy.env.overwriteOutput = True

The final script

Here is how the script should look in the end:

    # Chapter2Model1Modified.py
    # Import arcpy module
    import arcpy
    import csv

    # Local variables:
    Bus_Stops = r"C:\Projects\SanFrancisco.gdb\SanFrancisco\Bus_Stops"
    CensusBlocks2010 = r"C:\Projects\SanFrancisco.gdb\SanFrancisco\CensusBlocks2010"
    Inbound71 = r"C:\Projects\SanFrancisco.gdb\Chapter2Results\Inbound71"
    Inbound71_400ft_buffer = r"C:\Projects\SanFrancisco.gdb\Chapter2Results\Inbound71_400ft_buffer"
    Intersect71Census = r"C:\Projects\SanFrancisco.gdb\Chapter2Results\Intersect71Census"

    # Process: Select
    arcpy.Select_analysis(Bus_Stops, Inbound71, "NAME = '71 IB' AND BUS_SIGNAG = 'Ferry Plaza'")

    # Process: Buffer
    arcpy.Buffer_analysis(Inbound71, Inbound71_400ft_buffer, "400 Feet", "FULL", "ROUND", "NONE", "")

    # Process: Intersect
    arcpy.Intersect_analysis([Inbound71_400ft_buffer, CensusBlocks2010], Intersect71Census, "ALL", "", "INPUT")

    dataDictionary = {}

    with arcpy.da.SearchCursor(Intersect71Census, ["STOPID", "POP10"]) as cursor:
        for row in cursor:
            busStopID = row[0]
            pop10 = row[1]
            if busStopID not in dataDictionary.keys():
                dataDictionary[busStopID] = [pop10]
            else:
                dataDictionary[busStopID].append(pop10)

    with open(r'C:\Projects\Averages.csv', 'wb') as csvfile:
        csvwriter = csv.writer(csvfile, delimiter=',')
        for busStopID in dataDictionary.keys():
            popList = dataDictionary[busStopID]
            averagePop = sum(popList)/len(popList)
            data = [busStopID, averagePop]
            csvwriter.writerow(data)

    print "Data Analysis Complete"

Summary

In this article, you learned how to craft a model of an analysis and export it to a script. In particular, you learned how to use ModelBuilder to create an analysis and export it as a script, and how to adjust the script to be more "Pythonic". After explaining the auto-generated script, we adjusted it to include a results analysis and summation, which was output to a CSV file. We also briefly touched on the use of search cursors, and saw how built-in modules such as the csv module can be used along with ArcPy to capture analysis output in formatted spreadsheets.

Resources for Article:

Further resources on this subject:

- Using the ArcPy DataAccess Module with Feature Classes and Tables [article]
- Measuring Geographic Distributions with ArcGIS Tool [article]
- Learning to Create and Edit Data in ArcGIS [article]

Cross-browser Tests using Selenium WebDriver

Packt
25 Mar 2015
18 min read
In this article by Prashanth Sams, author of the book Selenium Essentials, you will learn how to perform efficient compatibility tests and how to run tests on the cloud. You will cover the following topics in this article:

- Selenium WebDriver compatibility tests
- Selenium cross-browser tests on the cloud
- Selenium headless browser testing

Selenium WebDriver compatibility tests

Selenium WebDriver handles browser compatibility tests on almost every popular browser, including Chrome, Firefox, Internet Explorer, Safari, and Opera. In general, every browser's JavaScript engine differs from the others, and each browser interprets HTML tags differently. The WebDriver API drives the web browser as a real user would drive it. By default, FirefoxDriver comes with the selenium-server-standalone.jar library added; however, for Chrome, IE, Safari, and Opera, there are libraries that need to be added or instantiated externally. Let's see how we can instantiate each of the following browsers through its own driver:

Mozilla Firefox: The selenium-server-standalone library is bundled with FirefoxDriver to initialize and run tests in a Firefox browser. FirefoxDriver is added to the Firefox profile as a file extension on starting a new instance of FirefoxDriver. Please check the Firefox versions and their suitable drivers at http://selenium.googlecode.com/git/java/CHANGELOG. The following is the code snippet to kick-start Mozilla Firefox:

    WebDriver driver = new FirefoxDriver();

Google Chrome: Unlike FirefoxDriver, ChromeDriver is an external library file that makes use of WebDriver's wire protocol to run Selenium tests in a Google Chrome web browser. The following is the code snippet to kick-start Google Chrome:

    System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
    WebDriver driver = new ChromeDriver();

To download ChromeDriver, refer to http://chromedriver.storage.googleapis.com/index.html.

Internet Explorer: IEDriverServer is an executable file that uses the WebDriver wire protocol to control the IE browser on Windows. Currently, IEDriverServer supports the IE versions 6, 7, 8, 9, and 10. The following code snippet helps you to instantiate IEDriverServer:

    System.setProperty("webdriver.ie.driver", "C:\\IEDriverServer.exe");
    DesiredCapabilities dc = DesiredCapabilities.internetExplorer();
    dc.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
    WebDriver driver = new InternetExplorerDriver(dc);

To download IEDriverServer, refer to http://selenium-release.storage.googleapis.com/index.html.

Apple Safari: Similar to FirefoxDriver, SafariDriver is internally bound with the latest Selenium servers, which start the Apple Safari browser without any external library. SafariDriver supports the Safari browser versions 5.1.x and runs only on Mac. For more details, refer to http://elementalselenium.com/tips/69-safari. The following code snippet helps you to instantiate SafariDriver:

    WebDriver driver = new SafariDriver();

Opera: OperaPrestoDriver (formerly called OperaDriver) is available only for Presto-based Opera browsers. Currently, it does not support Opera versions 12.x and above. However, the recent releases (Opera 15.x and above) of Blink-based Opera browsers are handled using OperaChromiumDriver. For more details, refer to https://github.com/operasoftware/operachromiumdriver.
The following code snippet helps you to instantiate OperaChromiumDriver:

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("opera.binary", "C://Program Files (x86)//Opera//opera.exe");
capabilities.setCapability("opera.log.level", "CONFIG");
WebDriver driver = new OperaDriver(capabilities);

To download OperaChromiumDriver, refer to https://github.com/operasoftware/operachromiumdriver/releases.

TestNG

TestNG (Next Generation) is one of the most widely used unit-testing frameworks implemented for Java. It runs Selenium-based browser compatibility tests against the most popular browsers. Eclipse IDE users must ensure that the TestNG plugin is integrated with the IDE manually; however, the TestNG plugin is bundled with IntelliJ IDEA by default. The testng.xml file is a TestNG build file that controls test execution; the XML file can be run through Maven tests using POM.xml with the help of the following code snippet:

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-surefire-plugin</artifactId>
   <version>2.12.2</version>
   <configuration>
      <suiteXmlFiles>
         <suiteXmlFile>testng.xml</suiteXmlFile>
      </suiteXmlFiles>
   </configuration>
</plugin>

To create a testng.xml file, right-click on the project folder in the Eclipse IDE, navigate to TestNG | Convert to TestNG, and click on Convert to TestNG, as shown in the following screenshot:

The testng.xml file manages the entire test run; it acts as a mini data source by passing the parameters directly into the test methods. The location of the testng.xml file is shown in the following screenshot:

As an example, create a Selenium project (for example, Selenium Essentials) along with the testng.xml file, as shown in the previous screenshot. Modify the testng.xml file with the following tags:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Suite" verbose="3" parallel="tests" thread-count="5">
   <test name="Test on Firefox">
      <parameter name="browser" value="Firefox" />
      <classes>
         <class name="package.classname" />
      </classes>
   </test>
   <test name="Test on Chrome">
      <parameter name="browser" value="Chrome" />
      <classes>
         <class name="package.classname" />
      </classes>
   </test>
   <test name="Test on InternetExplorer">
      <parameter name="browser" value="InternetExplorer" />
      <classes>
         <class name="package.classname" />
      </classes>
   </test>
   <test name="Test on Safari">
      <parameter name="browser" value="Safari" />
      <classes>
         <class name="package.classname" />
      </classes>
   </test>
   <test name="Test on Opera">
      <parameter name="browser" value="Opera" />
      <classes>
         <class name="package.classname" />
      </classes>
   </test>
</suite> <!-- Suite -->

Download all the external drivers except FirefoxDriver and SafariDriver, extract the zipped folders, and locate the external drivers in the test script as mentioned in the preceding snippets for each browser.
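Besides the Maven Surefire route shown above, the same testng.xml can be launched programmatically, which can be handy for quick experiments from the IDE. The following is a minimal, hedged sketch; the class name and the assumption that testng.xml sits in the project root are illustrative, not part of the book's example:

import java.util.Collections;
import org.testng.TestNG;

public class SuiteRunner {
    public static void main(String[] args) {
        // run the suite defined in testng.xml through TestNG's programmatic API
        TestNG testng = new TestNG();
        testng.setTestSuites(Collections.singletonList("testng.xml"));
        testng.run();
    }
}

Eclipse and IntelliJ IDEA users will normally just right-click the testng.xml file and run it through the TestNG plugin instead.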
The following Java snippet shows how you can get parameters directly from the testng.xml file and how you can run cross-browser tests as a whole:

@BeforeTest
@Parameters({"browser"})
public void setUp(String browser) throws MalformedURLException {
    if (browser.equalsIgnoreCase("Firefox")) {
        System.out.println("Running Firefox");
        driver = new FirefoxDriver();
    } else if (browser.equalsIgnoreCase("chrome")) {
        System.out.println("Running Chrome");
        System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
        driver = new ChromeDriver();
    } else if (browser.equalsIgnoreCase("InternetExplorer")) {
        System.out.println("Running Internet Explorer");
        System.setProperty("webdriver.ie.driver", "C:\\IEDriverServer.exe");
        DesiredCapabilities dc = DesiredCapabilities.internetExplorer();
        // If IE fails to work with this capability, remove this line and instead set
        // Protected Mode to the same value for all four zones in Internet Options
        dc.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
        driver = new InternetExplorerDriver(dc);
    } else if (browser.equalsIgnoreCase("safari")) {
        System.out.println("Running Safari");
        driver = new SafariDriver();
    } else if (browser.equalsIgnoreCase("opera")) {
        System.out.println("Running Opera");
        // driver = new OperaDriver(); -- use this if the Opera binary location is set properly
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("opera.binary", "C://Program Files (x86)//Opera//opera.exe");
        capabilities.setCapability("opera.log.level", "CONFIG");
        driver = new OperaDriver(capabilities);
    }
}

SafariDriver is not yet stable. A few of the major issues in SafariDriver are as follows:

SafariDriver won't work properly in Windows
SafariDriver does not support modal dialog box interaction
You cannot navigate forwards or backwards in browser history through SafariDriver
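To round out the class that testng.xml drives, a test method and a teardown are still needed. The following is a minimal, hedged sketch; the page under test, the expected title, and the driver field are illustrative assumptions rather than code from the book:

@Test
public void verifyHomePageTitle() {
    // hypothetical check, executed once per browser configured in testng.xml
    driver.get("http://www.google.com");
    Assert.assertEquals(driver.getTitle(), "Google");
}

@AfterTest
public void tearDown() {
    if (driver != null) {
        driver.quit();
    }
}

Because the same class is listed under every <test> block in testng.xml, this single method runs once for each configured browser, which is the whole point of the parameterized setUp shown above.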
Selenium cross-browser tests on the cloud

The ability to automate Selenium tests on the cloud is quite interesting, with instant access to real devices. Sauce Labs, BrowserStack, and TestingBot are the leading web-based tools used for cross-browser compatibility checking. These tools contain unique test automation features, such as diagnosing failures through screenshots and video, executing parallel tests, running Appium mobile automation tests, executing tests on internal local servers, and so on.

SauceLabs

SauceLabs is the standard Selenium test automation web app for cross-browser compatibility tests on the cloud. It lets you automate tests in your favorite programming language using test frameworks such as JUnit, TestNG, RSpec, and many more. SauceLabs cloud tests can also be executed from the Selenium Builder IDE interface. Check the available SauceLabs devices, operating systems, and platforms at https://saucelabs.com/platforms. Access the website from your web browser, log in, and obtain the Sauce username and Access Key. Make use of the obtained credentials to drive tests over the SauceLabs cloud. SauceLabs creates a new instance of a virtual machine while launching the tests. Parallel automation tests are also possible using SauceLabs.

The following is a Java program to run tests over the SauceLabs cloud:

package packagename;

import java.net.URL;
import java.lang.reflect.Method;

import org.openqa.selenium.*;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class saucelabs {

    private WebDriver driver;

    @Parameters({"username", "key", "browser", "browserVersion"})
    @BeforeMethod
    public void setUp(@Optional("yourusername") String username,
                      @Optional("youraccesskey") String key,
                      @Optional("iphone") String browser,
                      @Optional("5.0") String browserVersion,
                      Method method) throws Exception {
        // Choose the browser, version, and platform to test
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setBrowserName(browser);
        capabilities.setCapability("version", browserVersion);
        capabilities.setCapability("platform", Platform.MAC);
        capabilities.setCapability("name", method.getName());
        // Create the connection to SauceLabs to run the tests
        this.driver = new RemoteWebDriver(
            new URL("http://" + username + ":" + key + "@ondemand.saucelabs.com:80/wd/hub"),
            capabilities);
    }

    @Test
    public void Selenium_Essentials() throws Exception {
        // Make the browser get the page and check its title
        driver.get("http://www.google.com");
        System.out.println("Page title is: " + driver.getTitle());
        Assert.assertEquals("Google", driver.getTitle());
        WebElement element = driver.findElement(By.name("q"));
        element.sendKeys("Selenium Essentials");
        element.submit();
    }

    @AfterMethod
    public void tearDown() throws Exception {
        driver.quit();
    }
}

SauceLabs has a setup similar to BrowserStack for test execution and generates detailed logs. The breakpoints feature allows the user to take manual control over the virtual machine and pause tests, which helps the user investigate and debug problems. By capturing JavaScript's console log, the JS errors and network requests are displayed for quick diagnosis while running tests against the Google Chrome browser.

BrowserStack

BrowserStack is a cloud-testing web app that provides instant access to virtual machines. It allows users to perform multi-browser testing of their applications on different platforms. It provides a setup similar to SauceLabs for cloud-based automation using Selenium. Access the site https://www.browserstack.com from your web browser, log in, and obtain the BrowserStack username and Access Key. Make use of the obtained credentials to drive tests over the BrowserStack cloud. As an example, the following generic Java program with TestNG provides a detailed overview of the process that runs on the BrowserStack cloud. Customize the browser name, version, platform, and so on, using capabilities.
Let's see the Java program we just talked about:

package packagename;

import java.net.URL;

import org.openqa.selenium.*;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class browserstack {

    public static final String USERNAME = "yourusername";
    public static final String ACCESS_KEY = "youraccesskey";
    public static final String URL = "http://" + USERNAME + ":" + ACCESS_KEY + "@hub.browserstack.com/wd/hub";

    private WebDriver driver;

    @BeforeClass
    public void setUp() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browser", "Firefox");
        caps.setCapability("browser_version", "23.0");
        caps.setCapability("os", "Windows");
        caps.setCapability("os_version", "XP");
        caps.setCapability("browserstack.debug", "true"); // this enables Visual Logs

        driver = new RemoteWebDriver(new URL(URL), caps);
    }

    @Test
    public void testOnCloud() throws Exception {
        driver.get("http://www.google.com");
        System.out.println("Page title is: " + driver.getTitle());
        Assert.assertEquals("Google", driver.getTitle());
        WebElement element = driver.findElement(By.name("q"));
        element.sendKeys("seleniumworks");
        element.submit();
    }

    @AfterClass
    public void tearDown() throws Exception {
        driver.quit();
    }
}

The app generates and stores test logs for the user to access anytime. The generated logs provide a detailed analysis with step-by-step explanations. To enhance test speed, run parallel Selenium tests on the BrowserStack cloud; however, the automation plan has to be upgraded to increase the number of parallel test runs.

TestingBot

TestingBot also provides a setup similar to BrowserStack and SauceLabs for cloud-based cross-browser test automation using Selenium. It records a video of the running tests to analyze problems and debug. Additionally, it provides support for capturing screenshots on test failure. To run local Selenium tests, it provides an SSH tunnel tool that lets you run tests against local servers or other web servers. TestingBot uses Amazon's cloud infrastructure to run Selenium scripts in various browsers. Access the site https://testingbot.com/, log in, and obtain Client Key and Client Secret from your TestingBot account. Make use of the obtained credentials to drive tests over the TestingBot cloud.
Let's see an example Java test program with TestNG, using the Eclipse IDE, that runs on the TestingBot cloud:

package packagename;

import java.net.URL;

import org.openqa.selenium.*;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class testingbot {

    private WebDriver driver;

    @BeforeClass
    public void setUp() throws Exception {
        DesiredCapabilities capabilities = DesiredCapabilities.firefox();
        capabilities.setCapability("version", "24");
        capabilities.setCapability("platform", Platform.WINDOWS);
        capabilities.setCapability("name", "testOnCloud");
        capabilities.setCapability("screenshot", true);
        capabilities.setCapability("screenrecorder", true);
        driver = new RemoteWebDriver(
            new URL("http://ClientKey:ClientSecret@hub.testingbot.com:4444/wd/hub"),
            capabilities);
    }

    @Test
    public void testOnCloud() throws Exception {
        driver.get("http://www.google.co.in/?gws_rd=cr&ei=zS_mUryqJoeMrQf-yICYCA");
        driver.findElement(By.id("gbqfq")).clear();
        WebElement element = driver.findElement(By.id("gbqfq"));
        element.sendKeys("selenium");
        Assert.assertEquals("selenium - Google Search", driver.getTitle());
    }

    @AfterClass
    public void tearDown() throws Exception {
        driver.quit();
    }
}

Click on the Tests tab to check the log results. The logs are well organized, with test steps, screenshots, videos, and a summary. Screenshots are captured on each and every step to make the tests more precise, as follows:

capabilities.setCapability("screenshot", true); // screenshot
capabilities.setCapability("screenrecorder", true); // video capture

TestingBot provides a unique feature for scheduling and running tests directly from the site. Tests can be prescheduled to repeat any number of times on a daily or weekly basis, and the test start time can be scheduled precisely. You will be apprised of test failures with an alert through e-mail, an API call, an SMS, or a Prowl notification. This feature enables error handling to rerun failed tests automatically as per the user settings.

Launch Selenium IDE, record tests, and save the test case or test suite in the default format (HTML). Access the https://testingbot.com/ URL from your web browser and click on the Test Lab tab. Now, try to upload the already-saved Selenium test case, and select the OS platform and browser name and version. Finally, save the settings and execute the tests. The test results are recorded and displayed under Tests.

Selenium headless browser testing

A headless browser is a web browser without a Graphical User Interface (GUI). It accesses and renders web pages but doesn't show them to any human being. A headless browser should be able to parse JavaScript. Currently, most systems encourage tests against headless browsers due to their efficiency and time-saving properties. PhantomJS and HTMLUnit are the most commonly used headless browsers. Capybara-webkit is another efficient headless WebKit for Rails-based applications.

PhantomJS

PhantomJS is a headless WebKit scriptable with a JavaScript API. It is generally used for headless testing of web applications and comes with built-in GhostDriver. Tests on PhantomJS are obviously fast since it has fast and native support for various web standards, such as DOM handling, CSS selectors, JSON, canvas, and SVG. In general, WebKit is a layout engine that allows web browsers to render web pages. Some browsers, such as Safari and Chrome, use WebKit.
Note that PhantomJS itself is not a test framework; it is a headless browser that is used only to launch tests via a suitable test runner called GhostDriver. GhostDriver is a JS implementation of the WebDriver Wire Protocol for PhantomJS; the WebDriver Wire Protocol is a standard API that communicates with the browser. By default, GhostDriver is embedded within PhantomJS.

To download PhantomJS, refer to http://phantomjs.org/download.html. Download PhantomJS, extract the zipped file (for example, phantomjs-1.x.x-windows.zip for Windows), and locate the phantomjs.exe file. Add the following imports to your test code:

import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriverService;
import org.openqa.selenium.remote.DesiredCapabilities;

Introduce PhantomJSDriver using capabilities to enable or disable JavaScript or to locate the phantomjs executable file path:

DesiredCapabilities caps = new DesiredCapabilities();
caps.setCapability("takesScreenshot", true);
caps.setJavascriptEnabled(true); // not really needed; JS is enabled by default
caps.setCapability(PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY, "C:/phantomjs.exe");
WebDriver driver = new PhantomJSDriver(caps);

Alternatively, PhantomJSDriver can also be initialized as follows:

System.setProperty("phantomjs.binary.path", "/phantomjs.exe");
WebDriver driver = new PhantomJSDriver();

PhantomJS supports screen capture as well. Since PhantomJS uses WebKit, a real layout and rendering engine, it is feasible to capture a web page as a screenshot. It can be set as follows:

caps.setCapability("takesScreenshot", true);

The following is the test snippet to capture a screenshot during a test run (note the doubled backslash required in a Java string literal):

File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(scrFile, new File("c:\\sample.jpeg"), true);

For example, check the following test program for more details:

package packagename;

import java.io.File;
import java.util.concurrent.TimeUnit;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.*;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.testng.annotations.*;

public class phantomjs {

    private WebDriver driver;
    private String baseUrl;

    @BeforeTest
    public void setUp() throws Exception {
        System.setProperty("phantomjs.binary.path", "/phantomjs.exe");
        driver = new PhantomJSDriver();
        baseUrl = "https://www.google.co.in";
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    }

    @Test
    public void headlesstest() throws Exception {
        driver.get(baseUrl + "/");
        driver.findElement(By.name("q")).sendKeys("selenium essentials");
        File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        FileUtils.copyFile(scrFile, new File("c:\\screen_shot.jpeg"), true);
    }

    @AfterTest
    public void tearDown() throws Exception {
        driver.quit();
    }
}

HTMLUnitDriver

HTMLUnit is a headless (GUI-less) browser written in Java and is typically used for testing. HTMLUnitDriver, which is based on HTMLUnit, is the fastest and most lightweight implementation of WebDriver. It runs tests using plain HTTP requests, which is quicker than launching a browser, and it executes tests much faster than other drivers. HTMLUnitDriver is added to the latest Selenium servers (2.35 or above). The JavaScript engine used by HTMLUnit (Rhino) is unique and different from that of any other popular browser on the market. HTMLUnitDriver supports JavaScript and is platform independent. By default, JavaScript support for HTMLUnitDriver is disabled.
Enabling JavaScript in HTMLUnitDriver slows down test execution; however, it is advisable to enable JavaScript support because most modern sites are Ajax-based web apps. With JavaScript enabled, a number of JavaScript warning messages are also thrown in the console during test execution. The following snippet lets you enable JavaScript for HTMLUnitDriver:

HtmlUnitDriver driver = new HtmlUnitDriver();
driver.setJavascriptEnabled(true); // enable JavaScript

The following line of code is an alternate way to enable JavaScript:

HtmlUnitDriver driver = new HtmlUnitDriver(true);

The following piece of code lets you handle a transparent proxy using HTMLUnitDriver:

HtmlUnitDriver driver = new HtmlUnitDriver();
driver.setProxy("xxx.xxx.xxx.xxx", port); // set proxy for handling a transparent proxy
driver.setJavascriptEnabled(true); // enable JavaScript [this emulates IE's JS by default]

HTMLUnitDriver can emulate a popular browser's JavaScript quite closely. By default, HTMLUnitDriver emulates IE's JavaScript. For example, to handle the Firefox web browser with version 17, use the following snippet:

HtmlUnitDriver driver = new HtmlUnitDriver(BrowserVersion.FIREFOX_17);
driver.setJavascriptEnabled(true);

Here are two snippets to emulate a specific browser's JavaScript using capabilities:

DesiredCapabilities capabilities = DesiredCapabilities.htmlUnit();
driver = new HtmlUnitDriver(capabilities);

DesiredCapabilities capabilities = DesiredCapabilities.firefox();
capabilities.setBrowserName("Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0");
capabilities.setVersion("24.0");
driver = new HtmlUnitDriver(capabilities);

Summary

In this article, you learned how to perform efficient compatibility tests and how to run tests on the cloud.

Resources for Article:

Further resources on this subject:
Selenium Testing Tools [article]
First Steps with Selenium RC [article]
Quick Start into Selenium Tests [article]
Read more

article-image-creating-simple-maps-openlayers-3
Packt
22 Jan 2016
14 min read
Save for later

Creating Simple Maps with OpenLayers 3

Packt
22 Jan 2016
14 min read
In this article by Gábor Farkas, the author of the book Mastering OpenLayers 3, you will learn about OpenLayers 3, the most robust open source web mapping library out there, highly capable of handling the client side of a WebGIS environment. Whether you already know how to use OpenLayers 3 or you are new to it, this article will help you create a simple map and either refresh some concepts or get introduced to them. As this is a mastering book, we will mainly discuss the library's structure and capabilities in greater depth. In this article we will create a simple map with the library and revise the basic terms related to it.

In this article we will cover the following topics:

Structure of OpenLayers 3
Architectural considerations
Creating a simple map
Using the API documentation effectively
Debugging the code

(For more resources related to this topic, see here.)

Before getting started

Take a look at the code provided with the book. You should see a js folder in which the required libraries are stored. For this article, ol.js and ol.css in the ol3-3.11.0 folder will be sufficient. The code is also available on GitHub. You can download a copy from the following URL: https://github.com/GaborFarkas/mastering_openlayers3/releases. You can download the latest release of OpenLayers 3 from its GitHub repository at https://github.com/openlayers/ol3/releases. For now, grabbing the distribution version (v3.11.0-dist.zip) should be enough.

Creating a working environment

There is a security restriction in front-end development related to CORS (Cross-Origin Resource Sharing). By default, browsers prevent the application from grabbing content from a different domain. On top of that, some browsers disallow reaching content from the hard drive when a web page is opened from the file system. To prevent this behavior, please make sure you have one of the following:

A running web server (highly recommended)
Firefox web browser with security.fileuri.strict_origin_policy set to false (you can reach flags in Firefox by opening about:config from the address bar)
Google Chrome web browser started with the --disable-web-security parameter (make sure you have closed every other instance of Chrome before)
Safari web browser with Disable Local File Restrictions (in the Develop menu, which can be enabled in the Advanced tab of Preferences)

You can easily create a web server if you have Python 2 with SimpleHTTPServer, or if you have Python 3 with http.server. For basic tutorials, you can consult the appropriate Python documentation pages.

Structure of OpenLayers 3

OpenLayers 3 is a well-structured, modular, and complex library, where flexibility and consistency take a higher priority than performance. However, this does not mean OpenLayers 3 is slow. On the contrary, the library highly outperforms its predecessor; therefore its comfortable and logical design does not really adversely affect its performance. The relationship of some of the most essential parts of the library can be described with a radial UML (Unified Modeling Language) diagram, such as the following:

Reading a UML diagram can seem difficult, and it can be if it is a full-blown one. However, this simplified scheme is quite easy to understand. With regard to the arrows, a single 1 represents a one-to-one relation, while the 0..n and 1 symbols denote a one-to-many relationship. You will probably never get into direct contact with the two superclasses at the top of the OpenLayers 3 hierarchy: ol.Observable and ol.Object.
However, most of the classes you actively use are children of these classes. You can always count with their methods, when you design a web mapping or WebGIS application. In the diagram we can see, that the parent of the most essential objects is the ol.Observable class. This superclass ensures all of its children have consistent listener methods. For example, every descendant of this superclass bears the on, once, and un functions, making registering event listeners to them as easy as possible. The next superclass, ol.Object, extends its parent with methods capable of easy property management. Every inner property managed by its methods (get, set, and unset) are observable. There are also convenience methods for bulk setting and getting properties, called getProperties, and setProperties. Most of the other frequently used classes are direct, or indirect, descendants of this superclass. Building the layout Now, that we covered some of the most essential structural aspects of the library, let's consider the architecture of an application deployed in a production environment. Take another look at the code. There is a chapters folder, in which you can access the examples within the appropriate subfolder. If you open ch01, you can see three file types in it. As you have noticed, the different parts of the web page (HTML, CSS, and JavaScript) are separated. There is one main reason behind this: the code remains as clean as possible. With a clean and rational design, you will always know where to look when you would like to make a modification. Moreover, if you're working for a company there is a good chance someone else will also work with your code. This kind of design will make sure your colleague can easily handle your code. On top of that, if you have to develop a wrapper API around OpenLayers 3, this is the only way your code can be integrated into future projects. Creating the appeal As the different parts of the application are separated, we will create a minimalistic HTML document. It will expand with time, as the application becomes more complicated and needs more container elements. For now, let's write a simple HTML document: <!DOCTYPE html> <html lang="en"> <head> <title>chapter 1 - Creating a simple map</title> <link href="../../js/ol3-3.11.0/ol.css" rel="stylesheet"> <link href="ch01.css" rel="stylesheet"> <script type="text/javascript" src="../../js/ol3- 3.11.0/ol.js"></script> <script type="text/javascript" src="ch01_simple_map.js"></script> </head> <body> <div id="map" class="map"></div> </body> </html> In this simple document, we defined the connection points between the external resources, and our web page. In the body, we created a simple div element with the required properties. We don't really need anything else; the magic will happen entirely in our code. Now we can go on with our CSS file and define one simple class, called map: .map { width: 100%; height: 100%; } Save this simple rule to a file named ch01.css, in the same folder you just saved the HTML file. If you are using a different file layout, don't forget to change the relative paths in the link, and script tags appropriately. Writing the code Now that we have a nice container for our map, let's concentrate on the code. In this book, most of the action will take place in the code; therefore this will be the most important part. First, we write the main function for our code. 
function init() { document.removeEventListener('DOMContentLoaded', init); } document.addEventListener('DOMContentLoaded', init); By using an event listener, we can make sure the code only runs when the structure of the web page has been initialized. This design enables us to use relative values for sizing, which is important for making adaptable applications. Also, we make sure the map variable is wrapped into a function (therefore we do not expose it) and seal a potential security breach. In the init function, we detach the event listener from the document, because it will not be needed once the DOM structure has been created. The DOMContentLoaded event waits for the DOM structure to build up. It does not wait for images, frames, and dynamically added content; therefore the application will load faster. Only IE 8, and prior versions, do not support this event type, but if you have to fall back you can always use the window object's load event. To check a feature's support in major browsers, you can consult the following site: http://www.caniuse.com/. Next, we extend the init function, by creating a vector layer and assigning it to a variable. Note that, in OpenLayers 3.5.0, creating vector layers has been simplified. Now, a vector layer has only a single source class, and the parser can be defined as a format in the source. var vectorLayer = new ol.layer.Vector({ source: new ol.source.Vector({ format: new ol.format.GeoJSON({ defaultDataProjection: 'EPSG:4326' }), url: '../../res/world_capitals.geojson', attributions: [ new ol.Attribution({ html: 'World Capitals © Natural Earth' }) ] }) }); We are using a GeoJSON data source with a WGS84 projection. As the map will use a Web Mercator projection, we provide a defaultDataProjection value to the parser, so the data will be transformed automatically into the view's projection. We also give attribution to the creators of the vector dataset. You can only give attribution with an array of ol.Attribution instances passed to the layer's source. Remember: giving attribution is not a matter of choice. Always give proper attribution to every piece of data used. This is the only way to avoid copyright infringement. Finally, construct the map object, with some extra controls and one extra interaction. var map = new ol.Map({ target: 'map', layers: [ new ol.layer.Tile({ source: new ol.source.OSM() }), vectorLayer ], controls: [ //Define the default controls new ol.control.Zoom(), new ol.control.Rotate(), new ol.control.Attribution(), //Define some new controls new ol.control.ZoomSlider(), new ol.control.MousePosition(), new ol.control.ScaleLine(), new ol.control.OverviewMap() ], interactions: ol.interaction.defaults().extend([ new ol.interaction.Select({ layers: [vectorLayer] }) ]), view: new ol.View({ center: [0, 0], zoom: 2 }) }); In this example, we provide two layers: a simple OpenStreetMap tile layer and the custom vector layer saved into a separate variable. For the controls, we define the default ones, then provide a zoom slider, a scale bar, a mouse position notifier, and an overview map. There are too many default interactions, therefore we extend the default set of interactions with ol.interaction.Select. This is the point where saving the vector layer into a variable becomes necessary. The view object is a simple view that defaults to projection EPSG:3857 (Web Mercator). OpenLayers 3 also has a default set of controls that can be accessed similarly to the interactions, under ol.control.defaults(). 
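As a quick, hedged illustration of that, the map we built earlier could also have its controls option written by extending the defaults instead of listing every control by hand; the following sketch is not from the book's example code, but the result is roughly equivalent to the explicit list used above:

var mapWithDefaults = new ol.Map({
    target: 'map',
    layers: [new ol.layer.Tile({ source: new ol.source.OSM() })],
    // extend the default control set instead of enumerating every control
    controls: ol.control.defaults().extend([
        new ol.control.ZoomSlider(),
        new ol.control.MousePosition(),
        new ol.control.ScaleLine(),
        new ol.control.OverviewMap()
    ]),
    view: new ol.View({ center: [0, 0], zoom: 2 })
});

This is the same pattern the example already applies to interactions with ol.interaction.defaults().extend().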
Default controls and interactions are instances of ol.Collection, therefore both of them can be extended and modified like any other collection object. Note that the extend method requires an array of features. Save the code to a file named ch01_simple_map.js in the same folder as your HTML file. If you open the HTML file, you should see the following map: You have different, or no results? Do not worry, not even a bit! Open up your browser's developer console (F12 in modern ones, or CTRL + J if F12 does not work), and resolve the error(s) noted there. If there is no result, double-check the HTML and CSS files; if you have a different result, check the code or the CORS requirements based on the error message. If you use Internet Explorer, make sure you have version 9, or better. Using the API documentation The API documentation for OpenLayers 3.11.0, the version we are using, can be found at http://www.openlayers.org/en/v3.11.0/apidoc/. The API docs, like the library itself, are versioned, thus you can browse the appropriate documentation for your OpenLayers 3 version by changing v3.11.0 in the URL to the version you are currently using. The development version of the API is also documented; you can always reach it at http://www.openlayers.org/en/master/apidoc/. Be careful when you use it, though. It contains all of the newly implemented methods, which probably won't work with the latest stable version. Check the API documentation by typing one of the preceding links in your browser. You should see the home page with the most frequently used classes. There is also a handy search box, with all of the classes listed on the left side. We have talked about default interactions, and their lengthy nature before. On the home page you can see a link to the default interactions. If you click on it, you will be directed to the following page: Now you can also see that nine interactions are added to the map by default. It would be quite verbose to add them one by one just to keep them when we define only one extra interaction, wouldn't it? You can see some features marked as experimental while you browse the API documentation with the Stable Only checkbox unchecked. Do not consider those features to be unreliable. They are stable, but experimental, and therefore they can be modified or removed in future versions. If the developer team considers a feature is useful and does not need further optimization or refactoring, it will be marked as stable. Understanding type definitions For every constructor and function in the API, the input and expected output types are well documented. To see a good example, let's search for a function with inputs and outputs as well. If you search for ol.proj.fromLonLat, you will see the following function: The function takes two arguments as input, one named coordinate and one named projection; projection is an optional one. coordinate is an ol.Coordinate type (an array with two numbers), while projection is an ol.proj.ProjectionLike type (a string representing the projection). The returned value, as we can see next to the white arrow, is also an ol.Coordinate type, with the transformed values. A good developer always keeps track of future changes in the library. This is especially important with OpenLayers 3, as it lacks backward-compatibility, when a major change occurs. You can see all of the major changes in the library in the OpenLayers 3 GitHub repository: https://github.com/openlayers/ol3/blob/master/changelog/upgrade-notes.md. 
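To see the documented types in action, here is a small hedged sketch that uses ol.proj.fromLonLat to recenter the view of the map created earlier; the coordinates are arbitrary WGS84 values chosen only for illustration:

// transform a longitude/latitude pair into the view's Web Mercator projection
var newCenter = ol.proj.fromLonLat([19.04, 47.50]);
map.getView().setCenter(newCenter);
map.getView().setZoom(6);

Since no second argument is passed, the function falls back to EPSG:3857, which matches the default projection of the view used in our example.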
Debugging the code As you will have noticed, there was a third file in the OpenLayers 3 folder discussed at the beginning of the article (js/ol3-3.11.0). This file, named ol-debug.js, is the uncompressed source file, in which the library is concatenated with all of its dependencies. We will use this file for two purpose in this book. Now, we will use it for debugging. First, open up ch01_simple_map.js. Next, extend the init function with an obvious mistake: var geometry = new ol.geom.Point([0, 0]); vectorLayer.getSource().addFeature(geometry); Don't worry if you can't spot the error immediately. That's what is debugging for. Save this extended JavaScript file with the name ch01_error.js. Next, replace the old script with the new one in the HTML file, like this: <script type="text/javascript" src="ch01_error.js"></script> If you open the updated HTML, and open your browser's developer console, you will see the following error message: Now that we have an error, let's check it in the source file by clicking on the error link on the right side of the error message: Quite meaningless, isn't it? The compiled library is created with Google's Closure Library, which obfuscates everything by default in order to compress the code. We have to tell it which precise part of the code should be exported. We will learn how to do that in the last article. For now, let's use the debug file. Change the ol.js in the HTML to ol-debug.js, load up the map, and check for the error again. Finally, we can see, in a well-documented form, the part that caused the error. This is a validating method, which makes sure the added feature is compatible with the library. It requires an ol.Feature as an input, which is how we caught our error. We passed a simple geometry to the function, instead of wrapping it in an ol.Feature first. Summary In this article, you were introduced to the basics of OpenLayers 3 with a more advanced approach. We also discussed some architectural considerations, and some of the structural specialties of the library. Hopefully, along with the general revision, we acquired some insight in using the API documentation and debugging practices. Congratulations! You are now on your way to mastering OpenLayers 3. Resources for Article: Further resources on this subject: What is OpenLayers? [article] OpenLayers' Key Components [article] OpenLayers: Overview of Vector Layer [article]
Read more

article-image-primer-agi-asterisk-gateway-interface
Packt
16 Oct 2009
2 min read
Save for later

A Primer to AGI: Asterisk Gateway Interface

Packt
16 Oct 2009
2 min read
How does AGI work

Let's examine the following diagram:

As the previous diagram illustrates, an AGI script communicates with Asterisk via two standard data streams: STDIN (Standard Input) and STDOUT (Standard Output). From the AGI script's point of view, any input coming in from Asterisk is considered STDIN, while output to Asterisk is considered STDOUT.

The idea of using STDIN/STDOUT data streams with applications isn't a new one, even if you're a junior-level programmer. Think of it as regarding any input from Asterisk with a read directive and outputting to Asterisk with a print or echo directive. When thinking about it in such a simplistic manner, it is clear that AGI scripts can be written in any scripting or programming language, ranging from BASH scripting, through Perl/PHP scripting, to even writing C/C++ programs to perform the same task.

Let's now examine how an AGI script is invoked from within the Asterisk dialplan:

exten => _X.,1,AGI(some_script_name.agi,param1,param2,param3)

As you can see, the invocation is similar to the invocation of any other Asterisk dialplan application. However, there is one major difference between a regular dialplan application and an AGI script: the resources an AGI script consumes. While an internal application consumes a well-known set of resources from Asterisk, an AGI script simply hands over control to an external process. Thus, the resources required to execute the external AGI script are now unknown, while at the same time, Asterisk consumes the resources for managing the execution of the AGI script.

Ok, so BASH isn't much of a resource hog, but what about Java? This means that the choice of programming language for your AGI scripts is important. Choosing the wrong programming language can often lead to slow systems and, in most cases, non-operational systems. While one may argue that the underlying programming language has a direct impact on the performance of your AGI application, it is imperative to learn the impact of each. To be more exact, it's not the language itself, but the technology of the programming language runtime that is important. The following table tries to distinguish between three programming language families and their applicability to AGI development.
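Whichever language family the comparison favors, the mechanics of the STDIN/STDOUT exchange described above stay the same. The following is a minimal, hedged sketch of an AGI script in Python (the choice of Python and the VERBOSE message are arbitrary illustrations, not part of the original text): it reads the AGI variables Asterisk sends at startup, then issues a single command and reads the reply.

#!/usr/bin/env python
import sys

# Asterisk sends a block of "agi_variable: value" lines, terminated by an empty line
env = {}
while True:
    line = sys.stdin.readline().strip()
    if not line:
        break
    key, sep, value = line.partition(':')
    env[key.strip()] = value.strip()

# Commands are written to STDOUT; Asterisk answers with a "200 result=..." line on STDIN
sys.stdout.write('VERBOSE "AGI script started" 1\n')
sys.stdout.flush()
reply = sys.stdin.readline().strip()

A real script would parse the reply (and every subsequent response) before deciding what to do next, but even this skeleton shows why the runtime matters: every AGI invocation spawns such an external process.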
Read more
article-image-plotting-data-using-matplotlib-part-2
Packt
19 Nov 2009
15 min read
Save for later

Plotting data using Matplotlib: Part 2

Packt
19 Nov 2009
15 min read
Plotting data from a CSV file

A common format to export and distribute datasets is the Comma-Separated Values (CSV) format. For example, spreadsheet applications allow us to export a CSV from a working sheet, and some databases also allow for CSV data export. Additionally, it's a common format to distribute datasets on the Web.

In this example, we'll be plotting the evolution of the world's population divided by continents, between 1950 and 2050 (of course, the later values are predictions), using a new type of graph: stacked bars. Using the data available at http://www.xist.org/earth/pop_continent.aspx (which fetches data from the official UN data at http://esa.un.org/unpp/index.asp), we have prepared the following CSV file:

Continent,1950,1975,2000,2010,2025,2050
Africa,227270,418765,819462,1033043,1400184,1998466
Asia,1402887,2379374,3698296,4166741,4772523,5231485
Europe,547460,676207,726568,732759,729264,691048
Latin America,167307,323323,521228,588649,669533,729184
Northern America,171615,242360,318654,351659,397522,448464
Oceania,12807,21286,31160,35838,42507,51338

In the first line, we can find the header with a description of what the data in the columns represent. The other lines contain the continent's name and its population (in thousands) for the given years.

There are several ways to parse a CSV file, for example:

NumPy's loadtxt() (what we are going to use here)
Matplotlib's mlab.csv2rec()
The csv module (in the standard library)

but we decided to go with loadtxt() because it's very powerful (and it's what Matplotlib is standardizing on). Let's look at how we can plot it then:

# for file opening made easier
from __future__ import with_statement

We need this because we will use the with statement to read the file.

# numpy
import numpy as np

NumPy is used to load the CSV and for its useful array data type.

# matplotlib plotting module
import matplotlib.pyplot as plt
# matplotlib colormap module
import matplotlib.cm as cm
# needed for formatting Y axis
from matplotlib.ticker import FuncFormatter
# Matplotlib font manager
import matplotlib.font_manager as font_manager

In addition to the classic pyplot module, we need other Matplotlib submodules:

cm (color map): Considering the way we're going to prepare the plot, we need to specify the color map of the graphical elements
FuncFormatter: We will use this to change the way the Y-axis labels are displayed
font_manager: We want to have a legend with a smaller font, and font_manager allows us to do that

def billions(x, pos):
    """Formatter for Y axis, values are in billions"""
    return '%1.fbn' % (x*1e-6)

This is the function that we will use to format the Y-axis labels. Our data is in thousands, so by dividing it by one million, we obtain values in the order of billions. The function is called for every label to draw, passing the label value and the position.

# bar width
width = .8

As said earlier, we will plot bars, and here we define their width.

The following is the parsing code. We know that it's a bit hard to follow (the data preparation code is usually the hardest part), but we will show how powerful it is.

# open CSV file
with open('population.csv') as f:

The function we're going to use, NumPy loadtxt(), is able to receive either a filename or a file descriptor, as in this case.
We have to open the file here because we have to strip the header line from the rest of the file and set up the data parsing structures. # read the first line, splitting the yearsyears = map(int, f.readline().split(',')[1:]) Here we read the first line, the header, and extract the years. We do that by calling the split() function and then mapping the int() function to the resulting list, from the second element onwards (as the first one is a string). # we prepare the dtype for exacting data; it's made of:# <1 string field> <len(years) integers fields>dtype = [('continents', 'S16')] + [('', np.int32)]*len(years) NumPy is flexible enough to allow us to define new data types. Here, we are creating one ad hoc for our data lines: a string (of maximum 16 characters) and as many integers as the length of years list. Also note how the fi rst element has a name, continents, while the last integers have none: we will need this in a bit. # we load the file, setting the delimiter and the dtype abovey = np.loadtxt(f, delimiter=',', dtype=dtype) With the new data type, we can actually call loadtxt(). Here is the description of the parameters: f: This is the file descriptor. Please note that it now contains all the lines except the first one (we've read above) which contains the headers, so no data is lost. delimiter: By default, loadtxt() expects the delimiter to be spaces, but since we are parsing a CSV file, the separator is comma. dtype: This is the data type that is used to apply to the text we read. By default, loadtxt() tries to match against float values # "map" the resulting structure to be easily accessible:# the first column (made of string) is called 'continents'# the remaining values are added to 'data' sub-matrix# where the real data arey = y.view(np.dtype([('continents', 'S16'), ('data', np.int32, len(years))])) Here we're using a trick: we view the resulting data structure as made up of two parts, continents and data. It's similar to the dtype that we defined earlier, but with an important difference. Now, the integer's values are mapped to a field name, data. This results in the column continents with all the continents names,and the matrix data that contains the year's values for each row of the file. data = y['data']continents = y['continents'] We can separate the data and the continents part into two variables for easier usage in the code. # prepare the bottom arraybottom = np.zeros(len(years)) We prepare an array of zeros of the same length as years. As said earlier, we plot stacked bars, so each dataset is plot over the previous ones, thus we need to know where the bars below finish. The bottom array keeps track of this, containing the height of bars already plotted. # for each line in datafor i in range(len(data)): Now that we have our information in data, we can loop over it. # create the bars for each element, on top of the previous barsbt = plt.bar(range(len(data[i])), data[i], width=width, color=cm.hsv(32*i), label=continents[i], bottom=bottom) and create the stacked bars. Some important notes: We select the the i-th row of data, and plot a bar according to its element's size (data[i]) with the chosen width. As the bars are generated in different loops, their colors would be all the same. To avoid this, we use a color map (in this case hsv), selecting a different color at each iteration, so the sub-bars will have different colors. We label each bar set with the relative continent's name (useful for the legend) As we have said, they are stacked bars. 
In fact, every iteration adds a piece of the global bars. To do so, we need to know where to start drawing the bar from (the lower limit) and bottom does this. It contains the value where to start drowing the current bar. # update the bottom arraybottom += data[i] We update the bottom array. By adding the current data line, we know what the bottom line will be to plot the next bars on top of it. # label the X ticks with yearsplt.xticks(np.arange(len(years))+width/2, [int(year) for year in years]) We then add the tick's labels, the years elements, right in the middle of the bar. # some information on the plotplt.xlabel('Years')plt.ylabel('Population (in billions)')plt.title('World Population: 1950 - 2050 (predictions)') Add some information to the graph. # draw a legend, with a smaller fontplt.legend(loc='upper left', prop=font_manager.FontProperties(size=7)) We now draw a legend in the upper-left position with a small font (to better fit the empty space). # apply the custom function as Y axis formatterplt.gca().yaxis.set_major_formatter(FuncFormatter(billions) Finally, we change the Y-axis label formatter, to use the custom formatting function that we defined earlier. The result is the next screenshot where we can see the composition of the world population divided by continents: In the preceding screenshot, the whole bar represents the total world population, and the sections in each bar tell us about how much a continent contributes to it. Also observe how the custom color map works: from bottom to top, we have represented Africa in red, Asia in orange, Europe in light green, Latin America in green, Northern America in light blue, and Oceania in blue (barely visible as the top of the bars). Plotting extrapolated data using curve fitting While plotting the CSV values, we have seen that there were some columns representing predictions of the world population in the coming years. We'd like to show how to obtain such predictions using the mathematical process of extrapolation with the help of curve fitting. Curve fitting is the process of constructing a curve (a mathematical function) that better fits to a series of data points. This process is related to other two concepts: interpolation: A method of constructing new data points within the range of a known set of points extrapolation: A method of constructing new data points outside a known set of points The results of extrapolation are subject to a greater degree of uncertainty and are influenced a lot by the fitting function that is used. So it works this way: First, a known set of measures is passed to the curve fitting procedure that computes a function to approximate these values With this function, we can compute additional values that are not present in the original dataset Let's first approach curve fitting with a simple example: # Numpy and Matplotlibimport numpy as npimport matplotlib.pyplot as plt These are the classic imports. # the known points setdata = [[2,2],[5,0],[9,5],[11,4],[12,7],[13,11],[17,12]] This is the data we will use for curve fitting. They are the points on a plane (so each has a X and a Y component) # we extract the X and Y components from previous pointsx, y = zip(*data) We aggregate the X and Y components in two distinct lists. # plot the data points with a black crossplt.plot(x, y, 'kx') Then plot the original dataset as a black cross on the Matplotlib image. 
# we want a bit more data and more fine grained for# the fitting functionsx2 = np.arange(min(x)-1, max(x)+1, .01) We prepare a new array for the X values because we wish to have a wider set of values (one unit on the right and one on to the left of the original list) and a fine grain to plot the fitting function nicely. # lines styles for the polynomialsstyles = [':', '-.', '--'] To differentiate better between the polynomial lines, we now define their styles list. # getting style and count one at timefor d, style in enumerate(styles): Then we loop over that list by also considering the item count. # degree of the polynomialdeg = d + 1 We define the actual polynomial degree. # calculate the coefficients of the fitting polynomialc = np.polyfit(x, y, deg) Then compute the coefficients of the fitting polynomial whose general format is: c[0]*x**deg + c[1]*x**(deg – 1) + ... + c[deg]# we evaluate the fitting function against x2y2 = np.polyval(c, x2) Here, we generate the new values by evaluating the fitting polynomial against the x2 array. # and then we plot itplt.plot(x2, y2, label="deg=%d" % deg, linestyle=style) Then we plot the resulting function, adding a label that indicates the degree of the polynomial and using a different style for each line. # show the legendplt.legend(loc='upper left') We then show the legend, and the final result is shown in the next screenshot: Here, the polynomial with degree=1 is drawn as a dotted blue line, the one with degree=2 is a dash-dot green line, and the one with degree=3 is a dashed red line. We can see that the higher the degree, the better is the fit of the function against the data. Let's now revert to our main intention, trying to provide an extrapolation for population data. First a note: we take the values for 2010 as real data and not predictions (well, we are quite near to that year) else we have very few values to create a realistic extrapolation. Let's see the code: # for file opening made easierfrom __future__ import with_statement# numpyimport numpy as np# matplotlib plotting moduleimport matplotlib.pyplot as plt# matplotlib colormap moduleimport matplotlib.cm as cm# Matplotlib font managerimport matplotlib.font_manager as font_manager# bar widthwidth = .8# open CSV filewith open('population.csv') as f: # read the first line, splitting the years years = map(int, f.readline().split(',')[1:]) # we prepare the dtype for exacting data; it's made of: # <1 string field> <6 integers fields> dtype = [('continents', 'S16')] + [('', np.int32)]*len(years) # we load the file, setting the delimiter and the dtype above y = np.loadtxt(f, delimiter=',', dtype=dtype) # "map" the resulting structure to be easily accessible: # the first column (made of string) is called 'continents' # the remaining values are added to 'data' sub-matrix # where the real data are y = y.view(np.dtype([('continents', 'S16'), ('data', np.int32, len(years))]))# extract fieldsdata = y['data']continents = y['continents'] This is the same code that is used for the CSV example (reported here for completeness). x = years[:-2]x2 = years[-2:] We are dividing the years into two groups: before and after 2010. This translates to split the last two elements of the years list. What we are going to do here is prepare the plot in two phases: First, we plot the data we consider certain values After this, we plot the data from the UN predictions next to our extrapolations # prepare the bottom arrayb1 = np.zeros(len(years)-2) We prepare the array (made of zeros) for the bottom argument of bar(). 
# for each line in datafor i in range(len(data)): # select all the data except the last 2 values d = data[i][:-2] For each data line, we extract the information we need, so we remove the last two values. # create bars for each element, on top of the previous barsbt = plt.bar(range(len(d)), d, width=width, color=cm.hsv(32*(i)), label=continents[i], bottom=b1)# update the bottom arrayb1 += d Then we plot the bar, and update the bottom array. # prepare the bottom arrayb2_1, b2_2 = np.zeros(2), np.zeros(2) We need two arrays because we will display two bars for the same year—one from the CSV and the other from our fitting function. # for each line in datafor i in range(len(data)): # extract the last 2 values d = data[i][-2:] Again, for each line in the data matrix, we extract the last two values that are needed to plot the bar for CSV. # select the data to compute the fitting functiony = data[i][:-2] Along with the other values needed to compute the fitting polynomial. # use a polynomial of degree 3c = np.polyfit(x, y, 3) Here, we set up a polynomial of degree 3; there is no need for higher degrees. # create a function out of those coefficientsp = np.poly1d(c) This method constructs a polynomial starting from the coefficients that we pass as parameter. # compute p on x2 values (we need integers, so the map)y2 = map(int, p(x2)) We use the polynomial that was defined earlier to compute its values for x2. We also map the resulting values to integer, as the bar() function expects them for height. # create bars for each element, on top of the previous barsbt = plt.bar(len(b1)+np.arange(len(d)), d, width=width/2, color=cm.hsv(32*(i)), bottom=b2_1) We draw a bar for the data from the CSV. Note how the width is half of that of the other bars. This is because in the same width we will draw the two sets of bars for a better visual comparison. # create the bars for the extrapolated valuesbt = plt.bar(len(b1)+np.arange(len(d))+width/2, y2, width=width/2, color=cm.bone(32*(i+2)), bottom=b2_2) Here, we plot the bars for the extrapolated values, using a dark color map so that we have an even better separation for the two datasets. # update the bottom arrayb2_1 += db2_2 += y2 We update both the bottom arrays. # label the X ticks with yearsplt.xticks(np.arange(len(years))+width/2, [int(year) for year in years]) We add the years as ticks for the X-axis. # draw a legend, with a smaller fontplt.legend(loc='upper left', prop=font_manager.FontProperties(size=7)) To avoid a very big legend, we used only the labels for the data from the CSV, skipping the interpolated values. We believe it's pretty clear what they're referring to. Here is the screenshot that is displayed on executing this example: The conclusion we can draw from this is that the United Nations uses a different function to prepare the predictions, especially because they have a continuous set of information, and they can also take into account other environmental circumstances while preparing such predictions. Tools using Matplotlib Given that it's has an easy and powerful API, Matplotlib is also used inside other programs and tools when plotting is needed. We are about to present a couple of these tools: NetworkX Mpmath
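As a small taste of the first of these, the following hedged sketch shows how NetworkX hands its drawing off to Matplotlib; the graph chosen here is arbitrary and only serves to illustrate the integration:

import matplotlib.pyplot as plt
import networkx as nx

# build a small sample graph and let NetworkX draw it through Matplotlib
G = nx.petersen_graph()
nx.draw(G, with_labels=True)
plt.show()

Because nx.draw() simply issues Matplotlib drawing commands, everything covered earlier about figures, labels, and saving output applies to such plots as well.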
Read more

article-image-creating-jsf-composite-component
Packt
22 Oct 2014
9 min read
Save for later

Creating a JSF composite component

Packt
22 Oct 2014
9 min read
This article by David Salter, author of the book, NetBeans IDE 8 Cookbook, explains how to create a JSF composite component in NetBeans. (For more resources related to this topic, see here.) JSF is a rich component-based framework, which provides many components that developers can use to enrich their applications. JSF 2 also allows composite components to be easily created, which can then be inserted into other JSF pages in a similar way to any other JSF components such as buttons and labels. In this article, we'll see how to create a custom component that displays an input label and asks for corresponding input. If the input is not validated by the JSF runtime, we'll show an error message. The component is going to look like this: The custom component is built up from three different standard JSF components. On the left, we have a <h:outputText/> component that displays the label. Next, we have a <h:inputText /> component. Finally, we have a <h:message /> component. Putting these three components together like this is a very useful pattern when designing input forms within JSF. Getting ready To create a JSF composite component, you will need to have a working installation of WildFly that has been configured within NetBeans. We will be using the Enterprise download bundle of NetBeans as this includes all of the tools we need without having to download any additional plugins. How to do it… First of all, we need to create a web application and then create a JSF composite component within it. Perform the following steps: Click on File and then New Project…. Select Java Web from the list of Categories and Web Application form the list of Projects. Click on Next. Enter the Project Name value as CompositeComp. Click on Next. Ensure that Add to Enterprise Application is set to <None>, Server is set to WildFly Application Server, Java EE Version is set to Java EE 7 Web, and Context Path is set to /CompositeComp. Click on Next. Click on the checkbox next to JavaServer Faces as we are using this framework. All of the default JSF configurations are correct, so click on the Finish button to create the project. Right-click on the CompositeComp project within the Projects explorer and click on New and then Other…. In the New File dialog, select JavaServer Faces from the list of Categories and JSF Composite Component from the list of File Types. Click on Next. On the New JSF Composite Component dialog, enter the File Name value as inputWithLabel and change the folder to resourcescookbook. Click on Finish to create the custom component. In JSF, custom components are created as Facelets files that are stored within the resources folder of the web application. Within the resources folder, multiple subfolders can exist, each representing a namespace of a custom component. Within each namespace folder, individual custom components are stored with filenames that match the composite component names. We have just created a composite component within the cookbook namespace called inputWithLabel. Within each composite component file, there are two sections: an interface and an implementation. The interface lists all of the attributes that are required by the composite component and the implementation provides the XHTML code to represent the component. Let's now define our component by specifying the interface and the implementation. Perform the following steps: The inputWithLabel.xhtml file should be open for editing. If not, double–click on it within the Projects explorer to open it. 
For our composite component, we need two attributes to be passed into the component. We need the text for the label and the expression language to bind the input box to. Change the interface section of the file to read:    <cc:attribute name="labelValue" />   <cc:attribute name="editValue" /></cc:interface> To render the component, we need to instantiate a <h:outputText /> tag to display the label, a <h:inputText /> tag to receive the input from the user, and a <h:message /> tag to display any errors that are entered for the input field. Change the implementation section of the file to read: <cc:implementation>   <style>   .outputText{width: 100px; }   .inputText{width: 100px; }   .errorText{width: 200px; color: red; }   </style>   <h:panelGrid id="panel" columns="3" columnClasses="outputText, inputText, errorText">       <h:outputText value="#{cc.attrs.labelValue}" />       <h:inputText value="#{cc.attrs.editValue}" id="inputText" />       <h:message for="inputText" />   </h:panelGrid></cc:implementation> Click on the lightbulb on the left-hand side of the editor window and accept the fix to add the h=http://><html       > We can now reference the composite component from within the Facelets page. Add the following code inside the <h:body> code on the page: <h:form id="inputForm">   <cookbook:inputWithLabel labelValue="Forename" editValue="#{personController.person.foreName}"/>   <cookbook:inputWithLabel labelValue="Last Name" editValue="#{personController.person.lastName}"/>   <h:commandButton type="submit" value="Submit" action="#{personController.submit}"/></h:form> This code instantiates two instances of our inputWithLabel composite control and binds them to personController. We haven't got one of those yet, so let's create one and a class to represent a person. Perform the following steps: Create a new Java class within the project. Enter Class Name as Person and Package as com.davidsalter.cookbook.compositecomp. Click on Finish. Add members to the class to represent foreName and lastName: private String foreName;private String lastName; Use the Encapsulate Fields refactoring to generate getters and setters for these members. To allow error messages to be displayed if the foreName and lastName values are inputted incorrectly, we will add some Bean Validation annotations to the attributes of the class. Annotate the foreName member of the class as follows: @NotNull@Size(min=1, max=25)private String foreName; Annotate the lastName member of the class as follows: @NotNull@Size(min=1, max=50)private String lastName; Use the Fix Imports tool to add the required imports for the Bean Validation annotations. Create a new Java class within the project. Enter Class Name as PersonController and Package as com.davidsalter.cookbook.compositecomp. Click on Finish. We need to make the PersonController class an @Named bean so that it can be referenced via expression language from within JSF pages. Annotate the PersonController class as follows: @Named@RequestScopedpublic class PersonController { We need to add a Person instance into PersonController that will be used to transfer data from the JSF page to the named bean. We will also need to add a method onto the bean that will redirect JSF to an output page after the names have been entered. 
Add the following to the PersonController class: private Person person = new Person();public Person getPerson() {   return person;}public void setPerson(Person person) {   this.person = person;}public String submit() {   return "results.xhtml";} The final task before completing our application is to add a results page so we can see what input the user entered. This output page will simply display the values of foreName and lastName that have been entered. Create a new JSF page called results that uses the Facelets syntax. Change the <h:body> tag of this page to read: <h:body>   You Entered:   <h:outputText value="#{personController.person.foreName}" />&nbsp;   <h:outputText value="#{personController.person.lastName}" /></h:body> The application is now complete. Deploy and run the application by right-clicking on the project within the Projects explorer and selecting Run. Note that two instances of the composite component have been created and displayed within the browser. Click on the Submit button without entering any information and note how the error messages are displayed: Enter some valid information and click on Submit, and note how the information entered is echoed back on a second page. How it works… Creating composite components was a new feature added to JSF 2. Creating JSF components was a very tedious job in JSF 1.x, and the designers of JSF 2 thought that the majority of custom components created in JSF could probably be built by adding different existing components together. As it is seen, we've added together three different existing JSF components and made a very useful composite component. It's useful to distinguish between custom components and composite components. Custom components are entirely new components that did not exist before. They are created entirely in Java code and build into frameworks such as PrimeFaces and RichFaces. Composite components are built from existing components and their graphical view is designed in the .xhtml files. There's more... When creating composite components, it may be necessary to specify attributes. The default option is that the attributes are not mandatory when creating a custom component. They can, however, be made mandatory by adding the required="true" attribute to their definition, as follows: <cc:attribute name="labelValue" required="true" /> If an attribute is specified as required, but is not present, a JSF error will be produced, as follows: /index.xhtml @11,88 <cookbook:inputWithLabel> The following attribute(s) are required, but no values have been supplied for them: labelValue. Sometimes, it can be useful to specify a default value for an attribute. This is achieved by adding the default="…" attribute to their definition: <cc:attribute name="labelValue" default="Please enter a value" /> Summary In this article, we have learned to create a JSF composite component using NetBeans. Resources for Article: Further resources on this subject: Creating a Lazarus Component [article] Top Geany features you need to know about [article] Getting to know NetBeans [article]

Python Built-in Functions

Packt
24 Dec 2010
10 min read
  Python 3 Object Oriented Programming Harness the power of Python 3 objects Learn how to do Object Oriented Programming in Python using this step-by-step tutorial Design public interfaces using abstraction, encapsulation, and information hiding Turn your designs into working software by studying the Python syntax Raise, handle, define, and manipulate exceptions using special error objects Implement Object Oriented Programming in Python using practical examples         Read more about this book       (For more resources on Python, see here.) There are numerous functions in Python that perform a task or calculate a result on certain objects without being methods on the class. Their purpose is to abstract common calculations that apply to many types of classes. This is applied duck typing; these functions accept objects with certain attributes or methods that satisfy a given interface, and are able to perform generic tasks on the object. Len The simplest example is the len() function. This function counts the number of items in some kind of container object such as a dictionary or list. For example: >>> len([1,2,3,4])4 Why don't these objects have a length property instead of having to call a function on them? Technically, they do. Most objects that len() will apply to have a method called __len__() that returns the same value. So len(myobj) seems to callmyobj.__len__(). Why should we use the function instead of the method? Obviously the method is a special method with double-underscores suggesting that we shouldn't call it directly. There must be an explanation for this. The Python developers don't make such design decisions lightly. The main reason is efficiency. When we call __len__ on an object, the object has to look the method up in its namespace, and, if the special __getattribute__ method (which is called every time an attribute or method on an object is accessed) is defined on that object, it has to be called as well. Further __getattribute__ for that particular method may have been written to do something nasty like refusing to give us access to special methods such as __len__! The len function doesn't encounter any of this. It actually calls the __len__ function on the underlying class, so len(myobj) maps to MyObj.__len__(myobj). Another reason is maintainability. In the future, the Python developers may want to change len() so that it can calculate the length of objects that don't have a __len__, for example by counting the number of items returned in an iterator. They'll only have to change one function instead of countless __len__ methods across the board. Reversed The reversed() function takes any sequence as input, and returns a copy of that sequence in reverse order. It is normally used in for loops when we want to loop over items from back to front. Similar to len, reversed calls the __reversed__() function on the class for the parameter. If that method does not exist, reversed builds the reversed sequence itself using calls to __len__ and __getitem__. 
We only need to override __reversed__ if we want to somehow customize or optimize the process: normal_list=[1,2,3,4,5]class CustomSequence(): def __len__(self): return 5 def __getitem__(self, index): return "x{0}".format(index)class FunkyBackwards(CustomSequence): def __reversed__(self): return "BACKWARDS!"for seq in normal_list, CustomSequence(), FunkyBackwards(): print("n{}: ".format(seq.__class__.__name__), end="") for item in reversed(seq): print(item, end=", ") The for loops at the end print the reversed versions of a normal list, and instances of the two custom sequences. The output shows that reversed works on all three of them, but has very different results when we define __reversed__ ourselves: list: 5, 4, 3, 2, 1,CustomSequence: x4, x3, x2, x1, x0,FunkyBackwards: B, A, C, K, W, A, R, D, S, !, Note: the above two classes aren't very good sequences, as they don't define a proper version of __iter__ so a forward for loop over them will never end. Enumerate Sometimes when we're looping over an iterable object in a for loop, we want access to the index (the current position in the list) of the current item being processed. The for loop doesn't provide us with indexes, but the enumerate function gives us something better: it creates a list of tuples, where the first object in each tuple is the index and the second is the original item. This is useful if we want to use index numbers directly. Consider some simple code that outputs all the lines in a file with line numbers: import sysfilename = sys.argv[1]with open(filename) as file: for index, line in enumerate(file): print("{0}: {1}".format(index+1, line), end='') Running this code on itself as the input file shows how it works: 1: import sys2: filename = sys.argv[1]3:4: with open(filename) as file:5: for index, line in enumerate(file):6: print("{0}: {1}".format(index+1, line), end='') The enumerate function returns a list of tuples, our for loop splits each tuple into two values, and the print statement formats them together. It adds one to the index for each line number, since enumerate, like all sequences is zero based. Zip The zip function is one of the least object-oriented functions in Python's collection. It takes two or more sequences and creates a new sequence of tuples. Each tuple contains one element from each list. This is easily explained by an example; let's look at parsing a text file. Text data is often stored in tab-delimited format, with a "header" row as the first line in the file, and each line below it describing data for a unique record. A simple contact list in tab-delimited format might look like this: first last emailjohn smith jsmith@example.comjane doan janed@example.comdavid neilson dn@example.com A simple parser for this file can use zip to create lists of tuples that map headers to values. These lists can be used to create a dictionary, a much easier object to work with in Python than a file! import sysfilename = sys.argv[1]contacts = []with open(filename) as file: header = file.readline().strip().split('t') for line in file: line = line.strip().split('t') contact_map = zip(header, line) contacts.append(dict(contact_map))for contact in contacts: print("email: {email} -- {last}, {first}".format( **contact)) What's actually happening here? First we open the file, whose name is provided on the command line, and read the first line. We strip the trailing newline, and split what's left into a list of three elements. 
We pass 't' into the strip method to indicate that the string should be split at tab characters. The resulting header list looks like ["first", "last", "email"]. Next, we loop over the remaining lines in the file (after the header). We split each line into three elements. Then, we use zip to create a sequence of tuples for each line. The first sequence would look like [("first", "john"), ("last", "smith"), ("email", "jsmith@example.com")]. Pay attention to what zip is doing. The first list contains headers; the second contains values. The zip function created a tuple of header/value pairs for each matchup. The dict constructor takes the list of tuples, and maps the first element to a key and the second to a value to create a dictionary. The result is added to a list. At this point, we are free to use dictionaries to do all sorts of contact-related activities. For testing, we simply loop over the contacts and output them in a different format. The format line, as usual, takes variable arguments and keyword arguments. The use of **contact automatically converts the dictionary to a bunch of keyword arguments (we'll understand this syntax before the end of the chapter) Here's the output: email: jsmith@example.com -- smith, johnemail: janed@example.com -- doan, janeemail: dn@example.com -- neilson, david If we provide zip with lists of different lengths, it will stop at the end of the shortest list. There aren't many useful applications of this feature, but zip will not raise an exception if that is the case. We can always check the list lengths and add empty values to the shorter list, if necessary. The zip function is actually the inverse of itself. It can take multiple sequences and combine them into a single sequence of tuples. Because tuples are also sequences, we can "unzip" a zipped list of tuples by zipping it again. Huh? Have a look at this example: >>> list_one = ['a', 'b', 'c']>>> list_two = [1, 2, 3]>>> zipped = zip(list_one, list_two)>>> zipped = list(zipped)>>> zipped[('a', 1), ('b', 2), ('c', 3)]>>> unzipped = zip(*zipped)>>> list(unzipped)[('a', 'b', 'c'), (1, 2, 3)] First we zip the two lists and convert the result into a list of tuples. We can then use parameter unpacking to pass these individual sequences as arguments to the zip function. zip matches the first value in each tuple into one sequence and the second value into a second sequence; the result is the same two sequences we started with! Other functions Another key function is sorted(), which takes an iterable as input, and returns a list of the items in sorted order. It is very similar to the sort() method on lists, the difference being that it works on all iterables, not just lists. Like list.sort, sorted accepts a key argument that allows us to provide a function to return a sort value for each input. It can also accept a reverse argument. Three more functions that operate on sequences are min, max, and sum. These each take a sequence as input, and return the minimum or maximum value, or the sum of all values in the sequence. Naturally, sum only works if all values in the sequence are numbers. The max and min functions use the same kind of comparison mechanism as sorted and list.sort, and allow us to define a similar key function. 
For example, the following code uses enumerate, max, and min to return the indices of the values in a list with the maximum and minimum value: def min_max_indexes(seq): minimum = min(enumerate(seq), key=lambda s: s[1]) maximum = max(enumerate(seq), key=lambda s: s[1]) return minimum[0], maximum[0] The enumerate call converts the sequence into (index, item) tuples. The lambda function passed in as a key tells the function to search the second item in each tuple (the original item). The minimum and maximum variables are then set to the appropriate tuples returned by enumerate. The return statement takes the first value (the index from enumerate) of each tuple and returns the pair. The following interactive session shows how the returned values are, indeed, the indices of the minimum and maximum values: >>> alist = [5,0,1,4,6,3]>>> min_max_indexes(alist)(1, 4)>>> alist[1], alist[4](0, 6) We've only touched on a few of the more important Python built-in functions. There are numerous others in the standard library, including: all and any, which accept an iterable and returns True if all, or any, of the items evaluate to true (that is a non-empty string or list, a non-zero number, an object that is not None, or the literal True). eval, exec, and compile, which execute string as code inside the interpreter. hasattr, getattr, setattr, and delattr, which allow attributes on an object to be manipulated as string names. And many more! See the interpreter help documentation for each of the functions listed in dir(__builtins__). Summary In this article we took a look at many useful built-in functions. Further resources on this subject: Python Graphics: Animation Principles [Article] Animating Graphic Objects using Python [Article] Python 3: When to Use Object-oriented Programming [Article] Objects in Python [Article]
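As a brief addendum to the built-ins covered in this article, the following sketch ties a few of them together on some made-up contact data (the names and addresses are illustrative, not taken from the examples above):

# illustrative data, shaped like the zip() contact example above
headers = ["first", "last", "email"]
rows = [
    ["alice", "young", "alice@example.com"],
    ["bob", "adams", "bob@example.com"],
]

# zip each row with the headers and build dictionaries
contacts = [dict(zip(headers, row)) for row in rows]

# sorted() with a key function: order contacts by last name
by_last_name = sorted(contacts, key=lambda c: c["last"])
print([c["last"] for c in by_last_name])               # ['adams', 'young']

# any() and all(): sanity checks over the whole collection
print(any("@" not in c["email"] for c in contacts))    # False
print(all(c["first"] for c in contacts))               # True

# min() and max() accept the same kind of key argument as sorted()
print(max(contacts, key=lambda c: len(c["first"]))["first"])   # 'alice'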

Pulse-width Modulation – PWM

Packt
02 Jun 2015
7 min read
This is an article written by Rodolfo Giometti, the author of the book BeagleBone Essentials. The pulse-width modulation (PWM) is a technique used to encode a message in a pulsing signal, to generate an analog signal using a digital source, to allow the control of the power supplied to electrical devices, such as electrical motors, or to set the position of a servo motor. In this article, we're going to discuss how an embedded developer can use the BeagleBone Black's PWM generator to control a servo motor with a few Bash commands. (For more resources related to this topic, see here.) What is a PWM generator? A PWM generator is a device that can generate a PWM signal, according to its internal settings. The output of a PWM generator is just a sequence of pulse signals as square waveforms with well-defined characteristics: By referring to the preceding diagram, where we have a simple PWM waveform, we can define the following parameters: Amplitude (A): This is the difference between the maximum output value (ymax) and the minimum one (ymin) Period (T): This is the duration of one cycle of the output square waveform Duty cycle (dc): This is the ratio(in percentage) between the high state time (thigh) and the period (T) In the preceding diagram, the amplitude is 5 V (ymax=5 V and ymin=0 V), the period is 1 ms (the wave is periodic and it repeats itself every 0.001 seconds), and the duty cycle is 25 percent (thigh=0.25 ms and T=1 ms). You can find the details about the PWM at https://en.wikipedia.org/wiki/Pulse-width_modulation. The electrical lines The PWM generator lines are reported in the following table: Name Description PWM output The PWM output signal GND Common ground The PWM in Linux Our BeagleBone Black has eight PWM generators, and even if some of them may have their output lines multiplexed with another device, they cannot be used without disabling the conflicting device. The complete list is reported at the BeagleBone Black's support page (http://beagleboard.org/support/bone101) and summarized in the following table: Name PWM output pwm0 P9.22 or P9.31 pwm1 P9.21 or P9.29 pwm2 P9.42 pwm3 P8.36 or P9.14 pwm4 P8.34 or P9.16 pwm5 P8.19 or P8.45 pwm6 P8.13 or P8.46 pwm7 P9.28 In the preceding table, the notation P9.22 means that pin 22 is on the expansion connector P9. 
We can directly get these values from the BeagleBone Black firmware settings, using the following command: root@BeagleBone:~# dtc -I dtb -O dts <dtbo> | grep exclusive-use Here, <dtbo> is one of the firmware files available in the /lib/firmware/ directory: root@beaglebone:~# ls /lib/firmware/bone_pwm_*.dtbo/lib/firmware/bone_pwm_P8_13-00A0.dtbo    /lib/firmware/bone_pwm_P9_16-00A0.dtbo/lib/firmware/bone_pwm_P8_19-00A0.dtbo    /lib/firmware/bone_pwm_P9_21-00A0.dtbo/lib/firmware/bone_pwm_P8_34-00A0.dtbo    /lib/firmware/bone_pwm_P9_22-00A0.dtbo/lib/firmware/bone_pwm_P8_36-00A0.dtbo    /lib/firmware/bone_pwm_P9_28-00A0.dtbo/lib/firmware/bone_pwm_P8_45-00A0.dtbo    /lib/firmware/bone_pwm_P9_29-00A0.dtbo/lib/firmware/bone_pwm_P8_46-00A0.dtbo    /lib/firmware/bone_pwm_P9_31-00A0.dtbo/lib/firmware/bone_pwm_P9_14-00A0.dtbo    /lib/firmware/bone_pwm_P9_42-00A0.dtbo To enable a PWM generator, we have to use one of the preceding dtbo files in conjunction with the /lib/firmware/am33xx_pwm-00A0.dtbo file, as shown in the following code: root@beaglebone:~# echo am33xx_pwm > /sys/devices/bone_capemgr.9/slotsroot@beaglebone:~# echo bone_pwm_P9_22 > /sys/devices/bone_capemgr.9/slots This should cause the following kernel messages activity: [   31.350494] bone-capemgr bone_capemgr.9: slot #7: Applied #8 overlays.[   46.144068] bone-capemgr bone_capemgr.9: part_number 'bone_pwm_P9_22', version'N/A'[   46.144266] bone-capemgr bone_capemgr.9: slot #8: generic override[   46.144319] bone-capemgr bone_capemgr.9: bone: Using override eeprom dataat slot 8[   46.144374] bone-capemgr bone_capemgr.9: slot #8: 'Override BoardName,00A0,Override Manuf,bone_pwm_P9_22'[   46.144640] bone-capemgr bone_capemgr.9: slot #8: Requesting part number/version based 'bone_pwm_P9_22-00A0.dtbo[   46.144698] bone-capemgr bone_capemgr.9: slot #8: Requesting firmware'bone_pwm_P9_22-00A0.dtbo' for board-name 'Override Board Name',version '00A0'[   46.144762] bone-capemgr bone_capemgr.9: slot #8: dtbo 'bone_pwm_P9_22-00A0.dtbo' loaded; converting to live tree[   46.148901] bone-capemgr bone_capemgr.9: slot #8: #2 overlays[   46.155642] bone-capemgr bone_capemgr.9: slot #8: Applied #2 overlays Now, in the /sys/devices/ocp.3/ directory, we should see a new directory named pwm_test_P9_22.12, where we have the following files: root@beaglebone:~# ls /sys/devices/ocp.3/pwm_test_P9_22.12/driver  duty  modalias  period  polarity  power  run  subsystem  uevent Here, the important files are period, duty, and polarity. In the period file, we can store the period (T) of the desired PWM waveform in nanoseconds (ns), while in duty, we can store the high state time (thigh) in nanoseconds (of course, with this parameter, we can set the duty cycle too). In the end, with the polarity file, we can invert the waveform polarity (that is, by swapping the high state and low state). For example, the waveform of the preceding figure can be obtained using the following commands: root@beaglebone:~# echo 1000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/periodroot@beaglebone:~# echo 250000 > /sys/devices/ocp.3/pwm_test_P9_22.12/ Managing a servo motor To show you how to use a PWM generator in order to manage a peripheral, we can use a servo motor. This is a really simple motor where we can set a specific gear position by setting a proper duty cycle of the PWM signal. The following image shows the servo motor used in this example: The device can be purchased at (or by surfing the Internet) http://www.cosino.io/product/nano-servo-motor. 
First of all, we have to set up the electrical connections. In the following table, we have reported the correspondence between the BeagleBone Black's pins and the servo motor's cables:

BeagleBone Black pins – Label    Servo motor's cables – Color
P9.4 – Vcc                       Red
P9.22                            Yellow
P9.2 – GND                       Black

By taking a look at the datasheet available at http://hitecrcd.com/files/Servomanual.pdf, we discover that the servo can be managed using a periodic square waveform with a 20 ms period (T) and a high state time (thigh) between 0.9 ms and 2.1 ms, with 1.5 ms as (more or less) the center. So, once connected, we can set the center position using the following settings:

root@beaglebone:~# echo 0 > /sys/devices/ocp.3/pwm_test_P9_22.12/polarity
root@beaglebone:~# echo 20000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/period
root@beaglebone:~# echo 1500000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

Then, we can move the gear totally clockwise using the following command:

root@beaglebone:~# echo 2100000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

We can move the gear totally anticlockwise using the following command:

root@beaglebone:~# echo 900000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

Summary

In this article, we discovered that managing a BeagleBone Black's PWM generator is really as simple as controlling a servo motor!

Resources for Article:

Further resources on this subject: Home Security by BeagleBone [Article] Protecting GPG Keys in BeagleBone [Article] Learning BeagleBone [Article]
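To tie the commands above together, here is a small shell sketch that sweeps the servo back and forth through its range by rewriting the duty file. The step size and sleep interval are arbitrary choices, and the pwm_test_P9_22.12 path is assumed to be the one created earlier in this article.

#!/bin/bash
# sweep the servo between its end positions via the sysfs PWM files
PWM=/sys/devices/ocp.3/pwm_test_P9_22.12

echo 0        > $PWM/polarity   # normal polarity
echo 20000000 > $PWM/period     # 20 ms period

while true; do
    # from anticlockwise (0.9 ms) to clockwise (2.1 ms) in 0.1 ms steps
    for duty in $(seq 900000 100000 2100000); do
        echo $duty > $PWM/duty
        sleep 0.2
    done
done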

The NServiceBus Architecture

Packt
25 Aug 2014
11 min read
In this article by Rich Helton, the author of Mastering NServiceBus and Persistence, we will focus on the NServiceBus architecture. We will discuss the different message and storage types supported in NSB. This discussion will include an introduction to some of the tools and advantages of using NSB. We will conceptually look at how some of the pieces fit together while backing up the discussions with code examples. (For more resources related to this topic, see here.) NSB is the cornerstone of automation. As an Enterprise Service Bus (ESB), NSB is the most popular C# ESB solution. NSB is a framework that is used to provide many of the benefits of implementing a service-oriented architecture (SOA). It uses an IBus and its ESB bus to handle messages between NSB services, without having to create custom interaction. This type of messaging between endpoints creates the bus. The services, which are autonomous Windows processes, use both Windows and NSB hosting services. NSB-hosting services provide extra functionalities, such as creating endpoints; setting up Microsoft Queuing (MSMQ), DTC for transactions across queues, subscription storage for publish/subscribe message information, NSB sagas; and much more. Deploying these pieces for messaging manually can lead to errors and a lot of work is involved to get it correct. NSB takes care of provisioning its needed pieces. NSB is not a frontend framework, such as Microsoft's Model-View-Controller (MVC). It is not used as an Object-to-Relationship Mapper (ORM), such as Microsoft's Entity Frameworks, to map objects to SQL Server tables. It is also not a web service framework, such as Microsoft's Windows Communication Foundation (WCF). NSB is a framework to provide the communication and support for services to communicate with each other and provide an end-to-end workflow to process all of these pieces. Benefits of NSB NSB provides many components needed for automation that are only found in ESBs. ESBs provide the following: Separation of duties: From the frontend to the backend by allowing the frontend to fire a message to a service and continue with its processing not worrying about the results until it needs an update. Also, you can separate workflow responsibilities by separating NSB services. One service could be used to send payments to a bank, and another service can be used to provide feedback of the current status of the payment to the MVC-EF database so that a user may see the status of their payment. Message durability: Messages are saved in queues between services so that if the services are stopped, they can start from the messages saved in the queues when they are restarted. This is done so that the messages will persist, until told otherwise. Workflow retries: Messages, or endpoints, can be told to retry a number of times until they completely fail and send an error. The error is automated to return to an error queue. For instance, a web service message can be sent to a bank, and it can be set to retry the web service every 5 minutes for 20 minutes before giving up completely. This is useful while fixing any network or server issues. Monitoring: NSB's ServicePulse can keep a check on the heartbeat of its services. Other monitoring checks can be easily performed on NSB queues to report the number of messages. Encryption: Messages between services and endpoints can be easily encrypted. 
High availability: Multiple services, or subscribers, could be processing the same or similar messages from various services that live on different servers. When one server, or a service, goes down, others could be made available to take over that are already running. More on endpoints While working with a service-to-service interaction, messages are transmitted in the form of XML through queues that are normally part of Microsoft Server such as MSMQ, SQL Server such as SQL queuing, or even part of Microsoft Azure queues for cloud computing. There are other endpoints that services use to process resources that are not part of service-to-service communications. These endpoints are used to process commands and messages as well, for instance, sending a file to non-NSB-hosted services, sending SFTP files to non-NSB-hosted services, or sending web services, such as payments, to non-NSB services. While at the other end of these communications are non-NSB-hosted services, NSB offers a lot of integrity by checking how these endpoints were processed. NSB provides information on whether a web service was processed or not, with or without errors, and provides feedback and monitoring, and maintains the records through queues. It also provides saga patterns to provide feedback to the originating NSB services of the outcome while storing messages from a particular NSB service to the NSB service of everything that has happened. In many NSB services, an audit queue is used to keep a backup of each message that occurred successfully, and the error queue is used to keep track of any message that was not processed successfully. The application security perspective From the application security perspective, OWASP's top ten list of concerns, available at https://www.owasp.org/index.php/Top_10_2013-Top_10, seems to always surround injection, such as SQL injection, broken authentication, and cross-site scripting (XSS). Once an organization puts a product in production, they usually have policies in place for the company's security personnel to scan the product at will. Not all organizations have these policies in place, but once an organization attaches their product to the Internet, there are armies of hackers that may try various methods to attack the site, depending on whether there is money to be gained or not. Money comes in a new economy these days in the form of using a site as a proxy to stage other attacks, or to grab usernames and passwords that a user may have for a different system in order to acquire a user's identity or financial information. Many companies have suffered bankruptcy over the last decades thinking that they were secure. NSB offers processing pieces to the backend that would normally be behind a firewall to provide some protection. Firewalls provide some protection as well as Intrusion Detection Systems (IDSes), but there is so much white noise for viruses and scans that many real hack attacks may go unnoticed, except by very skilled antihackers. NSB offers additional layers of security by using queuing and messaging. The messages can be encrypted, and the queues may be set for limited authorization from production administrators. NSB hosting versus self-hosting NServiceBus.Host is an executable that will deploy the NSB service. When the NSB service is compiled, it turns into a Windows DLL that may contain all the configuration settings for the IBus. 
If there are additional settings needed for the endpoint's configuration that are not coded in the IBus's configuration, then it can be resolved by setting these configurations in the Host command. However, NServiceBus.Host need not be used to create the program that is used in NServiceBus. As a developer, you can create a console program that is run by a Window's task scheduler, or even create your own services that run the NSB IBus code as an endpoint. Not using the NSB-hosting engine is normally referred to as self-hosting. The NServiceBus host streamlines service development and deployment, allows you to change technologies without code, and is administrator friendly when setting permissions and accounts. It will deploy your application as an NSB-hosted solution. It can also add configurations to your program at the NServiceBus.Host.exe command line. If you develop a program with the NServiceBus.Host reference, you can use EndpoinConfig.cs to define your IBus configuration in this code, or add it as part of the command line instead of creating your own Program.cs that will do a lot of the same work with more code. When debugging with the NServiceBus.Host reference, the Visual Studio project is creating a windows DLL program that is run by the NserviceBus.Host.exe command. Here's an example form of the properties of a Visual Studio project: The NServiceBus.Host.exe command line has support for deploying Window's services as NSB-hosted services: These configurations are typically referred to as the profile for which the service will be running. Here are some of the common profiles: MultiSite: This turns on the gateway. Master: This makes the endpoint a "master node endpoint". This means that it runs the gateway for multisite interaction, the timeout manager, and the distributor. It also starts a worker that is enlisted with the distributor. It cannot be combined with the worker or distributor profiles. Worker: This makes the current endpoint enlist as a worker with its distributor running on the master node. It cannot be combined with the master or distributor profiles. Distributor: This starts the endpoint only as a distributor. This means that the endpoint does no actual work and only distributes the load among its enlisted workers. It cannot be combined with the Master and Worker profiles. Performance counters: This turns on the NServiceBus-specific performance counters. Performance counters are installed by default when you run a Production profile. Lite: This keeps everything in memory with the most detailed logging. Integration: This uses technologies closer to production but without a scale-out option and less logging. It is used in testing. Production: This uses scale-out-friendly technologies and minimal file-based logging. It is used in production. Using Powershell commands Many items can be managed in the Package Manager console program of Visual Studio 2012. Just as we add commands to the NServiceBus.Host.exe file to extend profiles and configurations, we may also use VS2012 Package Manager to extend some of the functionalities while debugging and testing. We will use the ScaleOut solution discussed later just to double check that the performance counters are installed correctly. We need to make sure that the PowerShell commandlets are installed correctly first. 
We do this by using Package Manager: Install the package, NServiceBus.PowerShell Import the module, .packagesNServiceBus.PowerShell.4.3.0libnet40NServiceBus.PowerShell.dll Test NServiceBusPerformanceCountersInstallation The "Import module" step is dependent on where NService.PowerShell.dll was installed during the "Install package" process. The "Install-package" command will add the DLL into a package directory related to the solution. We can find out more on PowerShell commandlets at http://docs.particular.net/nservicebus/managing-nservicebus-using-powershell and even by reviewing the help section of Package Manager. Here, we see that we can insert configurations into App.config when we look at the help section, PM> get-help about_NServiceBus. Message exchange patterns Let's discuss the various exchange patterns now. The publish/subscribe pattern One of the biggest benefits of using the ESB technology is the benefits of the publish/subscribe message pattern; refer to http://en.wikipedia.org/wiki/Publish-subscribe_pattern. The publish/subscribe pattern has a publisher that sends messages to a queue, say a MSMQ MyPublisher queue. Subscribers, say Subscriber1 and Subscriber2, will listen for messages on the queue that the subscribers are defined to take from the queue. If MyPublisher cannot process the messages, it will return them to the queue or to an error queue, based on the reasons why it could not process the message. The queue that the subscribers are looking for on the queue are called endpoint mappings. The publisher endpoint mapping is usually based on the default of the project's name. This concept is the cornerstone to understand NSB and ESBs. No messages will be removed, unless they are explicitly told to be removed by a service. Therefore, no messages will be lost, and all are accounted for from the services. The configuration data is saved to the database. Also, the subscribers can respond back to MyPublisher with messages indicating that everything was alright or not using the queue. So why is this important? It's because all the messages can then be accounted for, and feedback can be provided to all the services. A service is a Windows service that is created and hosted by the NSB host program. It could also be a Windows command console program or even an MVC program, but the service program is always up and running on the server, continuously checking queues and messages that are sent to it from other endpoints. These messages could be commands, such as instructions to go and look at the remote server to see whether it is still running, or data messages such as sending a particular payment to the bank through a web service. For NSB, we formalize that events are used in publish/subscribe, and commands are used in a request-response message exchange pattern. Windows Server could have too many services, so some of these services could just be standing by, waiting to take over if one service is not responding or processing messages simultaneously. This provides a very high availability.
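The publish/subscribe description above is conceptual, so a rough C# sketch may help to picture it. The class and property names below are invented for illustration, and the calls follow the NServiceBus 4.x-era API (IEvent, IBus.Publish, IHandleMessages<T>); signatures differ in later versions.

using System;
using NServiceBus;

// an event published by the publishing endpoint
public class PaymentAccepted : IEvent
{
    public string PaymentId { get; set; }
}

// inside the publisher, the IBus instance is injected by the NSB host
public class PaymentService
{
    public IBus Bus { get; set; }

    public void AcceptPayment(string paymentId)
    {
        // the message is delivered to every endpoint subscribed to PaymentAccepted
        Bus.Publish(new PaymentAccepted { PaymentId = paymentId });
    }
}

// Subscriber1 and Subscriber2 would each host a handler like this one
public class PaymentAcceptedHandler : IHandleMessages<PaymentAccepted>
{
    public void Handle(PaymentAccepted message)
    {
        // process the event; an unhandled exception here triggers the retry
        // mechanism and, eventually, the error queue described above
        Console.WriteLine("Payment {0} accepted", message.PaymentId);
    }
}

In this era of NServiceBus, the subscription itself is usually driven by the message-endpoint mappings in the subscriber's configuration rather than by explicit subscribe calls in code, which is why the handler above contains only processing logic.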

Designing your very own ASP.NET MVC Application

Packt
28 Oct 2009
8 min read
When downloading and installing the ASP.NET MVC framework SDK, a new project template is installed in Visual Studio—the ASP.NET MVC project template. This article by Maarten Balliauw describes how to use this template. We will briefly touch all aspects of ASP.NET MVC by creating a new ASP.NET MVC web application based on this Visual Studio template. Besides view, controller, and model, new concepts including ViewData—a means of transferring data between controller and view, routing—the link between a web browser URL and a specific action method inside a controller, and unit testing of a controller are also illustrated in this article. (For more resources on .NET, see here.) Creating a new ASP.NET MVC web application project Before we start creating an ASP.NET MVC web application, make sure that you have installed the ASP.NET MVC framework SDK from http://www.asp.net/mvc. After installation, open Visual Studio 2008 and select menu option File | New | Project. The following screenshot will be displayed. Make sure that you select the .NET framework 3.5 as the target framework. You will notice a new project template called ASP.NET MVC Web Application. This project template creates the default project structure for an ASP.NET MVC application. After clicking on OK, Visual Studio will ask you if you want to create a test project. This dialog offers the choice between several unit testing frameworks that can be used for testing your ASP.NET MVC application. You can decide for yourself if you want to create a unit testing project right now—you can also add a testing project later on. Letting the ASP.NET MVC project template create a test project now is convenient because it creates all of the project references, and contains an example unit test, although this is not required. For this example, continue by adding the default unit test project. What's inside the box? After the ASP.NET MVC project has been created, you will notice a default folder structure. There's a Controllers folder, a Models folder, a Views folder, as well as a Content folder and a Scripts folder. ASP.NET MVC comes with the convention that these folders (and namespaces) are used for locating the different blocks used for building the ASP.NET MVC framework. The Controllers folder obviously contains all of the controller classes; the Models folder contains the model classes; while the Views folder contains the view pages. Content will typically contain web site content such as images and stylesheet files, and Scripts will contain all of the JavaScript files used by the web application. By default, the Scripts folder contains some JavaScript files required for the use of Microsoft AJAX or jQuery. Locating the different building blocks is done in the request life cycle. One of the first steps in the ASP.NET MVC request life cycle is mapping the requested URL to the correct controller action method. This process is referred to as routing. A default route is initialized in the Global.asax file and describes to the ASP.NET MVC framework how to handle a request. 
Double-clicking on the Global.asax file in the MvcApplication1 project will display the following code: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.Mvc;using System.Web.Routing;namespace MvcApplication1{ public class GlobalApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); } protected void Application_Start() { RegisterRoutes(RouteTable.Routes); } }} In the Application_Start() event handler, which is fired whenever the application is compiled or the web server is restarted, a route table is registered. The default route is named Default, and responds to a URL in the form of http://www.example.com/{controller}/{action}/{id}. The variables between { and } are populated with actual values from the request URL or with the default values if no override is present in the URL. This default route will map to the Home controller and to the Index action method, according to the default routing parameters. We won't have any other action with this routing map. By default, all the possible URLs can be mapped through this default route. It is also possible to create our own routes. For example, let's map the URL http://www.example.com/Employee/Maarten to the Employee controller, the Show action, and the firstname parameter. The following code snippet can be inserted in the Global.asax file we've just opened. Because the ASP.NET MVC framework uses the first matching route, this code snippet should be inserted above the default route; otherwise the route will never be used. routes.MapRoute( "EmployeeShow", // Route name "Employee/{firstname}", // URL with parameters new { // Parameter defaults controller = "Employee", action = "Show", firstname = "" } ); Now, let's add the necessary components for this route. First of all, create a class named EmployeeController in the Controllers folder. You can do this by adding a new item to the project and selecting the MVC Controller Class template located under the Web | MVC category. Remove the Index action method, and replace it with a method or action named Show. This method accepts a firstname parameter and passes the data into the ViewData dictionary. This dictionary will be used by the view to display data. The EmployeeController class will pass an Employee object to the view. This Employee class should be added in the Models folder (right-click on this folder and then select Add | Class from the context menu). 
Here's the code for the Employee class: namespace MvcApplication1.Models{ public class Employee { public string FirstName { get; set; } public string LastName { get; set; } public string Email { get; set; } }} After adding the EmployeeController and Employee classes, the ASP.NET MVC project now appears as shown in the following screenshot: The EmployeeController class now looks like this: using System.Web.Mvc;using MvcApplication1.Models;namespace MvcApplication1.Controllers{ public class EmployeeController : Controller { public ActionResult Show(string firstname) { if (string.IsNullOrEmpty(firstname)) { ViewData["ErrorMessage"] = "No firstname provided!"; } else { Employee employee = new Employee { FirstName = firstname, LastName = "Example", Email = firstname + "@example.com" }; ViewData["FirstName"] = employee.FirstName; ViewData["LastName"] = employee.LastName; ViewData["Email"] = employee.Email; } return View(); } }} The action method we've just created can be requested by a user via a URL—in this case, something similar to http://www.example.com/Employee/Maarten. This URL is mapped to the action method by the route we've created before. By default, any public action method (that is, a method in a controller class) can be requested using the default routing scheme. If you want to avoid a method from being requested, simply make it private or protected, or if it has to be public, add a [NonAction] attribute to the method. Note that we are returning an ActionResult (created by the View() method), which can be a view-rendering command, a page redirect, a JSON result, a string, or any other custom class implementation inheriting the ActionResult that you want to return. Returning an ActionResult is not necessary. The controller can write content directly to the response stream if required, but this would be breaking the MVC pattern—the controller should never be responsible for the actual content of the response that is being returned. Next, create a Show.aspx page in the Views | Employee folder. You can create a view by adding a new item to the project and selecting the MVC View Content Page template, located under the Web | MVC category, as we want this view to render in a master page (located in Views | Shared). There is an alternative way to create a view related to an action method, which will be covered later in this article. In the view, you can display employee information or display an error message if an employee is not found. Add the following code to the Show.aspx page: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" AutoEventWireup="true" Inherits=" System.Web.Mvc.ViewPage" %><asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server"> <% if (ViewData["ErrorMessage"] != null) { %> <h1><%=ViewData["ErrorMessage"]%></h1> <% } else { %> <h1><%=ViewData["FirstName"]%> <%=ViewData["LastName"]%></h1> <p> E-mail: <%=ViewData["Email"]%> </p> <% } %></asp:Content> If the ViewData, set by the controller, is given an ErrorMessage, then the ErrorMessage is displayed on the resulting web page. Otherwise, the employee details are displayed. Press the F5 button on your keyboard to start the development web server. Alter the URL in your browser to something ending in /Employee/Your_Name_Here, and see the action method and the view we've just created in action.
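The test project created at the start of this article can exercise this controller without a web server. As a hedged sketch (assuming the MSTest project generated by the template, with the test class and method names invented here), a unit test for the Show action could look like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Web.Mvc;
using MvcApplication1.Controllers;

[TestClass]
public class EmployeeControllerTest
{
    [TestMethod]
    public void Show_WithFirstname_PopulatesViewData()
    {
        var controller = new EmployeeController();

        var result = controller.Show("Maarten") as ViewResult;

        Assert.IsNotNull(result);
        Assert.AreEqual("Maarten", result.ViewData["FirstName"]);
        Assert.AreEqual("Maarten@example.com", result.ViewData["Email"]);
    }

    [TestMethod]
    public void Show_WithoutFirstname_SetsErrorMessage()
    {
        var controller = new EmployeeController();

        var result = controller.Show("") as ViewResult;

        Assert.IsNotNull(result);
        Assert.AreEqual("No firstname provided!", result.ViewData["ErrorMessage"]);
    }
}

Because the controller returns an ActionResult and writes only to ViewData, the test can inspect the outcome directly, which is one of the testability benefits of the MVC pattern mentioned earlier.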

Dispatchers and Routers

Packt
12 Nov 2012
5 min read
(For more resources related to this topic, see here.) Dispatchers In the real world, dispatchers are the communication coordinators that are responsible for receiving and passing messages. For the emergency services (for example, in U.S. – 911), the dispatchers are the people responsible for taking in the call, and passing on the message to the other departments (medical, police, fire station, or others). The dispatcher coordinates the route and activities of all these departments, to make sure that the right help reaches the destination as early as possible. Another example is how the airport manages airplanes taking off. The air traffic controllers (ATCs) coordinate the use of the runway between the various planes taking off and landing. On one side, air traffic controllers manage the runways (usually ranging from 1 to 3), and on the other, aircrafts of different sizes and capacity from different airlines ready to take off and land. An air traffic controller coordinates the various airplanes, gets the airplanes lined up, and allocates the runways to take off and land: As we can see, there are multiple runways available and multiple airlines, each having a different set of airplanes needing to take off. It is the responsibility of air traffic controller(s) to coordinate the take-off and landing of planes from each airline and do this activity as fast as possible. Dispatcher as a pattern Dispatcher is a well-recognized and used pattern in the Java world. Dispatchers are used to control the flow of execution. Based on the dispatching policy, dispatchers will route the incoming message or request to the business process. Dispatchers as a pattern provide the following advantages: Centralized control: Dispatchers provide a central place from where various messages/requests are dispatched. The word "centralized" means code is re-used, leading to improved maintainability and reduced duplication of code. Application partitioning: There is a clear separation between the business logic and display logic. There is no need to intermingle business logic with the display logic. Reduced inter-dependencies: Separation of the display logic from the business logic means there are reduced inter-dependencies between the two. Reduced inter-dependencies mean less contention on the same resources, leading to a scalable model. Dispatcher as a concept provides a centralized control mechanism that decouples different processing logic within the application, which in turn reduces inter-dependencies. Executor in Java In Akka, dispatchers are based on the Java Executor framework (part of java.util.concurrent).Executor provides the framework for the execution of asynchronous tasks. It is based on the producer–consumer model, meaning the act of task submission (producer) is decoupled from the act of task execution (consumer). The threads that submit tasks are different from the threads that execute the tasks. Two important implementations of the Executor framework are as follows: ThreadPoolExecutor: It executes each submitted task using thread from a predefined and configured thread pool. ForkJoinPool: It uses the same thread pool model but supplemented with work stealing. Threads in the pool will find and execute tasks (work stealing) created by other active tasks or tasks allocated to other threads in the pool that are pending execution. Fork/join is based a on fine-grained, parallel, divide-andconquer style, parallelism model. 
The idea is to break down large data chunks into smaller chunks and process them in parallel to take advantage of the underlying processor cores. Executor is backed by constructs that allow you to define and control how the tasks are executed. Using these Executor constructs, one can specify the following:

How many threads will be running? (thread pool size)
How are the tasks queued until they come up for processing?
How many tasks can be executed concurrently?
What happens in case the system overloads, when tasks to be rejected are selected?
What is the order of execution of tasks? (LIFO, FIFO, and so on)
Which pre- and post-task execution actions can be run?

In the book Java Concurrency in Practice, Addison-Wesley Publishing, the authors have described the Executor framework and its usage very nicely. It will be useful to read the book for more details on the concurrency constructs provided by the Java language.

Dispatchers in Akka

In the Akka world, the dispatcher controls and coordinates the dispatching of messages to the actors mapped on the underlying threads. Dispatchers make sure that the resources are optimized and messages are processed as fast as possible. Akka provides multiple dispatch policies that can be customized according to the underlying hardware resources (number of cores or memory available) and the type of application workload.

If we take our example of the airport and map it to the Akka world, we can see that the runways are mapped to the underlying resources, that is, threads. The airlines with their planes are analogous to the mailboxes with their messages. The ATC tower employs a dispatch policy to make sure the runways are optimally utilized and the planes spend minimum time waiting for clearance to take off or land.

For Akka, the dispatchers, actors, mailboxes, and threads look like the following diagram: the dispatchers run on their threads; they dispatch the actors and messages from the attached mailboxes and allocate them to the executor threads. The executor threads are configured and tuned to the underlying processor cores that are available for processing the messages.
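To make this concrete, here is a minimal sketch of how a custom dispatcher is typically declared and attached to an actor using the classic Akka Scala API. The dispatcher name and the tuning values are illustrative assumptions, not settings taken from this article; in a real project the configuration block would normally live in application.conf.

import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

class Worker extends Actor {
  def receive = {
    case msg => println(s"processed $msg on ${Thread.currentThread().getName}")
  }
}

object DispatcherExample extends App {
  // dispatcher tuning normally lives in application.conf; it is inlined here
  val customConf = ConfigFactory.parseString("""
    my-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      fork-join-executor {
        parallelism-min = 2
        parallelism-factor = 2.0
        parallelism-max = 8
      }
      # messages an actor takes from its mailbox before yielding the thread
      throughput = 5
    }
  """)

  val system = ActorSystem("example", ConfigFactory.load(customConf))

  // attach the actor to the custom dispatcher instead of the default one
  val worker = system.actorOf(Props[Worker].withDispatcher("my-dispatcher"), "worker")

  worker ! "hello"
}

The throughput setting reflects the trade-off discussed above: a higher value keeps a thread on one mailbox for longer, which improves locality but can delay other actors waiting for a thread.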

Using R6 classes in R to retrieve live data for markets and wallets

Pravin Dhandre
23 Apr 2018
11 min read
In this tutorial, you will learn to create a simple requester to request external information from an API over the internet. You will also learn to develop exchange and wallet infrastructure using R programming.

Creating a simple requester to isolate API calls

Now, we will focus on how we actually retrieve live data. This functionality will also be implemented using R6 classes, as the interactions can be complex. First of all, we create a simple Requester class that contains the logic to retrieve data from JSON APIs found elsewhere on the internet and that will be used to get our live cryptocurrency data for wallets and markets. We don't want logic that interacts with external APIs spread all over our classes, so we centralize it here to manage it as more specialized needs come into play later. As you can see, all this object does is offer the public request() method, and all that method does is use the fromJSON() function from the jsonlite package to call the URL passed to it and send the data it got back to the user. Specifically, it sends it as a data frame when the data received from the external API can be coerced into data frame form.

library(R6)        # provides R6Class()
library(jsonlite)

Requester <- R6Class(
  "Requester",
  public = list(
    request = function(URL) {
      return(fromJSON(URL))
    }
  )
)

Developing our exchanges infrastructure

Our exchanges have multiple markets inside, and that's the abstraction we will define now. A Market has various private attributes, as we saw before when we defined what data is expected from each file, and that's the same data we see in our constructor. It also offers a data() method to send back a list with the data that should be saved to a database. Finally, it provides setters and getters as required. Note that the getter for the price depends on what units are requested, which can be either usd or btc, to get a market's asset price in terms of US Dollars or Bitcoin, respectively:

Market <- R6Class(
  "Market",
  public = list(
    initialize = function(timestamp, name, symbol, rank, price_btc, price_usd) {
      private$timestamp <- timestamp
      private$name <- name
      private$symbol <- symbol
      private$rank <- rank
      private$price_btc <- price_btc
      private$price_usd <- price_usd
    },
    data = function() {
      return(list(
        timestamp = private$timestamp,
        name = private$name,
        symbol = private$symbol,
        rank = private$rank,
        price_btc = private$price_btc,
        price_usd = private$price_usd
      ))
    },
    set_timestamp = function(timestamp) {
      private$timestamp <- timestamp
    },
    get_symbol = function() {
      return(private$symbol)
    },
    get_rank = function() {
      return(private$rank)
    },
    get_price = function(base) {
      if (base == 'btc') {
        return(private$price_btc)
      } else if (base == 'usd') {
        return(private$price_usd)
      }
    }
  ),
  private = list(
    timestamp = NULL,
    name = "",
    symbol = "",
    rank = NA,
    price_btc = NA,
    price_usd = NA
  )
)

Now that we have our Market definition, we proceed to create our Exchange definition. This class will receive an exchange name as name and will use the exchange_requester_factory() function to get an instance of the corresponding ExchangeRequester. It also offers an update_markets() method that will be used to retrieve market data with the private markets() method and store it to disk using the timestamp and storage objects passed to it. Note that instead of passing the timestamp through the arguments for the private markets() method, it's saved as a class attribute and used within the private insert_metadata() method.
This technique provides cleaner code, since the timestamp does not need to be passed through each function and can be retrieved when necessary. The private markets() method calls the public markets() method of the ExchangeRequester instance saved in the private requester attribute (which was assigned by the factory) and applies the private insert_metadata() method to update the timestamp of those objects with the one sent to the public update_markets() method call, before sending them to be written to the database:

source("./requesters/exchange-requester-factory.R", chdir = TRUE)

Exchange <- R6Class(
  "Exchange",
  public = list(
    initialize = function(name) {
      private$requester <- exchange_requester_factory(name)
    },
    update_markets = function(timestamp, storage) {
      private$timestamp <- unclass(timestamp)
      storage$write_markets(private$markets())
    }
  ),
  private = list(
    requester = NULL,
    timestamp = NULL,
    markets = function() {
      return(lapply(private$requester$markets(), private$insert_metadata))
    },
    insert_metadata = function(market) {
      market$set_timestamp(private$timestamp)
      return(market)
    }
  )
)

Now, we need to provide a definition for our ExchangeRequester implementations. As in the case of the Database, this ExchangeRequester will act as an interface definition that will be implemented by the CoinMarketCapRequester. We see that the ExchangeRequester specifies that all exchange requester instances should provide a public markets() method, and that a list is expected from such a method. From context, we know that this list should contain Market instances. Also, each ExchangeRequester implementation will contain a Requester object by default, since it's created and assigned to the requester private attribute upon class instantiation. Finally, each implementation will also have to provide a create_market() private method, and will be able to use the request() private method to communicate with the Requester method request() we defined previously:

source("../../../utilities/requester.R")

KNOWN_ASSETS = list(
  "BTC" = "Bitcoin",
  "LTC" = "Litecoin"
)

ExchangeRequester <- R6Class(
  "ExchangeRequester",
  public = list(
    markets = function() list()
  ),
  private = list(
    requester = Requester$new(),
    create_market = function(resp) NULL,
    request = function(URL) {
      return(private$requester$request(URL))
    }
  )
)

Now we proceed to provide an implementation for CoinMarketCapRequester. As you can see, it inherits from ExchangeRequester, and it provides the required method implementations. Specifically, the markets() public method calls the private request() method from ExchangeRequester, which in turn calls the request() method from Requester, as we have seen, to retrieve data from the private URL specified. If you request data from CoinMarketCap's API by opening a web browser and navigating to the URL shown (https://api.coinmarketcap.com/v1/ticker), you will get a list of market data. That is the data that will be received in our CoinMarketCapRequester instance in the form of a data frame, thanks to the Requester object, and it will be transformed into numeric data where appropriate using the private clean() method, so that it can later be used to create Market instances with the apply() function call, which in turn calls the create_market() private method. Note that the timestamp is set to NULL for all markets created this way because, as you may remember from our Exchange class, it's set before writing to the database.
There's no need to send the timestamp information all the way down to the CoinMarketCapRequester, since we can simply set it at the Exchange level right before we send the data to the database:

source("./exchange-requester.R")
source("../market.R")

CoinMarketCapRequester <- R6Class(
  "CoinMarketCapRequester",
  inherit = ExchangeRequester,
  public = list(
    markets = function() {
      data <- private$clean(private$request(private$URL))
      return(apply(data, 1, private$create_market))
    }
  ),
  private = list(
    URL = "https://api.coinmarketcap.com/v1/ticker",
    create_market = function(row) {
      timestamp <- NULL
      return(Market$new(
        timestamp,
        row[["name"]],
        row[["symbol"]],
        row[["rank"]],
        row[["price_btc"]],
        row[["price_usd"]]
      ))
    },
    clean = function(data) {
      data$price_usd <- as.numeric(data$price_usd)
      data$price_btc <- as.numeric(data$price_btc)
      data$rank <- as.numeric(data$rank)
      return(data)
    }
  )
)

Finally, here's the code for our exchange_requester_factory(). As you can see, it's basically the same idea we have used for our other factories, and its purpose is to easily let us add more implementations for our ExchangeRequester by simply adding else-if statements to it:

source("./coinmarketcap-requester.R")

exchange_requester_factory <- function(name) {
  if (name == "CoinMarketCap") {
    return(CoinMarketCapRequester$new())
  } else {
    stop("Unknown exchange name")
  }
}

Developing our wallets infrastructure

Now that we are able to retrieve live price data from exchanges, we turn to our Wallet definition. As you can see, it specifies the type of private attributes we expect for the data that it needs to handle, as well as the public data() method to create the list of data that needs to be saved to a database at some point. It also provides getters for email, symbol, and address, and the public update_assets() method, which will be used to get and save assets into the database, just as we did in the case of Exchange. As a matter of fact, the techniques followed are exactly the same, so we won't explain them again:

source("./requesters/wallet-requester-factory.R", chdir = TRUE)

Wallet <- R6Class(
  "Wallet",
  public = list(
    initialize = function(email, symbol, address, note) {
      private$requester <- wallet_requester_factory(symbol, address)
      private$email <- email
      private$symbol <- symbol
      private$address <- address
      private$note <- note
    },
    data = function() {
      return(list(
        email = private$email,
        symbol = private$symbol,
        address = private$address,
        note = private$note
      ))
    },
    get_email = function() {
      return(as.character(private$email))
    },
    get_symbol = function() {
      return(as.character(private$symbol))
    },
    get_address = function() {
      return(as.character(private$address))
    },
    update_assets = function(timestamp, storage) {
      private$timestamp <- timestamp
      storage$write_assets(private$assets())
    }
  ),
  private = list(
    timestamp = NULL,
    requester = NULL,
    email = NULL,
    symbol = NULL,
    address = NULL,
    note = NULL,
    assets = function() {
      return(lapply(private$requester$assets(), private$insert_metadata))
    },
    insert_metadata = function(asset) {
      timestamp(asset) <- unclass(private$timestamp)
      email(asset) <- private$email
      return(asset)
    }
  )
)

Implementing our wallet requesters

The WalletRequester will be conceptually similar to the ExchangeRequester. It will be an interface, and it will be implemented by our BTCRequester and LTCRequester classes. As you can see, it requires a public method called assets() to be implemented and to return a list of Asset instances.
It also requires a private create_asset() method to be implemented, which should return individual Asset instances, and a private url() method that will build the URL required for the API call. It offers a request() private method that will be used by implementations to retrieve data from external APIs:

source("../../../utilities/requester.R")

WalletRequester <- R6Class(
  "WalletRequester",
  public = list(
    assets = function() list()
  ),
  private = list(
    requester = Requester$new(),
    create_asset = function() NULL,
    url = function(address) "",
    request = function(URL) {
      return(private$requester$request(URL))
    }
  )
)

The BTCRequester and LTCRequester implementations are shown below for completeness, but will not be explained. If you have followed everything so far, they should be easy to understand:

source("./wallet-requester.R")
source("../../asset.R")

BTCRequester <- R6Class(
  "BTCRequester",
  inherit = WalletRequester,
  public = list(
    initialize = function(address) {
      private$address <- address
    },
    assets = function() {
      total <- as.numeric(private$request(private$url()))
      if (total > 0) {
        return(list(private$create_asset(total)))
      }
      return(list())
    }
  ),
  private = list(
    address = "",
    url = function(address) {
      return(paste(
        "https://chainz.cryptoid.info/btc/api.dws",
        "?q=getbalance",
        "&a=",
        private$address,
        sep = ""
      ))
    },
    create_asset = function(total) {
      return(new(
        "Asset",
        email = "",
        timestamp = "",
        name = "Bitcoin",
        symbol = "BTC",
        total = total,
        address = private$address
      ))
    }
  )
)

source("./wallet-requester.R")
source("../../asset.R")

LTCRequester <- R6Class(
  "LTCRequester",
  inherit = WalletRequester,
  public = list(
    initialize = function(address) {
      private$address <- address
    },
    assets = function() {
      total <- as.numeric(private$request(private$url()))
      if (total > 0) {
        return(list(private$create_asset(total)))
      }
      return(list())
    }
  ),
  private = list(
    address = "",
    url = function(address) {
      return(paste(
        "https://chainz.cryptoid.info/ltc/api.dws",
        "?q=getbalance",
        "&a=",
        private$address,
        sep = ""
      ))
    },
    create_asset = function(total) {
      return(new(
        "Asset",
        email = "",
        timestamp = "",
        name = "Litecoin",
        symbol = "LTC",
        total = total,
        address = private$address
      ))
    }
  )
)

The wallet_requester_factory() works just like the other factories; the only difference is that in this case, we have two possible implementations that can be returned, which can be seen in the if statement. If we decided to add a WalletRequester for another cryptocurrency, such as Ether, we could simply add the corresponding branch here, and it should work fine:

source("./btc-requester.R")
source("./ltc-requester.R")

wallet_requester_factory <- function(symbol, address) {
  if (symbol == "BTC") {
    return(BTCRequester$new(address))
  } else if (symbol == "LTC") {
    return(LTCRequester$new(address))
  } else {
    stop("Unknown symbol")
  }
}

Hope you enjoyed this tutorial and were able to retrieve live data for your application. To know more, do check out R Programming By Example and start handling data efficiently with modular, maintainable, and expressive code.

Read More
Introduction to R Programming Language and Statistical Environment
20 ways to describe programming in 5 words

Practical Big Data Exploration with Spark and Python

Anant Asthana
06 Jun 2016
6 min read
The reader of this post should be familiar with basic concepts of Spark, such as the shell and RDDs.

Data sizes have increased, but our exploration tools and techniques have not evolved as fast. Traditional Hadoop MapReduce jobs are cumbersome and time-consuming to develop, and Pig isn't quite as fully featured and easy to work with. Exploration can mean parsing/analyzing raw text documents, analyzing log files, processing tabular data in various formats, and exploring data that may or may not be correctly formatted. This is where a tool like Spark excels. It provides an interactive shell for quick processing, prototyping, exploring, and slicing and dicing data. Spark works with R, Scala, and Python. In conjunction with Jupyter notebooks, we get a clean web interface to write Python, R, or Scala code backed by a Spark cluster. Jupyter notebook is also a great tool for presenting our findings, since we can do inline visualizations and easily share them as a PDF on GitHub or through a web viewer. The power of this setup is that we make Spark do the heavy lifting while still having the flexibility to test code on a small subset of data via the interactive notebooks.

Another powerful capability of Spark is its Data Frames API. After we have cleaned our data (dealt with badly formatted rows that can't be loaded correctly), we can load it as a Data Frame. Once the data is loaded as a Data Frame, we can use Spark SQL to explore it. Since notebooks can be shared, this is also a great way to let the developers do the work of cleaning the data and loading it as a Data Frame. Analysts, data scientists, and the like can then use this data for their tasks. Data Frames can also be exported as Hive tables, which are commonly used in Hadoop-based warehouses.

Examples

For this section, we will be using examples that I have uploaded on GitHub. These examples can be found here. In addition to the examples, a Docker container for running them has also been provided. The container runs Spark in pseudo-distributed mode and has Jupyter notebook configured to run Python/PySpark.

The basics

To set this up in your environment, you need a running Spark cluster with Jupyter notebook installed. Jupyter notebook, by default, only has the Python kernel configured; you can download additional kernels for Jupyter notebook to run R and Scala. To run Jupyter notebook with PySpark, use the following command on your cluster:

IPYTHON_OPTS="notebook --pylab inline --notebook-dir=<directory to store notebooks>" MASTER=local[6] ./bin/pyspark

When you start Jupyter notebook in the way mentioned earlier, it initializes a few critical variables. One of them is the Spark Context (sc), which is used to interact with all Spark-related tasks. The other is sqlContext, the Spark SQL context, which is used to interact with Spark SQL (create Data Frames, run queries, and so on). You will need both in the examples that follow.

Log Analysis

In this example, we use a log file from Apache Server. The code for this example can be found here. We load our log file using:

log_file = sc.textFile("../data/log_file.txt")

Spark can load files from HDFS, the local filesystem, and S3 natively. Libraries for other storage formats can be found freely on the Internet, or you could write your own formats (a blog post for another time). The previous command loads the log file.
We then use Python's native shlex library to split the file into different fields and use Spark's map command to load them as a Row. An RDD consisting of rows can easily be registered as a DataFrame. How we arrived at this solution is where data exploration comes in. We use Spark's takeSample method to sample the file and get five rows:

log_file.takeSample(True, 5)

These sample rows are helpful in determining how to parse and load the file. Once we have written our code to load the file, we can apply it to the dataset using map to create a new RDD consisting of Rows, and test the code on a subset of data using the take or takeSample methods. The take method sequentially reads rows from the file, so although it is faster, it may not give a good representation of the dataset. The takeSample method, on the other hand, randomly picks sample rows from the file, which gives a better representation. To create the new RDD and register it as a DataFrame, we use the following code:

schema_DF = splits.map(create_schema).toDF()

Once we have created the DataFrame and tested it using take/takeSample to make sure that our loading code is working, we can register it as a table using the following:

sqlCtx.registerDataFrameAsTable(schema_DF, 'logs')

Once it is registered as a table, we can run SQL queries on the log file:

sample = sqlCtx.sql('SELECT * FROM logs LIMIT 10').collect()

Note that the collect() method collects the result into the driver's memory, so this may not be feasible for large datasets; use take/takeSample instead to sample data if your dataset is large. The beauty of using Spark with Jupyter is that all this exploration work takes only a few lines of code. It can be written interactively with all the trial and error we need, the processed data can be easily shared, and running interactive queries on this data is easy. Last but not least, this can easily scale to massive (GB, TB) datasets.

k-means on the Iris dataset

In this example, we use data from the Iris dataset, which contains measurements of sepal and petal length and width. This is a popular open source dataset used to showcase classification algorithms. In this case, we use the k-means algorithm from MLlib, Spark's machine learning library. The code and the output can be found here. We are not going to get into too much detail, since some of the concepts are outside the scope of this blog post. This example showcases how we load the Iris dataset and create a DataFrame with it. We then train a k-means model on this dataset and visualize the resulting clusters. The power of this is that we did the somewhat complex task of parsing a dataset, creating a DataFrame, training a machine learning model, and visualizing the data in an interactive and scalable manner.

The repository contains several more examples. Feel free to reach out to me if you have any questions, and if you would like to see more posts with practical examples, please let us know.

About the Author

Anant Asthana is a data scientist and principal architect at Pythian, and he can be found on Github at anantasty.