
How-To Tutorials


Optimizing jQuery Applications

Packt
13 Jan 2016
19 min read
This article, by Thodoris Greasidis, the author of jQuery Design Patterns, presents some optimization techniques that can be used to improve the performance of jQuery applications, especially when they become large and complex. We will start with simple practices for writing performant JavaScript code and learn how to write efficient CSS selectors in order to improve the page's rendering speed and DOM traversals with jQuery. We will continue with jQuery-specific practices, such as caching jQuery Composite Collection Objects, minimizing DOM manipulations, and using the Delegated Event Observer pattern as a good example of the Flyweight pattern.

Optimizing the common JavaScript code

In this section, we will analyze some performance tips that are not jQuery-specific and can be applied to most JavaScript implementations.

Writing better for loops

When iterating over the items of an array or an array-like collection with a for loop, a simple way to improve the performance of the iteration is to avoid accessing the length property on every loop. This can easily be done by storing the iteration length in a separate variable, which is declared just before the loop or even along with it, as shown in the following code:

    for (var i = 0, len = myArray.length; i < len; i++) {
        var item = myArray[i];
        /*...*/
    }

Moreover, if we need to iterate over the items of an array that does not contain "falsy" values, we can use an even better pattern, which is commonly applied when iterating over arrays that contain objects:

    var objects = [{ }, { }, { }];
    for (var i = 0, item; item = objects[i]; i++) {
        console.log(item);
    }

In this case, instead of relying on the length property of the array, we are exploiting the fact that access to an out-of-bounds position of the array returns undefined, which is "falsy" and stops the iteration. Another sample case in which this trick can be used is when iterating over Node Lists or jQuery Composite Collection Objects, as shown in the following code:

    var anchors = $('a'); // or document.getElementsByTagName('a');
    for (var i = 0, anchor; anchor = anchors[i]; i++) {
        console.log(anchor.href);
    }

For more information on the "truthy" and "falsy" JavaScript values, you can visit https://developer.mozilla.org/en-US/docs/Glossary/Truthy and https://developer.mozilla.org/en-US/docs/Glossary/Falsy.
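If you want to measure the difference yourself, the following is a minimal sketch that can be pasted into a browser console. The array size and timer labels are illustrative only, and on modern engines the difference may be small, since the length lookup is often optimized away:

    // A rough micro-benchmark comparing the two loop styles.
    var items = [];
    for (var i = 0; i < 1000000; i++) {
        items.push({ id: i });
    }

    console.time('uncached length');
    for (var i = 0; i < items.length; i++) {
        var item = items[i];
    }
    console.timeEnd('uncached length');

    console.time('cached length');
    for (var i = 0, len = items.length; i < len; i++) {
        var item = items[i];
    }
    console.timeEnd('cached length');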
Using performant CSS selectors

Even though Sizzle (jQuery's selector engine) hides the complexity of DOM traversals that are based on a complex CSS selector, we should have an idea of how our selectors perform. Understanding how CSS selectors are matched against the elements of the DOM can help us write more efficient selectors, which will perform better when used with jQuery.

The key characteristic of efficient CSS selectors is specificity. According to this, ID and class selectors will always be more efficient than selectors with many results, such as div and *. When writing complex CSS selectors, keep in mind that they are evaluated from right to left, and a selector is rejected after recursively testing it against every parent element until the root of the DOM is reached. As a result, try to be as specific as possible with the right-most part of the selector in order to cut down the matched elements as early as possible during the execution of the selector:

    // initially matches all the anchors of the page
    // and then removes those that are not children of the container
    $('.container a');

    // performs better, since it matches fewer elements
    // in the first step of the selector's evaluation
    $('.container .mySpecialLinks');

Another performance tip is to use the Child Selector (parent > child) wherever applicable, in an effort to eliminate the recursion over all the hierarchies of the DOM tree. A great example is the case where the target elements can be found at a specific descendant level of a common ancestor element:

    // initially matches all the divs of the page, which is bad
    $('.container div');

    // a lot better, since it avoids the recursion
    // until the root of the DOM tree
    $('.container > div');

    // best of all, but can't always be used
    $('.container > .specialDivs');

The same tips can also be applied to the CSS selectors that are used to style pages. Even though browsers have been trying to optimize any given CSS selector, the tips mentioned earlier can greatly reduce the time that is required to render a web page. For more information on jQuery CSS selector performance, you can visit http://learn.jquery.com/performance/optimize-selectors/.

Writing efficient jQuery code

Let's now proceed and analyze the most important jQuery-specific performance tips. For the most up-to-date performance tips about jQuery, you can visit the respective page of jQuery's Learning Center at http://learn.jquery.com/performance.

Minimizing DOM traversals

Since jQuery has made DOM traversals such a simple task, a big number of web developers have started to overuse the $() function everywhere, even in subsequent lines of code, making their implementations slower by executing unnecessary code. One of the main reasons that the complexity of the operation is so often overlooked is the elegant and minimalistic syntax that jQuery uses. Despite the fact that JavaScript browser engines have become much faster, with performance comparable to many compiled languages, the DOM API is still one of their slowest parts, and as a result, developers have to minimize their interactions with it.

Caching jQuery objects

Storing the result of the $() function in a local variable and subsequently using it to operate on the retrieved elements is the simplest way to eliminate unnecessary executions of the same DOM traversal:

    var $element = $('.Header');
    if ($element.css('position') === 'static') {
        $element.css({ position: 'relative' });
    }
    $element.height('40px');
    $element.wrapInner('<b>');

It is also highly suggested that you store Composite Collection Objects of important page elements as properties of your modules and reuse them everywhere in your application:

    window.myApp = window.myApp || {};
    myApp.$container = null;
    myApp.init = function() {
        myApp.$container = $('.myAppContainer');
    };
    $(document).ready(myApp.init);

Caching retrieved elements on modules is a very good practice when the elements are not going to be removed from the page. Keep in mind that when dealing with elements with shorter lifespans, in order to avoid memory leaks, you either need to ensure that you clear their references when the elements are removed from the page, or have a fresh reference retrieved when required and cache it only inside your functions.
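As an illustration of clearing such a reference, the following sketch assumes a hypothetical short-lived banner element; the showBanner() and hideBanner() names are illustrative, not part of any library:

    window.myApp = window.myApp || {};

    myApp.showBanner = function() {
        // Cache the short-lived element while it is on the page.
        myApp.$banner = $('<div class="banner">Hello!</div>').appendTo('body');
    };

    myApp.hideBanner = function() {
        if (myApp.$banner) {
            myApp.$banner.remove();
            // Clear the cached reference so that the detached node
            // can be garbage collected instead of leaking.
            myApp.$banner = null;
        }
    };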
Scoping element traversals

Instead of writing complex CSS selectors for your traversals, as follows:

    $('.myAppContainer .myAppSection');

you can achieve the same result in a more efficient way by using an already retrieved ancestor element to scope the DOM traversal. Not only are you using simpler CSS selectors that are faster to match against page elements, but you are also reducing the number of elements that need to be checked. Moreover, the resulting implementations have less code repetition (they are DRYer), and the CSS selectors used are simpler and more readable:

    var $container = $('.myAppContainer');
    $container.find('.myAppSection');

Additionally, this practice works even better with module-wide cached elements:

    var $sections = myApp.$container.find('.myAppSection');

Chaining jQuery methods

One of the characteristics of all jQuery APIs is that they are fluent interface implementations that enable you to chain several method invocations on a single Composite Collection Object:

    $('.Content').html('')
        .append('<a href="#">')
        .height('40px')
        .wrapInner('<b>');

Chaining allows us to reduce the number of variables used and leads to more readable implementations with less code repetition.

Don't overdo it

Keep in mind that jQuery also provides the $.fn.end() method (http://api.jquery.com/end/) as a way to move back from a chained traversal:

    $('.box')
        .filter(':even')
        .find('.Header')
        .css('background-color', '#0F0')
        .end()
        .end() // undo the filter and find traversals
        .filter(':odd') // applied on the initial .box results
        .find('.Header')
        .css('background-color', '#F00');

Even though this is a handy method in many cases, you should avoid overusing it, since it can affect the readability of your code. In many cases, using cached element collections instead of $.fn.end() results in faster and more readable implementations.
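For example, the $.fn.end() chain shown above could be rewritten with a cached collection; this is just one possible refactoring of the same logic:

    // The same logic as the $.fn.end() chain, using a cached collection.
    var $boxes = $('.box');
    $boxes.filter(':even').find('.Header').css('background-color', '#0F0');
    $boxes.filter(':odd').find('.Header').css('background-color', '#F00');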
Improving DOM manipulations

Extensive use of the DOM API is one of the most common things that makes an application slower, especially when it is used to manipulate the state of the DOM tree. In this section, we will showcase some tips that reduce the performance hit of manipulating the DOM tree.

Creating DOM elements

The most efficient way to create DOM elements is to construct an HTML string and append it to the DOM tree using the $.fn.html() method. Since this can be too restrictive for some use cases, you can also use the $.fn.append() and $.fn.prepend() methods, which are slightly slower but can be a better match for your implementation. Ideally, if multiple elements need to be created, you should try to minimize the invocations of these methods by creating an HTML string that defines all the elements and then inserting it into the DOM tree, as follows:

    var finalHtml = '';
    for (var i = 0, len = questions.length; i < len; i++) {
        var question = questions[i];
        finalHtml += '<div><label><span>' + question.title + ':</span>' +
            '<input type="checkbox" name="' + question.name + '" />' +
            '</label></div>';
    }
    $('form').html(finalHtml);

Another way to achieve the same result is to use an array to store the HTML of each intermediate element and then join them right before inserting them into the DOM tree:

    var parts = [];
    for (var i = 0, len = questions.length; i < len; i++) {
        var question = questions[i];
        parts.push('<div><label><span>' + question.title + ':</span>' +
            '<input type="checkbox" name="' + question.name + '" />' +
            '</label></div>');
    }
    $('form').html(parts.join(''));

This is a commonly used pattern, since until recently it performed better than concatenating the intermediate results with "+=".

Styling and animating

Whenever possible, use CSS classes for your styling manipulations by utilizing the $.fn.addClass() and $.fn.removeClass() methods, instead of manually manipulating the style of elements with the $.fn.css() method. This is especially beneficial when you need to style a big number of elements, since this is the main use case of CSS classes and browsers have already spent years optimizing it. As an extra optimization step to minimize the number of manipulated elements, you can try applying CSS classes on a single common ancestor element and using a descendant CSS selector to apply your styling (https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_selectors).

When you still need to use the $.fn.css() method, for example, when your implementation needs to be imperative, prefer the invocation overload that accepts object parameters: http://api.jquery.com/css/#css-properties. This way, when applying multiple styles on elements, the required method invocations are minimized, and your code gets better organized.

Moreover, we need to avoid mixing methods that manipulate the DOM with methods that read from the DOM, since this forces a reflow of the page so that the browser can calculate the new positions of the page elements. Instead of doing something like this:

    $('h1').css('padding-left', '2%');
    $('h1').css('padding-right', '2%');
    $('h1').append('<b>!!</b>');
    var h1OuterWidth = $('h1').outerWidth();

    $('h1').css('margin-top', '5%');
    $('body').prepend('<b>--!!--</b>');
    var h1Offset = $('h1').offset();

we should prefer grouping the nonconflicting manipulations together, like this:

    $('h1').css({
        'padding-left': '2%',
        'padding-right': '2%',
        'margin-top': '5%'
    }).append('<b>!!</b>');
    $('body').prepend('<b>--!!--</b>');

    var h1OuterWidth = $('h1').outerWidth();
    var h1Offset = $('h1').offset();

This way, the browser can skip some re-renderings of the page, resulting in fewer pauses in the execution of your code. For more information on reflows, you can refer to https://developers.google.com/speed/articles/reflow.

Lastly, note that all jQuery-generated animations in v1.x and v2.x are implemented using the setTimeout() function. This is going to change in v3.x of jQuery, which is designed to use the requestAnimationFrame() function, a better match for creating imperative animations. Until then, you can use the jQuery-requestAnimationFrame plugin (https://github.com/gnarf/jquery-requestAnimationFrame), which monkey-patches jQuery to use the requestAnimationFrame() function for its animations when it is available.
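To give an idea of what a requestAnimationFrame()-driven animation looks like, here is a minimal plain-JavaScript sketch; the element selector and the 400 ms duration are arbitrary choices for illustration:

    // Fade an element out over 400 ms using requestAnimationFrame().
    var element = document.querySelector('#pageHeader');
    var duration = 400;
    var start = null;

    function step(timestamp) {
        if (start === null) {
            start = timestamp;
        }
        // Progress goes from 0 to 1 over the configured duration.
        var progress = Math.min((timestamp - start) / duration, 1);
        element.style.opacity = String(1 - progress);
        if (progress < 1) {
            requestAnimationFrame(step);
        }
    }

    requestAnimationFrame(step);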
Manipulating detached elements

Another way to avoid unnecessary repaints of the page while manipulating DOM elements is to detach the element from the page and reattach it after completing your manipulations. Working with a detached, in-memory element is much faster and does not cause reflows of the page. In order to achieve this, we use the $.fn.detach() method, which, in contrast to $.fn.remove(), preserves all event handlers and jQuery data on the detached element:

    var $h1 = $('#pageHeader');
    var $h1Cont = $h1.parent();
    $h1.detach();

    $h1.css({
        'padding-left': '2%',
        'padding-right': '2%',
        'margin-top': '5%'
    }).append('<b>!!</b>');

    $h1Cont.append($h1);

Additionally, in order to be able to place the manipulated element back in its original position, we can create and insert a hidden "placeholder" element into the DOM. This empty and hidden element does not affect the rendering of the page and is removed right after the original item is placed back in its original position:

    var $h1PlaceHolder = $('<div style="display: none;"></div>');
    var $h1 = $('#pageHeader');
    $h1PlaceHolder.insertAfter($h1);

    $h1.detach();

    $h1.css({
        'padding-left': '2%',
        'padding-right': '2%',
        'margin-top': '5%'
    }).append('<b>!!</b>');

    $h1.insertAfter($h1PlaceHolder);
    $h1PlaceHolder.remove();
    $h1PlaceHolder = null;

For more information on the $.fn.detach() method, you can visit its documentation page at http://api.jquery.com/detach/.
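The detach/placeholder pattern above can also be wrapped in a small helper. The following withDetached() function is a hypothetical sketch, not a jQuery method:

    // A hypothetical helper wrapping the detach/placeholder pattern.
    function withDetached($element, manipulationFn) {
        var $placeholder = $('<div style="display: none;"></div>');
        $placeholder.insertAfter($element);
        $element.detach();

        manipulationFn($element); // all manipulations happen off-DOM

        $element.insertAfter($placeholder);
        $placeholder.remove();
    }

    // Usage: the same manipulations as the example above.
    withDetached($('#pageHeader'), function($h1) {
        $h1.css({ 'padding-left': '2%', 'margin-top': '5%' }).append('<b>!!</b>');
    });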
Using the Flyweight pattern

In computer science, a Flyweight is an object that is used to reduce the memory consumption of an implementation by providing functionality and/or data that is shared with other object instances. The prototypes of JavaScript constructor functions can be characterized as Flyweights to some degree, since every object instance can use all the methods and properties that are defined on its prototype until it overwrites them. On the other hand, classical Flyweights are objects separate from the object family that they are used with, and they often hold the shared data and functionality in special data structures.

Using Delegated Event Observers

A great example of Flyweights in jQuery applications are Delegated Event Observers, which can greatly reduce the memory requirements of an implementation by working as a centralized event handler for a large group of elements. This way, we can avoid the cost of setting up separate observers and event handlers for every element, and instead utilize the browser's event bubbling mechanism to observe them on a single common ancestor element and filter their origin. Moreover, this pattern can also simplify our implementation when we need to deal with dynamically constructed elements, since it removes the need to attach extra event handlers for each created element.

For example, the following code attaches a single observer on the common ancestor element of several <button> elements. Whenever a click happens on one of the <button> elements, the event will bubble up to the parent element with the buttonsContainer CSS class, and the attached handler will be fired. Even if we add extra buttons to that container later, clicking on them will still fire the original handler:

    $('.buttonsContainer').on('click', 'button', function() {
        var $button = $(this);
        alert($button.text());
    });

The actual Flyweight object is the event handler, along with the callback, that is attached to the ancestor element.

Using $.noop()

The jQuery library offers the $.noop() method, which is actually an empty function that can be shared between implementations. Using an empty function as the default callback value can simplify and increase the readability of an implementation by reducing the number of required if statements. This can be greatly beneficial for jQuery plugins that encapsulate complex functionality:

    function doLater(callbackFn) {
        setTimeout(function() {
            if (callbackFn) {
                callbackFn();
            }
        }, 500);
    }

    // with $.noop
    function doLater(callbackFn) {
        callbackFn = callbackFn || $.noop;
        setTimeout(function() {
            callbackFn();
        }, 500);
    }

(Note that we assign the $.noop function itself, without invoking it; writing callbackFn = callbackFn || $.noop() would assign undefined, since $.noop() returns nothing.)

In situations where the implementation requirements or the personal taste of the developer lead to using empty functions, the $.noop() method can lower memory consumption by sharing a single empty function instance among all the different parts of an implementation. As an added benefit of using $.noop consistently, we can also check whether a passed function reference is actually the empty function by simply checking whether callbackFn equals $.noop. For more information, you can visit its documentation page at http://api.jquery.com/jQuery.noop/.
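For instance, a sketch of that identity check might look like the following; the doLater() wrapper and the log message are illustrative:

    function doLater(callbackFn) {
        callbackFn = callbackFn || $.noop; // note: no parentheses

        // Every caller shares the same $.noop reference, so a simple
        // identity check reveals whether a real callback was provided.
        if (callbackFn === $.noop) {
            console.log('No callback supplied; skipping extra bookkeeping.');
        }

        setTimeout(function() {
            callbackFn();
        }, 500);
    }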
Using the $.single plugin

Another simple example of the Flyweight pattern in a jQuery application is the jQuery.single plugin, as described by James Padolsey in his article 76 bytes for faster jQuery, which tries to eliminate the creation of new jQuery objects in cases where we need to apply jQuery methods on a single page element. The implementation is quite small and creates a single jQuery Composite Collection Object that is returned on every invocation of the jQuery.single() method, containing the page element that was used as an argument:

    jQuery.single = (function() {
        var collection = jQuery([1]); // fill with 1 item, to make sure length === 1
        return function(element) {
            collection[0] = element; // give the collection the element
            return collection; // return the collection
        };
    }());

The jQuery.single plugin can be quite useful when used in observers, such as $.fn.on(), and in iterations with methods such as $.each():

    $buttonsContainer.on('click', '.button', function() {
        // var $button = $(this);
        var $button = $.single(this); // this is not creating any new object
        alert($button);
    });

The advantage of using the jQuery.single plugin originates from the fact that we are creating fewer objects, and as a result, the browser's Garbage Collector will have less work to do when freeing the memory of short-lived objects. Keep in mind the side effect of having a single jQuery object returned by every invocation of the $.single() method: the last invocation argument will be stored until the next invocation of the method:

    var buttons = document.getElementsByTagName('button');
    var $btn0 = $.single(buttons[0]);
    var $btn1 = $.single(buttons[1]);
    $btn0 === $btn1 // true, both variables point to the same collection

Also, if you use something such as $btn1.remove(), the element will not be freed until the next invocation of the $.single() method, which will remove it from the plugin's internal collection object. Another similar but more extensive plugin is the jQuery.fly plugin, which supports being invoked with arrays and jQuery objects as parameters. For more information on jQuery.single and jQuery.fly, you can visit http://james.padolsey.com/javascript/76-bytes-for-faster-jquery/ and https://github.com/matjaz/jquery.fly.

On the other hand, the jQuery implementation that handles the invocation of the $() method with a single page element is not complex at all and only creates a single simple object:

    jQuery = function( selector, context ) {
        return new jQuery.fn.init( selector, context );
    };
    /*...*/
    jQuery.fn.init = function( selector, context ) {
        /*... else */
        if ( selector.nodeType ) {
            this.context = this[0] = selector;
            this.length = 1;
            return this;
        } /* ... */
    };

Moreover, the JavaScript engines of modern browsers have become quite efficient at dealing with short-lived objects, since such objects are commonly passed around an application as method invocation parameters.

Summary

In this article, we learned some optimization techniques that can be used to improve the performance of jQuery applications, especially when they become large and complex. We initially started with simple practices for writing performant JavaScript code and learned how to write efficient CSS selectors in order to improve the page's rendering speed and DOM traversals with jQuery. We continued with jQuery-specific practices, such as caching jQuery Composite Collection Objects and ways to minimize DOM manipulations. Lastly, we saw some representatives of the Flyweight pattern and took a look at an example of the Delegated Event Observer pattern.

C-Quence – A Memory Game

Packt
13 Jan 2016
7 min read
In this article by Stuart Grimshaw, the author of the book Building Apple Watch Projects, we will craft an entertaining app, adding code that uses basic Swift features most developers will find familiar, and addressing some of the topics that face developers creating software for a platform with some unique challenges. C-Quence will be a game that challenges players' ability to memorize a sequence of colors generated by the app. It is a game to be played in short bursts rather than as a prolonged activity, as one of the first things that becomes clear when using a physical device is that the watch is unsuited to tasks that take more than a short period of time to complete. We will keep this in mind as we look at the top-level design of this app.

Bear in mind that although this is a very modest app in terms of the amount of coding needed to bring it to completion, we still want to adhere to what some refer to as best practice (and others prefer to think of as simply learning from others' mistakes, without schadenfreude). The code presented here will reside fully on the watch, needing no support from the iPhone companion app.

GameLogic

Although we have created the GameLogic.swift file, we have not actually created the class yet (this is different from Objective-C).

Create the GameLogic class

Add the following code below the import Foundation statement that is part of the template:

    import Foundation

    class GameLogic {

    }

Plan the class

We will create a class that encapsulates the code that deals with the game itself, in isolation from the user interface. The GameLogic class doesn't need to know anything about interactions with the user; this is something that will be taken care of by the InterfaceController class. So, let's first think about what we will need the class to do, so that we can start to plan which methods will need to be implemented. We need the class to do the following:

- Create and maintain a sequence of colors, adding a random color to it when required
- Evaluate whether a player's tap on a color is a correct answer
- Provide information as to whether the game is still being played or is finished
- Clear the data that accumulates during a game, in preparation for a new round

From these methods, we can estimate that we will need to maintain at least one variable: a sequence property, which will be an ordered collection of colors.

Create the class's interface

Setting out our requirements like this has almost fully defined what the outside world needs the code to do. Not how, it's true, but that comes a little later. We have effectively defined the class's external interface (not to be confused with the app's user interface, which is something quite different), through which other parts of the app will communicate with it.

Defining some enums

In order to keep the code easy to read and safe to use, we will define some enums. We need to do this outside of the class itself, because the InterfaceController class will also need access to them. Add the following code directly after the import statement but before the class definition:

    import Foundation

    enum Color {
        case Red, Yellow, Blue, Green
    }

    enum GuessResult {
        case GuessCorrect, GuessWrong, GuessComplete
    }

    class GameLogic {

    }

Traditionally, enums have been a way to give names to integer values to make them more readable, but Swift has gone several steps further and dispensed with the idea that an enum needs an underlying numerical value.
If you declare a variable's type to be an enum type, such as the preceding Color, the compiler will restrict you to these values, .Red and so on, and these values only. A method that is declared to return Color will only return Color and not some arbitrary integer (other convenient benefits include the fact that a switch statement needs no default case once all the enum values are dealt with, as we'll see later).

So, we now have four Color values, as we would expect, but why the third GuessResult value, GuessComplete? When InterfaceController asks the class to evaluate the player's answer, we can provide one of three possible scenarios: the guess is correct but the sequence of guesses is not yet complete, the guess is wrong, or the guess is correct and the sequence has been correctly completed. Thus, we save ourselves an extra call from the interface asking whether the guessed sequence is complete or not.

Stub the methods

We can now stub the methods we'll need, which means we'll create them. In some places, we'll add placeholder code to avoid compiler error messages (often, we need to stub some actual functionality in the methods, but we do not need to do that here).

Functions or methods?

Without going into an exhaustive definition of functions and methods worthy of a computer science degree course, it suffices to say here that when a function is coded as part of a class, it is referred to as a method of that class. If a function is defined outside of a class, it is called a function. In this app, we have no functions outside of classes.

Add the following code inside (!) the GameLogic class.

Extend the sequence

First, add a method that will extend the colors sequence by one randomly generated color:

    func extendSequence() {

    }

Evaluate

Next, we will need a method that evaluates whether the player has guessed correctly or not. We include a stubbed return value (it will be replaced later) in order to keep the compiler from warning us (with a very dramatic-looking red circle and exclamation mark) that the method should return a GuessResult value:

    func evaluateColor(color: Color) -> GuessResult {
        return .GuessCorrect
    }

Clear

Finally, when the game restarts, we'll need to clear the game of any variables that we have set in the course of play:

    func clearGame() {

    }

Define properties

We will need a variable to store the sequence. The obvious type here is an Array of Color values. We will initialize it to be empty at the beginning of the app's lifecycle. Add the following code above the extendSequence function:

    class GameLogic {

        var sequence: [Color] = []

        func extendSequence() {

Our GameLogic class is now a reflection of the requirements that we laid out earlier, and we can be confident that we have constructed a rugged frame to which we can add the logic that will make it perform its duties.

Check your code

Your code will now look like this:

    import Foundation

    enum Color {
        case Red, Yellow, Blue, Green
    }

    enum GuessResult {
        case GuessCorrect, GuessWrong, GuessComplete
    }

    class GameLogic {

        var sequence: [Color] = []

        func extendSequence() {

        }

        func evaluateColor(color: Color) -> GuessResult {
            return .GuessCorrect
        }

        func clearGame() {

        }
    }

Check that there are no compiler errors or warnings. All good? Splendid! On to the Interface Controller now...

Summary

In this article, you laid the groundwork for the rest of the app.
You did so in a way that gives you the best possible chance of producing code that is robust, easy to maintain, and easy to comprehend when you return to it after six months on a surfing holiday.

Using JavaScript with HTML

Packt
12 Jan 2016
13 min read
In this article by Syed Omar Faruk Towaha, author of the book JavaScript Projects for Kids, we will discuss HTML, the HTML canvas, implementing JavaScript code on our HTML pages, and a few JavaScript operations.

HTML

HTML is a markup language. What does that mean? Well, a markup language processes and presents text using specific codes for formatting, styling, and layout design. There are lots of markup languages; for example, Business Narrative Markup Language (BNML), ColdFusion Markup Language (CFML), Opera Binary Markup Language (OBML), Systems Biology Markup Language (SBML), Virtual Human Markup Language (VHML), and so on. However, on the modern web, we use HTML. HTML is based on Standard Generalized Markup Language (SGML), which was basically used to design document papers. There are a number of versions of HTML; HTML 5 is the latest version, and it is the one we will use throughout this book.

Before you start learning HTML, think about your favorite website. What does the website contain? A few web pages? You may see some text, a few images, one or two text fields, buttons, and some more elements on each of the web pages. Each of these elements is formatted by HTML.

Let me introduce you to a web page. In your Internet browser, go to https://www.google.com. You will see a page similar to the following image. The first thing that you will see at the top of your browser is the title of the web page. Here, the marked box, 1, is the title of the web page that we loaded. The second box, 2, indicates some links or some text. The word Google in the middle of the page is an image. The third box, 3, indicates two buttons. Can you tell me what Sign in at the top right of the page is? Yes, it is a button.

Let's demonstrate the basic structure of HTML. The term tag will be used frequently to demonstrate the structure. An HTML tag is nothing but a few predefined words between the less-than sign (<) and the greater-than sign (>). Therefore, the structure of a tag is <WORD>, where WORD is the predefined text that is recognized by Internet browsers. This type of tag is called an open tag. There is another type of tag, known as a close tag, whose structure is </WORD>; you just have to put a forward slash after the less-than sign. After this section, you will be able to make your own web page with a few texts using HTML. The structure of an HTML page is similar to the following image, which has eight tags. Let me introduce all these tags with their activities:

1: This is the <html> tag, which is an open tag, and it closes at line 15 with the </html> tag. These tags tell your Internet browser that all the text and scripts within them are an HTML document.

2: This is the <head> tag, which is an open tag, and it closes at line 7 with the </head> tag. These tags contain the title, script, style, and metadata of a web page.

3: This is the <title> tag, which closes at line 6 with the </title> tag. This tag contains the title of a web page. The previous image had the title Google. To show this in the web browser, you need to type it like this:

    <title> Google </title>

4: This is the close tag of the <title> tag.

5: This is the closing tag of the <head> tag.

6: This is the <body> tag, which closes at line 13 with the </body> tag. Whatever you can see on a web page is written between these two tags. Every element, image, link, and so on is formatted here.
To see This is a web page in your browser, you need to type something similar to the following:

    <body> This is a web page </body>

7: The </body> tag closes here.

8: The </html> tag is closed here.

Your first webpage

You have just learned the eight basic tags of an HTML page. You can now make your own web page. How? Why not try it with me? Open your text editor and press Ctrl + N, which will open a new untitled file. Type the following HTML code on the blank page:

    <html>
      <head>
        <title>
          My Webpage!
        </title>
      </head>
      <body>
        This is my webpage :)
      </body>
    </html>

Then, press Ctrl + Shift + S, which will prompt you to save your code somewhere on your computer. Type a suitable name in the File Name: field. I would like to name my HTML file webpage, therefore I typed webpage.html. You may be wondering why I added an .html extension. As this is an HTML document, you need to add .html or .htm after the name that you give to your web page. Press the Save button; this will create an HTML document on your computer. Go to the directory where you saved your HTML file.

Remember that you can give your web page any name. However, this name will not be visible in your browser; it is not the title of your web page. It is also good practice not to keep a blank space in your web page's name. Consider that you want to name your HTML file This is my first webpage.html. Your computer will have no trouble showing the result in Internet browsers; however, when your website is on a server, this name may cause a problem. Therefore, I suggest that you use an underscore (_) wherever you would need to add a space, similar to the following: This_is_my_first_webpage.html.

Now, double-click on the file. You will see your first web page in your Internet browser! You typed My Webpage! between the <title> and </title> tags, which is why your browser shows this in the first selection box, 1. Also, you typed This is my webpage :) between the <body> and </body> tags; therefore, you can see this text in your browser in the second selection box, 2. Congratulations! You created your first web page!

You can edit your code and the other text of the webpage.html file by right-clicking on the file and selecting Open with Atom. You must save (Ctrl + S) your code and text before reopening the file in your browser.

Implementing Canvas

To add a canvas to your HTML page, you need to define the height and width of your canvas in the <canvas> and </canvas> tags, as shown in the following:

    <html>
      <head>
        <title>Canvas</title>
      </head>
      <body>
        <canvas id="canvasTest" width="200" height="100"
          style="border:2px solid #000;">
        </canvas>
      </body>
    </html>

We have defined the canvas id as canvasTest, which will be used to play with the canvas. We used inline CSS on our canvas; a 2-pixel solid border gives a better view of the canvas.

Adding JavaScript

Now, we are going to add a few lines of JavaScript to our canvas. We need to add our JavaScript just after the <canvas>…</canvas> tags, in the <script> and </script> tags.
Drawing a rectangle

To test our canvas, let's draw a rectangle in the canvas by typing the following code:

    <script type="text/javascript">
      var canvas = document.getElementById("canvasTest"); //called our canvas by id
      var canvasElement = canvas.getContext("2d"); //made our canvas 2D
      canvasElement.fillStyle = "black"; //filled the canvas black
      canvasElement.fillRect(10, 10, 50, 50); //created a rectangle
    </script>

In the script, we declared two JavaScript variables. The canvas variable is used to hold the content of our canvas, using the canvas ID that we used in our <canvas>…</canvas> tags. The canvasElement variable is used to hold the context of the canvas. We set fillStyle to black so that the rectangle we want to draw becomes black when filled. We used canvasElement.fillRect(x, y, w, h); for the shape of the rectangle, where x is the distance of the rectangle from the x axis and y is the distance of the rectangle from the y axis. The w and h parameters are the width and height of the rectangle, respectively. The full code is similar to the following:

    <html>
      <head>
        <title>Canvas</title>
      </head>
      <body>
        <canvas id="canvasTest" width="200" height="100"
          style="border:2px solid #000;">
        </canvas>
        <script type="text/javascript">
          var canvas = document.getElementById("canvasTest"); //called our canvas by id
          var canvasElement = canvas.getContext("2d"); //made our canvas 2D
          canvasElement.fillStyle = "black"; //filled the canvas black
          canvasElement.fillRect(10, 10, 50, 50); //created a rectangle
        </script>
      </body>
    </html>

Drawing a line

To draw a line in the canvas, you need to insert the following code in your <script> and </script> tags:

    <script type="text/javascript">
      var c = document.getElementById("canvasTest");
      var canvasElement = c.getContext("2d");
      canvasElement.moveTo(0, 0);
      canvasElement.lineTo(100, 100);
      canvasElement.stroke();
    </script>

Here, canvasElement.moveTo(0,0); is used to make our line start from the (0,0) coordinate of our canvas. The canvasElement.lineTo(100,100); statement is used to make the line diagonal, and the canvasElement.stroke(); statement is used to make the line visible.

A quick exercise (one possible solution is sketched below):

- Draw a line using canvas and JavaScript that is parallel to the y axis of the canvas.
- Draw a rectangle with a height of 300 px and a width of 200 px, and draw a line on the same canvas touching the rectangle.
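Here is one possible solution to the exercise; the canvas size and the coordinates are arbitrary choices, and yours may differ:

    <html>
      <head>
        <title>Canvas Exercise</title>
      </head>
      <body>
        <canvas id="canvasTest" width="400" height="400"
          style="border:2px solid #000;">
        </canvas>
        <script type="text/javascript">
          var canvas = document.getElementById("canvasTest");
          var canvasElement = canvas.getContext("2d");

          // 1. A line parallel to the y axis: x stays constant.
          canvasElement.beginPath();
          canvasElement.moveTo(350, 0);
          canvasElement.lineTo(350, 400);
          canvasElement.stroke();

          // 2. A rectangle 200 px wide and 300 px high...
          canvasElement.fillStyle = "black";
          canvasElement.fillRect(50, 50, 200, 300);

          // ...and a line touching its top-right corner at (250, 50).
          canvasElement.beginPath();
          canvasElement.moveTo(250, 50);
          canvasElement.lineTo(400, 0);
          canvasElement.stroke();
        </script>
      </body>
    </html>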
Assignment operators

An assignment operator assigns a value to an operand. I believe you already know about assignment operators, don't you? Well, you have used an equals sign (=) between a variable and its value; by doing this, you assigned the value to the variable. Let's see the following example:

    var name = "Sherlock Holmes";

The Sherlock Holmes string is assigned to the name variable. You have already learned about the increment and decrement operators. Can you tell me what the output of the following code will be?

    var x = 3;
    x *= 2;
    document.write(x);

The output will be 6. Do you remember why? The x *= 2; statement is the same as x = x * 2;. As x is equal to 3 and is then multiplied by 2, the final number (3 x 2 = 6) is assigned to the same x variable. That's why we get the output shown in the following screenshot.

Let's perform the following exercise. What is the output of this code?

    var w = 32;
    var x = 12;
    var y = 9;
    var z = 5;
    w++;
    w--;
    x*2;
    y = x;
    y--;
    z%2;
    document.write("w = "+w+", x = "+x+", y = "+y+", z = "+z);

The output that we get is w = 32, x = 12, y = 11, z = 5. Note that x*2; and z%2; compute values but do not assign them back, so x and z remain unchanged.

JavaScript comparison and logical operators

If you want to do something logical and compare two numbers or variables in JavaScript, you need to use a few comparison and logical operators. The following are a few examples of the comparison operators:

    Operator    Description
    ==          Equal to
    !=          Not equal to
    >           Greater than
    <           Less than
    >=          Greater than or equal to
    <=          Less than or equal to

An example of each operator is shown in the following screenshot.

Object-oriented programming

According to mozilla.org, "Object-oriented programming (OOP) is a programming paradigm that uses abstraction to create models based on the real world. OOP uses several techniques from previously established paradigms, including modularity, polymorphism, and encapsulation." Nicholas C. Zakas states that "OOP languages typically are identified through their use of classes to create multiple objects that have the same properties and methods."

You have probably assumed that JavaScript is an object-oriented programming language. Yes, you are absolutely right. Let's see why. We call a computer programming language object-oriented if it has the following features:

- Inheritance
- Polymorphism
- Encapsulation
- Abstraction

Before going any further, let's discuss objects. We create objects in JavaScript in the following manner:

    var person = new Object();
    person.name = "Harry Potter";
    person.age = 22;
    person.job = "Magician";

We created an object for person and added a few properties to it. If we want to access any property of the object, we need to call the property. Consider that you want a popup with the name property of the preceding person object. You can do this with the following method:

    person.callName = function(){
      alert(this.name);
    };

We can also write the preceding code as follows:

    var person = {
      name: "Harry Potter",
      age: 22,
      job: "Magician",
      callName: function(){
        alert(this.name);
      }
    };

Inheritance in JavaScript

To inherit means to derive something (characteristics, quality, and so on) from one's parents or ancestors. In programming languages, when a class or an object is based on another class or object in order to maintain the same behavior as the parent class or object, this is known as inheritance. We can also say that this is the concept of gaining properties or behaviors from something else. Suppose X inherits something from Y; it is like X is a type of Y.

JavaScript has this inheritance capability. Let's see an example. A bird inherits from animal, as a bird is a type of animal; therefore, a bird can do the same things as an animal. This kind of relationship in JavaScript is a little complex and needs special syntax. We need to use a special object called prototype, which assigns properties to a type. We need to remember that only functions have prototypes. Our Animal function should look similar to the following:

    function Animal(){
      //We can code here.
    };

To add a few properties to the function, we need to add a prototype, as shown in the following:

    Animal.prototype.eat = function(){
      alert("Animal can eat.");
    };
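As a sketch of the bird/animal relationship described above (Bird and its fly() method are illustrative names, not from the book's code):

    function Animal() {
      //Base type.
    }

    Animal.prototype.eat = function() {
      alert("Animal can eat.");
    };

    function Bird() {
      //A bird is a type of animal.
    }

    //Point Bird's prototype chain at Animal's prototype,
    //so every Bird instance inherits eat().
    Bird.prototype = Object.create(Animal.prototype);
    Bird.prototype.constructor = Bird;

    Bird.prototype.fly = function() {
      alert("Bird can fly.");
    };

    var sparrow = new Bird();
    sparrow.eat(); //inherited from Animal
    sparrow.fly(); //defined on Bird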
Summary

In this article, you learned how to write HTML code, implement JavaScript code within an HTML file, and draw on the HTML canvas. You also learned a few arithmetic operations with JavaScript. The sections in this article are taken from different chapters of the book, so the flow may not feel continuous. I hope you read the original book and practice the code that we discussed here.

Introduction to DevOps

Packt
12 Jan 2016
7 min read
In this article by Joakim Verona, the author of the book Practical DevOps, we will be introduced to DevOps. An important part of DevOps is being able to explain to coworkers in your organization what DevOps is and what it isn't. The faster you can get everyone aboard the DevOps train, the faster you can get to the part where you do the actual technical implementation!

Introducing DevOps

DevOps is, by definition, a field that spans several disciplines. It is a field that is very practical and hands-on, but at the same time, you must understand both the technical background and the non-technical cultural aspects. The word DevOps is a combination of the words development and operations. This wordplay already gives us a hint of the basic idea behind DevOps: it is a practice where collaboration between different disciplines of software development is encouraged.

The DevOps movement has its roots in Agile software development principles. The Agile Manifesto was written in 2001 by a number of individuals wanting to improve the then-current status quo of system development and find new ways of working in the software development industry. The following is an excerpt from the Agile Manifesto, the now-classic text, which is available on the web at http://agilemanifesto.org/:

"Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more."

In light of this, DevOps relates to the first principle, "Individuals and interactions over processes and tools." This might seem a fairly obviously beneficial way to work, so why do we even have to state it? Well, if you have ever worked in any large organization, you will know that the opposite principle often seems to be in operation instead. Walls between different parts of an organization tend to form easily, even in smaller organizations, where at first it would appear impossible for such walls to form.

DevOps, then, emphasizes that interactions between individuals are very important and that technology can assist in making these interactions happen and in tearing down the walls inside organizations. This might seem counterintuitive, given that the first principle favors interaction between people over tools, but in the author's opinion, any tool can have several effects when used. If we use our tools properly, they can facilitate all of the desired properties of an Agile workplace.

A very simple example is the choice of systems used to report bugs. Quite often, development teams and quality assurance teams use different systems to handle tasks and bugs. This creates unnecessary friction between the teams and further separates them when they should really be focusing on working together. The operations team might in turn use a third system to handle requests for deployment to the organization's servers. An engineer with a DevOps mindset, on the other hand, will immediately recognize all three systems as workflow systems with similar properties. It should be possible for everyone in the three different teams to use the same system, perhaps tweaked to generate different views for the different roles. A further benefit is lower maintenance costs, since three systems are replaced by one.
Another core goal of DevOps is automation and continuous delivery. Simply put, automating repetitive and tedious tasks leaves more time for human interaction, where true value can be created.

How fast is fast?

The turnaround for DevOps processes must be fast. We need to consider time to market in the larger perspective and simply stay focused on our tasks in the smaller perspective. This line of thought is also held by the Continuous Delivery movement. As with many things Agile, many of the ideas in DevOps and Continuous Delivery are in fact different names for the same basic concepts. There really isn't any contention between the two; DevOps and Continuous Delivery are two sides of the same coin.

DevOps engineers work on making enterprise processes faster, more efficient, and more reliable. Repetitive, error-prone manual labor is removed whenever possible. It's easy, however, to lose track of the goal when working on DevOps implementations. Doing nothing faster is of no use to anyone; instead, we must keep track of delivering increased business value. For instance, increased communication between roles in the organization has clear value. Your product owners might be wondering how the development process is going and be eager to have a look. In this situation, it is useful to be able to deliver incremental improvements of code to the test environments quickly and efficiently. In the test environment, the involved stakeholders, such as product owners and, of course, the quality assurance teams, can follow the progress of the development process.

Another way to look at it is this: if you ever feel yourself losing focus because of needless waiting, something is wrong with your processes or your tooling. If you find yourself watching videos of robots shooting balloons during compile time, your compile times are too long! The same is true for teams idling while waiting for deploys, and so on. This idling is, of course, even more expensive than that of a single individual. While robot shooting-practice videos are fun, software development is inspiring too! We should help focus creative potential by eliminating unnecessary overhead.

The Agile wheel of wheels

There are several different cycles in Agile development, from the portfolio level through the Scrum and Kanban cycles and down to the continuous integration cycle. The emphasis on the cadence at which work happens differs a bit depending on which Agile framework you are working with. Kanban emphasizes the 24-hour cycle and is popular in operations teams. Scrum cycles can be between two and four weeks and are often used by development teams using the Scrum Agile process. Longer cycles are also common; called program increments, they span several Scrum Sprint cycles in the Scaled Agile Framework.

DevOps must be able to support all these cycles. This is quite natural given the central theme of DevOps: cooperation between disciplines in an Agile organization. The most obvious and measurably concrete benefits of DevOps occur in the shorter cycles, which in turn make the longer cycles more efficient. Take care of the pennies, and the pounds will take care of themselves, as the old adage goes. Here are some examples of how DevOps can benefit Agile cycles: deployment systems, maintained by DevOps engineers, make the deliveries at the end of Scrum cycles faster and more efficient. These can happen with a periodicity of two to four weeks.
In organizations where deployments are done mostly by hand, the time to deploy can be several days. Organizations with these inefficient deployment processes will benefit greatly from a DevOps mindset. The Kanban cycle is twenty-four hours, and it's therefore obvious that the deployment cycle needs to be much faster than that if we are to succeed with Kanban. A well-designed DevOps Continuous Delivery pipeline can deploy code from being committed to the code repository to production in the order of minutes, depending on the size of the change.

Summary

This article presented a brief overview of the background of the DevOps movement. We discussed the history of DevOps and its roots in development and operations, as well as in the Agile movement.

Façade Pattern – Being Adaptive with Façade

Packt
12 Jan 2016
11 min read
In this article by Chetan Giridhar, author of the book Learning Python Design Patterns - Second Edition, we will get introduced to the Façade design pattern and how it is used in software application development. We will work with a sample use case and implement it in Python v3.5. In brief, we will cover the following topics in this article:

- An understanding of the Façade design pattern with a UML diagram
- A real-world use case with the Python v3.5 code implementation
- The Façade pattern and the principle of least knowledge

Understanding the Façade design pattern

Façade generally refers to the face of a building, especially an attractive one. It can also refer to a behavior or appearance that gives a false idea of someone's true feelings or situation. When people walk past a façade, they can appreciate the exterior face but aren't aware of the complexities of the structure within. This is how the Façade pattern is used: it hides the complexities of the internal system and provides an interface to the client that can access the system in a very simplified way.

Consider the example of a storekeeper. When you, as a customer, visit a store to buy certain items, you're not aware of the layout of the store. You typically approach the storekeeper, who is well aware of the store system. Based on your requirements, the storekeeper picks up the items and hands them over to you. Isn't this easy? The customer need not know how the store looks, and s/he gets the job done through a simple interface, the storekeeper.

The Façade design pattern essentially does the following:

- It provides a unified interface to a set of interfaces in a subsystem and defines a high-level interface that helps the client use the subsystem in an easy way.
- It represents a complex subsystem with a single interface object. It doesn't encapsulate the subsystem, but actually combines the underlying subsystems.
- It promotes decoupling the implementation from its multiple clients.

A UML class diagram

We will now discuss the Façade pattern with the help of the following UML diagram. As we observe the UML diagram, you'll realize that there are three main participants in this pattern:

- Façade: The main responsibility of a façade is to wrap up a complex group of subsystems so that it can provide a pleasing look to the outside world.
- System: This represents a set of varied subsystems that make the whole system compound and difficult to view or work with.
- Client: The client interacts with the Façade so that it can easily communicate with the subsystem and get the work completed. It doesn't have to bother about the complex nature of the system.

You will now learn a little more about the three main participants from the data structure's perspective.

Façade

The following points will give us a better idea of the Façade:

- It is an interface that knows which subsystems are responsible for a request.
- It delegates the client's requests to the appropriate subsystem objects using composition.
- For example, if the client is looking for some work to be accomplished, it need not go to individual subsystems; it can simply contact the interface (the Façade) that gets the work done.

System

In the Façade world, the System is an entity that performs the following:

- It implements subsystem functionality and is represented by a class. Ideally, the System is represented by a group of classes that are responsible for different operations.
- It handles the work assigned by the Façade object but has no knowledge of the Façade and keeps no reference to it. For instance, when the client requests a certain service from the Façade, the Façade chooses the right subsystem that delivers the service based on the type of service.

Client

Here's how we can describe the client:

- The client is a class that instantiates the Façade.
- It makes requests to the Façade to get the work done from the subsystems.

Implementing the Façade pattern in the real world

To demonstrate the applications of the Façade pattern, let's take an example that many of us have experienced. Consider that there is a marriage in your family and you are in charge of all the arrangements. Whoa! That's a tough job on your hands. You have to book a hotel or venue for the marriage, talk to a caterer for the food arrangements, organize a florist for all the decorations, and finally handle the musical arrangements expected for the event. In yesteryears, you'd have done all this by yourself: talking to the relevant folks, coordinating with them, negotiating on the pricing. But now, life is simpler: you go and talk to an event manager who handles this for you. S/he will make sure that they talk to the individual service providers and get the best deal for you.

From the Façade pattern perspective, we will have the following three main participants:

- Client: It's you, who needs all the marriage preparations to be completed in time before the wedding. The arrangements should be top class, and the guests should love the celebrations.
- Façade: The event manager, who's responsible for talking to all the folks that need to work on specific arrangements such as food, flower decorations, and others.
- Subsystems: These represent the systems that provide services such as catering, hotel management, and flower decorations.

Let's develop an application in Python v3.5 and implement this use case. We start with the client first. It's you! Remember, you're the one who has been given the responsibility to make sure that the marriage preparations are done and that the event goes fine. However, you're being clever here and passing on the responsibility to the event manager, aren't you? Let's now look at the You class. In this example, you create an object of the EventManager class so that the manager can work with the relevant folks on the marriage preparations while you relax:

    class You(object):
        def __init__(self):
            print("You:: Whoa! Marriage Arrangements??!!!")

        def askEventManager(self):
            print("You:: Let's Contact the Event Manager\n\n")
            em = EventManager()
            em.arrange()

        def __del__(self):
            print("You:: Thanks to Event Manager, all preparations done! Phew!")

Let's now move ahead and talk about the Façade class. As discussed earlier, the Façade class simplifies the interface for the client. In this case, EventManager acts as a façade and simplifies the work for You. The Façade talks to the subsystems and does all the booking and preparations for the marriage on your behalf.
Here is the Python code for the EventManager class:

class EventManager(object):

    def __init__(self):
        print("Event Manager:: Let me talk to the folks\n")

    def arrange(self):
        self.hotelier = Hotelier()
        self.hotelier.bookHotel()

        self.florist = Florist()
        self.florist.setFlowerRequirements()

        self.caterer = Caterer()
        self.caterer.setCuisine()

        self.musician = Musician()
        self.musician.setMusicType()

Now that we're done with the Façade and client, let's dive into the subsystems. We have developed the following classes for this scenario:

Hotelier is for the hotel bookings. It has a method to check whether the hotel is free on that day (__isAvailable) and, if it is free, a method for booking the hotel (bookHotel).
The Florist class is responsible for flower decorations. Florist has the setFlowerRequirements() method to be used to set the expectations on the kind of flowers needed for the marriage decoration.
The Caterer class is used to deal with the caterer and is responsible for the food arrangements. Caterer exposes the setCuisine() method to accept the type of cuisine to be served at the marriage.
The Musician class is designed for musical arrangements at the marriage. It uses the setMusicType() method to understand the music requirements for the event.

class Hotelier(object):
    def __init__(self):
        print("Arranging the Hotel for Marriage? --")

    def __isAvailable(self):
        print("Is the Hotel free for the event on given day?")
        return True

    def bookHotel(self):
        if self.__isAvailable():
            print("Registered the Booking\n\n")


class Florist(object):
    def __init__(self):
        print("Flower Decorations for the Event? --")

    def setFlowerRequirements(self):
        print("Carnations, Roses and Lilies would be used for Decorations\n\n")


class Caterer(object):
    def __init__(self):
        print("Food Arrangements for the Event --")

    def setCuisine(self):
        print("Chinese & Continental Cuisine to be served\n\n")


class Musician(object):
    def __init__(self):
        print("Musical Arrangements for the Marriage --")

    def setMusicType(self):
        print("Jazz and Classical will be played\n\n")


you = You()
you.askEventManager()

The output of the preceding code is given here: In the preceding code example:

The EventManager class is the Façade that simplifies the interface for You.
EventManager uses composition to create objects of the subsystems such as Hotelier, Caterer, and others.

The principle of least knowledge

As you have learned in the initial parts of this article, the Façade provides a unified system that makes subsystems easy to use. It also decouples the client from the subsystem of components. The design principle that is employed behind the Façade pattern is the principle of least knowledge. The principle of least knowledge guides us to reduce the interactions between objects to just a few friends that are close enough to you. In real terms, it means the following:

When designing a system, for every object created, one should look at the number of classes that it interacts with and the way in which the interaction happens.
Following the principle, make sure that we avoid situations where many tightly coupled classes are created. If there are a lot of dependencies between classes, the system becomes hard to maintain.
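To see what following and breaking the principle looks like in code, here is a minimal sketch; it is not from the book, and the Menu class and cuisine attribute are invented purely for illustration:

class Menu:
    def __init__(self):
        self.cuisine = "Chinese & Continental"

class Caterer:
    def __init__(self):
        self.menu = Menu()

class EventManager:
    """The facade: the only 'friend' the client needs to know."""
    def __init__(self):
        self._caterer = Caterer()

    def get_cuisine(self):
        # The facade walks the object chain internally on the client's behalf.
        return self._caterer.menu.cuisine

manager = EventManager()
# Breaks the principle: the client is coupled to EventManager, Caterer, and Menu.
print(manager._caterer.menu.cuisine)
# Follows the principle: the client talks to its one close friend, the facade.
print(manager.get_cuisine())

The chained call couples the client to every class along the path, so a change to Caterer or Menu ripples out to every client; the facade version confines that knowledge to EventManager.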
Any changes in one part of the system can lead to unintentional changes in other parts of the system, which means that the system is exposed to regressions; this should be avoided.

Summary

We began the article by first understanding the Façade design pattern and the context in which it's used. We understood the basis of Façade and how it is effectively used in software architecture. We looked at how the Façade design pattern creates a simplified interface for clients to use. It simplifies the complexity of subsystems so that the client benefits. The Façade doesn't encapsulate the subsystem, and the client is free to access the subsystems even without going through the Façade. You also learned the pattern with a UML diagram and a sample code implementation in Python v3.5. We understood the principle of least knowledge and how its philosophy governs the Façade design pattern.

Further resources on this subject:

Asynchronous Programming with Python [article]
Optimization in Python [article]
The Essentials of Working with Python Collections [article]
Interactive Crime Map Using Flask

Packt
12 Jan 2016
18 min read
In this article by Gareth Dwyer, author of the book, Flask By Example, we will cover how to set up a MySQL database on our VPS and creating a database for the crime data. We'll follow on from this by setting up a basic page containing a map and textbox. We'll see how to link Flask to MySQL by storing data entered into the textbox in our database. We won't be using an ORM for our database queries or a JavaScript framework for user input and interaction. This means that there will be some laborious writing of SQL and vanilla JavaScript, but it's important to fully understand why tools and frameworks exist, and what problems they solve, before diving in and using them blindly. (For more resources related to this topic, see here.) We'll cover the following topics: Introduction to SQL Databases Installing MySQL on our VPS Connecting to MySQL from Python and creating the database Connecting to MySQL from Flask and inserting data Setting up We'll create a new git repository for our new code base, since although some of the setup will be similar, our new project should be completely unrelated to our first one. If you need more help with this step, head back to the setup of the first project and follow the detailed instructions there. If you're feeling confident, see if you can do it just with the following summary: Head over to the website for bitbucket, GitHub, or whichever hosting platform you used for the first project. Log in and use their Create a new repository functionality. Name your repo crimemap, and take note of the URL you're given. On your local machine, fire up a terminal and run the following commands: mkdir crimemap cd crimemap git init git remote add origin <git repository URL> We'll leave this repository empty for now as we need to set up a database on our VPS. Once we have the database installed, we'll come back here to set up our Flask project. Understanding relational databases In its simplest form, a relational database management system, such as MySQL, is a glorified spreadsheet program, such as Microsoft Excel: We store data in rows and columns. Every row is a "thing" and every column is a specific piece of information about the thing in the relevant row. I put "thing" in inverted commas because we're not limited to storing objects. In fact, the most common example, both in the real world and in explaining databases, is data about people. A basic database storing information about customers of an e-commerce website could look something like the following: ID First Name Surname Email Address Telephone 1 Frodo Baggins fbaggins@example.com +1 111 111 1111 2 Bilbo Baggins bbaggins@example.com +1 111 111 1010 3 Samwise Gamgee sgamgee@example.com +1 111 111 1001 If we look from left to right in a single row, we get all the information about one person. If we look at a single column from top to bottom, we get one piece of information (for example, an e-mail address) for everyone. Both can be useful—if we want to add a new person or contact a specific person, we're probably interested in a specific row. If we want to send a newsletter to all our customers, we're just interested in the e-mail column. So why can't we just use spreadsheets instead of databases then? Well, if we take the example of an e-commerce store further, we quickly see the limitations. If we want to store a list of all the items we have on offer, we can create another table similar to the preceding one, with columns such as "Item name", "Description", "Price", and "Quantity in stock". 
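To make this concrete, here is a rough sketch of how these two tables could be declared in SQL. The names and column types are illustrative guesses, not the schema we will actually build for the crime map later in this article:

CREATE TABLE customers (
    id INT NOT NULL AUTO_INCREMENT,
    first_name VARCHAR(50),
    surname VARCHAR(50),
    email VARCHAR(100),
    telephone VARCHAR(20),
    PRIMARY KEY (id)
);

CREATE TABLE items (
    id INT NOT NULL AUTO_INCREMENT,
    item_name VARCHAR(100),
    description VARCHAR(1000),
    price DECIMAL(10, 2),
    quantity_in_stock INT,
    PRIMARY KEY (id)
);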
Our model continues to be useful. But now, if we want to store a list of all the items Frodo has ever purchased, there's no good place to put the data. We could add 1000 columns to our customer table: "Purchase 1", "Purchase 2", and so on up to "Purchase 1000", and hope that Frodo never buys more than 1000 items. This isn't scalable or easy to work with: How do we get the description for the item Frodo purchased last Tuesday? Do we just store the item's name in our new column? What happens with items that don't have unique names? Soon, we realise that we need to think about it backwards. Instead of storing the items purchased by a person in the "Customers" table, we create a new table called "Orders" and store a reference to the customer in every order. Thus, an order knows which customer it belongs to, but a customer has no inherent knowledge of what orders belong to them. While our model still fits into a spreadsheet at the push of a button, as we grow our data model and data size, our spreadsheet becomes cumbersome. We need to perform complicated queries such as "I want to see all the items that are in stock and have been ordered at least once in the last 6 months and cost more than $10." Enter Relational database management systems (RDBMS). They've been around for decades and are a tried and tested way of solving a common problem—storing data with complicated relations in an organized and accessible manner. We won't be touching on their full capabilities in our crime map (in fact, we could probably store our data in a .txt file if we needed to), but if you're interested in building web applications, you will need a database at some point. So, let's start small and add the powerful MySQL tool to our growing toolbox. I highly recommend learning more about databases. If the taster you experience while building our current project takes your fancy, go read and learn about databases. The history of RDBMS is interesting, and the complexities and subtleties of normalization and database varieties (including NoSQL databases, which we'll see something of in our next project) deserve more study time than we can devote to them in a book that focuses on Python web development. Installing and configuring MySQL Installing and configuring MySQL is an extremely common task. You can therefore find it in prebuilt images or in scripts that build entire stacks for you. A common stack is called the LAMP (Linux, Apache, MySQL, and PHP) stack, and many VPS providers provide a one-click LAMP stack image. As we are already using Linux and have already installed Apache manually, after installing MySQL, we'll be very close to the traditional LAMP stack, just using the P for Python instead of PHP. In keeping with our goal of "education first", we'll install MySQL manually and configure it through the command line instead of installing a GUI control panel. If you've used MySQL before, feel free to set it up as you see fit. Installing MySQL on our VPS Installing MySQL on our server is quite straightforward. SSH into your VPS and run the following commands: sudo apt-get update sudo apt-get install mysql-server You should see an interface prompting you for a root password for MySQL. Enter a password of your choice and repeat it when prompted. 
Once the installation has completed, you can get a live SQL shell by typing the following command and entering the password you chose earlier: mysql –p We could create a database and schema using this shell, but we'll be doing that through Python instead, so hit Ctrl + C to terminate the MySQL shell if you opened it. Installing Python drivers for MySQL Because we want to use Python to talk to our database, we need to install another package. There are two main MySQL connectors for Python: PyMySql and MySqlDB. The first is preferable from a simplicity and ease-of-use point of view. It is a pure Python library, meaning that it has no dependencies. MySqlDB is a C extension, and therefore has some dependencies, but is, in theory, a bit faster. They work very similarly once installed. To install it, run the following (still on your VPS): sudo pip install pymysql Creating our crimemap database in MySQL Some knowledge of SQL's syntax will be useful for the rest of this article, but you should be able to follow either way. The first thing we need to do is create a database for our web application. If you're comfortable using a command-line editor, you can create the following scripts directly on the VPS as we won't be running them locally and this can make them easier to debug. However, developing over an SSH session is far from ideal, so I recommend that you write them locally and use git to transfer them to the server before running. This can make debugging a bit frustrating, so be extra careful in writing these scripts. If you want, you can get them directly from the code bundle that comes with this book. In this case, you simply need to populate the Password field correctly and everything should work. Creating a database setup script In the crimemap directory where we initialised our git repo in the beginning, create a Python file called db_setup.py, containing the following code: import pymysql import dbconfig connection = pymysql.connect(host='localhost', user=dbconfig.db_user, passwd=dbconfig.db_password) try: with connection.cursor() as cursor: sql = "CREATE DATABASE IF NOT EXISTS crimemap" cursor.execute(sql) sql = """CREATE TABLE IF NOT EXISTS crimemap.crimes ( id int NOT NULL AUTO_INCREMENT, latitude FLOAT(10,6), longitude FLOAT(10,6), date DATETIME, category VARCHAR(50), description VARCHAR(1000), updated_at TIMESTAMP, PRIMARY KEY (id) )""" cursor.execute(sql); connection.commit() finally: connection.close() Let’s take a look at what this code does. First, we import the pymysql library we just installed. We also import dbconfig, which we’ll create locally in a bit and populate with the database credentials (we don’t want to store these in our repository). Then, we create a connection to our database using localhost (because our database is installed on the same machine as our code) and the credentials that don’t exist yet. Now that we have a connection to our database, we can get a cursor. You can think of a cursor as being a bit like the blinking object in your word processor that indicates where text will appear when you start typing. A database cursor is an object that points to a place in the database where we want to create, read, update, or delete data. Once we start dealing with database operations, there are various exceptions that could occur. 
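To give you an idea of what such an exception looks like before we handle them properly, here is a hedged sketch; the credentials are deliberately wrong, and the exact error number and message will vary with your setup:

import pymysql

try:
    connection = pymysql.connect(host='localhost',
                                 user='root',
                                 passwd='not-the-real-password')
except pymysql.err.OperationalError as e:
    # Typically something like (1045, "Access denied for user 'root'@'localhost' ...")
    print(e)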
We'll always want to close our connection to the database, so we create a cursor (and do all subsequent operations) inside a try block with a connection.close() in a finally block (the finally block will get executed whether or not the try block succeeds). The cursor is also a resource, so we'll grab one and use it in a with block so that it'll automatically be closed when we're done with it. With the setup done, we can start executing SQL code.

Creating the database

SQL reads similarly to English, so it's normally quite straightforward to work out what existing SQL does, even if it's a bit more tricky to write new code. Our first SQL statement creates a database (crimemap) if it doesn't already exist (this means that if we come back to this script, we can leave this line in without deleting the entire database every time). We create our first SQL statement as a string and use the variable sql to store it. Then we execute the statement using the cursor we created.

Using the database setup script

We save our script locally and push it to the repository using the following commands:

git add db_setup.py
git commit -m "database setup script"
git push origin master

We then SSH to our VPS and clone the new repository to our /var/www directory using the following commands:

ssh user@123.456.789.123
cd /var/www
git clone <your-git-url>
cd crimemap

Adding credentials to our setup script

Now, we still don't have the credentials that our script relies on. We'll do the following things before using our setup script:

Create the dbconfig.py file with the database username and password.
Add this file to .gitignore to prevent it from being added to our repository.

The following are the steps to do so: Create and edit dbconfig.py using the nano command:

nano dbconfig.py

Then, type the following (using the password you chose when you installed MySQL); note that the variable names must match the dbconfig.db_user and dbconfig.db_password references in our setup script:

db_user = "root"
db_password = "<your-mysql-password>"

Save it by hitting Ctrl + X and entering Y when prompted. Now, use similar nano commands to create, edit, and save .gitignore, which should contain this single line:

dbconfig.py

Running our database setup script

With that done, you can run the following command:

python db_setup.py

Assuming everything goes smoothly, you should now have a database with a table to store crimes. Python will output any SQL errors, allowing you to debug if necessary. If you make changes to the script from the server, run the same git add, git commit, and git push commands that you did from your local machine. That concludes our preliminary database setup! Now we can create a basic Flask project that uses our database.

Creating an outline for our Flask app

We're going to start by building a skeleton of our crime map application. It'll be a basic Flask application with a single page that:

Displays all data in the crimes table of our database
Allows users to input data and stores this data in the database
Has a "clear" button that deletes all the previously input data

Although what we're going to be storing and displaying can't really be described as "crime data" yet, we'll be storing it in the crimes table that we created earlier. We'll just be using the description field for now, ignoring all the other ones. The process to set up the Flask application is very similar to what we used before. We're going to separate out the database logic into a separate file, leaving our main crimemap.py file for the Flask setup and routing.

Setting up our directory structure

On your local machine, change to the crimemap directory.
If you created the database setup script on the server or made any changes to it there, then make sure you sync the changes locally. Then, create the templates directory and touch the files we're going to be using, as follows: cd crimemap git pull origin master mkdir templates touch templates/home.html touch crimemap.py touch dbhelper.py Looking at our application code The crimemap.py file contains nothing unexpected and should be entirely familiar from our headlines project. The only thing to point out is the DBHelper() function, whose code we'll see next. We simply create a global DBHelper() function right after initializing our app and then use it in the relevant methods to grab data from, insert data into, or delete all data from the database. from dbhelper import DBHelper from flask import Flask from flask import render_template from flask import request app = Flask(__name__) DB = DBHelper() @app.route("/") def home(): try: data = DB.get_all_inputs() except Exception as e: print e data = None return render_template("home.html", data=data) @app.route("/add", methods=["POST"]) def add(): try: data = request.form.get("userinput") DB.add_input(data) except Exception as e: print e return home() @app.route("/clear") def clear(): try: DB.clear_all() except Exception as e: print e return home() if __name__ == '__main__': app.run(debug=True) Looking at our SQL code There's a little bit more SQL to learn from our database helper code. In dbhelper.py, we need the following: import pymysql import dbconfig class DBHelper: def connect(self, database="crimemap"): return pymysql.connect(host='localhost', user=dbconfig.db_user, passwd=dbconfig.db_password, db=database) def get_all_inputs(self): connection = self.connect() try: query = "SELECT description FROM crimes;" with connection.cursor() as cursor: cursor.execute(query) return cursor.fetchall() finally: connection.close() def add_input(self, data): connection = self.connect() try: query = "INSERT INTO crimes (description) VALUES ('{}');".format(data) with connection.cursor() as cursor: cursor.execute(query) connection.commit() finally: connection.close() def clear_all(self): connection = self.connect() try: query = "DELETE FROM crimes;" with connection.cursor() as cursor: cursor.execute(query) connection.commit() finally: connection.close() As in our setup script, we need to make a connection to our database and then get a cursor from our connection in order to do anything meaningful. Again, we perform all our operations in try: ...finally: blocks in order to ensure that the connection is closed. In our helper code, we see three of the four main database operations. CRUD (Create, Read, Update, and Delete) describes the basic database operations. We are either creating and inserting new data or reading, modifying, or deleting existing data. We have no need to update data in our basic app, but creating, reading, and deleting are certainly useful. Creating our view code Python and SQL code is fun to write, and it is indeed the main part of our application. However, at the moment, we have a house without doors or windows—the difficult and impressive bit is done, but it's unusable. Let's add a few lines of HTML to allow the world to interact with the code we've written. 
In /templates/home.html, add the following (note that the head element must come before the body):

<html>
<head>
    <title>Crime Map</title>
</head>
<body>
    <h1>Crime Map</h1>
    <form action="/add" method="POST">
        <input type="text" name="userinput">
        <input type="submit" value="Submit">
    </form>
    <a href="/clear">clear</a>
    {% for userinput in data %}
        <p>{{userinput}}</p>
    {% endfor %}
</body>
</html>

There's nothing we haven't seen before. We have a form with a single text input box to add data to our database by calling the /add function of our app, and directly below it, we loop through all the existing data and display each piece within <p> tags.

Running the code on our VPS

Finally, we just need to make our code accessible to the world. This means pushing it to our git repo, pulling it onto the VPS, and configuring Apache to serve it. Run the following commands locally:

git add .
git commit -m "Skeleton CrimeMap"
git push origin master
ssh <username>@<vps-ip-address>

And on your VPS, use the following commands:

cd /var/www/crimemap
git pull origin master

Now, we need a .wsgi file to link our Python code to Apache:

nano crimemap.wsgi

The .wsgi file should contain the following:

import sys
sys.path.insert(0, "/var/www/crimemap")
from crimemap import app as application

Hit Ctrl + X and then Y when prompted to save. We also need to create a new Apache .conf file and set this as the default (instead of the headlines.conf file that is our current default), as follows:

cd /etc/apache2/sites-available
nano crimemap.conf

This file should contain the following:

<VirtualHost *>
    ServerName example.com
    WSGIScriptAlias / /var/www/crimemap/crimemap.wsgi
    WSGIDaemonProcess crimemap
    <Directory /var/www/crimemap>
        WSGIProcessGroup crimemap
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>

This is so similar to the headlines.conf file we created for our previous project that you might find it easier to just copy that one and substitute code as necessary. Finally, we need to deactivate the old site (later on, we'll look at how to run multiple sites simultaneously off the same server) and activate the new one:

sudo a2dissite headlines.conf
sudo a2ensite crimemap.conf
sudo service apache2 reload

Now, everything should be working. If you copied the code out manually, it's almost certain that there's a bug or two to deal with. Don't be discouraged by this; remember that debugging is expected to be a large part of development! If necessary, do a tail -f on /var/log/apache2/error.log while you load the site in order to see any errors. If this fails, add some print statements to crimemap.py and dbhelper.py to narrow down the places where things are breaking. Once everything is working, you should be able to see the following in your browser: Notice how the data we get from the database is a tuple, which is why it is surrounded by brackets and has a trailing comma. This is because we selected only a single field (description) from our crimes table when we could, in theory, be dealing with many columns for each crime (and soon will be).

Summary

That's it for the introduction to our crime map project.

Resources for Article: Further resources on this subject: Web Scraping with Python [article] Python 3: Building a Wiki Application [article] Using memcached with Python [article]
Getting started with Ember.js – Part 2

Daniel Ochoa
11 Jan 2016
5 min read
In Part 1 of this blog, we got started with Ember.js by examining how to set up your development environment from beginning to end with Ember.js using ember-cli – Ember's build tool. Ember-cli minifies and concatenates your JavaScript, giving you a strong conventional project structure and a powerful add-on system for extensions. In this Part 2 post, I'll guide you through setting up a very basic todo-like Ember.js application to get your feet wet with actual Ember.js development.

Setting up a more detailed overview for the posts

Feel free to change the title of our app header (see Part 1). Go to 'app/templates/application.hbs' and change the wording inside the h2 tag to something like 'Funny posts' or anything you'd like. Let's change our app so when a user clicks on the title of a post, it will take them to a different route based on the id of the post, for example, /posts/1bbe3. By doing so, we are telling Ember to display a different route and template. Next, let's run the following on the terminal:

ember generate route post

This will modify our app/router.js file by creating a route file for our post and a template. Let's go ahead and open the app/router.js file to make sure it looks like the following:

import Ember from 'ember';
import config from './config/environment';

var Router = Ember.Router.extend({
  location: config.locationType
});

Router.map(function() {
  this.resource('posts');
  this.route('post', {path: '/posts/:post_id'});
});

export default Router;

In the router file, we make sure the new 'post' route has a specific path by passing it a second argument with an object that contains a key called path and a value of '/posts/:post_id'. The colon in that path means the second part of the path after /posts/ is dynamic. In this URL, we will be passing the id of the post so we can determine what specific post to load on our post route. (So far, we have posts and post routes, so don't get confused.) Now, let's go to app/templates/posts.hbs and make sure we only have the following:

<ul>
  {{#each model as |post|}}
    {{#link-to 'post' post tagName='li'}}
      {{post.title}}
    {{/link-to}}
  {{/each}}
</ul>

As you can see, we replaced our <li> element with an Ember helper called 'link-to'. What link-to does is generate for you the link for our single post route. The first argument is the name of the route, 'post'; the second argument is the actual post itself; and in the last part of the helper, we are telling Handlebars to render the link-to as a <li> element by providing the tagName property. Ember is smart enough to know that if you link to a route and pass it an object, your intent is to set the model on that route to a single post. Now open 'app/templates/post.hbs' and replace the contents with just the following:

{{model.title}}

Now if you refresh the app from '/posts' and click on a post title, you'll be taken to a different route and you'll see only the title of the post. What happens if you refresh the page at this URL? You'll see errors on the console and nothing will be displayed. This is because you arrived at this URL from the previous posts route, where you passed a single post as the argument to be the model for the current post route. When you hit refresh, you lose this step, so no model is set for the current route.
You can fix that by adding the following to 'app/routes/post.js':

import Ember from 'ember';

export default Ember.Route.extend({
  model(params) {
    return Ember.$.getJSON('https://www.reddit.com/tb/' + params.post_id + '.json?jsonp=?').then(result => {
      return result[0].data.children[0].data;
    });
  }
});

Now, whenever you refresh on a single post page, Ember will see that you don't have a model, so the model hook will be triggered on the route. In this case, it will grab the id of the post from the dynamic URL, which is passed as an argument to the model hook, and it will make a request to reddit for the relevant post. Notice that we are also returning the request promise and then filtering the results to only return the single post object we need. Change the app/templates/post.hbs template to the following:

<div class="title">
  <h1>{{model.title}}</h1>
</div>
<div class="image">
  <img src="{{model.preview.images.firstObject.source.url}}" height="400"/>
</div>
<div class="author">
  submitted by: {{model.author}}
</div>

Now, if you look at an individual post, you'll get the title, image, and author for the post. Congratulations, you've built your first Ember.js application with dynamic data and routes. Hopefully, you now have a better grasp and understanding of some basic concepts for building more ambitious web applications using Ember. About the Author: Daniel Ochoa is a senior software engineer at Frog with a passion for crafting beautiful web and mobile experiences. His current interests are Node.js, Ember.js, Ruby on Rails, iOS development with Swift, and the Haskell language. He can be found on Twitter @DanyOchoaOzz.
Cython Won't Bite

Packt
10 Jan 2016
9 min read
In this article by Philip Herron, the author of the book Learning Cython Programming - Second Edition, we see how Cython is much more than just a programming language. Its origin can be traced to Sage, the mathematics software package, where it was used to increase the performance of mathematical computations, such as those involving matrices. More generally, I tend to consider Cython as an alternative to Swig to generate really good Python bindings to native code. Language bindings have been around for years, and Swig was one of the first and best tools to generate bindings for multitudes of languages. Cython generates bindings for Python code only, and this single-purpose approach means it generates the best Python bindings you can get outside of doing it all manually; attempt the latter only if you're a Python core developer. For me, taking control of legacy software by generating language bindings is a great way to reuse any software package. Consider a legacy application written in C/C++; adding advanced modern features like a web server for a dashboard or a message bus is not a trivial thing to do. More importantly, Python comes with thousands of packages that have been developed, tested, and used by people for a long time, and they can do exactly that. Wouldn't it be great to take advantage of all of this code? With Cython, we can do exactly this, and I will demonstrate approaches with plenty of example code along the way. This article will be dedicated to the core concepts of using Cython, including compilation, and will provide a solid reference and introduction to Cython's core concepts. In this article, we will cover:

Installing Cython
Getting started - Hello World
Using distutils with Cython
Calling C functions from Python
Type conversion

(For more resources related to this topic, see here.)

Installing Cython

Since Cython is a programming language, we must install its respective compiler, which just so happens to be aptly named Cython. There are many different ways to install Cython. The preferred one would be to use pip:

$ pip install Cython

This should work on both Linux and Mac. Alternatively, you can use your Linux distribution's package manager to install Cython:

$ yum install cython     # will work on Fedora and CentOS
$ apt-get install cython # will work on Debian-based systems

On Windows, although there are a plethora of options available, following this wiki is the safest option to stay up to date: http://wiki.cython.org/InstallingOnWindows

Emacs mode

There is an emacs mode available for Cython. Although the syntax is nearly the same as Python, there are differences that conflict with simply using Python mode. You can choose to grab cython-mode.el from the Cython source code (inside the Tools directory). The preferred way of installing packages to emacs would be to use a package repository such as MELPA. To add the package repository to emacs, open your ~/.emacs configuration file and add the following code:

(when (>= emacs-major-version 24)
  (require 'package)
  (add-to-list 'package-archives
               '("melpa" . "http://melpa.org/packages/") t)
  (package-initialize))

Once you add this and reload your configuration to install the cython mode, you can simply run the following:

'M-x package-install RET cython-mode'

Once this is installed, you can activate the mode by adding this into your emacs config file:

(require 'cython-mode)

You can always activate the mode manually at any time with the following:

'M-x cython-mode RET'

Getting the code examples

Throughout this book, I intend to show real examples that are easy to digest to help you get a feel of the different things you can achieve with Cython. To access and download the code used, please clone the following repository:

$ git clone git://github.com/redbrain/cython-book.git

Getting started – Hello World

As you will see when running the Hello World program, Cython generates native Python modules. Therefore, while running any Cython code, you will reference it via a module import in Python. Let's build the module:

$ cd cython-book/chapter1/helloworld
$ make

You should have now created helloworld.so! This is a Cython module with the same name as the Cython source code file. While in the same directory as the shared object module, you can invoke this code by running a respective Python import:

$ python
Python 2.7.3 (default, Aug 1 2012, 05:16:07)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import helloworld
Hello World from cython!

As you can see from opening helloworld.pyx, it looks just like a normal Python Hello World application, but as previously stated, Cython generates modules. These modules need a name so that they can be correctly imported by the Python runtime. The Cython compiler simply uses the name of the source code file. It then requires us to compile this to the same shared object name. Overall, Cython source code files have the .pyx, .pxd, and .pxi extensions. For now, all we care about are the .pyx files; the others are for cimports and includes respectively within a .pyx module file. The following screenshot depicts the compilation flow required to have a callable native Python module: I wrote a basic makefile so that you can simply run make to compile these examples. Here's the code to do this manually:

$ cython helloworld.pyx
$ gcc/clang -g -O2 -fpic `python-config --cflags` -c helloworld.c -o helloworld.o
$ gcc/clang -shared -o helloworld.so helloworld.o `python-config --libs`

Using distutils with Cython

You can also compile this using Python distutils and cythonize. Open setup.py:

from distutils.core import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("helloworld.pyx")
)

Using the cythonize function as part of the ext_modules section will build any specified Cython source into an installable Python module. This will compile helloworld.pyx into the same shared library. This gives you the standard Python practice of distributing native modules with distutils.

Calling C functions from Python

We should be careful when talking about Python and Cython for clarity, since the syntax is so similar. Let's wrap a simple AddFunction in C and make it callable from Python. Firstly, open a file called AddFunction.c and write a simple function into it:

#include <stdio.h>

int AddFunction(int a, int b) {
    printf("look we are within your c code!\n");
    return a + b;
}

This is the C code we will call, which is just a simple function to add two integers. Now, let's get Python to call it.
Open a file called AddFunction.h, wherein we will declare our prototype:

#ifndef __ADDFUNCTION_H__
#define __ADDFUNCTION_H__

extern int AddFunction (int, int);

#endif //__ADDFUNCTION_H__

We need this so that Cython can see the prototype for the function we want to call. In practice, you will already have your headers in your own project with your prototypes and declarations already available. Open a file called AddFunction.pyx, and insert the following code into it:

cdef extern from "AddFunction.h":
    cdef int AddFunction(int, int)

Here, we have to declare what code we want to call. The cdef is a keyword signifying that this is from the C code that will be linked in. Now, we need a Python entry point:

def Add(a, b):
    return AddFunction(a, b)

This Add is a Python callable inside a PyAddFunction module. Again, I have provided a handy makefile to produce the module:

$ cd cython-book/chapter1/ownmodule
$ make
cython -2 PyAddFunction.pyx
gcc -g -O2 -fpic -c PyAddFunction.c -o PyAddFunction.o `python-config --includes`
gcc -g -O2 -fpic -c AddFunction.c -o AddFunction.o
gcc -g -O2 -shared -o PyAddFunction.so AddFunction.o PyAddFunction.o `python-config --libs`

Notice that AddFunction.c is compiled into the same PyAddFunction.so shared object. Now, let's call this AddFunction and check to see if C can add numbers correctly:

$ python
>>> from PyAddFunction import Add
>>> Add(1,2)
look we are within your c code!
3

Notice the print statement inside AddFunction.c::AddFunction and that the final result is printed correctly. Therefore, we know the control hit the C code and did the calculation in C, and not inside the Python runtime. This is a revelation of what is possible. Python can be cited as slow in some circumstances. Using this technique, it is possible for Python code to bypass its own runtime and run in an unsafe context, which is unrestricted by the Python runtime and much faster.

Type conversion

Notice that we had to declare a prototype inside the Cython source code PyAddFunction.pyx:

cdef extern from "AddFunction.h":
    cdef int AddFunction(int, int)

It lets the compiler know that there is a function called AddFunction, and that it takes two ints and returns an int. This is all the information the compiler needs to know, besides the host and target operating system's calling convention, in order to call this function safely. Then, we created the Python entry point, which is a Python callable that takes two parameters:

def Add(a, b):
    return AddFunction(a, b)

Inside this entry point, it simply returned the native AddFunction and passed the two Python objects as parameters. This is what makes Cython so powerful. Here, the Cython compiler must inspect the function call and generate code to safely try and convert these Python objects to native C integers. This becomes difficult when precision and potential overflow are taken into account, which just so happens to be a major use case since Cython handles it so well. Also, remember that this function returns an integer, and Cython also generates code to convert the integer return into a valid Python object.

Summary

Overall, we installed the Cython compiler, ran the Hello World example, and took into consideration that we need to compile all code into native shared objects. We also saw how to wrap native C code to be callable from Python, and how to do type conversion of parameters and return values to C code and back to Python.
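As a final, hedged illustration of the type conversion behavior described above (this session assumes you have built the PyAddFunction module from this article, and the exact traceback text may vary with your platform and Cython version), passing a Python integer that cannot fit into a C int fails cleanly instead of silently overflowing:

$ python
>>> from PyAddFunction import Add
>>> Add(1, 2)      # both arguments fit into a C int
look we are within your c code!
3
>>> Add(2**31, 1)  # too large for a 32-bit C int
Traceback (most recent call last):
  ...
OverflowError: value too large to convert to int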
Resources for Article: Further resources on this subject: Monte Carlo Simulation and Options [article] Understanding Cython [article] Scaling your Application Across Nodes with Spring Python's Remoting [article]
Getting started with Ember.js – Part 1

Daniel Ochoa
08 Jan 2016
9 min read
Ember.js is a fantastic framework for developers and designers alike for building ambitious web applications. As touted by its website, Ember.js is built for productivity. Designed with the developer in mind, its friendly API’s help you get your job done fast. It also makes all the trivial choices for you. By taking care of certain architectural choices, it lets you concentrate on your application instead of reinventing the wheel or focusing on already solved problems. With Ember.js you will be empowered to rapidly prototype applications with more features for less code. Although Ember.js follows the MVC (Model-View-Controller) design pattern, it’s been slowly moving to a more component centric approach of building web applications. On this part 1 of 2 blog posts, I’ll be talking about how to quickly get started with Ember.js. I’ll go into detail on how to set up your development environment from beginning to end so you can immediately start building an Ember.js app with ember-cli – Ember’s build tool. Ember-cli provides an asset pipeline to handle your assets. It minifies and concatenates your JavaScript; it gives you a strong conventional project structure and a powerful addon system for extensions. In part two, I’ll guide you through setting up a very basic todo-like Ember.js application to get your feet wet with actual Ember.js development. Setup The first thing you need is node.js. Follow this guide on how to install node and npm from the npmjs.com website. Npm stands for Node Package Manager and it’s the most popular package manager out there for Node. Once you setup Node and Npm, you can install ember-cli with the following command on your terminal: npm install -g ember-cli You can verify whether you have correctly installed ember-cli by running the following command: ember –v If you see the output of the different versions you have for ember-cli, node, npm and your OS it means everything is correctly set up and you are ready to start building an Ember.js application. In order to get you more acquainted with ember-cli, you can run ember -h to see a list of useful ember-cli commands. Ember-cli gives you an easy way to install add-ons (packages created by the community to quickly set up functionality, so instead of creating a specific feature you need, someone may have already made a package for it. See http://emberobserver.com/). You can also generate a new project with ember init <app_name>, run the tests of your app with ember test and scaffold project files with ember generate. These are just one of the many useful commands ember-cli gives you by default. You can learn more specific subcommands for any given command by running ember <command_name> -help. Now that you know what ember-cli is useful for its time to move on to building a fun example application. Building an Ember.js app The application we will be building is an Ember.js application that will fetch a few posts from www.reddit.com/r/funny. It will display a list of these posts with some information about them such as title, author, and date. The purpose of this example application is to show you how easy it is to build an ember.js application that fetches data from a remote API and displays it. It will also show you have to leverage one of the most powerful features of Ember – its router. Now that you are more acquainted with ember-cli, let's create the skeleton of our application. 
There's no need to worry and think about what packages and what features from Ember we will need, and we don't even have to think about what local server to run in order to display our work in progress. First things first, run the following command on your terminal:

ember new ember-r-funny

We are running the 'ember new' command with the argument of 'ember-r-funny', which is what we are naming our app. Feel free to change the name to anything you'd like. From here, you'll see a list of files being created. After it finishes, you'll have a directory with the app name created inside the current directory you are working in. If you go into this directory and inspect the files, you'll see quite a few directories and files. For now, don't pay too much attention to these files except for the directory called 'app'. This is where you'll mostly be working. On your terminal, if you go to the base path of your project (just inside of ember-r-funny/) and run ember server, ember-cli will run a local server for you to see your app. If you now go on your browser to http://localhost:4200, you will see your newly created application, which is just a blank page with the wording Welcome to Ember. If you go into app/templates/application.hbs and change the <h1> tag and save the file, you'll notice that your browser will automatically refresh the page. One thing to note before we continue is that ember-cli projects allow you to use the ES6 JavaScript syntax. ES6 is a significant update to the language; although current browsers do not fully support it, ember-cli will compile your project to browser-readable ES5. For a more in-depth explanation of ES6, visit: https://hacks.mozilla.org/2015/04/es6-in-depth-an-introduction/

Creating your first resource

One of the strong points of Ember is the router. The router is responsible for displaying templates, loading data, and otherwise setting up application state. The next thing we need to do is to set up a route to display the /r/funny posts from reddit. Run the following command to create our base route:

ember generate route index

This will generate an index route. In Ember-speak, the index route is the base or lowest-level route of the app. Now go to 'app/routes/index.js', and make sure the route looks like the following:

import Ember from 'ember';

export default Ember.Route.extend({
  beforeModel() {
    this.transitionTo('posts');
  }
});

This is telling the app that whenever a user lands on our base URL '/', it should transition to 'posts'. Next, run the following command to generate our posts resource:

ember generate resource posts

If you open the 'app/router.js' file, you'll see that 'this.route('posts')' was added. Change this to 'this.resource('posts')' instead (since we want to deal with a resource and not a route). It should look like the following:

import Ember from 'ember';
import config from './config/environment';

var Router = Ember.Router.extend({
  location: config.locationType
});

Router.map(function() {
  this.resource('posts');
});

export default Router;

In this router.js file, we've created a 'posts' resource. So far, we've told the app to take the user to '/posts' whenever the user goes to our app. Next, we'll make sure to set up what data and templates to display when the user lands in '/posts'. Thanks to the generator, we now have a route file for posts under 'app/routes/posts.js'.
Open this file and make sure that it looks like the following: import Ember from 'ember'; export default Ember.Route.extend({ model() { return Ember.$.getJSON('https://www.reddit.com/r/funny.json?jsonp=?&limit=10').then(result => { return Ember.A(result.data.children).mapBy('data'); }); } }); Ember works by doing the asynchronous fetching of data on the model hook of our posts route. Routes are where we fetch the data so we can consume it and in this case, we are hitting reddit/r/funny and fetching the latest 10 posts. Once that data is returned, we filter out the unnecessary properties from the response so we can actually return an array with our 10 reddit post entries through the use of a handy function provided by ember called `mapBy`. One important thing to note is that you need to always return something on the model hook, be it either an array, object, or a Promise (which is what we are doing in this case; for a more in depth explanation of promises you can read more here). Now that we have our route wired up to fetch the information, let’s open the ‘app/templates/posts.hbs’ file, remove the current contents, and add the following: <ul> {{#each model as |post|}} <li>{{post.title}}</li> {{/each}} </ul> This is HTML mixed with the Handlebars syntax. Handlebars is the templating engine Ember.js uses to display your dynamic content. What this .hbs template is doing here is that it is looping through our model and displaying the title property for each object inside the model array. If you haven’t noticed yet, Ember.js is smart enough to know when the data has returned from the server and then displays it, so we don’t need to handle any callback functions as far as the model hook is concerned. At this point, it may be normal to see some deprecation warnings on the console, but if you see an error with the words ‘refused to load the script https://www.reddit.com/r/funny.json..’ you need to add the following key and value to the ‘config/environment.js’ file inside the ENV object: contentSecurityPolicy: { 'script-src': "'self' https://www.reddit.com" }, By default, ember-cli will prevent you from doing external requests and fetching external resources from different domain names. This is a security feature so we need to whitelist the reddit domain name so we can make requests against it. At this point, if you go to localhost:4200, you should be redirected to the /posts route and you should see something like the following: Congratulations, you’ve just created a simple ember.js app that displays the title of some reddit posts inside an html list element. So far we’ve added a few lines of code here and there and we already have most of what we need. In Part 2 of this blog, we will set up a more detailed view for each of our reddit posts. About the Author: Daniel Ochoa is a senior software engineer at Frog with a passion for crafting beautiful web and mobile experiences. His current interests are Node.js, Ember.js, Ruby on Rails, iOS development with Swift, and the Haskell language. He can be found on Twitter @DanyOchoaOzz.
Advanced Shiny Functions

Packt
08 Jan 2016
14 min read
In this article by Chris Beeley, author of the book, Web Application Development with R using Shiny - Second Edition, we are going to extend our toolkit by learning about advanced Shiny functions. These allow you to take control of the fine details of your application, including the interface, reactivity, data, and graphics. We will cover the following topics: Learn how to show and hide parts of the interface Change the interface reactively Finely control reactivity, so functions and outputs run at the appropriate time Use URLs and reactive Shiny functions to populate and alter the selections within an interface Upload and download data to and from a Shiny application Produce beautiful tables with the DataTables jQuery library (For more resources related to this topic, see here.) Summary of the application We're going to add a lot of new functionality to the application, and it won't be possible to explain every piece of code before we encounter it. Several of the new functions depend on at least one other function, which means that you will see some of the functions for the first time, whereas a different function is being introduced. It's important, therefore, that you concentrate on whichever function is being explained and wait until later in the article to understand the whole piece of code. In order to help you understand what the code does as you go along it is worth quickly reviewing the actual functionality of the application now. In terms of the functionality, which has been added to the application, it is now possible to select not only the network domain from which browser hits originate but also the country of origin. The draw map function now features a button in the UI, which prevents the application from updating the map each time new data is selected, the map is redrawn only when the button is pushed. This is to prevent minor updates to the data from wasting processor time before the user has finished making their final data selection. A Download report button has been added, which sends some of the output as a static file to a new webpage for the user to print or download. An animated graph of trend has been added; this will be explained in detail in the relevant section. Finally, a table of data has been added, which summarizes mean values of each of the selectable data summaries across the different countries of origin. Downloading data from RGoogleAnalytics The code is given and briefly summarized to give you a feeling for how to use it in the following section. Note that my username and password have been replaced with XXXX; you can get your own user details from the Google Analytics website. Also, note that this code is not included on the GitHub because it requires the username and password to be present in order for it to work: library(RGoogleAnalytics) ### Generate the oauth_token object oauth_token <- Auth(client.id = "xxxx", client.secret = "xxxx") # Save the token object for future sessions save(oauth_token, file = "oauth_token") Once you have your client.id and client.secret from the Google Analytics website, the preceding code will direct you to a browser to authenticate the application and save the authorization within oauth_token. This can be loaded in future sessions to save from reauthenticating each time as follows: # Load the token object and validate for new run load("oauth_token") ValidateToken(oauth_token) The preceding code will load the token in subsequent sessions. 
The ValidateToken() function is necessary each time because the authorization will expire after a while; this function renews the authentication. Next, the metrics and dimensions of interest (for more on metrics and dimensions, see the documentation of the Google Analytics API online) are placed within a list and downloaded with the GetReportData() function as follows:

## list of metrics and dimensions
query.list <- Init(start.date = "2013-01-01",
                   end.date = as.character(Sys.Date()),
                   dimensions = "ga:country,ga:latitude,ga:longitude,ga:networkDomain,ga:date",
                   metrics = "ga:users,ga:newUsers,ga:sessions,ga:bounceRate,ga:sessionDuration",
                   max.results = 10000,
                   table.id = "ga:71364313")

gadf = GetReportData(QueryBuilder(query.list),
                     token = oauth_token,
                     paginate_query = FALSE)

...[data tidying functions]...

save(gadf, file = "gadf.Rdata")

The data tidying that is carried out at the end is omitted here for brevity; as you can see, at the end the data is saved as gadf.Rdata, ready to load within the application.

Animation

Animation is surprisingly easy. The sliderInput() function, which gives an HTML widget that allows the selection of a number along a line, has an optional animation function that will increment a variable by a set amount every time a specified unit of time elapses. This allows you to very easily produce a graphic that animates. In the following example, we are going to look at the monthly graph and plot a linear trend line through the first 20% of the data (0–20% of the data). Then, we are going to increment the percentage value that selects the portion of the data by 5% and plot a linear trend line through that portion of the data (5–25% of the data). Then, increment again from 10% to 30% and plot another line, and so on. There is a static image in the following screenshot: The slider input is set up as follows, with an ID, label, minimum value, maximum value, initial value, step between values, and the animation options, giving the delay in milliseconds and whether the animation should loop:

sliderInput("animation", "Trend over time",
            min = 0, max = 80, value = 0, step = 5,
            animate = animationOptions(interval = 1000, loop = TRUE)
)

Having set this up, the animated graph code is pretty simple, looking very much like the monthly graph code except with the linear smooth based on a subset of the data instead of the whole dataset. The graph is set up as before, and then a subset of the data is produced on which the linear smooth can be based:

groupByDate <- group_by(passData(), YearMonth, networkDomain) %>%
  summarise(meanSession = mean(sessionDuration, na.rm = TRUE),
            users = sum(users),
            newUsers = sum(newUsers),
            sessions = sum(sessions))

groupByDate$Date <- as.Date(paste0(groupByDate$YearMonth, "01"),
                            format = "%Y%m%d")

smoothData <- groupByDate[groupByDate$Date %in%
  quantile(groupByDate$Date, input$animation / 100, type = 1):
  quantile(groupByDate$Date, (input$animation + 20) / 100, type = 1), ]

We won't get too distracted by this code, but essentially, it tests to see which of the whole date range falls in a range defined by percentage quantiles based on the sliderInput() values. See ?quantile for more information.
Finally, the linear smooth is drawn with an extra data argument to tell ggplot2 to base the line only on the smaller smoothData object and not the whole range:

ggplot(groupByDate, aes_string(x = "Date",
                               y = input$outputRequired,
                               group = "networkDomain",
                               colour = "networkDomain")) +
  geom_line() +
  geom_smooth(data = smoothData, method = "lm", colour = "black")

Not bad for a few lines of code. We have both ggplot2 and Shiny to thank for how easy this is.

Streamline the UI by hiding elements

This is a simple function that you are certainly going to need if you build even a moderately complex application. Those of you who have been doing extra credit exercises and/or experimenting with your own applications will probably have already wished for this or, indeed, have already found it. conditionalPanel() allows you to show/hide UI elements based on other selections within the UI. The function takes a condition (in JavaScript, but the form and syntax will be familiar from many languages) and a UI element, and displays the UI only when the condition is true. This has actually been used a couple of times in the advanced GA application, and indeed in all the applications of even moderate complexity that I've ever written. We're going to show the option to smooth the trend graph only when the trend graph tab is displayed, and we're going to show the controls for the animated graph only when the animated graph tab is displayed.

Naming tabPanel elements

In order to allow testing for which tab is currently selected, we're going to have to first give the tabs of the tabbed output names. This is done as follows (the new code is the id argument and the value arguments):

tabsetPanel(id = "theTabs", # give tabsetPanel a name
  tabPanel("Summary", textOutput("textDisplay"), value = "summary"),
  tabPanel("Trend", plotOutput("trend"), value = "trend"),
  tabPanel("Animated", plotOutput("animated"), value = "animated"),
  tabPanel("Map", plotOutput("ggplotMap"), value = "map"),
  tabPanel("Table", DT::dataTableOutput("countryTable"), value = "table")
)

As you can see, the whole panel is given an ID (theTabs), and then each tabPanel is also given a name (summary, trend, animated, map, and table). They are referred to in the server.R file very simply as input$theTabs. Finally, we can make our changes to ui.R to remove parts of the UI based on tab selection:

conditionalPanel(
  condition = "input.theTabs == 'trend'",
  checkboxInput("smooth", label = "Add smoother?", # add smoother
                value = FALSE)
),
conditionalPanel(
  condition = "input.theTabs == 'animated'",
  sliderInput("animation", "Trend over time",
              min = 0, max = 80, value = 0, step = 5,
              animate = animationOptions(interval = 1000, loop = TRUE))
)

As you can see, the condition appears very R/Shiny-like, except with the . operator familiar to JavaScript users in place of $. This is a very simple but powerful way of making sure that your UI is not cluttered with irrelevant material.

Beautiful tables with DataTable

The latest version of Shiny has added support to draw tables using the wonderful DataTables jQuery library. This will enable your users to search and sort through large tables very easily. To see DataTable in action, visit the homepage at http://datatables.net/. The version in this application summarizes the values of different variables across the different countries from which browser hits originate. The package can be installed using install.packages("DT") and needs to be loaded in the preamble to the server.R file with library(DT).
Once this is done, using the package is quite straightforward. There are two functions: one in server.R (renderDataTable) and the other in ui.R (dataTableOutput). They are used as follows:

### server.R
output$countryTable <- DT::renderDataTable({
  groupCountry <- group_by(passData(), country)
  groupByCountry <- summarise(groupCountry,
                              meanSession = mean(sessionDuration),
                              users = log(sum(users)),
                              sessions = log(sum(sessions)))
  datatable(groupByCountry)
})

### ui.R
tabPanel("Table", DT::dataTableOutput("countryTable"), value = "table")

Anything that returns a dataframe or a matrix can be used within renderDataTable(). Note that as of Shiny v0.12, the Shiny functions renderDataTable() and dataTableOutput() are deprecated; you should use the DT equivalents of the same name. As in the preceding code, adding DT:: before each function name specifies that the function should be drawn from that package.

Reactive user interfaces

Another trick you will definitely want up your sleeve at some point is a reactive user interface. This enables you to change your UI (for example, the number or content of radio buttons) based on reactive functions. For example, consider an application that I wrote related to survey responses across a broad range of health services in different areas. The services are related to each other in quite a complex hierarchy, and over time, different areas and services respond (or cease to exist, or merge, or change their name), which means that for each time period the user might be interested in, there would be a totally different set of areas and services. The only sensible solution to this problem is to have the user tell you which area and date range they are interested in and then give them back the correct list of services that have survey responses within that area and date range.

The example we're going to look at is a little simpler than this, just to keep from getting bogged down in too much detail, but the principle is exactly the same, and you should not find this idea too difficult to adapt to your own UI. We are going to allow users to constrain their data by the country of origin of the browser hit. Although we could design the UI by simply taking all the countries that exist in the entire dataset and placing them all in a combo box to be selected, it is a lot cleaner to only allow the user to select from the countries that are actually present within the particular date range they have selected. This has the added advantage of preventing the user from selecting any countries of origin that do not have any browser hits within the currently selected dataset. In order to do this, we are going to create a reactive user interface, that is, one that changes based on data values that come about from user input.

Reactive user interface example – server.R

When you are making a reactive user interface, the big difference is that instead of writing your UI definition in your ui.R file, you place it in server.R and wrap it in renderUI(). Then, point to it from your ui.R file. Let's have a look at the relevant bit of the server.R file:

output$reactCountries <- renderUI({
  countryList = unique(as.character(passData()$country))
  selectInput("theCountries", "Choose country", countryList)
})

The first line takes the reactive dataset that contains only the data between the dates selected by the user and gives all the unique values of countries within it. The second line is a widget type we have not used yet, which generates a combo box.
The usual id and label arguments are given, followed by the values that the combo box can take. This is taken from the variable defined in the first line.

Reactive user interface example – ui.R

The ui.R file merely needs to point to the reactive definition, as shown in the following line of code (just add it to the list of widgets within sidebarPanel()):

uiOutput("reactCountries")

You can now point to the value of the widget in the usual way as input$theCountries. Note that you do not use the name as defined in the call to renderUI(), that is, reactCountries, but rather the name as defined within it, that is, theCountries.

Progress bars

It is quite common within Shiny applications and in analytics generally to have computations or data fetches that take a long time. However, even using all these tools, it will sometimes be necessary for the user to wait some time before their output is returned. In cases like this, it is good practice to do two things: first, to inform the user that the server is processing the request and has not simply crashed or otherwise failed, and second, to give the user some idea of how much time has elapsed since they requested the output and how much time they have remaining to wait.

This is achieved very simply in Shiny using the withProgress() function. This function defaults to measuring progress on a scale from 0 to 1 and produces a loading bar at the top of the application with the information from the message and detail arguments of the loading function. You can see in the following code that the withProgress() function is used to wrap a function (in this case, the function that draws the map), with message and detail arguments describing what is happening and an initial value of 0 (value = 0, that is, no progress yet):

withProgress(message = 'Please wait',
             detail = 'Drawing map...',
             value = 0, {
  ... function code...
})

As the code is stepped through, the value of progress can steadily be increased from 0 to 1 (for example, in a for() loop) using the following code:

incProgress(1/3)

The third time this is called, the value of progress will be 1, which indicates that the function has completed (although other values of progress can be selected where necessary; see ?withProgress()). To summarize, the finished code looks as follows:

withProgress(message = 'Please wait',
             detail = 'Drawing map...',
             value = 0, {
  ... function code...
  incProgress(1/3)
  ... function code...
  incProgress(1/3)
  ... function code...
  incProgress(1/3)
})

It's very simple. Again, have a look at the application to see it in action.

Summary

In this article, you have now seen most of the functionality within Shiny. It's a relatively small but powerful toolbox with which you can build a vast array of useful and intuitive applications with comparatively little effort. In this respect, ggplot2 is rather a good companion for Shiny because it too offers you a fairly limited selection of functions with which knowledgeable users can very quickly build many different graphical outputs.

Resources for Article:

Further resources on this subject:
- Introducing R, RStudio, and Shiny [article]
- Introducing Bayesian Inference [article]
- R ─ Classification and Regression Trees [article]
The Scripting Capabilities of Elasticsearch

Packt
08 Jan 2016
19 min read
In this article by Rafał Kuć and Marek Rogozinski, authors of the book Elasticsearch Server - Third Edition, we look at scripting. Elasticsearch has a few functionalities in which scripts can be used. Even though scripts seem to be a rather advanced topic, we will look at the possibilities offered by Elasticsearch. That's because scripts are priceless in certain situations.

Elasticsearch can use several languages for scripting. When not explicitly declared, it assumes that Groovy (http://www.groovy-lang.org/) is used. Other languages available out of the box are the Lucene expression language and Mustache (https://mustache.github.io/). Of course, we can use plugins that will make Elasticsearch understand additional scripting languages such as JavaScript, Mvel, or Python. One thing worth mentioning is this: independently of the scripting language that we choose, Elasticsearch exposes objects that we can use in our scripts. Let's start by briefly looking at what type of information we are allowed to use in our scripts.

(For more resources related to this topic, see here.)

Objects available during script execution

During different operations, Elasticsearch allows us to use different objects in our scripts. To develop a script that fits our use case, we should be familiar with those objects. For example, during a search operation, the following objects are available:

- _doc (also available as doc): An instance of the org.elasticsearch.search.lookup.LeafDocLookup object. It gives us access to the current document found with the calculated score and field values.
- _source: An instance of the org.elasticsearch.search.lookup.SourceLookup object. It provides access to the source of the current document and the values defined in the source.
- _fields: An instance of the org.elasticsearch.search.lookup.LeafFieldsLookup object. It can be used to access the values of the document fields.

On the other hand, during a document update operation, the variables mentioned above are not accessible. Elasticsearch exposes only the ctx object with the _source property, which provides access to the document currently processed in the update request.

As we have previously seen, several methods are mentioned in the context of document fields and their values. Let's now look at examples of how to get the value for a particular field using the previously mentioned objects available during search operations. In the brackets, you can see what Elasticsearch will return for one of our example documents from the library index (we will use the document with identifier 4):

- _doc.title.value (and)
- _source.title (crime and punishment)
- _fields.title.value (null)

A bit confusing, isn't it? During indexing, the original document is, by default, stored in the _source field. Of course, by default, all fields are present in that _source field. In addition to this, the document is parsed, and every field may be stored in an index if it is marked as stored (that is, if the store property is set to true; otherwise, by default, the fields are not stored). Finally, the field value may be configured as indexed. This means that the field value is analyzed and placed in the index. To sum up, one field may land in an Elasticsearch index in the following ways:

- As part of the _source document
- As a stored and unparsed original value
- As an indexed value that is processed by an analyzer

In scripts, we have access to all of these field representations. The only exception is the update operation, which—as we've mentioned before—gives us access to only the _source document as part of the ctx variable.
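For example, an update script can read and modify the document through ctx._source. The following request is a hedged illustration of the idea rather than a query taken from the book: the book type name and the copies field are our own assumptions, and inline scripts have to be enabled for it to work (as described later in this article):

curl -XPOST 'localhost:9200/library/book/4/_update' -d '{
  "script" : {
    "inline" : "ctx._source.copies += count",
    "params" : {
      "count" : 1
    }
  }
}'

Because only ctx is exposed here, the script reads and writes the document source directly; none of the _doc, _source, or _fields lookups used in search scripts are available.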
You may wonder which version you should use. Well, if we want access to the processed form, the answer would be simple—use the _doc object. What about _source and _fields? In most cases, _source is a good choice. It is usually fast and needs fewer disk operations than reading the original field values from the index. This is especially true when you need to read the values of multiple fields in your scripts—fetching a single _source field is faster than fetching multiple independent fields from the index.

Script types

Elasticsearch allows us to use scripts in three different ways:

- Inline scripts: The source of the script is directly defined in the query
- In-file scripts: The source is defined in an external file placed in the Elasticsearch config/scripts directory
- As a document in a dedicated index: The source of the script is defined as a document in a special index, available by using the /_scripts API endpoint

Choosing the way of defining scripts depends on several factors. If you have scripts that you will use in many different queries, the file or the dedicated index seems to be the best solution. Scripts in a file are probably less convenient, but they are preferred from the security point of view—they can't be overwritten and injected into your query, which might otherwise cause a security breach.

In-file scripts

This is the only way that is turned on by default in Elasticsearch. The idea is that every script used by the queries is defined in its own file placed in the config/scripts directory. We will now look at this method of using scripts. Let's create an example file called tag_sort.groovy and place it in the config/scripts directory of our Elasticsearch instance (or instances if we are running a cluster). The content of the mentioned file should look like this:

_doc.tags.values.size() > 0 ? _doc.tags.values[0] : 'u19999'

After a few seconds, Elasticsearch should automatically load the new file. You should see something like this in the Elasticsearch logs:

[2015-08-30 13:14:33,005][INFO ][script                   ] [Alex Wilder] compiling script file [/Users/negativ/Developer/ES/es-current/config/scripts/tag_sort.groovy]

If you have a multinode cluster, you have to make sure that the script is available on every node. Now we are ready to use this script in our queries. A modified query that uses our script stored in the file looks as follows:

curl -XGET 'localhost:9200/library/_search?pretty' -d '{
  "query" : {
    "match_all" : { }
  },
  "sort" : {
    "_script" : {
      "script" : {
        "file" : "tag_sort"
      },
      "type" : "string",
      "order" : "asc"
    }
  }
}'

Next, we will look at another way of defining a script: inline.

Inline scripts

Inline scripts are a more convenient way of using scripts, especially for constantly changing queries or ad hoc queries. The main drawback of such an approach is security. If we allow inline scripts, we allow users to run any kind of query, including any kind of script that can be used by attackers. Such an attack can execute arbitrary code on the server running Elasticsearch with rights equal to the ones given to the user who is running Elasticsearch. In the worst-case scenario, an attacker could use security holes to gain superuser rights. This is why inline scripts are disabled by default.
After careful consideration, you can enable them by adding this to the elasticsearch.yml file:

script.inline: on

After allowing inline scripts to be executed, we can run a query that looks as follows:

curl -XGET 'localhost:9200/library/_search?pretty' -d '{
  "query" : {
    "match_all" : { }
  },
  "sort" : {
    "_script" : {
      "script" : {
        "inline" : "_doc.tags.values.size() > 0 ? _doc.tags.values[0] : \"u19999\""
      },
      "type" : "string",
      "order" : "asc"
    }
  }
}'

Indexed scripts

The last option for defining scripts is to store them in a dedicated Elasticsearch index. For the same security reasons, dynamic execution of indexed scripts is disabled by default. To enable indexed scripts, we have to add a configuration option similar to the one that we added to be able to use inline scripts. We need to add the following line to the elasticsearch.yml file:

script.indexed: on

After adding the above property to all the nodes and restarting the cluster, we will be ready to start using indexed scripts. Elasticsearch provides additional dedicated endpoints for this purpose. Let's store our script:

curl -XPOST 'localhost:9200/_scripts/groovy/tag_sort' -d '{
  "script" :  "_doc.tags.values.size() > 0 ? _doc.tags.values[0] : \"u19999\""
}'

The script is ready, but let's discuss what we just did. We sent an HTTP POST request to the special _scripts REST endpoint. We also specified the language of the script (groovy in our case) and the name of the script (tag_sort). The body of the request is the script itself. We can now move on to the query, which looks as follows:

curl -XGET 'localhost:9200/library/_search?pretty' -d '{
  "query" : {
    "match_all" : { }
  },
  "sort" : {
    "_script" : {
      "script" : {
        "id" : "tag_sort"
      },
      "type" : "string",
      "order" : "asc"
    }
  }
}'

As we can see, this query is practically identical to the query used with the script defined in a file. The only difference is the id parameter instead of file.

Querying with scripts

If we look at any request made to Elasticsearch that uses scripts, we will notice some common properties, which are as follows:

- script: The property that wraps the script definition.
- inline: The property holding the code of the script itself.
- id: The property that defines the identifier of the indexed script.
- file: The filename (without extension) with the script definition when the in-file script is used.
- lang: The property defining the script language. If it is omitted, Elasticsearch assumes groovy.
- params: An object containing parameters and their values. Every defined parameter can be used inside the script by specifying that parameter name. Parameters allow us to write cleaner code that will be executed in a more efficient manner. Scripts that use parameters are executed faster than code with embedded constants because of caching.

Scripting with parameters

As our scripts become more and more complicated, the need for creating multiple, almost identical scripts can appear. Those scripts usually differ in the values used, with the logic behind them being exactly the same. In our simple example, we have used a hardcoded value to mark documents with an empty tags list. Let's change this to allow that value to be defined as a parameter. Let's use the in-file script definition and create the tag_sort_with_param.groovy file with the following contents:
_doc.tags.values.size() > 0 ? _doc.tags.values[0] : tvalue

The only change we've made is the introduction of a parameter named tvalue, which can be set in the query in the following way:

curl -XGET 'localhost:9200/library/_search?pretty' -d '{
  "query" : {
    "match_all" : { }
  },
  "sort" : {
    "_script" : {
      "script" : {
        "file" : "tag_sort_with_param",
        "params" : {
          "tvalue" : "000"
        }
      },
      "type" : "string",
      "order" : "asc"
    }
  }
}'

The params section defines all the script parameters. In our simple example, we've only used a single parameter, but of course, we can have multiple parameters in a single query.

Script languages

The default language for scripting is Groovy. However, you are not limited to only a single scripting language when using Elasticsearch. In fact, if you would like to, you can even use Java to write your scripts. In addition to that, the community behind Elasticsearch provides support for more languages as plugins. So, if you are willing to install plugins, you can extend the list of scripting languages that Elasticsearch supports even further.

You may wonder why you should even consider using a scripting language other than the default Groovy. The first reason is your own preferences. If you are a Python enthusiast, you are probably now thinking about how to use Python for your Elasticsearch scripts. The other reason could be security. When we talked about inline scripts, we told you that inline scripts are turned off by default. This is not exactly true for all the scripting languages available out of the box. Inline scripts are disabled by default when using Groovy, but you can use Lucene expressions and Mustache without any issues. This is because those languages are sandboxed, which means that security-sensitive functions are turned off. And of course, the last factor when choosing the language is performance. Theoretically, native scripts (in Java) should have better performance than others, but you should remember that the difference can be insignificant. You should always consider the cost of development and measure the performance.

Using something other than embedded languages

Using Groovy for scripting is a simple and sufficient solution for most use cases. However, you may have a different preference and would like to use something different, such as JavaScript, Python, or Mvel. For now, we'll just run the following command from the Elasticsearch directory:

bin/plugin install elasticsearch/elasticsearch-lang-javascript/2.7.0

The preceding command will install a plugin that will allow the use of JavaScript as the scripting language. The only change we should make in the request is putting in additional information about the language we are using for scripting. And of course, we have to modify the script itself to correctly use the new language. Look at the following example:

curl -XGET 'localhost:9200/library/_search?pretty' -d '{
  "query" : {
    "match_all" : { }
  },
  "sort" : {
    "_script" : {
      "script" : {
        "inline" : "_doc.tags.values.length > 0 ? _doc.tags.values[0] : \"u19999\";",
        "lang" : "javascript"
      },
      "type" : "string",
      "order" : "asc"
    }
  }
}'

As you can see, we've used JavaScript for scripting instead of the default Groovy. The lang parameter informs Elasticsearch about the language being used.
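The sandboxed Lucene expression language can be used in the same way without any configuration changes, although it only works on numeric fields. As a hedged illustration (this is not an example from the book, and it assumes a numeric year field exists in the library index), the sort section of such a query might look like this:

"sort" : {
  "_script" : {
    "script" : {
      "inline" : "doc['year'].value * -1",
      "lang" : "expression"
    },
    "type" : "number",
    "order" : "asc"
  }
}

Multiplying by -1 and sorting ascending puts the documents with the largest year value first. Because expressions are limited to numeric doc values, they cannot replace the string-sorting Groovy scripts shown earlier, but for numeric computations they are both safe and fast.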
Using native code

If the scripts are too slow or if you don't like scripting languages, Elasticsearch allows you to write Java classes and use them instead of scripts. There are two possible ways of adding native scripts: adding classes that define scripts to the Elasticsearch classpath, or adding a script as a functionality provided by a plugin. We will describe the second solution as it is more elegant.

The factory implementation

We need to implement at least two classes to create a new native script. The first one is a factory for our script. For now, let's focus on it. The following sample code illustrates the factory for our script:

package pl.solr.elasticsearch.examples.scripts;

import java.util.Map;

import org.elasticsearch.common.Nullable;
import org.elasticsearch.script.ExecutableScript;
import org.elasticsearch.script.NativeScriptFactory;

public class HashCodeSortNativeScriptFactory implements NativeScriptFactory {
    @Override
    public ExecutableScript newScript(@Nullable Map<String, Object> params) {
        return new HashCodeSortScript(params);
    }

    @Override
    public boolean needsScores() {
        return false;
    }
}

This class should implement the org.elasticsearch.script.NativeScriptFactory interface. The interface forces us to implement two methods. The newScript() method takes the parameters defined in the API call and returns an instance of our script. Finally, needsScores() informs Elasticsearch whether we want to use scoring, so that it can be calculated.

Implementing the native script

Now let's look at the implementation of our script. The idea is simple—our script will be used for sorting. The documents will be ordered by the hashCode() value of the chosen field. Documents without a value in the defined field will be first on the results list. We know that the logic doesn't make much sense, but it is good for presentation as it is simple. The source code for our native script looks as follows:

package pl.solr.elasticsearch.examples.scripts;

import java.util.Map;

import org.elasticsearch.script.AbstractSearchScript;

public class HashCodeSortScript extends AbstractSearchScript {
    private String field = "name";

    public HashCodeSortScript(Map<String, Object> params) {
        if (params != null && params.containsKey("field")) {
            this.field = params.get("field").toString();
        }
    }

    @Override
    public Object run() {
        Object value = source().get(field);
        if (value != null) {
            return value.hashCode();
        }
        return 0;
    }
}

First of all, our class inherits from the org.elasticsearch.script.AbstractSearchScript class and implements the run() method. This is where we get the appropriate values from the current document, process them according to our strange logic, and return the result. You may notice the source() call. Yes, it is exactly the same _source parameter that we met in the non-native scripts. The doc() and fields() methods are also available, and they follow the same logic that we described earlier. The thing worth looking at is how we've used the parameters. We assume that a user can pass the field parameter, telling us which document field will be used for manipulation. We also provide a default value for this parameter.

The plugin definition

We said that we will install our script as a part of a plugin. This is why we need additional files.
The first file is the plugin initialization class, where we can tell Elasticsearch about our new script:

package pl.solr.elasticsearch.examples.scripts;

import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.script.ScriptModule;

public class ScriptPlugin extends Plugin {
    @Override
    public String description() {
        return "The example of native sort script";
    }

    @Override
    public String name() {
        return "naive-sort-plugin";
    }

    public void onModule(final ScriptModule module) {
        module.registerScript("native_sort",
            HashCodeSortNativeScriptFactory.class);
    }
}

The implementation is easy. The description() and name() methods are only for information purposes, so let's focus on the onModule() method. In our case, we need access to the script module—the Elasticsearch service connected with scripts and scripting languages. This is why we define onModule() with one ScriptModule argument. Thanks to Elasticsearch magic, we can use this module and register our script so that it can be found by the engine. We have used the registerScript() method, which takes the script name and the previously defined factory class.

The second file needed is a plugin descriptor file: plugin-descriptor.properties. It defines the constants used by the Elasticsearch plugin subsystem. Without further ado, let's look at the contents of this file:

jvm=true
classname=pl.solr.elasticsearch.examples.scripts.ScriptPlugin
elasticsearch.version=2.0.0-beta2-SNAPSHOT
version=0.0.1-SNAPSHOT
name=native_script
description=Example Native Scripts
java.version=1.7

The appropriate lines have the following meaning:

- jvm: This tells Elasticsearch that our file contains Java code
- classname: This describes the main class with the plugin definition
- elasticsearch.version and java.version: These specify the Elasticsearch and Java versions needed for our plugin
- name and description: These are an informative name and a short description of our plugin

And that's it! We have all the files needed to fire our script. Note that it is now quite convenient to add new scripts and pack them as a single plugin.

Installing a plugin

Now it's time to install our native script embedded in the plugin. After packing the compiled classes as a JAR archive, we should put it into the Elasticsearch plugins/native-script directory. The native-script part is the root directory for our plugin, and you may name it as you wish. In this directory, you also need the prepared plugin-descriptor.properties file. This makes our plugin visible to Elasticsearch.

Running the script

After restarting Elasticsearch (or the entire cluster if you are running more than a single node), we can start sending queries that use our native script. For example, we will send a query that uses our previously indexed data from the library index. This example query looks as follows:

curl -XGET 'localhost:9200/library/_search?pretty' -d '{
  "query" : {
    "match_all" : { }
  },
  "sort" : {
    "_script" : {
      "script" : {
        "script" : "native_sort",
        "lang" : "native",
        "params" : {
          "field" : "otitle"
        }
      },
      "type" : "string",
      "order" : "asc"
    }
  }
}'

Note the params part of the query. In this call, we want to sort on the otitle field. We provide the script name as native_sort and the script language as native. This is required. If everything goes well, we should see our results sorted by our custom sort logic.
If we look at the response from Elasticsearch, we will see that documents without the otitle field are in the first few positions of the results list and that their sort value is 0.

Summary

In this article, we focused on querying, though not on the matching part of it—mostly on scoring. You learned how Apache Lucene TF/IDF scoring works. We saw the scripting capabilities of Elasticsearch and handled multilingual data. We also used boosting to influence how the scores of returned documents were calculated, and we used synonyms. Finally, we used explain information to see how document scores were calculated by the query.

Resources for Article:

Further resources on this subject:
- An Introduction to Kibana [article]
- Indexing the Data [article]
- Low-Level Index Control [article]
The Project Structure

Packt
08 Jan 2016
14 min read
In this article written by Nathanael Anderson, author of the book Getting Started with NativeScript, we will see how to navigate through your new project and its full structure. We will explain each of the files that are automatically created and where you create your own files. Then, we will proceed to gain a deeper understanding of some of the basic components of your application, and we will finally see how to change screens. In this article, we will cover the following topics:

- An overview of the project directory
- The root directory
- The application components

(For more resources related to this topic, see here.)

Project directory overview

Running the nativescript create crossCommunicator command creates a nice structure of files and folders for us to explore. First, we will do a high-level overview of the different folders and their purposes and touch on the important files in those folders. Then, we will finish the overview by going into depth about the App directory, which is where you will spend pretty much your entire time in developing an application.

The root directory

In your root folder, you will see only a couple of directories. The package.json file will look something like this:

{
  "nativescript": {
    "id": "org.nativescript.crossCommunicator",
    "tns-android": {
      "version": "1.5.0"
    },
    "tns-ios": {
      "version": "1.5.0"
    }
  },
  "dependencies": {
    "tns-core-modules": "1.5.0"
  }
}

This is the NativeScript master project configuration file for your entire application. It outlines the basic information and all platform requirements of the project. It will also be modified by the nativescript tool when you add and remove any plugins or platforms. So, in the preceding package.json file, you can see that I have installed the Android (tns-android) and iOS (tns-ios) platforms, using the nativescript platform add command, and they are both currently at version 1.5.0. The tns-core-modules dependency was added by the nativescript command when we created the project, and it provides the core modules.

Changing the app ID

Now, if you want this to be your company's name instead of the default ID of org.nativescript.yourProjectName, there are two ways to set the app ID. The first way is to set it up when you create the project; executing a nativescript create myProjName --appid com.myCompany.myProjName command will automatically set the ID value. If you forget to run the create command with a --appid option, you can change it here. However, any time you change the option here, you will also have to remove all the installed platforms and then re-add the platforms you are using, as sketched below. This must be done because, when each platform is added, it uses this configuration ID while building the platform folders and all of the needed platform files.
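For example, to refresh the Android platform after editing the ID, you might run something like the following from the project root (the same applies to ios); treat this as a sketch of the workflow:

nativescript platform remove android
nativescript platform add android

After the platform is re-added, the generated platform files will use the new app ID.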
The package.json file

This file contains basic information about the current template that is installed. When you create a new application via the nativescript create command, by default, it copies everything from a template project called tns-template-hello-world. In the future, NativeScript will allow you to choose the template, but currently this is the only template that is available and working. This template contains the folders we discussed earlier and all the files that we will discuss further. The package.json file is from this template, and it basically tells you about the template and the specific version that is installed. Let's take a look at the following code snippet:

{
  "name": "tns-template-hello-world",
  "main": "app.js",
  "version": "1.5.0",
  ... more json documentation fields...
}

Feel free to modify the package.json file so that it matches your project's name and details. This file does not currently serve much purpose beyond template documentation and the link to the main application file.

License

The License file that is present in the App folder is the license that the tns-template-hello-world template is distributed under. In your case, your app will probably be distributed under a different license. You can either update this file to match your application's license or delete it.

App.js

Awesome! We have finally made it to the first of the JavaScript files that make up the code that we will change to make this our application. This file is the bootstrap file of the entire application, and this is where the fun begins. In the preceding package.json file, you can see that it references this file (app.js) under the main key. This key in the package.json file is what NativeScript uses to make this file the entry point for the entire application. Looking at this file, we can see that it currently has four simple lines of code:

var application = require("application");
application.mainModule = "main-page";
application.cssFile = "./app.css";
application.start();

The file seems simple enough, so let's work through the code as we see it. The first line loads the NativeScript common application component class; the application component wraps the initialization and life cycle of the entire application. The require() function is what we use to reference another file in your project. It will look in your current directory by default, and then it will use some additional logic to look in a couple of places in the tns_core_modules folder to check whether it can find the name as a common component.

Now that we have the application component loaded, the next lines are used to configure what the application class does when your program starts and runs. The second line tells it which file is the main page. The third line tells the application which CSS file is the main CSS file for the entire application. And finally, we tell the application component to actually start.

Now, in a sense, the application has actually started running; we are already running our JavaScript code. This start() function actually starts the code that manages the application life cycle and the application events, loads and applies your main CSS file, and finally loads your main page files. If you want to add any additional code to this file, you will need to put it before the application.start() command. On some platforms, nothing below the application.start() command will be run in the app.js file.

The main-page.js file

The JavaScript portion of the page is probably where you will spend the majority of your time developing, as it contains all of the logic of your page. To create a page, you typically have a JavaScript file, as you can do everything in JavaScript, and this is where all your logic for the page resides.
In our application, the main page currently has only six lines of code:

var vmModule = require("./main-view-model");
function pageLoaded(args) {
  var page = args.object;
  page.bindingContext = vmModule.mainViewModel;
}
exports.pageLoaded = pageLoaded;

The first line loads the main-view-model.js file. If you haven't guessed yet, in our new application, this is used as the model file for this page. We will check out the optional model file in a few minutes, after we are done exploring the rest of the main-page files. An app page does not have to have a model; they are totally optional. Furthermore, you can actually combine your model JavaScript into the page's JavaScript file. Some people find it easier to keep their model separate, so when Telerik designed this example, they built this application using the MVVM pattern, which uses a separate view model file. For more information on MVVM, you can take a look at the Wikipedia entry at https://en.wikipedia.org/wiki/Model_View_ViewModel.

This file also has a function called pageLoaded, which is what sets the model object as the model for this page. The third line assigns the page component object, which is passed as part of the event handler arguments, to the page variable. The fourth line assigns the model to the current page's bindingContext attribute. Then, we export the pageLoaded function as a function called pageLoaded. Using exports and module.exports is the way in which we publish something to other files that use require() to load it. Each file is its own independent black box; nothing that is not exported can be seen by any of the other files. Using exports, you can create the interface of your code to the rest of the world. This is part of the CommonJS standard, and you can read more about it at the NodeJS site.

The main-page.xml file

The final file of our application folder is also named main-page; it is the page layout file. The main-page.xml layout consists of seven simple lines of XML code, which actually do quite a lot:

<Page loaded="pageLoaded">
  <StackLayout>
    <Label text="Tap the button" cssClass="title"/>
    <Button text="TAP" tap="{{ tapAction }}" />
    <Label text="{{ message }}" cssClass="message" textWrap="true"/>
  </StackLayout>
</Page>

Each of the XML layout files is actually a simplified way to load the visual components that you want on your page. In this case, it is what produces the app's simple screen: a title label, the TAP button, and the message text below it.

The main-view-model.js file

The final file in our tour of the App folder is the model file. This file has about 30 lines of code. By looking at the first couple of lines, you might have figured out that this file was transpiled from TypeScript. Since this file actually has a lot of boilerplate and unneeded code from the TypeScript conversion, we will rewrite the code in plain JavaScript to help you easily understand what each of the parts is used for. This rewrite will be as close to the original as I can make it. So without further ado, here is the original transpiled code to compare our new code with:

var observable = require("data/observable");
var HelloWorldModel = (function (_super) {
  __extends(HelloWorldModel, _super);
  function HelloWorldModel() {
    _super.call(this);
    this.counter = 42;
    this.set("message", this.counter + " taps left");
  }
  HelloWorldModel.prototype.tapAction = function () {
    this.counter--;
    if (this.counter <= 0) {
      this.set("message", "Hoorraaay! You unlocked the NativeScript clicker achievement!");
    }
    else {
      this.set("message", this.counter + " taps left");
    }
  };
  return HelloWorldModel;
})(observable.Observable);
exports.HelloWorldModel = HelloWorldModel;
exports.mainViewModel = new HelloWorldModel();

The rewrite of the main-view-model.js file

The rewrite of the main-view-model.js file is very straightforward. The first thing we need for a working model is to require the Observable class, which is the primary class that handles data binding events in NativeScript. We then create a new instance of the Observable class named mainViewModel. Next, we assign the two default values to the mainViewModel instance. Then, we create the same tapAction() function, which is the code that is executed each time the user taps the button. Finally, we export the mainViewModel model we created so that it is available to any other files that require this file. This is what the new JavaScript version looks like:

// Require the Observable class and create a new Model from it
var Observable = require("data/observable").Observable;
var mainViewModel = new Observable();

// Setup our default values
mainViewModel.counter = 42;
mainViewModel.set("message", mainViewModel.counter + " taps left");

// Setup the function that runs when a tap is detected.
mainViewModel.tapAction = function() {
  this.counter--;
  if (this.counter <= 0) {
    this.set("message", "Hoorraaay! You unlocked the NativeScript clicker achievement!");
  } else {
    this.set("message", this.counter + " taps left");
  }
};

// Export our already instantiated model class as the variable name
// that the main-page.js is expecting on line 4.
exports.mainViewModel = mainViewModel;

The set() command is the only thing that is not totally self-explanatory in this code. What is probably fairly obvious is that this command sets the specified variable to the specified value. However, what is not obvious is that when a value is set on an instance of the Observable class, it will automatically send a change event to anyone who has asked to be notified of any changes to that specific variable. If you recall, in the main-page.xml file, the <Label text="{{ message }}" ...> line will automatically register the label component as a listener for all change events on the message variable when the layout system creates the label. Now, every time the message variable is changed, the text on this label changes.

The application component

If you recall, earlier in the article, we discussed the app.js file. It basically contains only the code to set up the properties of your application, and then finally, it starts the application component. So, you have probably guessed that this is the primary component for your entire application life cycle. Part of what this component provides us is access to all the application-wide events. Frequently in an app, you will want to know when your app is no longer the foreground application or when it finally returns to being the foreground application. To get this information, you can attach code to two of the events that it provides, like this:

application.on("suspend", function(event) {
  console.log("Hey, I was suspended – I thought I was your favorite app!");
});
application.on("resume", function(event) {
  console.log("Awesome, we are back!");
});

Some of the other events that you can watch from the application component are launch, exit, lowMemory, and uncaughtError. These events allow you to handle different application-wide issues that your application might need to know about.
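These other events are hooked in exactly the same way. For example, a sketch of handlers for lowMemory and uncaughtError might look like the following; what you actually do inside them (freeing caches, logging the failure) is up to your application:

application.on("lowMemory", function(event) {
  // A good place to release caches and other non-essential memory
  console.log("Low memory warning received!");
});
application.on("uncaughtError", function(event) {
  // Log the failure so that it can be diagnosed later
  console.log("An uncaught error occurred!");
});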
Creating settings.js

In our application, we will need a settings page, so we will create the framework for our application's settings page now. We will just get our feet a little wet and explore how to build it purely in JavaScript. As you can see, the following code is fairly straightforward. First, we require all the components that we will be using: Frame, Page, StackLayout, Button, and finally, the Label component. Then, we have to export a createPage function, which is what NativeScript will run to generate the page if you do not have an XML layout file to go along with the page's JavaScript file. At the beginning of our createPage function, we create each of the four components that we will need. Then, we assign some values and properties so that they have some visible content. Next, we create the parent-child relationships, add our label and button to the Layout component, and then assign that layout to the Page component. Finally, we return the page component:

// Add our Requires for the components we need on our page
var frame = require("ui/frame");
var Page = require("ui/page").Page;
var StackLayout = require("ui/layouts/stack-layout").StackLayout;
var Label = require("ui/label").Label;
var Button = require("ui/button").Button;

// Create our required function which NativeScript uses
// to build the page.
exports.createPage = function() {
  // Create our components for this page
  var page = new Page();
  var layout = new StackLayout();
  var welcomeLabel = new Label();
  var backButton = new Button();

  // Assign our page title
  page.actionBar.title = "Settings";

  // Setup our welcome label
  welcomeLabel.text = "You are now in Settings!";
  welcomeLabel.cssClass = "message";

  // Setup our Go Back button
  backButton.text = "Go Back";
  backButton.on("tap", function () {
    frame.topmost().goBack();
  });

  // Add our layout items to our StackLayout
  layout.addChild(welcomeLabel);
  layout.addChild(backButton);

  // Assign our layout to the page.
  page.content = layout;
  // Return our created page
  return page;
};

One thing I did want to mention here is that if you are creating a page totally programmatically without the use of a declarative XML file, the createPage function must return the page component; the frame component expects a Page component.

Summary

We have covered a large amount of foundational information in this article. We covered which files are used for your application and where to find and make any changes to the project control files. In addition to all this, we also covered several foundational components, such as the Application, Frame, and Page components.

Resources for Article:

Further resources on this subject:
- Overview of TDD [article]
- Understanding outside-in [article]
- Understanding TDD [article]
Working with Events

Packt
08 Jan 2016
7 min read
In this article by Troy Miles, author of the book jQuery Essentials, we will learn that an event is the occurrence of anything that the system considers significant. It can originate in the browser, the form, the keyboard, or any other subsystem, and it can also be generated by the application via a trigger. An event can be as simple as a key press or as complex as the completion of an Ajax request.

(For more resources related to this topic, see here.)

While there are a myriad of potential events, events only matter when the application listens for them. This is also known as hooking an event. By hooking an event, you tell the browser that this occurrence is important to you and to let you know when it happens. When the event occurs, the browser calls your event handling code, passing the event object to it. The event object holds important event data, including which page element triggered it. Let's take a look at the first learned and possibly most important event, the ready event.

The ready event

The first event that programmers new to jQuery usually learn about is the ready event, sometimes referred to as the document ready event. This event signifies that the DOM is fully loaded and that jQuery is open for business. The ready event is similar to the document load event, except that it doesn't wait for all of the page's images and other assets to load. It only waits for the DOM to be ready. Also, unlike most events, if the ready event fires before it is hooked, the handler code will still be called.

The .ready() event can only be attached to the document element. When you think about it, it makes sense because it fires when the DOM, also known as the Document Object Model, is fully loaded. The .ready() event has a few different hooking styles. All of the styles do the same thing—hook the event. Which one you use is up to you. In its most basic form, the hooking code looks similar to the following:

$(document).ready(handler);

As it can only be attached to the document element, the selector can be omitted, in which case the event hook looks as follows:

$().ready(handler);

However, the jQuery documentation does not recommend using the preceding form. There is a still terser version of this event's hook. This version omits nearly everything, only passing an event handler to the jQuery function. It looks similar to the following:

$(handler);

While all of the different styles work, I only recommend the first form because it is the clearest. While the other forms work and save a few bytes' worth of characters, they do so at the expense of code clarity. If you are worried about the number of bytes an expression uses, you should use a JavaScript minimizer instead; it will do a much more thorough job of shrinking code than you could ever do by hand.

The ready event can be hooked as many times as you'd like. When the event is triggered, the handlers are called in the order in which they were hooked. Let's take a look at an example in code.
// let's hook it again and see what happens $(document).ready(function () { console.log("We get this handler even though the ready event has already fired"); }); }); // ready event style no# 2 $().ready(function () { console.log("document ready event handler style no# 2"); }); // ready event style no# 3 $(function () { console.log("document ready event handler style no# 3"); }); In the preceding code, we hook the ready event three times, each one using a different hooking style. The handlers are called in the same order that they are hooked. In the first event handler, we hook the event again. As the event has been triggered already, we may expect that the handler will never be called, but we would be wrong. jQuery treats the ready event differently than other events. Its handler is always called, even if the event has already been triggered. This makes the ready event a great place for initialization and other code, which must be run. Hooking events The ready event is different as compared to all of the other events. Its handler will be called once, unlike other events. It is also hooked differently than other events. All of the other events are hooked by chaining the .on() method to the set of elements that you wish to use to trigger the event. The first parameter passed to the hook is the name of the event followed by the handling function, which can either be an anonymous function or the name of a function. This is the basic pattern for event hooking. It is as follows: $(selector).on('event name', handling function); The .on() method and its companion the .off() method were first added in version 1.7 of jQuery. For older versions of jQuery, the method that is used to hook the event is .bind(). Neither the .bind() method nor its companion the .unbind() method are deprecated, but .on() and .off() are preferred over them. If you are switching from .bind(), the call to .on() is identical at its simplest levels. The .on() method has capabilities beyond that of the .bind() method, which requires different sets of parameters to be passed to it. If you would like for more than one event to share the same handler, simply place the name of the next event after the previous one with a space separating them: $("#clickA").on("mouseenter mouseleave", eventHandler); Unhooking events The main method that is used to unhook an event handler is .off(). Calling it is simple; it is similar to the following: $(elements).off('event name', handling function); The handling function is optional and the event name is also optional. If the event name is omitted, then all events that are attached to the elements are removed. If the event name is included, then all handlers for the specified event are removed. This can create problems. Think about the following scenario. You write a click event handler for a button. A bit later in the app's life cycle, someone else also needs to know when the button is clicked. Not wanting to interfere with already working code, they add a second handler. When their code is complete, they remove the handler as follows: $('#myButton').off('click'); As the handler was called using only using the event name, it removed not only the handler that it added but also all of the handlers for the click event. This is not what was wanted. 
Unhooking events

The main method that is used to unhook an event handler is .off(). Calling it is simple; it looks similar to the following:

$(elements).off('event name', handling function);

The handling function is optional, and the event name is also optional. If the event name is omitted, then all events that are attached to the elements are removed. If the event name is included, then all handlers for the specified event are removed. This can create problems. Think about the following scenario. You write a click event handler for a button. A bit later in the app's life cycle, someone else also needs to know when the button is clicked. Not wanting to interfere with already working code, they add a second handler. When their code is complete, they remove the handler as follows:

$('#myButton').off('click');

As the handler was called using only the event name, it removed not only the handler that they added but also all of the handlers for the click event. This is not what was wanted. Don't despair, however; there are two fixes for this problem:

function clickBHandler(event){
  console.log('Button B has been clicked, external');
}
$('#clickB').on('click', clickBHandler);
$('#clickB').on('click', function(event){
  console.log('Button B has been clicked, anonymous');
  // turn off the 1st handler without turning off the 2nd
  $('#clickB').off('click', clickBHandler);
});

The first fix is to pass the event handler to the .off() method. In the preceding code, we placed two click event handlers on the button named clickB. The first event handler is installed using a function declaration, and the second is installed using an anonymous function. When the button is clicked, both of the event handlers are called. The second one turns off the first one by calling the .off() method and passing its event handler as a parameter. By passing the event handler, the .off() method is able to match the signature of the handler that you'd like to turn off. If you are not using anonymous functions, this fix works well. But what if you want to pass an anonymous function as the event handler? Is there a way to turn off one handler without turning off the other? Yes, there is; the second fix is to use event namespacing.
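With namespacing, you append a namespace to the event name when hooking it, and .off() can then target just that namespace. The following sketch is our own illustration of the technique, not code from the book:

// hook the click event under the 'logger' namespace
$('#clickB').on('click.logger', function(event){
  console.log('Button B has been clicked, anonymous');
});

// removes only the handlers hooked as 'click.logger';
// any other click handlers on the button are untouched
$('#clickB').off('click.logger');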
Summary

In this article, we learned a lot about one of the most important constructs in modern web programming—events. They are the things that make a site interactive.

Resources for Article:

Further resources on this subject:
- Preparing Your First jQuery Mobile Project [article]
- Building a Custom Version of jQuery [article]
- Learning jQuery [article]
Planning Your CRM Implementation

Packt
08 Jan 2016
13 min read
In this article by Joseph Murray and Brian Shaughnessy, authors of the book Using CiviCRM, Second Edition, you will learn to plan your implementation of CiviCRM so that your project has the best chance of achieving success for your organization. In this article, we will do the following:

- Identify potential barriers to success and learn how to overcome them
- Select an appropriate development methodology

(For more resources related to this topic, see here.)

Barriers to success

Constituent Relationship Management (CRM) initiatives can be difficult. They require change, often impacting processes and workflows that are at the heart of your staff's daily responsibilities. They force you to rethink how you are managing your existing data and to decide what new data you want to begin collecting. They may require you to restructure external relationships, even as you rebuild internal processes and tools. Externally, the experience of your constituents as they interact with your organization may need to change so that it provides more value and fewer barriers to involvement. Internally, business processes and supporting technological systems may need to change in order to break down departmental operations' silos, increase efficiencies, and enable more effective targeting, improved responsiveness, and new initiatives.

The success of a CRM project often depends on changing the behavior and attitude of individuals across the organization, and on replacing, changing, and/or integrating many IT systems used across the organization. To realize success as you manage the potential culture changes, you may need to ask staff members to take on tasks and responsibilities that do not directly benefit them or the department managed by their supervisor, but that provide value to the organization by what they enable others in the organization to accomplish. As a result, it is often very challenging to align the interests of staff and organizational units with the organization's broader interest in improved constituent relations, as promised by the CRM strategy. This is why an executive sponsor, such as the executive director, is so important: they must help staff see beyond their immediate scope of responsibility and buy in to the larger goals of the organization.

On the technical side, CRM projects for mid- and large-sized organizations typically involve replacing or integrating other systems. Configuring and customizing a new software system, migrating data into it, testing and deploying it, and training staff members to use it can be a challenge at the best of times. Doing it for multiple systems and more users multiplies the challenge. Since a CRM initiative generally involves integrating separate systems, you must be prepared to face the potential complexity of working with disparate data schemas requiring transformations and cleanup for interoperability, and of keeping middleware in sync with changes in multiple independent software packages.

Unfortunately, these challenges may lead to project failure if they are not identified and addressed. Common causes of failure include:

- Lack of executive-level sponsorship, resulting in improperly resolved turf wars.
- IT-led initiatives, which have a greater tendency to focus on cost efficiency. This focus will generally not result in better constituent relations oriented toward achieving the organization's mission.
- An IT approach, particularly where users and usability experts are not on the project team, which may also lead to poor user adoption if the system is not adapted to users' needs, or if the users are poorly trained.
- No customer data integration approach, resulting in a failure to overcome the data-silo problem: no consolidated view of constituents, poorer targeting, and an inability to realize enterprise-level benefits.
- Lack of buy-in, leading to a lack of use of the new CRM system and continued use of the old processes and systems it was meant to supplant.
- Lack of training and follow-up, causing staff anxiety and opposition. This may cause non-use or misuse of the system, resulting in poor data handling and mix-ups in the way constituents are treated.
- Not customizing enough to actually meet the requirements of the organization in the areas of:
  - Data integration
  - Business processes
  - User experiences
- Over-customizing, which causes:
  - The costs of the system to escalate, especially through future upgrades
  - The best practices incorporated in the base functionality to be overridden in some cases
  - The user forms to become overly complex
  - The user experiences to become off-putting
- No strategy for dealing with the technical challenges associated with developing, extending, and/or integrating the CRM software system, leading to:
  - Cost overruns
  - Poorly designed and built software
  - Poor user experiences
  - Incomplete or invalid data

This does not mean that project failure is inevitable or common. These clearly identifiable causes of failure can be overcome through effective project planning.

Perfection is the enemy of the good

CRM systems and their functional components, such as fundraising, ticket sales, communication, event registration, membership management, and case management, are essential for the core operations of most nonprofits. Since they are so central to the daily operations of the organization, there is often legitimate fear of project failure when considering changes. This fear can easily create a perfectionist mentality, where the project team attempts to overcompensate by creating too much oversight, contingency planning, and project discovery time in an effort to avoid missing any potentially useful feature that could be integrated into the project. While planning is good, perfectionism is not, as perfection is often the enemy of the good.

CRM implementations often risk erring on the side of what is known (tongue-in-cheek) as the MIT approach. The MIT approach believes in, and attempts to design, construct, and deploy, the right thing right from the start. Its big-brain approach to problem solving leads to correctness, completeness, and consistency in the design. It values simplicity in the user interface over simplicity in the implementation design. The other end of the spectrum is captured by aphorisms such as "Less is more," "KISS" (keep it simple, stupid), and "Worse is better." This alternate view willingly accepts deviations from correctness, completeness, and consistency in design in favor of general simplicity, or simplicity of implementation over simplicity of user interface. The reason that such counter-intuitive approaches to developing solutions have become respected and popular is the problems and failures that can result from trying to do it all perfectly from the start.

Neither end of the spectrum is healthy. Handcuffing the project to an unattainable standard of perfection or oversimplifying in order to artificially reduce complexity will both lead to project failure.
Value and success are generally found somewhere in between. As the project manager, it is your responsibility to set the tone, determine priorities, and plan the implementation and development process. One rule that may help achieve balance and move the project forward is: release early, release often. This rule is commonly embraced in the open source community, where collaboration is essential to success. This motto:

- Captures the intent of catching errors earlier
- Allows users to realize value from the system sooner
- Allows users to better imagine and articulate what the software should do, through ongoing use of and interaction with a working system early in the process
- Creates a healthy cyclical process in which end users provide rapid feedback into the development process, and those ideas are considered, implemented, and released on a regular basis

There is no perfect antidote to these two extremes, only an awareness of the tendency for projects (and stakeholders) to lean in one of the two directions, and the realization that both extremes should be avoided.

Development methodologies

Whatever approach your organization decides to take for developing and implementing its CRM strategy, it is usually good to have an agreed-upon process and methodology. Your process defines the steps to be taken as you implement the project. Your methodology defines the rules for the process, that is, the methods to be used throughout the course of the project. The spirit of this problem-solving approach can be seen in the traditional Waterfall development model and in the contrasting iterative and incremental development model.

Projects naturally change and evolve over time. You may find that you embrace one of these methodologies for the initial implementation and then migrate to a different or mixed method for maintenance and future development work. By no means should you feel restricted by the definitions provided; rather, adjust the principles to meet your changing needs throughout the course of the project. That being said, it is important that your team understands the project rules at a given point in time so that the project management principles are respected.

The conventional Waterfall development methodology

The traditional Waterfall method of software development is sometimes thought of as "big design up front." It employs a sequential approach to development, moving from needs analysis and requirements, to architectural and user experience design, detailed design, implementation, integration, testing, deployment, and maintenance. The output of each step or phase flows downward, like water, to the next step in the process, as illustrated by the arrows in the following figure.

The Waterfall model tends to be more formal and more planned, includes more documentation, and often has a stronger division of labor. This methodology benefits from clear, linear, and progressive development steps in which each phase builds upon the previous one. However, it can suffer from inflexibility if applied too rigidly. For example, if, during the verification and quality assurance phase, you realize a significant functionality gap resulting from incorrect (or changed) specification requirements, it may be difficult to interject those new needs into the process. The "release early, release often" iterative principle mentioned earlier can help overcome that inflexibility.
If the overall process is kept tight and the development window short, you can justify deferring the new functionality or corrective specifications to the next release. If you do not embrace the "release early, release often" principle, either because it is not supported by the project team or because the scope of the project does not permit it, you should still anticipate the need for flexibility and build it into your methodology. The overriding principle is to define the rules of the game early, so that your project team knows what options are available at each stage of development.

Iterative development methodology

Iterative development models depart from this structure by breaking the work up into chunks that can be developed and delivered separately. The Waterfall process is used in each phase or segment of the project, but the overall project structure is not necessarily held to the same rigid process. As one moves farther away from the Waterfall approach, there is a greater emphasis on evaluating incrementally delivered pieces of the solution and incorporating feedback on what has already been developed into the planning of future work, as illustrated by the loop in the following figure.

This methodology seeks to take what is good in the traditional Waterfall approach (structure, clearly defined linear steps, and a strong development/quality assurance/rollout process) and improve it through shorter development cycles centered on smaller segments of the overall project. Perhaps the biggest challenge in this model is the project management role, as it may result in many moving pieces that must be tightly coordinated in order to release the final working product. An iterative development method can also feel a little like a dog chasing its tail: you keep getting work done but never feel like you are getting anywhere. It may be necessary to limit how many iterations take place before you pause the process, take a step back, and consider where you are in the project's bigger picture before setting new goals and beginning the cyclical process again.

Agile development methodology

Agile development methodologies are an effective derivative of the iterative development model that moves one step further away from the Waterfall model. They are characterized by requirements and solutions evolving together, requiring work teams to be drawn from all the relevant parts of the organization. These teams organize themselves to work in rapid one- to four-week iteration cycles. Agile centers on time-based release cycles, and in this way it differs from the other methodologies discussed, which are oriented more toward functionality-based releases.

The following figure illustrates the implementation of an Agile methodology that highlights short daily Scrum status meetings, a product backlog containing features or user stories for each iteration, and a sprint backlog containing revisable and re-prioritizable work items for the team during the current iteration.

A deliberate effort is usually made to ensure that the sprint backlog is long enough that the lowest-priority items will not be dealt with before the end of the iteration. Although they can be put onto the list of work items that may or may not be selected for the next iteration, the idea is that the client or product owner should, at some point, decide that it is not worth investing more resources in the "nice to have, but not really necessary" items.
But having those low-priority backlog items is equally important for maximizing developer efficiency. If developers work through the higher-priority issues faster than originally expected, the backlog items give them a chance to chip away at the "nice to have" features while keeping within the time-based release cycle (and your budget constraints). As one might expect, this methodology relies heavily on effective prioritization. Since software releases and development cycles adhere to rigid timeframes, only high-priority issues or features are actively addressed at a given point in time; the remaining incomplete issues falling lower on the list are subject to reassignment to the next cycle.

While an Agile development model may seem attractive (and rightly so), there are a few things to realize before embracing it:

- The process of reviewing, prioritizing, and managing the issue list takes time and effort. Each cycle will require the team to evaluate the status of issues and reprioritize for the next release.
- Central to the success of this model is a rigid allegiance to time-based releases and a ruthless determination to prevent feature creep.
- Yes, you will have releases that are delayed a week, or must-have features and bug fixes that enter the issue queue late in the process. However, the exceptions must not become the rule, or you lose your agility.

Summary

This article lists common barriers to the success of CRM initiatives that arise from people issues or technical issues. These include problems getting systems and tools that support disparate business functions to provide integrated functionality. We advocate a pragmatic approach to implementing a CRM strategy for your organization, and we encourage you to adopt the approach, processes, and methodology that work for your organization: wide ranges in the level of structure, formality, and planning all work in different organizations.

Resources for Article:

Further resources on this subject:
- Getting Dynamics CRM 2015 Data into Power BI [article]
- Customization in Microsoft Dynamics CRM [article]
- Getting Started with Microsoft Dynamics CRM 2013 Marketing [article]

Distributed Resource Scheduler
Packt
07 Jan 2016
14 min read
In this article, written by Christian Stankowic, author of the book vSphere High Performance Essentials, we look at the Distributed Resource Scheduler. In cluster setups, Distributed Resource Scheduler (DRS) can assist you with automatically balancing CPU and storage load (Storage DRS). DRS monitors the ESXi hosts in a cluster and migrates the running VMs using vMotion, primarily to ensure that all the VMs get the resources they need, and secondarily to balance the cluster. In addition to this, Storage DRS monitors the shared storage for information about latency and capacity consumption. When Storage DRS recognizes the potential to optimize storage resources, it makes use of Storage vMotion to balance the load. We will cover Storage DRS in detail later.

(For more resources related to this topic, see here.)

Working of DRS

DRS primarily uses two metrics to determine the cluster balance:

- Active host CPU: This includes the usage (CPU task time in ms) and ready (wait time in ms for VMs to get scheduled on physical cores) metrics.
- Active host memory: This describes the amount of memory pages that are predicted to have changed in the last 20 seconds. A math-sampling algorithm calculates this amount; however, it is quite inaccurate.

Active host memory is often used for resource capacity purposes. Be careful with using this value as an indicator, as it only describes how aggressively a workload changes its memory. Depending on your application architecture, it may not measure how much memory a particular VM really needs. Think about applications that allocate a lot of memory for the purpose of caching. Using the active host memory metric for capacity purposes might lead to inappropriate settings.

The migration threshold controls DRS's aggressiveness and defines how much a cluster can be imbalanced. Refer to the following table for a detailed explanation:

DRS level          | Priorities | Effect
Most conservative  | 1          | Only affinity/anti-affinity constraints are applied
More conservative  | 1–2        | Recommendations promising significant improvements are also applied
Balanced (default) | 1–3        | Recommendations promising at least good improvements are applied
More aggressive    | 1–4        | Recommendations promising even moderate improvements are applied
Most aggressive    | 1–5        | Recommendations promising only slight improvements are also applied

Apart from the migration threshold, two other metrics, Target Host Load Standard Deviation (THLSD) and Current Host Load Standard Deviation (CHLSD), are calculated. THLSD defines how much a cluster node's load can differ from the others for the cluster still to be considered balanced. The migration threshold and the particular ESXi host's active CPU and memory values heavily influence this metric. CHLSD measures whether the cluster is currently balanced: if this value exceeds the THLSD, the cluster is imbalanced, and DRS will calculate recommendations in order to balance it. In addition to this, DRS also calculates the vMotion overhead needed for each migration. If a migration's overhead is deemed higher than its benefit, vMotion will not be executed. DRS also evaluates the migration recommendations multiple times in order to avoid ping-pong migrations.

By default, once enabled, DRS runs every five minutes (300 seconds). Depending on your landscape, it might be necessary to change this behavior. To do so, you need to alter the vpxd.cfg configuration file on the vCenter Server machine.
Search for the following lines and alter the period (in seconds):

<config>
  <drm>
    <pollPeriodSec>300</pollPeriodSec>
  </drm>
</config>

Refer to the following table for the configuration file location, depending on your vCenter implementation:

vCenter Server type      | File location
vCenter Server Appliance | /etc/vmware-vpx/vpxd.cfg
vCenter Server (Windows) | C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg

Check-list – performance tuning

There are a couple of things to consider when optimizing DRS for high-performance setups:

- Make sure to use hosts with homogeneous CPU and memory configurations. Having different nodes will make DRS less effective.
- Use at least a 1 Gbps network connection for vMotion. For better performance, it is recommended to use 10 Gbps instead.
- It is common practice not to oversize virtual machines. Only configure as many CPU and memory resources as the VM needs. Migrating workloads with unneeded resources takes more time.
- Make sure not to exceed the ESXi host and cluster limits that are mentioned in the VMware vSphere Configuration Maximums document. For vSphere 5.5, refer to https://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf. For vSphere 6.0, refer to https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf.

Configure DRS

To configure DRS for your cluster, proceed with the following steps:

1. Select your cluster from the inventory tab and click Manage and Settings.
2. Under Services, select vSphere DRS. Click Edit.
3. Select whether DRS should act in the Partially Automated or Fully Automated mode. In partially automated mode, DRS will place VMs on appropriate hosts once they are powered on; however, it will not migrate running workloads. In fully automated mode, DRS will also migrate running workloads in order to balance the cluster load. The Manual mode only gives you recommendations, and the administrator can select which recommendations to apply. To create resource pools at the cluster level, you will need to have at least the manual mode enabled.
4. Select the DRS aggressiveness. Refer to the preceding table for a short explanation. Using more aggressive DRS levels is only recommended for homogeneous CPU and memory setups.

When creating VMware support calls regarding DRS issues, a DRS dump file called drmdump is important. This file contains various metrics that DRS uses to calculate the possible migration benefits. On the vCenter Server Appliance, this file is located in /var/log/vmware/vpx/drmdump/clusterName. On the Windows variant, the file is located in %ALLUSERSPROFILE%\VMware\VMware VirtualCenter\Logs\drmdump\clusterName.

VMware also offers an online tool called VM Resource and Availability Service (http://hasimulator.vmware.com), which tells you which VMs can be restarted during ESXi host failures. It requires you to upload this metric file in order to give you the results. This can be helpful when simulating failure scenarios.

Enhanced vMotion Compatibility

Enhanced vMotion Compatibility (EVC) enables your cluster to migrate workloads between ESXi hosts with different processor generations. Unfortunately, it is not possible to migrate workloads between Intel-based and AMD-based servers; EVC only enables migrations across different Intel or AMD CPU generations. Once enabled, all the ESXi hosts are configured to provide the same set of CPU functions.
In other words, the functions of newer CPU generations are disabled to match those of the older ESXi hosts in the cluster, creating a common baseline.

Configuring EVC

To enable EVC, perform the following steps:

1. Select the affected cluster from the inventory tab.
2. Click on Manage, Settings, VMware EVC, and Edit.
3. Choose Enable EVC for AMD Hosts or Enable EVC for Intel Hosts.
4. Select the appropriate CPU generation for the cluster (the oldest).
5. Make sure that Compatibility acknowledges your configuration.
6. Save the changes.

As mixing older hosts into high-performance clusters is not recommended, you should also avoid using EVC.

To sum it up, keep the following in mind when planning the use of DRS:

- Enable DRS if you plan to have automatic load balancing; this is highly recommended for high-performance setups.
- Adjust the DRS aggressiveness level to match your requirements. Too aggressive a migration threshold may result in too many migrations; therefore, experiment with this setting to find the best fit for you.
- Make sure to have a separate vMotion network. Using the same logical network components as for the VM traffic is not recommended and might result in poor workload performance.
- Don't overload ESXi hosts; spare some CPU resources for vMotion processes in order to avoid performance bottlenecks during migrations.
- In high-performance setups, mixing various CPU and memory configurations is not recommended; try not to use EVC.
- Also, keep license constraints in mind when configuring DRS. Some software products might require additional licenses if they run on multiple servers. We will focus on this later.

Affinity and anti-affinity rules

Sometimes, it is necessary to separate workloads or stick them together. To name an example, think about a classical multi-tier application consisting of the following:

- Frontend layer
- Database layer
- Backend layer

One possibility would be to separate the particular VMs across multiple ESXi hosts to increase resilience: if a single ESXi host serving all the workloads crashes, all application components are affected by this fault. Conversely, moving all the participating application VMs to one single ESXi host can result in higher performance, as network traffic does not need to leave the ESXi host. However, there are more use cases for creating affinity and anti-affinity rules, such as the following:

- Dividing production, development, and test workloads. For example, it would be possible to separate production from the development and test workloads. This is a common procedure that many application vendors require.
- Licensing reasons (for example, a license bound to a USB dongle, per-core licensing, software assurance denying vMotion, and so on).
- Application interoperability incompatibility (for example, applications that need to run on separate hosts).

As VMware vSphere has no knowledge of the license conditions of the workloads running virtualized, it is very important to check your software vendors' license agreements. You, as the virtual infrastructure administrator, are responsible for ensuring that your software is fully licensed. Some software vendors require special licenses when running virtualized or on multiple hosts.

There are two kinds of affinity/anti-affinity rules: VM-Host (a relationship between VMs and ESXi hosts) and VM-VM (a relationship between particular VMs). Each rule consists of at least one VM and host DRS group. These groups also contain at least one entry.
Every rule has a designation, where the administrator can choose between must and should. Implementing a rule with the should designation results in a preference for hosts that satisfy all the configured rules; if no applicable host is found, the VM is put on another host in order to ensure that the workload at least keeps running. If the must designation is selected, a VM will only run on hosts that satisfy the configured rules; if no applicable host is found, the VM cannot be moved or started. This configuration approach is strict and requires extensive testing in order to avoid unplanned effects.

DRS rules are combined rather than ranked. Therefore, if multiple rules are defined for a particular VM/host or VM/VM combination, a power-on request is only granted if all the rules apply to the requested action. If two rules conflict for a particular VM/host or VM/VM combination, the first rule is chosen and the other rule is automatically disabled. The use of must rules in particular should be evaluated very carefully, as HA might not restart some workloads if these rules cannot be satisfied in the case of a host crash.

Configuring affinity/anti-affinity rules

In this example, we will have a look at two use cases to which affinity/anti-affinity rules can apply.

Example 1: VM-VM relationship

This example consists of two VMs serving a two-tier application: db001 (database VM) and web001 (frontend VM). It is advisable to have both VMs running on the same physical host in order to reduce networking hops between the frontend server and its database. To configure the VM-VM affinity rule, proceed with the following steps:

1. Select your cluster from the inventory tab and click Manage and then VM/Host Rule underneath Configuration.
2. Click Add.
3. Enter a readable rule name (for example, db001-web001-bundle) and select Enable rule.
4. Select the Keep Virtual Machines Together type and select the affected VMs.
5. Click OK to save the rule.

When one of the virtual machines is migrated using vMotion, the other VM will be migrated as well.

Example 2: VM-Host relationship

In this example, a VM (vcsa) is pinned to a particular ESXi host of a two-node cluster designated for production workloads. To configure the VM-Host affinity rule, proceed with the following steps:

1. Select your cluster from the inventory tab and click Manage and then VM/Host Groups underneath Configuration.
2. Click Add. Enter a group name for the VM; make sure to select the VM Group type. Then, click Add to add the affected VM.
3. Click Add once again. Enter a group name for the ESXi host; make sure to select the Host Group type. Then, click Add to add the ESXi host.
4. Select VM/Host Rule underneath Configuration and click Add.
5. Enter a readable rule name (for example, vcsa-to-esxi02) and select Enable rule.
6. Select the Virtual Machines to Hosts type and select the previously created VM and host groups.
7. Make sure to select Must run on hosts in group or Should run on hosts in group before clicking OK.

Migrating the virtual machine to another host will fail with an error message if Must run on hosts in group was selected earlier.

Keep the following in mind when designing affinity and anti-affinity rules:

- Enable DRS.
- Double-check your software vendors' licensing agreements.
- Make sure to test your affinity/anti-affinity rules by simulating vMotion processes. Also, simulate host failures by using maintenance mode to ensure that your rules are working as expected. Note that the created rules also apply to HA and DPM.
- KISS (keep it simple, stupid): try to avoid utilizing too many rules for one VM/host combination.

Distributed power management

High-performance setups are often the opposite of efficient, green infrastructures; however, high-performing virtual infrastructure setups can be efficient as well. Distributed Power Management (DPM) can help you reduce the power costs and consumption of your virtual infrastructure. It is part of DRS and monitors the CPU and memory usage of all workloads running in the cluster. If it is possible to run all VMs on fewer hosts, DPM will put one or more ESXi hosts in standby mode (they will be powered off) after migrating the VMs using vMotion.

DPM tries to keep the CPU and memory usage of all the cluster nodes between 45% and 81% by default. If this range is exceeded, hosts will be powered on or off. Setting two advanced parameters can change this behavior:

- DemandCapacityRatioTarget: The utilization target for the ESXi hosts (default: 63%)
- DemandCapacityRatioToleranceHost: The utilization range around the target utilization (default: 18%)

The range is calculated as follows:

(DemandCapacityRatioTarget - DemandCapacityRatioToleranceHost) to (DemandCapacityRatioTarget + DemandCapacityRatioToleranceHost)

With the default values, this gives (63% - 18%) to (63% + 18%), that is, the 45% to 81% range mentioned earlier.

To control a server's power state, DPM makes use of three protocols, in the following order:

1. Intelligent Platform Management Interface (IPMI)
2. Hewlett Packard Integrated Lights-Out (HP iLO)
3. Wake-on-LAN (WoL)

To enable IPMI/HP iLO management, you will need to configure the Baseboard Management Controller (BMC) IP address and other access information. To do so, follow these steps:

1. Log in to vSphere Web Client and select the host that you want to configure for power management.
2. Click on Configuration and select the Power Management tab.
3. Select Properties and enter an IP address, MAC address, username, and password for the server's BMC. Note that entering hostnames will not work.

To enable DPM for a cluster, perform the following steps:

1. Select the cluster from the inventory tab and select Manage.
2. From the Services tab, select vSphere DRS and click Edit.
3. Expand the Power Management tab and select Manual or Automatic. Also, select the threshold that DPM will use to make power decisions. The higher the value, the more quickly DPM will put ESXi hosts in standby mode.

It is also possible to disable DPM for a particular host (for example, the strongest one in your cluster). To do so, select the cluster and then Manage and Host Options. Check the host and click Edit. Make sure to select Disabled for the Power Management option.

Consider the following when planning to utilize DPM:

- Make sure your servers have a supported BMC, such as HP iLO or IPMI.
- Evaluate the right DPM threshold. Also, keep your servers' boot times (including firmware initialization) in mind, and test your configuration before running it in production.
- Keep in mind that DPM also uses active memory and CPU usage for its decisions. Booting VMs might claim all of their memory without using many active memory resources. If hosts are powered down while plenty of VMs are booting, this might result in extensive swapping.

Summary

In this article, you learned how to implement affinity and anti-affinity rules. You also learned how to save power while still meeting your workload requirements.
Resources for Article:

Further resources on this subject:
- Monitoring and Troubleshooting Networking [article]
- Storage Scalability [article]
- Upgrading VMware Virtual Infrastructure Setups [article]