MapReduce functions

Packt
03 Mar 2015
11 min read
In this article by John Zablocki, author of the book Couchbase Essentials, you will become acquainted with MapReduce and how you'll use it to create secondary indexes for your documents. At its simplest, MapReduce is a programming pattern used to process large amounts of data that is typically distributed across several nodes in parallel. In the NoSQL world, MapReduce implementations can be found on many platforms, from MongoDB to Hadoop, and of course, Couchbase. Even if you're new to the NoSQL landscape, it's quite possible that you've already worked with a form of MapReduce. The inspiration for MapReduce in distributed NoSQL systems was drawn from the functional programming concepts of map and reduce. While purely functional programming languages haven't quite reached mainstream status, languages such as Python, C#, and JavaScript all support map and reduce operations.

Map functions

Consider the following Python snippet:

numbers = [1, 2, 3, 4, 5]
doubled = map(lambda n: n * 2, numbers)
#doubled == [2, 4, 6, 8, 10]

These two lines of code demonstrate a very simple use of a map() function. In the first line, the numbers variable is created as a list of integers. The second line applies a function to the list to create a new mapped list. In this case, the map() function is supplied as a Python lambda, which is just an inline, unnamed function. The body of the lambda multiplies each number by two. This map() function can be made slightly more complex by doubling only odd numbers, as shown in this code:

numbers = [1, 2, 3, 4, 5]

def double_odd(num):
    if num % 2 == 0:
        return num
    else:
        return num * 2

doubled = map(double_odd, numbers)
#doubled == [2, 2, 6, 4, 10]

Map functions are implemented differently in each language or platform that supports them, but all follow the same pattern. An iterable collection of objects is passed to a map function. Each item of the collection is then iterated over, with the map function being applied to that iteration. The final result is a new collection where each of the original items is transformed by the map.

Reduce functions

Like maps, reduce functions also work by applying a provided function to an iterable data structure. The key difference between the two is that the reduce function works to produce a single value from the input iterable. Using Python's built-in reduce() function, we can see how to produce a sum of integers, as follows:

numbers = [1, 2, 3, 4, 5]
sum = reduce(lambda x, y: x + y, numbers)
#sum == 15

You probably noticed that unlike our map operation, the reduce lambda has two parameters (x and y in this case). The argument passed to x will be the accumulated value of all applications of the function so far, and y will receive the next value to be added to the accumulation. Written with parentheses, the order of operations can be seen as ((((1 + 2) + 3) + 4) + 5). Alternatively, the steps are shown in the following list:

x = 1, y = 2
x = 3, y = 3
x = 6, y = 4
x = 10, y = 5
x = 15

As this list demonstrates, the value of x is the cumulative sum of the previous x and y values. As such, reduce functions are sometimes termed accumulate or fold functions. Regardless of their name, reduce functions serve the common purpose of combining pieces of a recursive data structure to produce a single value.
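Because Couchbase map and reduce functions are written in JavaScript, it may help to see the preceding Python snippets expressed with JavaScript's built-in Array methods. This is only a plain JavaScript sketch of the same map and fold ideas, not Couchbase code:

var numbers = [1, 2, 3, 4, 5];

// Map: transform each element into a new array
var doubled = numbers.map(function (n) {
    return n * 2;
}); // [2, 4, 6, 8, 10]

// Reduce: fold the array down to a single accumulated value
var sum = numbers.reduce(function (x, y) {
    return x + y;
}); // 15

The shape is identical to the Python versions: map produces a new collection of the same length, while reduce collapses the collection into one result.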
Couchbase MapReduce

Creating an index (or view) in Couchbase requires creating a map function written in JavaScript. When the view is created for the first time, the map function is applied to each document in the bucket containing the view. When you update a view, only new or modified documents are indexed. This behavior is known as incremental MapReduce. You can think of a basic map function in Couchbase as being similar to a SQL CREATE INDEX statement. Effectively, you are defining a column, or a set of columns, to be indexed by the server. Of course, these are not columns, but rather properties of the documents to be indexed.

Basic mapping

To illustrate the process of creating a view, first imagine that we have a set of JSON documents as shown here:

var books = [
    { "id": 1, "title": "The Bourne Identity", "author": "Robert Ludlow" },
    { "id": 2, "title": "The Godfather", "author": "Mario Puzzo" },
    { "id": 3, "title": "Wiseguy", "author": "Nicholas Pileggi" }
];

Each document contains title and author properties. In Couchbase, to query these documents by either title or author, we'd first need to write a map function. Without considering how map functions are written in Couchbase, we're able to understand the process with vanilla JavaScript:

books.map(function(book) {
  return book.author;
});

In the preceding snippet, we're making use of the built-in JavaScript array's map() function. Similar to the Python snippets we saw earlier, JavaScript's map() function takes a function as a parameter and returns a new array with mapped objects. In this case, we'll have an array with each book's author, as follows:

["Robert Ludlow", "Mario Puzzo", "Nicholas Pileggi"]

At this point, we have a mapped collection that will be the basis for our author index. However, we haven't provided a means for the index to be able to refer back to its original document. If we were using a relational database, we'd have effectively created an index on the Author column with no way to get back to the row that contained it. With a slight modification to our map function, we are able to provide the key (the id property) of the document as well in our index:

books.map(function(book) {
  return [book.author, book.id];
});

In this slightly modified version, we're including the ID with the output of each author. In this way, the index has each document's key stored with its author:

[["Robert Ludlow", 1], ["Mario Puzzo", 2], ["Nicholas Pileggi", 3]]

We'll soon see how this structure more closely resembles the values stored in a Couchbase index.

Basic reducing

Not every Couchbase index requires a reduce component. In fact, we'll see that Couchbase already comes with built-in reduce functions that will provide you with most of the reduce behavior you need. However, before relying on only those functions, it's important to understand why you'd use a reduce function in the first place. Returning to the preceding map example, let's imagine we have a few more documents in our set, as follows:

var books = [
    { "id": 1, "title": "The Bourne Identity", "author": "Robert Ludlow" },
    { "id": 2, "title": "The Bourne Ultimatum", "author": "Robert Ludlow" },
    { "id": 3, "title": "The Godfather", "author": "Mario Puzzo" },
    { "id": 4, "title": "The Bourne Supremacy", "author": "Robert Ludlow" },
    { "id": 5, "title": "The Family", "author": "Mario Puzzo" },
    { "id": 6, "title": "Wiseguy", "author": "Nicholas Pileggi" }
];

We'll still create our index using the same map function because it provides a way of accessing a book by its author.
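To see why keeping the document key next to the emitted value matters, here is a small vanilla JavaScript illustration (not the Couchbase API) of how such an author index could be used to find the IDs of every book by a given author, using the books array from the preceding example:

var authorIndex = books.map(function (book) {
    return [book.author, book.id];
});

// Collect the document IDs for one author by scanning the index entries
var ludlowIds = authorIndex.filter(function (entry) {
    return entry[0] === "Robert Ludlow";
}).map(function (entry) {
    return entry[1];
}); // [1, 2, 4]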
Now imagine that we want to know how many books an author has written, or (assuming we had more data) the average number of pages written by an author. These questions are not possible to answer with a map function alone. Each application of the map function knows nothing about the previous application. In other words, there is no way for you to compare or accumulate information about one author's book to another book by the same author. Fortunately, there is a solution to this problem. As you've probably guessed, it's the use of a reduce function. As a somewhat contrived example, consider this JavaScript:

var mapped = books.map(function (book) {
    return [book.id, book.author];
});

var counts = {};
var reduced = mapped.reduce(function (prev, cur, idx, arr) {
    var key = cur[1];
    if (!counts[key]) counts[key] = 0;
    ++counts[key];
}, null);

This code doesn't quite accurately reflect the way you would count books with Couchbase, but it illustrates the basic idea. You look for each occurrence of a key (author) and increment a counter when it is found. With Couchbase MapReduce, the mapped structure is supplied to the reduce() function in a better format. You won't need to keep track of items in a dictionary.
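For comparison, a more idiomatic version of the same count would let the reduce accumulator itself carry the totals instead of relying on an outer dictionary. This is just a plain JavaScript sketch of that idea, not how Couchbase invokes a reduce function:

var countsByAuthor = books.map(function (book) {
    return [book.author, 1];
}).reduce(function (acc, pair) {
    var author = pair[0];
    // Add this book's contribution (1) to the running total for its author
    acc[author] = (acc[author] || 0) + pair[1];
    return acc;
}, {});
// { "Robert Ludlow": 3, "Mario Puzzo": 2, "Nicholas Pileggi": 1 }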
Couchbase views

At this point, you should have a general sense of what MapReduce is, where it came from, and how it will affect the creation of a Couchbase Server view. So, without further ado, let's see how to write our first Couchbase view. In fact, there are two sample buckets to choose from. The bucket we'll use is beer-sample. If you didn't install it, don't worry. You can add it by opening the Couchbase Console and navigating to the Settings tab. Here, you'll find the option to install the bucket.

First, you need to understand the document structures with which you're working. The following JSON object is a beer document (abbreviated for brevity):

{
  "name": "Sundog",
  "type": "beer",
  "brewery_id": "new_holland_brewing_company",
  "description": "Sundog is an amber ale...",
  "style": "American-Style Amber/Red Ale",
  "category": "North American Ale"
}

As you can see, the beer documents have several properties. We're going to create an index to let us query these documents by name. In SQL, the query would look like this:

SELECT Id FROM Beers WHERE Name = ?

You might be wondering why the SQL example includes only the Id column in its projection. For now, just know that to query a document using a view with Couchbase, the property by which you're querying must be included in an index. To create that index, we'll write a map function. The simplest example of a map function to query beer documents by name is as follows:

function(doc) {
  emit(doc.name);
}

The body of the map function has only one line. It calls the built-in Couchbase emit() function. This function is used to signal that a value should be indexed. The output of this map function will be an array of names. The beer-sample bucket includes brewery data as well. These documents look like the following code (abbreviated for brevity):

{
  "name": "Thomas Hooker Brewing",
  "city": "Bloomfield",
  "state": "Connecticut",
  "website": "http://www.hookerbeer.com/",
  "type": "brewery"
}

If we reexamine our map function, we'll see an obvious problem: both the brewery and beer documents have a name property. When this map function is applied to the documents in the bucket, it will create an index containing both brewery and beer documents. The problem is that Couchbase documents exist in a single container—the bucket. There is no namespace for a set of related documents. The solution has typically involved including a type or docType property on each document. The value of this property is used to distinguish one document from another. In the case of the beer-sample database, beer documents have type = "beer" and brewery documents have type = "brewery". Therefore, we are easily able to modify our map function to create an index only on beer documents:

function(doc) {
  if (doc.type == "beer") {
    emit(doc.name);
  }
}

The emit() function actually takes two arguments. The first, as we've seen, emits a value to be indexed. The second argument is an optional value and is used by the reduce function. Imagine that we want to count the number of beer types in a particular category. In SQL, we would write the following query:

SELECT Category, COUNT(*) FROM Beers GROUP BY Category

To achieve the same functionality with Couchbase Server, we'll need to use both map and reduce functions. First, let's write the map. It will create an index on the category property:

function(doc) {
  if (doc.type == "beer") {
    emit(doc.category, 1);
  }
}

The only real difference between our category index and our name index is that we're including an argument for the value parameter of the emit() function. What we'll do with those values is simply count them. This counting will be done in our reduce function:

function(keys, values) {
  return values.length;
}

In this example, the values parameter will be given to the reduce function as a list of all values associated with a particular key. In our case, for each beer category, there will be a list of ones (that is, [1, 1, 1, 1, 1, 1]). Couchbase also provides a built-in _count function. It can be used in place of the entire reduce function in the preceding example. Now that we've seen the basic requirements when creating an actual Couchbase view, it's time to add a view to our bucket. The easiest way to do so is to use the Couchbase Console.

Summary

In this article, you learned the purpose of secondary indexes in a key/value store. We dug deep into MapReduce, both in terms of its history in functional languages and as a tool for NoSQL and big data systems.

Building a Color Picker with Hex RGB Conversion

Packt
02 Mar 2015
18 min read
In this article by Vijay Joshi, author of the book Mastering jQuery UI, we are going to create a color selector, or color picker, that will allow the users to change the text and background color of a page using the slider widget. We will also use the spinner widget to represent individual colors. Any change in colors using the slider will update the spinner and vice versa. The hex value of both text and background colors will also be displayed dynamically on the page. (For more resources related to this topic, see here.) This is how our page will look after we have finished building it: Setting up the folder structure To set up the folder structure, follow this simple procedure: Create a folder named Article inside the MasteringjQueryUI folder. Directly inside this folder, create an HTML file and name it index.html. Copy the js and css folder inside the Article folder as well. Now go inside the js folder and create a JavaScript file named colorpicker.js. With the folder setup complete, let's start to build the project. Writing markup for the page The index.html page will consist of two sections. The first section will be a text block with some text written inside it, and the second section will have our color picker controls. We will create separate controls for text color and background color. Inside the index.html file write the following HTML code to build the page skeleton: <html> <head> <link rel="stylesheet" href="css/ui-lightness/jquery-ui- 1.10.4.custom.min.css"> </head> <body> <div class="container"> <div class="ui-state-highlight" id="textBlock"> <p> Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. </p> <p> Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. </p> <p> Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. 
</p>
</div>
<div class="clear">&nbsp;</div>
<ul class="controlsContainer">
<li class="left">
<div id="txtRed" class="red slider" data-spinner="sptxtRed" data-type="text"></div><input type="text" value="0" id="sptxtRed" data-slider="txtRed" readonly="readonly" />
<div id="txtGreen" class="green slider" data-spinner="sptxtGreen" data-type="text"></div><input type="text" value="0" id="sptxtGreen" data-slider="txtGreen" readonly="readonly" />
<div id="txtBlue" class="blue slider" data-spinner="sptxtBlue" data-type="text"></div><input type="text" value="0" id="sptxtBlue" data-slider="txtBlue" readonly="readonly" />
<div class="clear">&nbsp;</div>
Text Color : <span>#000000</span>
</li>
<li class="right">
<div id="bgRed" class="red slider" data-spinner="spBgRed" data-type="bg"></div><input type="text" value="255" id="spBgRed" data-slider="bgRed" readonly="readonly" />
<div id="bgGreen" class="green slider" data-spinner="spBgGreen" data-type="bg"></div><input type="text" value="255" id="spBgGreen" data-slider="bgGreen" readonly="readonly" />
<div id="bgBlue" class="blue slider" data-spinner="spBgBlue" data-type="bg"></div><input type="text" value="255" id="spBgBlue" data-slider="bgBlue" readonly="readonly" />
<div class="clear">&nbsp;</div>
Background Color : <span>#ffffff</span>
</li>
</ul>
</div>
<script src="js/jquery-1.10.2.js"></script>
<script src="js/jquery-ui-1.10.4.custom.min.js"></script>
<script src="js/colorpicker.js"></script>
</body>
</html>

We started by including the jQuery UI CSS file inside the head section. Proceeding to the body section, we created a div with the container class, which will act as the parent div for all the page elements. Inside this div, we created another div with id value textBlock and a ui-state-highlight class. We then put some text content inside this div. For this example, we have made three paragraph elements, each having some random text inside it. After div#textBlock, there is an unordered list with the controlsContainer class. This ul element has two list items inside it. The first list item has the CSS class left applied to it and the second has the CSS class right applied to it. Inside li.left, we created three div elements. Each of these three div elements will be converted to a jQuery slider and will represent the red (R), green (G), and blue (B) color code, respectively. Next to each of these divs is an input element where the current color code will be displayed. This input will be converted to a spinner as well. Let's look at the first slider div and the input element next to it. The div has id txtRed and two CSS classes, red and slider, applied to it. The red class will be used to style the slider and the slider class will be used in our colorpicker.js file. Note that this div also has two data attributes attached to it. The first is data-spinner, whose value is the id of the input element next to the slider div, which we have provided as sptxtRed; the second attribute is data-type, whose value is text. The purpose of the data-type attribute is to let us know whether this slider will be used for changing the text color or the background color. Moving on to the input element next to the slider, we have set its id as sptxtRed, which should match the value of the data-spinner attribute on the slider div. It has another attribute named data-slider, which contains the id of the slider it is related to. Hence, its value is txtRed. Similarly, all the slider elements have been created inside div.left and each slider has an input next to it.
The data-type attribute will have the text value for all sliders inside div.left. All input elements have also been assigned a value of 0 as the initial text color will be black. The same pattern that has been followed for elements inside div.left is also followed for elements inside div.right. The only difference is that the data-type value will be bg for slider divs. For all input elements, a value of 255 is set as the background color is white in the beginning. In this manner, all the six sliders and the six input elements have been defined. Note that each element has a unique ID. Finally, there is a span element inside both div.left and div.right. The hex color code will be displayed inside it. We have placed #000000 as the default value for the text color inside the span for the text color and #ffffff as the default value for the background color inside the span for background color. Lastly, we have included the jQuery source file, the jQuery UI source file, and the colorpicker.js file. With the markup ready, we can now write the properties for the CSS classes that we used here. Styling the content To make the page presentable and structured, we need to add CSS properties for different elements. We will do this inside the head section. Go to the head section in the index.html file and write these CSS properties for different elements: <style type="text/css">   body{     color:#025c7f;     font-family:Georgia,arial,verdana;     width:700px;     margin:0 auto;   }   .container{     margin:0 auto;     font-size:14px;     position:relative;     width:700px;     text-align:justify;    } #textBlock{     color:#000000;     background-color: #ffffff;   }   .ui-state-highlight{     padding: 10px;     background: none;   }   .controlsContainer{       border: 1px solid;       margin: 0;       padding: 0;       width: 100%;       float: left;   }   .controlsContainer li{       display: inline-block;       float: left;       padding: 0 0 0 50px;       width: 299px;   }   .controlsContainer div.ui-slider{       margin: 15px 0 0;       width: 200px;       float:left;   }   .left{     border-right: 1px solid;   }   .clear{     clear: both;   }     .red .ui-slider-range{ background: #ff0000; }   .green .ui-slider-range{ background: #00ff00; }   .blue .ui-slider-range{ background: #0000ff; }     .ui-spinner{       height: 20px;       line-height: 1px;       margin: 11px 0 0 15px;     }   input[type=text]{     margin-top: 0;     width: 30px;   } </style> First, we defined some general rules for page body and div .container. Then, we defined the initial text color and background color for the div with id textBlock. Next, we defined the CSS properties for the unordered list ul .controlsContainer and its list items. We have provided some padding and width to each list item. We have also specified the width and other properties for the slider as well. Since the class ui-slider is added by jQuery UI to a slider element after it is initialized, we have added our properties in the .controlsContainer div .ui-slider rule. To make the sliders attractive, we then defined the background colors for each of the slider bars by defining color codes for red, green, and blue classes. Lastly, CSS rules have been defined for the spinner and the input box. We can now check our progress by opening the index.html page in our browser. Loading it will display a page that resembles the following screenshot: It is obvious that sliders and spinners will not be displayed here. 
This is because we have not written the JavaScript code required to initialize those widgets. Our next section will take care of them. Implementing the color picker In order to implement the required functionality, we first need to initialize the sliders and spinners. Whenever a slider is changed, we need to update its corresponding spinner as well, and conversely if someone changes the value of the spinner, we need to update the slider to the correct value. In case any of the value changes, we will then recalculate the current color and update the text or background color depending on the context. Defining the object structure We will organize our code using the object literal. We will define an init method, which will be the entry point. All event handlers will also be applied inside this method. To begin with, go to the js folder and open the colorpicker.js file for editing. In this file, write the code that will define the object structure and a call to it: var colorPicker = {   init : function ()   {       },   setColor : function(slider, value)   {   },   getHexColor : function(sliderType)   {   },   convertToHex : function (val)   {   } }   $(function() {   colorPicker.init(); }); An object named colorPicker has been defined with four methods. Let's see what all these methods will do: init: This method will be the entry point where we will initialize all components and add any event handlers that are required. setColor: This method will be the main method that will take care of updating the text and background colors. It will also update the value of the spinner whenever the slider moves. This method has two parameters; the slider that was moved and its current value. getHexColor: This method will be called from within setColor and it will return the hex code based on the RGB values in the spinners. It takes a sliderType parameter based on which we will decide which color has to be changed; that is, text color or background color. The actual hex code will be calculated by the next method. convertToHex: This method will convert an RGB value for color into its corresponding hex value and return it to get a HexColor method. This was an overview of the methods we are going to use. Now we will implement these methods one by one, and you will understand them in detail. After the object definition, there is the jQuery's $(document).ready() event handler that will call the init method of our object. The init method In the init method, we will initialize the sliders and the spinners and set the default values for them as well. Write the following code for the init method in the colorpicker.js file:   init : function () {   var t = this;   $( ".slider" ).slider(   {     range: "min",     max: 255,     slide : function (event, ui)     {       t.setColor($(this), ui.value);     },     change : function (event, ui)     {       t.setColor($(this), ui.value);     }   });     $('input').spinner(   {     min :0,     max : 255,     spin : function (event, ui)     {       var sliderRef = $(this).data('slider');       $('#' + sliderRef).slider("value", ui.value);     }   });       $( "#txtRed, #txtGreen, #txtBlue" ).slider('value', 0);   $( "#bgRed, #bgGreen, #bgBlue" ).slider('value', 255); } In the first line, we stored the current scope value, this, in a local variable named t. Next, we will initialize the sliders. Since we have used the CSS class slider on each slider, we can simply use the .slider selector to select all of them. 
During initialization, we provide four options for sliders: range, max, slide, and change. Note the value for max, which has been set to 255. Since the value for R, G, or B can be only between 0 and 255, we have set max as 255. We do not need to specify min as it is 0 by default. The slide method has also been defined, which is invoked every time the slider handle moves. The call back for slide is calling the setColor method with an instance of the current slider and the value of the current slider. The setColor method will be explained in the next section. Besides slide, the change method is also defined, which also calls the setColor method with an instance of the current slider and its value. We use both the slide and change methods. This is because a change is called once the user has stopped sliding the slider handle and the slider value has changed. Contrary to this, the slide method is called each time the user drags the slider handle. Since we want to change colors while sliding as well, we have defined the slide as well as change methods. It is time to initialize the spinners now. The spinner widget is initialized with three properties. These are min and max, and the spin. min and max method has been set to 0 and 255, respectively. Every time the up/down button on the spinner is clicked or the up/down arrow key is used, the spin method will be called. Inside this method, $(this) refers to the current spinner. We find our related slider to this spinner by reading the data-slider attribute of this spinner. Once we get the exact slider, we set its value using the value method on the slider widget. Note that calling the value method will invoke the change method of the slider as well. This is the primary reason we have defined a callback for the change event while initializing the sliders. Lastly, we will set the default values for the sliders. For sliders inside div.left, we have set the value as 0 and for sliders inside div.right, the value is set to 255. You can now check the page on your browser. You will find that the slider and the spinner elements are initialized now, with the values we specified: You can also see that changing the spinner value using either the mouse or the keyboard will update the value of the slider as well. However, changing the slider value will not update the spinner. We will handle this in the next section where we will change colors as well. Changing colors and updating the spinner The setColor method is called each time the slider or the spinner value changes. We will now define this method to change the color based on whether the slider's or spinner's value was changed. Go to the setColor method declaration and write the following code: setColor : function(slider, value) {   var t = this;   var spinnerRef = slider.data('spinner');   $('#' + spinnerRef).spinner("value", value);     var sliderType = slider.data('type')     var hexColor = t.getHexColor(sliderType);   if(sliderType == 'text')   {       $('#textBlock').css({'color' : hexColor});       $('.left span:last').text(hexColor);                  }   else   {       $('#textBlock').css({'background-color' : hexColor});       $('.right span:last').text(hexColor);                  } } In the preceding code, we receive the current slider and its value as a parameter. First we get the related spinner to this slider using the data attribute spinner. Then we set the value of the spinner to the current value of the slider. 
Now we find out the type of slider for which setColor is being called and store it in the sliderType variable. The value for sliderType will either be text, in case of sliders inside div.left, or bg, in case of sliders inside div.right. In the next line, we will call the getHexColor method and pass the sliderType variable as its argument. The getHexColor method will return the hex color code for the selected color. Next, based on the sliderType value, we set the color of div#textBlock. If the sliderType is text, we set the color CSS property of div#textBlock and display the selected hex code in the span inside div.left. If the sliderType value is bg, we set the background color for div#textBlock and display the hex code for the background color in the span inside div.right. The getHexColor method In the preceding section, we called the getHexColor method with the sliderType argument. Let's define it first, and then we will go through it in detail. Write the following code to define the getHexColor method: getHexColor : function(sliderType) {   var t = this;   var allInputs;   var hexCode = '#';   if(sliderType == 'text')   {     //text color     allInputs = $('.left').find('input[type=text]');   }   else   {     //background color     allInputs = $('.right').find('input[type=text]');   }   allInputs.each(function (index, element) {     hexCode+= t.convertToHex($(element).val());   });     return hexCode; } The local variable t has stored this to point to the current scope. Another variable allInputs is declared, and lastly a variable to store the hex code has been declared, whose value has been set to # initially. Next comes the if condition, which checks the value of parameter sliderType. If the value of sliderType is text, it means we need to get all the spinner values to change the text color. Hence, we use jQuery's find selector to retrieve all input boxes inside div.left. If the value of sliderType is bg, it means we need to change the background color. Therefore, the else block will be executed and all input boxes inside div.right will be retrieved. To convert the color to hex, individual values for red, green, and blue will have to be converted to hex and then concatenated to get the full color code. Therefore, we iterate in inputs using the .each method. Another method convertToHex is called, which converts the value of a single input to hex. Inside the each method, we keep concatenating the hex value of the R, G, and B components to a variable hexCode. Once all iterations are done, we return the hexCode to the parent function where it is used. Converting to hex convertToHex is a small method that accepts a value and converts it to the hex equivalent. Here is the definition of the convertToHex method: convertToHex : function (val) {   var x  = parseInt(val, 10).toString(16);   return x.length == 1 ? "0" + x : x; } Inside the method, firstly we will convert the received value to an integer using the parseInt method and then we'll use JavaScript's toString method to convert it to hex, which has base 16. In the next line, we will check the length of the converted hex value. Since we want the 6-character dash notation for color (such as #ff00ff), we need two characters each for red, green, and blue. Hence, we check the length of the created hex value. If it is only one character, we append a 0 to the beginning to make it two characters. The hex value is then returned to the parent function. With this, our implementation is complete and we can check it on a browser. 
Load the page in your browser and play with the sliders and spinners. You will see the text or background color changing, based on their value: You will also see the hex code displayed below the sliders. Also note that changing the sliders will change the value of the corresponding spinner and vice versa. Improving the Colorpicker This was a very basic tool that we built. You can add many more features to it and enhance its functionality. Here are some ideas to get you started: Convert it into a widget where all the required DOM for sliders and spinners is created dynamically Instead of two sliders, incorporate the text and background changing ability into a single slider with two handles, but keep two spinners as usual Summary In this article, we created a basic color picker/changer using sliders and spinners. You can use it to view and change the colors of your pages dynamically. Resources for Article: Further resources on this subject: Testing Ui Using WebdriverJs? [article] Important Aspect Angularjs Ui Development [article] Kendo Ui Dataviz Advance Charting [article]

Entity Framework DB First – Inheritance Relationships between Entities

Packt
02 Mar 2015
19 min read
This article is written by Rahul Rajat Singh, the author of Mastering Entity Framework. So far, we have seen how we can use various approaches of Entity Framework, how we can manage database table relationships, and how to perform model validations using Entity Framework. In this article, we will see how we can implement the inheritance relationship between the entities. We will see how we can change the generated conceptual model to implement the inheritance relationship, and how it will benefit us in using the entities in an object-oriented manner and the database tables in a relational manner. (For more resources related to this topic, see here.) Domain modeling using inheritance in Entity Framework One of the major challenges while using a relational database is to manage the domain logic in an object-oriented manner when the database itself is implemented in a relational manner. ORMs like Entity Framework provide the strongly typed objects, that is, entities for the relational tables. However, it might be possible that the entities generated for the database tables are logically related to each other, and they can be better modeled using inheritance relationships rather than having independent entities. Entity Framework lets us create inheritance relationships between the entities, so that we can work with the entities in an object-oriented manner, and internally, the data will get persisted in the respective tables. Entity Framework provides us three ways of object relational domain modeling using the inheritance relationship: The Table per Type (TPT) inheritance The Table per Class Hierarchy (TPH) inheritance The Table per Concrete Class (TPC) inheritance Let's now take a look at the scenarios where the generated entities are not logically related, and how we can use these inheritance relationships to create a better domain model by implementing inheritance relationships between entities using the Entity Framework Database First approach. The Table per Type inheritance The Table per Type (TPT) inheritance is useful when our database has tables that are related to each other using a one-to-one relationship. This relation is being maintained in the database by a shared primary key. To illustrate this, let's take a look at an example scenario. Let's assume a scenario where an organization maintains a database of all the people who work in a department. Some of them are employees getting a fixed salary, and some of them are vendors who are hired at an hourly rate. This is modeled in the database by having all the common data in a table called Person, and there are separate tables for the data that is specific to the employees and vendors. Let's visualize this scenario by looking at the database schema: The database schema showing the TPT inheritance database schema The ID column for the People table can be an auto-increment identity column, but it should not be an auto-increment identity column for the Employee and Vendors tables. In the preceding figure, the People table contains all the data common to both type of worker. The Employee table contains the data specific to the employees and the Vendors table contains the data specific to the vendors. These tables have a shared primary key and thus, there is a one-to-one relationship between the tables. To implement the TPT inheritance, we need to perform the following steps in our application: Generate the default Entity Data Model. Delete the default relationships. Add the inheritance relationship between the entities. 
Use the entities via the DBContext object. Generating the default Entity Data Model Let's add a new ADO.NET Entity Data Model to our application, and generate the conceptual Entity Model for these tables. The default generated Entity Model will look like this: The generated Entity Data Model where the TPT inheritance could be used Looking at the preceding conceptual model, we can see that Entity Framework is able to figure out the one-to-one relationship between the tables and creates the entities with the same relationship. However, if we take a look at the generated entities from our application domain perspective, it is fairly evident that these entities can be better managed if they have an inheritance relationship between them. So, let's see how we can modify the generated conceptual model to implement the inheritance relationship, and Entity Framework will take care of updating the data in the respective tables. Deleting default relationships The first thing we need to do to create the inheritance relationship is to delete the existing relationship from the Entity Model. This can be done by right-clicking on the relationship and selecting Delete from Model as follows: Deleting an existing relationship from the Entity Model Adding inheritance relationships between entities Once the relationships are deleted, we can add the new inheritance relationships in our Entity Model as follows: Adding inheritance relationships in the Entity Model When we add an inheritance relationship, the Visual Entity Designer will ask for the base class and derived class as follows: Selecting the base class and derived class participating in the inheritance relationship Once the inheritance relationship is created, the Entity Model will look like this: Inheritance relationship in the Entity Model After creating the inheritance relationship, we will get a compile error that the ID property is defined in all the entities. To resolve this problem, we need to delete the ID column from the derived classes. This will still keep the ID column that maps the derived classes as it is. So, from the application perspective, the ID column is defined in the base class but from the mapping perspective, it is mapped in both the base class and derived class, so that the data will get inserted into tables mapped in both the base and derived entities. With this inheritance relationship in place, the entities can be used in an object-oriented manner, and Entity Framework will take care of updating the respective tables for each entity. Using the entities via the DBContext object As we know, DbContext is the primary class that should be used to perform various operations on entities. Let's try to use our SampleDbContext class to create an Employee and a Vendor using this Entity Model and see how the data gets updated in the database: using (SampleDbEntities db = new SampleDbEntities()) { Employee employee = new Employee(); employee.FirstName = "Employee 1"; employee.LastName = "Employee 1"; employee.PhoneNumber = "1234567"; employee.Salary = 50000; employee.EmailID = "employee1@test.com"; Vendor vendor = new Vendor(); vendor.FirstName = "vendor 1"; vendor.LastName = "vendor 1"; vendor.PhoneNumber = "1234567"; vendor.HourlyRate = 100; vendor.EmailID = "vendor1@test.com"; db.Workers.Add(employee); db.Workers.Add(vendor); db.SaveChanges(); } In the preceding code, what we are doing is creating an object of the Employee and Vendor type, and then adding them to People using the DbContext object. 
What Entity Framework will do internally is that it will look at the mappings of the base entity and the derived entities, and then push the respective data into the respective tables. So, if we take a look at the data inserted in the database, it will look like the following: A database snapshot of the inserted data It is clearly visible from the preceding database snapshot that Entity Framework looks at our inheritance relationship and pushes the data into the Person, Employee, and Vendor tables. The Table per Class Hierarchy inheritance The Table per Class Hierarchy (TPH) inheritance is modeled by having a single database table for all the entity classes in the inheritance hierarchy. The TPH inheritance is useful in cases where all the information about the related entities is stored in a single table. For example, using the earlier scenario, let's try to model the database in such a way that it will only contain a single table called Workers to store the Employee and Vendor details. Let's try to visualize this table: A database schema showing the TPH inheritance database schema Now what will happen in this case is that the common fields will be populated whenever we create a type of worker. Salary will only contain a value if the worker is of type Employee. The HourlyRate field will be null in this case. If the worker is of type Vendor, then the HourlyRate field will have a value, and Salary will be null. This pattern is not very elegant from a database perspective. Since we are trying to keep unrelated data in a single table, our table is not normalized. There will always be some redundant columns that contain null values if we use this approach. We should try not to use this pattern unless it is absolutely needed. To implement the TPH inheritance relationship using the preceding table structure, we need to perform the following activities: Generate the default Entity Data Model. Add concrete classes to the Entity Data Model. Map the concrete class properties to their respective tables and columns. Make the base class entity abstract. Use the entities via the DBContext object. Let's discuss this in detail. Generating the default Entity Data Model Let's now generate the Entity Data Model for this table. The Entity Framework will create a single entity, Worker, for this table: The generated model for the table created for implementing the TPH inheritance Adding concrete classes to the Entity Data Model From the application perspective, it would be a much better solution if we have classes such as Employee and Vendor, which are derived from the Worker entity. The Worker class will contain all the common properties, and Employee and Vendor will contain their respective properties. So, let's add new entities for Employee and Vendor. While creating the entity, we can specify the base class entity as Worker, which is as follows: Adding a new entity in the Entity Data Model using a base class type Similarly, we will add the Vendor entity to our Entity Data Model, and specify the Worker entity as its base class entity. Once the entities are generated, our conceptual model will look like this: The Entity Data Model after adding the derived entities Next, we have to remove the Salary and HourlyRate properties from the Worker entity, and put them in the Employee and the Vendor entities respectively. 
So, once the properties are put into the respective entities, our final Entity Data Model will look like this:

The Entity Data Model after moving the respective properties into the derived entities

Mapping the concrete class properties to the respective tables and columns

After this, we have to define the column mappings in the derived classes to let the derived classes know which table and column should be used to put the data. We also need to specify the mapping condition. The Employee entity should save the Salary property's value in the Salary column of the Workers table when the Salary property is Not Null and HourlyRate is Null:

Table mapping and conditions to map the Employee entity to the respective tables

Once this mapping is done, we have to mark the Salary property as Nullable=false in the entity property window. This will let Entity Framework know that if someone is creating an object of the Employee type, then the Salary field is mandatory:

Setting the Employee entity properties as Nullable

Similarly, the Vendor entity should save the HourlyRate property's value in the HourlyRate column of the Workers table when Salary is Null and HourlyRate is Not Null:

Table mapping and conditions to map the Vendor entity to the respective tables

And similar to the Employee class, we also have to mark the HourlyRate property as Nullable=false in the Entity Property window. This will help Entity Framework know that if someone is creating an object of the Vendor type, then the HourlyRate field is mandatory:

Setting the Vendor entity properties to Nullable

Making the base class entity abstract

There is one last change needed to be able to use these models: we need to mark the base class as abstract, so that Entity Framework is able to resolve objects of Employee and Vendor to the Workers table.

Making the base class Workers as abstract

This will also be a better model from the application perspective because the Worker entity itself has no meaning from the application domain perspective.

Using the entities via the DBContext object

Now we have our Entity Data Model configured to use the TPH inheritance. Let's try to create an Employee object and a Vendor object, and add them to the database using the TPH inheritance hierarchy:

using (SampleDbEntities db = new SampleDbEntities())
{
    Employee employee = new Employee();
    employee.FirstName = "Employee 1";
    employee.LastName = "Employee 1";
    employee.PhoneNumber = "1234567";
    employee.Salary = 50000;
    employee.EmailID = "employee1@test.com";

    Vendor vendor = new Vendor();
    vendor.FirstName = "vendor 1";
    vendor.LastName = "vendor 1";
    vendor.PhoneNumber = "1234567";
    vendor.HourlyRate = 100;
    vendor.EmailID = "vendor1@test.com";

    db.Workers.Add(employee);
    db.Workers.Add(vendor);
    db.SaveChanges();
}

In the preceding code, we created objects of the Employee and Vendor types, and then added them to the Workers collection using the DbContext object. Entity Framework will look at the mappings of the base entity and the derived entities, will check the mapping conditions and the actual values of the properties, and then push the data to the respective tables. So, let's take a look at the data inserted in the Workers table:

A database snapshot after inserting the data using the Employee and Vendor entities

So, we can see that for our Employee and Vendor models, the actual data is being kept in the same table using Entity Framework's TPH inheritance.
The Table per Concrete Class inheritance The Table per Concrete Class (TPC) inheritance can be used when the database contains separate tables for all the logical entities, and these tables have some common fields. In our existing example, if there are two separate tables of Employee and Vendor, then the database schema would look like the following: The database schema showing the TPC inheritance database schema One of the major problems in such a database design is the duplication of columns in the tables, which is not recommended from the database normalization perspective. To implement the TPC inheritance, we need to perform the following tasks: Generate the default Entity Data Model. Create the abstract class. Modify the CDSL to cater to the change. Specify the mapping to implement the TPT inheritance. Use the entities via the DBContext object. Generating the default Entity Data Model Let's now take a look at the generated entities for this database schema: The default generated entities for the TPC inheritance database schema Entity Framework has given us separate entities for these two tables. From our application domain perspective, we can use these entities in a better way if all the common properties are moved to a common abstract class. The Employee and Vendor entities will contain the properties specific to them and inherit from this abstract class to use all the common properties. Creating the abstract class Let's add a new entity called Worker to our conceptual model and move the common properties into this entity: Adding a base class for all the common properties Next, we have to mark this class as abstract from the properties window: Marking the base class as abstract class Modifying the CDSL to cater to the change Next, we have to specify the mapping for these tables. Unfortunately, the Visual Entity Designer has no support for this type of mapping, so we need to perform this mapping ourselves in the EDMX XML file. The conceptual schema definition language (CSDL) part of the EDMX file is all set since we have already moved the common properties into the abstract class. So, now we should be able to use these properties with an abstract class handle. The problem will come in the storage schema definition language (SSDL) and mapping specification language (MSL). The first thing that we need to do is to change the SSDL to let Entity Framework know that the abstract class Worker is capable of saving the data in two tables. This can be done by setting the EntitySet name in the EntityContainer tags as follows: <EntityContainer Name="todoDbModelStoreContainer">   <EntitySet Name="Employee" EntityType="Self.Employee" Schema="dbo" store_Type="Tables" />   <EntitySet Name="Vendor" EntityType="Self.Vendor" Schema="dbo" store_Type="Tables" /></EntityContainer> Specifying the mapping to implement the TPT inheritance Next, we need to change the MSL to properly map the properties to the respective tables based on the actual type of object. For this, we have to specify EntitySetMapping. 
The EntitySetMapping should look like the following: <EntityContainerMapping StorageEntityContainer="todoDbModelStoreContainer" CdmEntityContainer="SampleDbEntities">    <EntitySetMapping Name="Workers">   <EntityTypeMapping TypeName="IsTypeOf(SampleDbModel.Vendor)">       <MappingFragment StoreEntitySet="Vendor">       <ScalarProperty Name="HourlyRate" ColumnName="HourlyRate" />       <ScalarProperty Name="EMailId" ColumnName="EMailId" />       <ScalarProperty Name="PhoneNumber" ColumnName="PhoneNumber" />       <ScalarProperty Name="LastName" ColumnName="LastName" />       <ScalarProperty Name="FirstName" ColumnName="FirstName" />       <ScalarProperty Name="ID" ColumnName="ID" />       </MappingFragment>   </EntityTypeMapping>      <EntityTypeMapping TypeName="IsTypeOf(SampleDbModel.Employee)">       <MappingFragment StoreEntitySet="Employee">       <ScalarProperty Name="ID" ColumnName="ID" />       <ScalarProperty Name="Salary" ColumnName="Salary" />       <ScalarProperty Name="EMailId" ColumnName="EMailId" />       <ScalarProperty Name="PhoneNumber" ColumnName="PhoneNumber" />       <ScalarProperty Name="LastName" ColumnName="LastName" />       <ScalarProperty Name="FirstName" ColumnName="FirstName" />       </MappingFragment>   </EntityTypeMapping>   </EntitySetMapping></EntityContainerMapping> In the preceding code, we specified that if the actual type of object is Vendor, then the properties should map to the columns in the Vendor table, and if the actual type of entity is Employee, the properties should map to the Employee table, as shown in the following screenshot: After EDMX modifications, the mapping are visible in Visual Entity Designer If we now open the EDMX file again, we can see the properties being mapped to the respective tables in the respective entities. Doing this mapping from Visual Entity Designer is not possible, unfortunately. Using the entities via the DBContext object Let's use these "entities from our code: using (SampleDbEntities db = new SampleDbEntities()) { Employee employee = new Employee(); employee.FirstName = "Employee 1"; employee.LastName = "Employee 1"; employee.PhoneNumber = "1234567"; employee.Salary = 50000; employee.EMailId = "employee1@test.com"; Vendor vendor = new Vendor(); vendor.FirstName = "vendor 1"; vendor.LastName = "vendor 1"; vendor.PhoneNumber = "1234567"; vendor.HourlyRate = 100; vendor.EMailId = "vendor1@test.com"; db.Workers.Add(employee); db.Workers.Add(vendor); db.SaveChanges(); } In the preceding code, we created objects of the Employee and Vendor types and saved them using the Workers entity set, which is actually an abstract class. If we take a look at the inserted database, we will see the following: Database snapshot of the inserted data using TPC inheritance From the preceding screenshot, it is clear that the data is being pushed to the respective tables. The insert operation we saw in the previous code is successful but there will be an exception in the application. This exception is because when Entity Framework tries to access the values that are in the abstract class, it finds two records with same ID, and since the ID column is specified as a primary key, two records with the same value is a problem in this scenario. This exception clearly shows that the store/database generated identity columns will not work with the TPC inheritance. 
If we want to use the TPC inheritance, then we either need to use GUID based IDs, or pass the ID from the application, or perhaps use some database mechanism that can maintain the uniqueness of auto-generated columns across multiple tables. Choosing the inheritance strategy Now that we know about all the inheritance strategies supported by Entity Framework, let's try to analyze these approaches. The most important thing is that there is no single strategy that will work for all the scenarios. Especially if we have a legacy database. The best option would be to analyze the application requirements and then look at the existing table structure to see which approach is best suited. The Table per Class Hierarchy inheritance tends to give us denormalized tables and have redundant columns. We should only use it when the number of properties in the derived classes is very less, so that the number of redundant columns is also less, and this denormalized structure will not create problems over a period of time. Contrary to TPH, if we have a lot of properties specific to derived classes and only a few common properties, we can use the Table per Concrete Class inheritance. However, in this approach, we will end up with some properties being repeated in all the tables. Also, this approach imposes some limitations such as we cannot use auto-increment identity columns in the database. If we have a lot of common properties that could go into a base class and a lot of properties specific to derived classes, then perhaps Table per Type is the best option to go with. In any case, complex inheritance relationships that become unmanageable in the long run should be avoided. One alternative could be to have separate domain models to implement the application logic in an object-oriented manner, and then use mappers to map these domain models to Entity Framework's generated entity models. Summary In this article, we looked at the various types of inheritance relationship using Entity Framework. We saw how these inheritance relationships can be implemented, and some guidelines on which should be used in which scenario. Resources for Article: Further resources on this subject: Working with Zend Framework 2.0 [article] Hosting the service in IIS using the TCP protocol [article] Applying LINQ to Entities to a WCF Service [article]

article-image-model-view-viewmodel
Packt
02 Mar 2015
24 min read
Save for later

Model-View-ViewModel

In this article, by Einar Ingebrigtsen, author of the book, SignalR Blueprints, we will focus on a different programming model for client development: Model-View-ViewModel (MVVM). It will reiterate what you have already learned about SignalR, but you will also start to see a recurring theme in how you should architect decoupled software that adheres to the SOLID principles. It will also show the benefit of thinking in Single Page Application (SPA) terms, and how well SignalR fits with this idea. (For more resources related to this topic, see here.)
The goal – an imagined dashboard
A counterpart to any application is monitoring its health: is it running, and are there any failures? Getting this information in real time when a failure occurs is important, and getting some statistics out of it is interesting as well. From a SignalR perspective, we will still use the hub abstraction to do pretty much what we have been doing, but the goal is to give you ideas of how and what we can use SignalR for. Another goal is to dive into the architectural patterns, making the application ready to grow larger. MVVM allows better separation and is very applicable to client development in general.
A question that you might ask yourself is why KnockoutJS instead of something like AngularJS? To a certain degree, it boils down to personal preference. AngularJS is described as MVW, where W stands for Whatever. I find AngularJS less focused on the things I focus on, and I also find it quite verbose to get up and running. I'm not in any way an expert in AngularJS, but I have used it on a project and found myself writing a lot of code to make it work the way I wanted in terms of MVVM. However, I don't think it's fair to compare the two: KnockoutJS is very focused on what it's trying to solve, which is just a little piece of the puzzle, while AngularJS is a full end-to-end client framework. On this note, let's jump straight to it.
Decoupling it all
MVVM is a pattern for client development that became very popular in the XAML stack, enabled by Microsoft and based on Martin Fowler's Presentation Model. Its principle is that you have a ViewModel that holds the state and exposes behavior that can be utilized from a view. The view observes any changes to the state the ViewModel exposes, leaving the ViewModel totally unaware that there is a view. The ViewModel is decoupled, can be put in isolation, and is perfect for automated testing. Part of the state that the ViewModel typically holds is the model, which it usually gets from the server, and a SignalR hub is the perfect transport to get it. It boils down to recognizing the different concerns that make up the frontend and separating them. This gives us the following diagram:
Back to basics
This time we will go back in time, going down what might be considered a more purist path: use the browser elements (HTML, JavaScript, and CSS) and don't rely on any server-side rendering. Clients today are powerful and very capable, and offloading the composition of what the user sees onto the client frees up server resources. You can also rely on the infrastructure of the Web for caching, with static HTML files not rendered by the server. In fact, you could put these resources on a content delivery network, making the files available as close as possible to the end user. This would result in better load times for the user.
You might have other reasons to perform server-side rendering and not just plain HTML. Leveraging existing infrastructure or third-party party tools could be those reasons. It boils down to what's right for you. But this particular sample will focus on things that the client can do. Anyways, let's get started. Open Visual Studio and create a new project by navigating to FILE | New | Project. The following dialog box will show up: From the left-hand side menu, select Web and then ASP.NET Web Application. Enter Chapter4 in the Name textbox and select your location. Select the Empty template from the template selector and make sure you deselect the Host in the cloud option. Then, click on OK, as shown in the following screenshot: Setting up the packages First, we want Twitter bootstrap. To get this, follow these steps: Add a NuGet package reference. Right-click on References in Solution Explorer and select Manage NuGet Packages and type Bootstrap in the search dialog box. Select it and then click on Install. We want a slightly different look, so we'll download one of the many bootstrap themes out here. Add a NuGet package reference called metro-bootstrap. As jQuery is still a part of this, let's add a NuGet package reference to it as well. For the MVVM part, we will use something called KnockoutJS; add it through NuGet as well. Add a NuGet package reference, as in the previous steps, but this time, type SignalR in the search dialog box. Find the package called Microsoft ASP.NET SignalR. Making any SignalR hubs available for the client Add a file called Startup.cs file to the root of the project. Add a Configuration method that will expose any SignalR hubs, as follows: public void Configuration(IAppBuilder app) { app.MapSignalR(); } At the top of the Startup.cs file, above the namespace declaration, but right below the using statements, add the following code:  [assembly: OwinStartupAttribute(typeof(Chapter4.Startup))] Knocking it out of the park KnockoutJS is a framework that implements a lot of the principles found in MVVM and makes it easier to apply. We're going to use the following two features of KnockoutJS, and it's therefore important to understand what they are and what significance they have: Observables: In order for a view to be able to know when state change in a ViewModel occurs, KnockoutJS has something called an observable for single objects or values and observable array for arrays. BindingHandlers: In the view, the counterparts that are able to recognize the observables and know how to deal with its content are known as BindingHandlers. We create binding expression in the view that instructs the view to get its content from the properties found in the binding context. The default binding context will be the ViewModel, but there are more advanced scenarios where this changes. In fact, there is a BindingHandler that enables you to specify the context at any given time called with. Our single page Whether one should strive towards having an SPA is widely discussed on the Web these days. My opinion on the subject, in the interest of the user, is that we should really try to push things in this direction. Having not to post back and cause a full reload of the page and all its resources and getting into the correct state gives the user a better experience. Some of the arguments to perform post-backs every now and then go in the direction of fixing potential memory leaks happening in the browser. 
Although, the technique is sound and the result is right, it really just camouflages a problem one has in the system. However, as with everything, it really depends on the situation. At the core of an SPA is a single page (pun intended), which is usually the index.html file sitting at the root of the project. Add the new index.html file and edit it as follows: Add a new HTML file (index.html) at the root of the project by right- clicking on the Chapter4 project in Solution Explorer. Navigate to Add | New Item | Web from the left-hand side menu, and then select HTML Page and name it index.html. Finally, click on Add. Let's put in the things we've added dependencies to, starting with the style sheets. In the index.html file, you'll find the <head> tag; add the following code snippet under the <title></title> tag: <link href="Content/bootstrap.min.css" rel="stylesheet" /> <link href="Content/metro-bootstrap.min.css" rel="stylesheet" /> Next, add the following code snippet right beneath the preceding code: <script type="text/javascript" src="Scripts/jquery- 1.9.0.min.js"></script> <script type="text/javascript" src="Scripts/jquery.signalR- 2.1.1.js"></script> <script type="text/javascript" src="signalr/hubs"></script> <script type="text/javascript" src="Scripts/knockout- 3.2.0.js"></script> Another thing we will need in this is something that helps us visualize things; Google has a free, open source charting library that we will use. We will take a dependency to the JavaScript APIs from Google. To do this, add the following script tag after the others: <script type="text/javascript" src="https://www.google.com/jsapi"></script> Now, we can start filling in the view part. Inside the <body> tag, we start by putting in a header, as shown here: <div class="navbar navbar-default navbar-static-top bsnavbar">     <div class="container">         <div class="navbar-header">             <h1>My Dashboard</h1>         </div>     </div> </div> The server side of things In this little dashboard thing, we will look at web requests, both successful and failed. We will perform some minor things for us to be able to do this in a very naive way, without having to flesh out a full mechanism to deal with error situations. Let's start by enabling all requests even static resources, such as HTML files, to run through all HTTP modules. A word of warning: there are performance implications of putting all requests through the managed pipeline, so normally, you wouldn't necessarily want to do this on a production system, but for this sample, it will be fine to show the concepts. Open Web.config in the project and add the following code snippet within the <configuration> tag: <system.webServer>   <modules runAllManagedModulesForAllRequests="true" /> </system.webServer> The hub In this sample, we will only have one hub, the one that will be responsible for dealing with reporting requests and failed requests. Let's add a new class called RequestStatisticsHub. Right-click on the project in Solution Explorer, select Class from Add, name it RequestStatisticsHub.cs, and then click on Add. The new class should inherit from the hub. Add the following using statement at the top: using Microsoft.AspNet.SignalR; We're going to keep a track of the count of requests and failed requests per time with a resolution of not more than every 30 seconds in the memory on the server. 
Obviously, if one wants to scale across multiple servers, this is way too naive and one should choose an out-of-process shared key-value store that goes across servers. However, for our purpose, this will be fine. Let's add a using statement at the top, as shown here: using System.Collections.Generic; At the top of the class, add the two dictionaries that we will use to hold this information: static Dictionary<string, int> _requestsLog = new Dictionary<string, int>(); static Dictionary<string, int> _failedRequestsLog = new Dictionary<string, int>(); In our client, we want to access these logs at startup. So let's add two methods to do so: public Dictionary<string, int> GetRequests() {     return _requestsLog; }   public Dictionary<string, int> GetFailedRequests() {     return _failedRequestsLog; } Remember the resolution of only keeping track of number of requests per 30 seconds at a time. There is no default mechanism in the .NET Framework to do this so we need to add a few helper methods to deal with rounding of time. Let's add a class called DateTimeRounding at the root of the project. Mark the class as a public static class and put the following extension methods in the class: public static DateTime RoundUp(this DateTime dt, TimeSpan d) {     var delta = (d.Ticks - (dt.Ticks % d.Ticks)) % d.Ticks;     return new DateTime(dt.Ticks + delta); }   public static DateTime RoundDown(this DateTime dt, TimeSpan d) {     var delta = dt.Ticks % d.Ticks;     return new DateTime(dt.Ticks - delta); }   public static DateTime RoundToNearest(this DateTime dt, TimeSpan d) {     var delta = dt.Ticks % d.Ticks;     bool roundUp = delta > d.Ticks / 2;       return roundUp ? dt.RoundUp(d) : dt.RoundDown(d); } Let's go back to the RequestStatisticsHub class and add some more functionality now so that we can deal with rounding of time: static void Register(Dictionary<string, int> log, Action<dynamic, string, int> hubCallback) {     var now = DateTime.Now.RoundToNearest(TimeSpan.FromSeconds(30));     var key = now.ToString("HH:mm");       if (log.ContainsKey(key))         log[key] = log[key] + 1;     else         log[key] = 1;       var hub = GlobalHost.ConnectionManager.GetHubContext<RequestStatisticsHub>() ;     hubCallback(hub.Clients.All, key, log[key]); }   public static void Request() {     Register(_requestsLog, (hub, key, value) => hub.requestCountChanged(key, value)); }   public static void FailedRequest() {     Register(_requestsLog, (hub, key, value) => hub.failedRequestCountChanged(key, value)); } This enables us to have a place to call in order to report requests and these get published back to any clients connected to this particular hub. Note the usage of GlobalHost and its ConnectionManager property. When we want to get a hub instance and when we are not in the hub context of a method being called from a client, we use ConnectionManager to get it. It gives is a proxy for the hub and enables us to call methods on any connected client. Naively dealing with requests With all this in place, we will be able to easily and naively deal with what we consider correct and failed requests. Let's add a Global.asax file by right-clicking on the project in Solution Explorer and select the New item from the Add. Navigate to Web and find Global Application Class, then click on Add. 
In the new file, we want to replace the BindingHandlers method with the following code snippet: protected void Application_AuthenticateRequest(object sender, EventArgs e) {     var path = HttpContext.Current.Request.Path;     if (path == "/") path = "index.html";       if (path.ToLowerInvariant().IndexOf(".html") < 0) return;       var physicalPath = HttpContext.Current.Request.MapPath(path);     if (File.Exists(physicalPath))     {         RequestStatisticsHub.Request();     }     else     {         RequestStatisticsHub.FailedRequest();     } } Basically, with this, we are only measuring requests with .html in its path, and if it's only "/", we assume it's "index.html". Any file that does not exist, accordingly, is considered an error; typically a 404 error and we register it as a failed request. Bringing it all back to the client With the server taken care of, we can start consuming all this in the client. We will now be heading down the path of creating a ViewModel and hook everything up. ViewModel Let's start by adding a JavaScript file sitting next to our index.html file at the root level of the project, call it index.js. This file will represent our ViewModel. Also, this scenario will be responsible to set up KnockoutJS, so that the ViewModel is in fact activated and applied to the page. As we only have this one page for this sample, this will be fine. Let's start by hooking up the jQuery document that is ready: $(function() { }); Inside the function created here, we will enter our viewModel definition, which will start off being an empty one: var viewModel = function() { }; KnockoutJS has a function to apply a viewModel to the document, meaning that the document or body will be associated with the viewModel instance given. Right under the definition of viewModel, add the following line: ko.applyBindings(new viewModel()); Compiling this and running it should at the very least not give you any errors but nothing more than a header saying My Dashboard. So, we need to lighten this up a bit. Inside the viewModel function definition, add the following code snippet: var self = this; this.requests = ko.observableArray(); this.failedRequests = ko.observableArray(); We enter a reference to this as a variant called self. This will help us with scoping issues later on. The arrays we added are now KnockoutJS's observable arrays that allows the view or any BindingHandler to observe the changes that are coming in. The ko.observableArray() and ko.observable() arrays both return a new function. So, if you want to access any values in it, you must unwrap it by calling it something that might seem counterintuitive at first. You might consider your variable as just another property. However, for the observableArray(), KnockoutJS adds most of the functions found in the array type in JavaScript and they can be used directly on the function without unwrapping. If you look at a variable that is an observableArray in the console of the browser, you'll see that it looks as if it actually is just any array. This is not really true though; to get to the values, you will have to unwrap it by adding () after accessing the variable. However, all the functions you're used to having on an array are here. Let's add a function that will know how to handle an entry into the viewModel function. 
An entry coming in is either an existing one or a new one; the key of the entry is the giveaway to decide: function handleEntry(log, key, value) {     var result = log().forEach(function (entry) {         if (entry[0] == key) {             entry[1](value);             return true;         }     });       if (result !== true) {         log.push([key, ko.observable(value)]);     } }; Let's set up the hub and add the following code to the viewModel function: var hub = $.connection.requestStatisticsHub; var initializedCount = 0;   hub.client.requestCountChanged = function (key, value) {     if (initializedCount < 2) return;     handleEntry(self.requests, key, value); }   hub.client.failedRequestCountChanged = function (key, value) {     if (initializedCount < 2) return;     handleEntry(self.failedRequests, key, value); } You might notice the initalizedCount variable. Its purpose is not to deal with requests until completely initialized, which comes next. Add the following code snippet to the viewModel function: $.connection.hub.start().done(function () {     hub.server.getRequests().done(function (requests) {         for (var property in requests) {             handleEntry(self.requests, property, requests[property]);         }           initializedCount++;     });     hub.server.getFailedRequests().done(function (requests) {         for (var property in requests) {             handleEntry(self.failedRequests, property, requests[property]);         }           initializedCount++;     }); }); We should now have enough logic in our viewModel function to actually be able to get any requests already sitting there and also respond to new ones coming. BindingHandler The key element of KnockoutJS is its BindingHandler mechanism. In KnockoutJS, everything starts with a data-bind="" attribute on an element in the HTML view. Inside the attribute, one puts binding expressions and the BindingHandlers are a key to this. Every expression starts with the name of the handler. For instance, if you have an <input> tag and you want to get the value from the input into a property on the ViewModel, you would use the BindingHandler value. There are a few BindingHandlers out of the box to deal with the common scenarios (text, value for each, and more). All of the BindingHandlers are very well documented on the KnockoutJS site. For this sample, we will actually create our own BindingHandler. KnockoutJS is highly extensible and allows you to do just this amongst other extensibility points. Let's add a JavaScript file called googleCharts.js at the root of the project. Inside it, add the following code: google.load('visualization', '1.0', { 'packages': ['corechart'] }); This will tell the Google API to enable the charting package. The next thing we want to do is to define the BindingHandler. Any handler has the option of setting up an init function and an update function. The init function should only occur once, when it's first initialized. Actually, it's when the binding context is set. If the parent binding context of the element changes, it will be called again. The update function will be called whenever there is a change in an observable or more observables that the binding expression is referring to. For our sample, we will use the init function only and actually respond to changes manually because we have a more involved scenario than what the default mechanism would provide us with. 
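For contrast, here is a minimal sketch of what a handler that leans entirely on the default update mechanism could look like. The counterText name and the binding shown afterwards are hypothetical illustrations, not part of the dashboard code:

// Update-only handler: KnockoutJS re-runs update() automatically whenever
// an observable that was read inside it changes value.
ko.bindingHandlers.counterText = {
    update: function (element, valueAccessor) {
        // Unwrap the bound value (works for plain values and observables alike)
        var value = ko.unwrap(valueAccessor());
        // Write straight to the DOM; no manual subscribe() calls are needed
        element.textContent = "Count: " + value;
    }
};

A view could then use it with something like <span data-bind="counterText: requests().length"></span>. For the charts, however, we need to know exactly which row changed, which is why the lineChart handler that follows manages its subscriptions manually.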
The update function that you can add to a BindingHandler has the exact same signature as the init function; hence, it is called an update. Let's add the following code underneath the load call: ko.bindingHandlers.lineChart = {     init: function (element, valueAccessor, allValueAccessors, viewModel, bindingContext) {     } }; This is the core structure of a BindingHandler. As you can see, we've named the BindingHandler as lineChart. This is the name we will use in our view later on. The signature of init and update are the same. The first parameter represents the element that holds the binding expression, whereas the second valueAccessor parameter holds a function that enables us to access the value, which is a result of the expression. KnockoutJS deals with the expression internally and parses any expression and figures out how to expand any values, and so on. Add the following code into the init function: optionsInput = valueAccessor();   var options = {     title: optionsInput.title,     width: optionsInput.width || 300,     height: optionsInput.height || 300,     backgroundColor: 'transparent',     animation: {         duration: 1000,         easing: 'out'     } };   var dataHash = {};   var chart = new google.visualization.LineChart(element); var data = new google.visualization.DataTable(); data.addColumn('string', 'x'); data.addColumn('number', 'y');   function addRow(row, rowIndex) {     var value = row[1];     if (ko.isObservable(value)) {         value.subscribe(function (newValue) {             data.setValue(rowIndex, 1, newValue);             chart.draw(data, options);         });     }       var actualValue = ko.unwrap(value);     data.addRow([row[0], actualValue]);       dataHash[row[0]] = actualValue; };   optionsInput.data().forEach(addRow);   optionsInput.data.subscribe(function (newValue) {     newValue.forEach(function(row, rowIndex) {         if( !dataHash.hasOwnProperty(row[0])) {             addRow(row,rowIndex);         }     });       chart.draw(data, options); });         chart.draw(data, options); As you can see, observables has a function called subscribe(), which is the same for both an observable array and a regular observable. The code adds a subscription to the array itself; if there is any change to the array, we will find the change and add any new row to the chart. In addition, when we create a new row, we subscribe to any change in its value so that we can update the chart. In the ViewModel, the values were converted into observable values to accommodate this. View Go back to the index.html file; we need the UI for the two charts we're going to have. Plus, we need to get both the new BindingHandler loaded and also the ViewModel. Add the following script references after the last script reference already present, as shown here: <script type="text/javascript" src="googleCharts.js"></script> <script type="text/javascript" src="index.js"></script> Inside the <body> tag below the header, we want to add a bootstrap container and a row to hold two metro styled tiles and utilize our new BindingHandler. 
Also, we want a footer sitting at the bottom, as shown in the following code: <div class="container">     <div class="row">         <div class="col-sm-6 col-md-4">             <div class="thumbnail tile tile-green-sea tile-large">                 <div data-bind="lineChart: { title: 'Web Requests', width: 300, height: 300, data: requests }"></div>             </div>         </div>           <div class="col-sm-6 col-md-4">             <div class="thumbnail tile tile-pomegranate tile- large">                 <div data-bind="lineChart: { title: 'Failed Web Requests', width: 300, height: 300, data: failedRequests }"></div>             </div>         </div>     </div>       <hr />     <footer class="bs-footer" role="contentinfo">         <div class="container">             The Dashboard         </div>     </footer> </div> Note the data: requests and data: failedRequests are a part of the binding expressions. These will be handled and resolved by KnockoutJS internally and pointed to the observable arrays on the ViewModel. The other properties are options that go into the BindingHandler and something it forwards to the Google Charting APIs. Trying it all out Running the preceding code (Ctrl + F5) should yield the following result: If you open a second browser and go to the same URL, you will see the change in the chart in real time. Waiting approximately for 30 seconds and refreshing the browser should add a second point automatically and also animate the chart accordingly. Typing a URL with a file that does exist should have the same effect on the failed requests chart. Summary In this article, we had a brief encounter with MVVM as a pattern with the sole purpose of establishing good practices for your client code. We added this to a single page application setting, sprinkling on top the SignalR to communicate from the server to any connected client. Resources for Article: Further resources on this subject: Using R for Statistics Research and Graphics? [article] Aspects Data Manipulation in R [article] Learning Data Analytics R and Hadoop [article]

article-image-dealing-interrupts
Packt
02 Mar 2015
19 min read
Save for later

Dealing with Interrupts

This article is written by Francis Perea, the author of the book Arduino Essentials. In all our previous projects, we have been constantly looking for events to occur. We have been polling, but looking for events this way takes a relatively big effort and wastes CPU cycles only to notice that nothing happened. In this article, we will learn about interrupts as a totally different way to deal with events: being notified about them instead of looking for them constantly. Interrupts can be really helpful when developing projects in which fast or unpredictable events may occur, and so we will build a very interesting project that leads us to develop a digital tachograph for a computer-controlled motor. Are you ready? Here we go! (For more resources related to this topic, see here.)
The concept of an interruption
As you may have intuited, an interrupt is a special mechanism the CPU incorporates to have a direct channel through which it is notified when some event occurs. Most Arduino microcontrollers have two of these:
Interrupt 0 on digital pin 2
Interrupt 1 on digital pin 3
But some models, such as the Mega2560, come with up to five interrupt pins. Once an interrupt has been notified, the CPU completely stops what it was doing and goes on to handle it by running a special dedicated function in our code called the Interrupt Service Routine (ISR). When I say that the CPU completely stops, I mean that even functions such as delay() or millis() won't be updated while the ISR is being executed. Interrupts can be programmed to respond to different changes of the signal connected to the corresponding pin, and thus the Arduino language has four predefined constants to represent each of these four modes:
LOW: It will trigger the interrupt whenever the pin gets a LOW value
CHANGE: The interrupt will be triggered when the pin changes its value from HIGH to LOW or vice versa
RISING: It will trigger the interrupt when the signal goes from LOW to HIGH
FALLING: It is just the opposite of RISING; the interrupt will be triggered when the signal goes from HIGH to LOW
The ISR
The function that the CPU will call whenever an interrupt occurs is so important to the microcontroller that it has to follow a few rules:
It can't have any parameters
It can't return anything
Interrupts can be executed only one at a time
Regarding the first two points, they mean that we can neither pass data to nor receive data from the ISR directly, but we have other means to achieve this communication with the function: we will use global variables for it. We can set and read a global variable inside an ISR, but even so, these variables have to be declared in a special way; we have to declare them as volatile, as we will see later on in the code. The third point, which specifies that only one ISR can be attended to at a time, is what prevents the millis() function from being updated: millis() relies on an interrupt to be updated, and this doesn't happen if another interrupt is already being served. As you may understand, the ISR is critical to correct code execution in a microcontroller. As a rule of thumb, we will try to keep our ISRs as simple as possible and leave all heavyweight processing outside of them, in the main loop of our code. 
The tachograph project To understand and manage interrupts in our projects, I would like to offer you a very particular one, a tachograph, a device that is present in all our cars and whose mission is to account for revolutions, normally the engine revolutions, but also in brake systems such as Anti-lock Brake System (ABS) and others. Mechanical considerations Well, calling it mechanical perhaps is too much, but let's make some considerations regarding how we are going to make our project account for revolutions. For this example project, I have used a small DC motor driven through a small transistor and, like in lots of industrial applications, an encoded wheel is a perfect mechanism to read the number of revolutions. By simply attaching a small disc of cardboard perpendicularly to your motor shaft, it is very easy to achieve it. By using our old friend, the optocoupler, we can sense something between its two parts, even with just a piece of cardboard with a small slot in just one side of its surface. Here, you can see the template I elaborated for such a disc, the cross in the middle will help you position the disc as perfectly as possible, that is, the cross may be as close as possible to the motor shaft. The slot has to be cut off of the black rectangle as shown in the following image: The template for the motor encoder Once I printed it, I glued it to another piece of cardboard to make it more resistant and glued it all to the crown already attached to my motor shaft. If yours doesn't have a surface big enough to glue the encoder disc to its shaft, then perhaps you can find a solution by using just a small piece of dough or similar to it. Once the encoder disc is fixed to the motor and spins attached to the motor shaft, we have to find a way to place the optocoupler in a way that makes it able to read through the encoder disc slot. In my case, just a pair of drops of glue did the trick, but if your optocoupler or motor doesn't allow you to apply this solution, I'm sure that a pair of zip ties or a small piece of dough can give you another way to fix it to the motor too. In the following image, you can see my final assembled motor with its encoder disc and optocoupler ready to be connected to the breadboard through alligator clips: The complete assembly for the motor encoder Once we have prepared our motor encoder, let's perform some tests to see it working and begin to write code to deal with interruptions. A simple interrupt tester Before going deep inside the whole code project, let's perform some tests to confirm that our encoder assembly is working fine and that we can correctly trigger an interrupt whenever the motor spins and the cardboard slot passes just through the optocoupler. The only thing you have to connect to your Arduino at the moment is the optocoupler; we will now operate our motor by hand and in a later section, we will control its speed from the computer. The test's circuit schematic is as follows: A simple circuit to test the encoder Nothing new in this circuit, it is almost the same as the one used in the optical coin detector, with the only important and necessary difference of connecting the wire coming from the detector side of the optocoupler to pin 2 of our Arduino board, because, as said in the preceding text, the interrupt 0 is available only through that pin. For this first test, we will make the encoder disc spin by hand, which allows us to clearly perceive when the interrupt triggers. 
For the rest of this example, we will use the LED included with the Arduino board connected to pin 13 as a way to visually indicate that the interrupts have been triggered. Our first interrupt and its ISR Once we have connected the optocoupler to the Arduino and prepared things to trigger some interrupts, let's see the code that we will use to test our assembly. The objective of this simple sketch is to commute the status of an LED every time an interrupt occurs. In the proposed tester circuit, the LED status variable will be changed every time the slot passes through the optocoupler: /*  Chapter 09 - Dealing with interrupts  A simple tester  By Francis Perea for Packt Publishing */   // A LED will be used to notify the change #define ledPin 13   // Global variables we will use // A variable to be used inside ISR volatile int status = LOW;   // A function to be called when the interrupt occurs void revolution(){   // Invert LED status   status=!status; }   // Configuration of the board: just one output void setup() {   pinMode(ledPin, OUTPUT);   // Assign the revolution() function as an ISR of interrupt 0   // Interrupt will be triggered when the signal goes from   // LOW to HIGH   attachInterrupt(0, revolution, RISING); }   // Sketch execution loop void loop(){    // Set LED status   digitalWrite(ledPin, status); } Let's take a look at its most important aspects. The LED pin apart, we declare a variable to account for changes occurring. It will be updated in the ISR of our interrupt; so, as I told you earlier, we declare it as follows: volatile int status = LOW; Following which we declare the ISR function, revolution(), which as we already know doesn't receive any parameter nor return any value. And as we said earlier, it must be as simple as possible. In our test case, the ISR simply inverts the value of the global volatile variable to its opposite value, that is, from LOW to HIGH and from HIGH to LOW. To allow our ISR to be called whenever an interrupt 0 occurs, in the setup() function, we make a call to the attachInterrupt() function by passing three parameters to it: Interrupt: The interrupt number to assign the ISR to ISR: The name without the parentheses of the function that will act as the ISR for this interrupt Mode: One of the following already explained modes that define when exactly the interrupt will be triggered In our case, the concrete sentence is as follows: attachInterrupt(0, revolution, RISING); This makes the function revolution() be the ISR of interrupt 0 that will be triggered when the signal goes from LOW to HIGH. Finally, in our main loop there is little to do. Simply update the LED based on the current value of the status variable that is going to be updated inside the ISR. If everything went right, you should see the LED commute every time the slot passes through the optocoupler as a consequence of the interrupt being triggered and the revolution() function inverting the value of the status variable that is used in the main loop to set the LED accordingly. A dial tachograph For a more complete example in this section, we will build a tachograph, a device that will present the current revolutions per minute of the motor in a visual manner by using a dial. The motor speed will be commanded serially from our computer by reusing some of the codes in our previous projects. It is not going to be very complicated if we include some way to inform about an excessive number of revolutions and even cut the engine in an extreme case to protect it, is it? 
The complete schematic of such a big circuit is shown in the following image. Don't get scared about the number of components as we have already seen them all in action before: The tachograph circuit As you may see, we will use a total of five pins of our Arduino board to sense and command such a set of peripherals: Pin 2: This is the interrupt 0 pin and thus it will be used to connect the output of the optocoupler. Pin 3: It will be used to deal with the servo to move the dial. Pin 4: We will use this pin to activate sound alarm once the engine current has been cut off to prevent overcharge. Pin 6: This pin will be used to deal with the motor transistor that allows us to vary the motor speed based on the commands we receive serially. Remember to use a PWM pin if you choose to use another one. Pin 13: Used to indicate with an LED an excessive number of revolutions per minute prior to cutting the engine off. There are also two more pins which, although not physically connected, will be used, pins 0 and 1, given that we are going to talk to the device serially from the computer. Breadboard connections diagram There are some wires crossed in the previous schematic, and perhaps you can see the connections better in the following breadboard connection image: Breadboard connection diagram for the tachograph The complete tachograph code This is going to be a project full of features and that is why it has such a number of devices to interact with. Let's resume the functioning features of the dial tachograph: The motor speed is commanded from the computer via a serial communication with up to five commands: Increase motor speed (+) Decrease motor speed (-) Totally stop the motor (0) Put the motor at full throttle (*) Reset the motor after a stall (R) Motor revolutions will be detected and accounted by using an encoder and an optocoupler Current revolutions per minute will be visually presented with a dial operated with a servomotor It gives visual indication via an LED of a high number of revolutions In case a maximum number of revolutions is reached, the motor current will be cut off and an acoustic alarm will sound With such a number of features, it is normal that the code for this project is going to be a bit longer than our previous sketches. 
Here is the code: /*  Chapter 09 - Dealing with interrupt  Complete tachograph system  By Francis Perea for Packt Publishing */   #include <Servo.h>   //The pins that will be used #define ledPin 13 #define motorPin 6 #define buzzerPin 4 #define servoPin 3   #define NOTE_A4 440 // Milliseconds between every sample #define sampleTime 500 // Motor speed increment #define motorIncrement 10 // Range of valir RPMs, alarm and stop #define minRPM  0 #define maxRPM 10000 #define alarmRPM 8000 #define stopRPM 9000   // Global variables we will use // A variable to be used inside ISR volatile unsigned long revolutions = 0; // Total number of revolutions in every sample long lastSampleRevolutions = 0; // A variable to convert revolutions per sample to RPM int rpm = 0; // LED Status int ledStatus = LOW; // An instace on the Servo class Servo myServo; // A flag to know if the motor has been stalled boolean motorStalled = false; // Thr current dial angle int dialAngle = 0; // A variable to store serial data int dataReceived; // The current motor speed int speed = 0; // A time variable to compare in every sample unsigned long lastCheckTime;   // A function to be called when the interrupt occurs void revolution(){   // Increment the total number of   // revolutions in the current sample   revolutions++; }   // Configuration of the board void setup() {   // Set output pins   pinMode(motorPin, OUTPUT);   pinMode(ledPin, OUTPUT);   pinMode(buzzerPin, OUTPUT);   // Set revolution() as ISR of interrupt 0   attachInterrupt(0, revolution, CHANGE);   // Init serial communication   Serial.begin(9600);   // Initialize the servo   myServo.attach(servoPin);   //Set the dial   myServo.write(dialAngle);   // Initialize the counter for sample time   lastCheckTime = millis(); }   // Sketch execution loop void loop(){    // If we have received serial data   if (Serial.available()) {     // read the next char      dataReceived = Serial.read();      // Act depending on it      switch (dataReceived){        // Increment speed        case '+':          if (speed<250) {            speed += motorIncrement;          }          break;        // Decrement speed        case '-':          if (speed>5) {            speed -= motorIncrement;          }          break;                // Stop motor        case '0':          speed = 0;          break;            // Full throttle           case '*':          speed = 255;          break;        // Reactivate motor after stall        case 'R':          speed = 0;          motorStalled = false;          break;      }     //Only if motor is active set new motor speed     if (motorStalled == false){       // Set the speed motor speed       analogWrite(motorPin, speed);     }   }   // If a sample time has passed   // We have to take another sample   if (millis() - lastCheckTime > sampleTime){     // Store current revolutions     lastSampleRevolutions = revolutions;     // Reset the global variable     // So the ISR can begin to count again     revolutions = 0;     // Calculate revolution per minute     rpm = lastSampleRevolutions * (1000 / sampleTime) * 60;     // Update last sample time     lastCheckTime = millis();     // Set the dial according new reading     dialAngle = map(rpm,minRPM,maxRPM,180,0);     myServo.write(dialAngle);   }   // If the motor is running in the red zone   if (rpm > alarmRPM){     // Turn on LED     digitalWrite(ledPin, HIGH);   }   else{     // Otherwise turn it off     digitalWrite(ledPin, LOW);   }   // If the motor has exceed maximum RPM   if (rpm > stopRPM){     // 
Stop the motor     speed = 0;     analogWrite(motorPin, speed);     // Disable it until a 'R' command is received     motorStalled = true;     // Make alarm sound     tone(buzzerPin, NOTE_A4, 1000);   }   // Send data back to the computer   Serial.print("RPM: ");   Serial.print(rpm);   Serial.print(" SPEED: ");   Serial.print(speed);   Serial.print(" STALL: ");   Serial.println(motorStalled); } It is the first time in this article that I think I have nothing to explain regarding the code that hasn't been already explained before. I have commented everything so that the code can be easily read and understood. In general lines, the code declares both constants and global variables that will be used and the ISR for the interrupt. In the setup section, all initializations of different subsystems that need to be set up before use are made: pins, interrupts, serials, and servos. The main loop begins by looking for serial commands and basically updates the speed value and the stall flag if command R is received. The final motor speed setting only occurs in case the stall flag is not on, which will occur in case the motor reaches the stopRPM value. Following with the main loop, the code looks if it has passed a sample time, in which case the revolutions are stored to compute real revolutions per minute (rpm), and the global revolutions counter incremented inside the ISR is set to 0 to begin again. The current rpm value is mapped to an angle to be presented by the dial and thus the servo is set accordingly. Next, a pair of controls is made: One to see if the motor is getting into the red zone by exceeding the max alarmRPM value and thus turning the alarm LED on And another to check if the stopRPM value has been reached, in which case the motor will be automatically cut off, the motorStalled flag is set to true, and the acoustic alarm is triggered When the motor has been stalled, it won't accept changes in its speed until it has been reset by issuing an R command via serial communication. In the last action, the code sends back some info to the Serial Monitor as another way of feedback with the operator at the computer and this should look something like the following screenshot: Serial Monitor showing the tachograph in action Modular development It has been quite a complex project in that it incorporates up to six different subsystems: optocoupler, motor, LED, buzzer, servo, and serial, but it has also helped us to understand that projects need to be developed by using a modular approach. We have worked and tested every one of these subsystems before, and that is the way it should usually be done. By developing your projects in such a submodular way, it will be easy to assemble and program the whole of the system. As you may see in the following screenshot, only by using such a modular way of working will you be able to connect and understand such a mess of wires: A working desktop may get a bit messy Summary I'm sure you have got the point regarding interrupts with all the things we have seen in this article. We have met and understood what an interrupt is and how does the CPU attend to it by running an ISR, and we have even learned about their special characteristics and restrictions and that we should keep them as little as possible. On the programming side, the only thing necessary to work with interrupts is to correctly attach the ISR with a call to the attachInterrupt() function. 
From the point of view of hardware, we have assembled an encoder that has been attached to a spinning motor to account for its revolutions. Finally, we have the code. We have seen a relatively long sketch, which is a sign that we are beginning to master the platform, are able to deal with a bigger number of peripherals, and that our projects require more complex software every time we have to deal with these peripherals and to accomplish all the other necessary tasks to meet what is specified in the project specifications. Resources for Article: Further resources on this subject: The Arduino Mobile Robot? [article] Using the Leap Motion Controller with Arduino [article] Android and Udoo Home Automation [article]

article-image-starting-small-and-growing-modular-way
Packt
02 Mar 2015
27 min read
Save for later

Starting Small and Growing in a Modular Way

This article written by Carlo Russo, author of the book KnockoutJS Blueprints, describes that RequireJS gives us a simplified format to require many parameters and to avoid parameter mismatch using the CommonJS require format; for example, another way (use this or the other one) to write the previous code is: define(function(require) {   var $ = require("jquery"),       ko = require("knockout"),       viewModel = {};   $(function() {       ko.applyBindings(viewModel);   });}); (For more resources related to this topic, see here.) In this way, we skip the dependencies definition, and RequireJS will add all the texts require('xxx') found in the function to the dependency list. The second way is better because it is cleaner and you cannot mismatch dependency names with named function arguments. For example, imagine you have a long list of dependencies; you add one or remove one, and you miss removing the relative function parameter. You now have a hard-to-find bug. And, in case you think that r.js optimizer behaves differently, I just want to assure you that it's not so; you can use both ways without any concern regarding optimization. Just to remind you, you cannot use this form if you want to load scripts dynamically or by depending on variable value; for example, this code will not work: var mod = require(someCondition ? "a" : "b");if (someCondition) {   var a = require('a');} else {   var a = require('a1');} You can learn more about this compatibility problem at this URL: http://www.requirejs.org/docs/whyamd.html#commonjscompat. You can see more about this sugar syntax at this URL: http://www.requirejs.org/docs/whyamd.html#sugar. Now that you know the basic way to use RequireJS, let's look at the next concept. Component binding handler The component binding handler is one of the new features introduced in Version 2.3 of KnockoutJS. Inside the documentation of KnockoutJS, we find the following explanation: Components are a powerful, clean way of organizing your UI code into self-contained, reusable chunks. They can represent individual controls/widgets, or entire sections of your application. A component is a combination of HTML and JavaScript. The main idea behind their inclusion was to create full-featured, reusable components, with one or more points of extensibility. A component is a combination of HTML and JavaScript. There are cases where you can use just one of them, but normally you'll use both. You can get a first simple example about this here: http://knockoutjs.com/documentation/component-binding.html. The best way to create self-contained components is with the use of an AMD module loader, such as RequireJS; put the View Model and the template of the component inside two different files, and then you can use it from your code really easily. Creating the bare bones of a custom module Writing a custom module of KnockoutJS with RequireJS is a 4-step process: Creating the JavaScript file for the View Model. Creating the HTML file for the template of the View. Registering the component with KnockoutJS. Using it inside another View. We are going to build bases for the Search Form component, just to move forward with our project; anyway, this is the starting code we should use for each component that we write from scratch. Let's cover all of these steps. Creating the JavaScript file for the View Model We start with the View Model of this component. 
Create a new empty file with the name BookingOnline/app/components/search.js and put this code inside it: define(function(require) {var ko = require("knockout"),     template = require("text!./search.html");function Search() {}return {   viewModel: Search,   template: template};}); Here, we are creating a constructor called Search that we will fill later. We are also using the text plugin for RequireJS to get the template search.html from the current folder, into the argument template. Then, we will return an object with the constructor and the template, using the format needed from KnockoutJS to use as a component. Creating the HTML file for the template of the View In the View Model we required a View called search.html in the same folder. At the moment, we don't have any code to put inside the template of the View, because there is no boilerplate code needed; but we must create the file, otherwise RequireJS will break with an error. Create a new file called BookingOnline/app/components/search.html with the following content: <div>Hello Search</div> Registering the component with KnockoutJS When you use components, there are two different ways to give KnockoutJS a way to find your component: Using the function ko.components.register Implementing a custom component loader The first way is the easiest one: using the default component loader of KnockoutJS. To use it with our component you should just put the following row inside the BookingOnline/app/index.js file, just before the row $(function () {: ko.components.register("search", {require: "components/search"}); Here, we are registering a module called search, and we are telling KnockoutJS that it will have to find all the information it needs using an AMD require for the path components/search (so it will load the file BookingOnline/app/components/search.js). You can find more information and a really good example about a custom component loader at: http://knockoutjs.com/documentation/component-loaders.html#example-1-a-component-loader-that-sets-up-naming-conventions. Using it inside another View Now, we can simply use the new component inside our View; put the following code inside our Index View (BookingOnline/index.html), before the script tag:    <div data-bind="component: 'search'"></div> Here, we are using the component binding handler to use the component; another commonly used way is with custom elements. We can replace the previous row with the following one:    <search></search> KnockoutJS will use our search component, but with a WebComponent-like code. If you want to support IE6-8 you should register the WebComponents you are going to use before the HTML parser can find them. Normally, this job is done inside the ko.components.register function call, but, if you are putting your script tag at the end of body as we have done until now, your WebComponent will be discarded. Follow the guidelines mentioned here when you want to support IE6-8: http://knockoutjs.com/documentation/component-custom-elements.html#note-custom-elements-and-internet-explorer-6-to-8 Now, you can open your web application and you should see the text, Hello Search. We put that markup only to check whether everything was working here, so you can remove it now. Writing the Search Form component Now that we know how to create a component, and we put the base of our Search Form component, we can try to look for the requirements for this component. A designer will review the View later, so we need to keep it simple to avoid the need for multiple changes later. 
From our analysis, we find that our competitors use these components: Autocomplete field for the city Calendar fields for check-in and check-out Selection field for the number of rooms, number of adults and number of children, and age of children This is a wireframe of what we should build (we got inspired by Trivago): We could do everything by ourselves, but the easiest way to realize this component is with the help of a few external plugins; we are already using jQuery, so the most obvious idea is to use jQuery UI to get the Autocomplete Widget, the Date Picker Widget, and maybe even the Button Widget. Adding the AMD version of jQuery UI to the project Let's start downloading the current version of jQuery UI (1.11.1); the best thing about this version is that it is one of the first versions that supports AMD natively. After reading the documentation of jQuery UI for the AMD (URL: http://learn.jquery.com/jquery-ui/environments/amd/) you may think that you can get the AMD version using the download link from the home page. However, if you try that you will get just a package with only the concatenated source; for this reason, if you want the AMD source file, you will have to go directly to GitHub or use Bower. Download the package from https://github.com/jquery/jquery-ui/archive/1.11.1.zip and extract it. Every time you use an external library, remember to check the compatibility support. In jQuery UI 1.11.1, as you can see in the release notes, they removed the support for IE7; so we must decide whether we want to support IE6 and 7 by adding specific workarounds inside our code, or we want to remove the support for those two browsers. For our project, we need to put the following folders into these destinations: jquery-ui-1.11.1/ui -> BookingOnline/app/ui jquery-ui-1.11.1/theme/base -> BookingOnline/css/ui We are going to apply the widget by JavaScript, so the only remaining step to integrate jQuery UI is the insertion of the style sheet inside our application. We do this by adding the following rows to the top of our custom style sheet file (BookingOnline/css/styles.css): @import url("ui/core.css");@import url("ui/menu.css");@import url("ui/autocomplete.css");@import url("ui/button.css");@import url("ui/datepicker.css");@import url("ui/theme.css") Now, we are ready to add the widgets to our web application. You can find more information about jQuery UI and AMD at: http://learn.jquery.com/jquery-ui/environments/amd/ Making the skeleton from the wireframe We want to give to the user a really nice user experience, but as the first step we can use the wireframe we put before to create a skeleton of the Search Form. Replace the entire content with a form inside the file BookingOnline/components/search.html: <form data-bind="submit: execute"></form> Then, we add the blocks inside the form, step by step, to realize the entire wireframe: <div>   <input type="text" placeholder="Enter a destination" />   <label> Check In: <input type="text" /> </label>   <label> Check Out: <input type="text" /> </label>   <input type="submit" data-bind="enable: isValid" /></div> Here, we built the first row of the wireframe; we will bind data to each field later. We bound the execute function to the submit event (submit: execute), and a validity check to the button (enable: isValid); for now we will create them empty. 
Update the View Model (search.js) by adding this code inside the constructor: this.isValid = ko.computed(function() {return true;}, this); And add this function to the Search prototype: Search.prototype.execute = function() { }; This is because the validity of the form will depend on the status of the destination field and of the check-in date and check-out date; we will update later, in the next paragraphs. Now, we can continue with the wireframe, with the second block. Here, we should have a field to select the number of rooms, and a block for each room. Add the following markup inside the form, after the previous one, for the second row to the View (search.html): <div>   <fieldset>     <legend>Rooms</legend>     <label>       Number of Room       <select data-bind="options: rangeOfRooms,                           value: numberOfRooms">       </select>     </label>     <!-- ko foreach: rooms -->       <fieldset>         <legend>           Room <span data-bind="text: roomNumber"></span>         </legend>       </fieldset>     <!-- /ko -->   </fieldset></div> In this markup we are asking the user to choose between the values found inside the array rangeOfRooms, to save the selection inside a property called numberOfRooms, and to show a frame for each room of the array rooms with the room number, roomNumber. When developing and we want to check the status of the system, the easiest way to do it is with a simple item inside a View bound to the JSON of a View Model. Put the following code inside the View (search.html): <pre data-bind="text: ko.toJSON($data, null, 2)"></pre> With this code, you can check the status of the system with any change directly in the printed JSON. You can find more information about ko.toJSON at http://knockoutjs.com/documentation/json-data.html Update the View Model (search.js) by adding this code inside the constructor: this.rooms = ko.observableArray([]);this.numberOfRooms = ko.computed({read: function() {   return this.rooms().length;},write: function(value) {   var previousValue = this.rooms().length;   if (value > previousValue) {     for (var i = previousValue; i < value; i++) {       this.rooms.push(new Room(i + 1));     }   } else {     this.rooms().splice(value);     this.rooms.valueHasMutated();   }},owner: this}); Here, we are creating the array of rooms, and a property to update the array properly. If the new value is bigger than the previous value it adds to the array the missing item using the constructor Room; otherwise, it removes the exceeding items from the array. To get this code working we have to create a module, Room, and we have to require it here; update the require block in this way:    var ko = require("knockout"),       template = require("text!./search.html"),       Room = require("room"); Also, add this property to the Search prototype: Search.prototype.rangeOfRooms = ko.utils.range(1, 10); Here, we are asking KnockoutJS for an array with the values from the given range. ko.utils.range is a useful method to get an array of integers. Internally, it simply makes an array from the first parameter to the second one; but if you use it inside a computed field and the parameters are observable, it re-evaluates and updates the returning array. Now, we have to create the View Model of the Room module. 
Create a new file BookingOnline/app/room.js with the following starting code: define(function(require) {var ko = require("knockout");function Room(roomNumber) {   this.roomNumber = roomNumber;}return Room;}); Now, our web application should appear like so: As you can see, we now have a fieldset for each room, so we can work on the template of the single room. Here, you can also see in action the previous tip about the pre field with the JSON data. With KnockoutJS 3.2 it is harder to decide when it's better to use a normal template or a component. The rule of thumb is to identify the degree of encapsulation you want to manage: Use the component when you want a self-enclosed black box, or the template if you want to manage the View Model directly. What we want to show for each room is: Room number Number of adults Number of children Age of each child We can update the Room View Model (room.js) by adding this code into the constructor: this.numberOfAdults = ko.observable(2);this.ageOfChildren = ko.observableArray([]);this.numberOfChildren = ko.computed({read: function() {   return this.ageOfChildren().length;},write: function(value) {   var previousValue = this.ageOfChildren().length;   if (value > previousValue) {     for (var i = previousValue; i < value; i++) {       this.ageOfChildren.push(ko.observable(0));     }   } else {     this.ageOfChildren().splice(value);     this.ageOfChildren.valueHasMutated();   }},owner: this});this.hasChildren = ko.computed(function() {return this.numberOfChildren() > 0;}, this); We used the same logic we have used before for the mapping between the count of the room and the count property, to have an array of age of children. We also created a hasChildren property to know whether we have to show the box for the age of children inside the View. We have to add—as we have done before for the Search View Model—a few properties to the Room prototype: Room.prototype.rangeOfAdults = ko.utils.range(1, 10);Room.prototype.rangeOfChildren = ko.utils.range(0, 10);Room.prototype.rangeOfAge = ko.utils.range(0, 17); These are the ranges we show inside the relative select. Now, as the last step, we have to put the template for the room in search.html; add this code inside the fieldset tag, after the legend tag (as you can see here, with the external markup):      <fieldset>       <legend>         Room <span data-bind="text: roomNumber"></span>       </legend>       <label> Number of adults         <select data-bind="options: rangeOfAdults,                            value: numberOfAdults"></select>       </label>       <label> Number of children         <select data-bind="options: rangeOfChildren,                             value: numberOfChildren"></select>       </label>       <fieldset data-bind="visible: hasChildren">         <legend>Age of children</legend>         <!-- ko foreach: ageOfChildren -->           <select data-bind="options: $parent.rangeOfAge,                               value: $rawData"></select>         <!-- /ko -->       </fieldset>     </fieldset>     <!-- /ko --> Here, we are using the properties we have just defined. We are using rangeOfAge from $parent because inside foreach we changed context, and the property, rangeOfAge, is inside the Room context. Why did I use $rawData to bind the value of the age of the children instead of $data? The reason is that ageOfChildren is an array of observables without any container. 
If you use $data, KnockoutJS will unwrap the observable, making it one-way bound; but if you use $rawData, you will skip the unwrapping and get the two-way data binding we need here. In fact, if we use the one-way data binding our model won't get updated at all. If you really don't like that the fieldset for children goes to the next row when it appears, you can change the fieldset by adding a class, like this: <fieldset class="inline" data-bind="visible: hasChildren"> Now, your application should appear as follows: Now that we have a really nice starting form, we can update the three main fields to use the jQuery UI Widgets. Realizing an Autocomplete field for the destination As soon as we start to write the code for this field we face the first problem: how can we get the data from the backend? Our team told us that we don't have to care about the backend, so we speak to the backend team to know how to get the data. After ten minutes we get three files with the code for all the calls to the backend; all we have to do is to download these files (we already got them with the Starting Package, to avoid another download), and use the function getDestinationByTerm inside the module, services/rest. Before writing the code for the field let's think about which behavior we want for it: When you put three or more letters, it will ask the server for the list of items Each recurrence of the text inside the field into each item should be bold When you select an item, a new button should appear to clear the selection If the current selected item and the text inside the field are different when the focus exits from the field, it should be cleared The data should be taken using the function, getDestinationByTerm, inside the module, services/rest The documentation of KnockoutJS also explains how to create custom binding handlers in the context of RequireJS. The what and why about binding handlers All the bindings we use inside our View are based on the KnockoutJS default binding handler. The idea behind a binding handler is that you should put all the code to manage the DOM inside a component different from the View Model. Other than this, the binding handler should be realized with reusability in mind, so it's always better not to hard-code application logic inside. The KnockoutJS documentation about standard binding is already really good, and you can find many explanations about its inner working in the Appendix, Binding Handler. When you make a custom binding handler it is important to remember that: it is your job to clean after; you should register event handling inside the init function; and you should use the update function to update the DOM depending on the change of the observables. 
This is the standard boilerplate code when you use RequireJS: define(function(require) {var ko = require("knockout"),     $ = require("jquery");ko.bindingHandlers.customBindingHandler = {   init: function(element, valueAccessor,                   allBindingsAccessor, data, context) {     /* Code for the initialization… */     ko.utils.domNodeDisposal.addDisposeCallback(element,       function () { /* Cleaning code … */ });   },   update: function (element, valueAccessor) {     /* Code for the update of the DOM… */   }};}); And inside the View Model module you should require this module, as follows: require('binding-handlers/customBindingHandler'); ko.utils.domNodeDisposal is a list of callbacks to be executed when the element is removed from the DOM; it's necessary because it's where you have to put the code to destroy the widgets, or remove the event handlers. Binding handler for the jQuery Autocomplete widget So, now we can write our binding handler. We will define a binding handler named autocomplete, which takes the observable to put the found value. We will also define two custom bindings, without any logic, to work as placeholders for the parameters we will send to the main binding handler. Our binding handler should: Get the value for the autoCompleteOptions and autoCompleteEvents optional data bindings. Apply the Autocomplete Widget to the item using the option of the previous step. Register all the event listeners. Register the disposal of the Widget. We also should ensure that if the observable gets cleared, the input field gets cleared too. So, this is the code of the binding handler to put inside BookingOnline/app/binding-handlers/autocomplete.js (I put comments between the code to make it easier to understand): define(function(require) {var ko = require("knockout"),     $ = require("jquery"),     autocomplete = require("ui/autocomplete");ko.bindingHandlers.autoComplete = {   init: function(element, valueAccessor, allBindingsAccessor, data, context) { Here, we are giving the name autoComplete to the new binding handler, and we are also loading the Autocomplete Widget of jQuery UI: var value = ko.utils.unwrapObservable(valueAccessor()),   allBindings = ko.utils.unwrapObservable(allBindingsAccessor()),   options = allBindings.autoCompleteOptions || {},   events = allBindings.autoCompleteEvents || {},   $element = $(element); Then, we take the data from the binding for the main parameter, and for the optional binding handler; we also put the current element into a jQuery container: autocomplete(options, $element);if (options._renderItem) {   var widget = $element.autocomplete("instance");   widget._renderItem = options._renderItem;}for (var event in events) {   ko.utils.registerEventHandler(element, event, events[event]);} Now we can apply the Autocomplete Widget to the field. If you are questioning why we used ko.utils.registerEventHandler here, the answer is: to show you this function. If you look at the source, you can see that under the wood it uses $.bind if jQuery is registered; so in our case we could simply use $.bind or $.on without any problem. But I wanted to show you this function because sometimes you use KnockoutJS without jQuery, and you can use it to support event handling of every supported browser. 
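To make that interchangeability concrete, here is a tiny standalone sketch; the element ID and the handler are hypothetical and exist only for this illustration:

// Hypothetical element and handler, not part of the project code.
var field = document.getElementById("destination-field");
function onBlur(event) {
    console.log("field blurred:", event.type);
}

// With jQuery loaded, these two registrations are equivalent:
ko.utils.registerEventHandler(field, "blur", onBlur);
$(field).on("blur", onBlur);
// Without jQuery, only the first form is available, and KnockoutJS falls back
// to the browser's native event APIs.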
The source code of the function _renderItem is (looking at the file ui/autocomplete.js): _renderItem: function( ul, item ) {return $( "<li>" ).text( item.label ).appendTo( ul );}, As you can see, for security reasons, it uses the function text to avoid any possible code injection. It is important that you know that you should do data validation each time you get data from an external source and put it in the page. In this case, the source of data is already secured (because we manage it), so we override the normal behavior, to also show the HTML tag for the bold part of the text. In the last three rows we put a cycle to check for events and we register them. The standard way to register for events is with the event binding handler. The only reason you should use a custom helper is to give to the developer of the View a way to register events more than once. Then, we add to the init function the disposal code: // handle disposalko.utils.domNodeDisposal.addDisposeCallback(element, function() {$element.autocomplete("destroy");}); Here, we use the destroy function of the widget. It's really important to clean up after the use of any jQuery UI Widget or you'll create a really bad memory leak; it's not a big problem with simple applications, but it will be a really big problem if you realize an SPA. Now, we can add the update function:    },   update: function(element, valueAccessor) {     var value = valueAccessor(),         $element = $(element),         data = value();     if (!data)       $element.val("");   }};}); Here, we read the value of the observable, and clean the field if the observable is empty. The update function is executed as a computed observable, so we must be sure that we subscribe to the observables required inside. So, pay attention if you put conditional code before the subscription, because your update function could be not called anymore. Now that the binding is ready, we should require it inside our form; update the View search.html by modifying the following row:    <input type="text" placeholder="Enter a destination" /> Into this:    <input type="text" placeholder="Enter a destination"           data-bind="autoComplete: destination,                     autoCompleteEvents: destination.events,                     autoCompleteOptions: destination.options" /> If you try the application you will not see any error; the reason is that KnockoutJS ignores any data binding not registered inside the ko.bindingHandlers object, and we didn't require the binding handler autocomplete module. So, the last step to get everything working is the update of the View Model of the component; add these rows at the top of the search.js, with the other require(…) rows:      Room = require("room"),     rest = require("services/rest");require("binding-handlers/autocomplete"); We need a reference to our new binding handler, and a reference to the rest object to use it as source of data. 
Now, we must declare the properties we used inside our data binding; add all these properties to the constructor as shown in the following code:

this.destination = ko.observable();
this.destination.options = {
  minLength: 3,
  source: rest.getDestinationByTerm,
  select: function(event, data) {
    this.destination(data.item);
  }.bind(this),
  _renderItem: function(ul, item) {
    return $("<li>").append(item.label).appendTo(ul);
  }
};
this.destination.events = {
  blur: function(event) {
    if (this.destination() && (event.currentTarget.value !== this.destination().value)) {
      this.destination(undefined);
    }
  }.bind(this)
};

Here, we are defining the container (destination) for the data selected inside the field, an object (destination.options) with any property we want to pass to the Autocomplete Widget (you can check all the documentation at http://api.jqueryui.com/autocomplete/), and an object (destination.events) with any event we want to apply to the field. In the blur handler, we clear the field if the text inside the field and the content of the saved data (inside destination) are different. Have you noticed .bind(this) in the previous code? You can check by yourself that the value of this inside these functions would otherwise be the input field. Because our code references the destination property of this, we have to update the context to be the object itself; the easiest way to do this is with a simple call to the bind function. Summary In this article, we have seen some of the core functionality of KnockoutJS. The application we realized was simple enough, but we used it to learn how to use components and custom binding handlers more effectively. If you think we wrote too much code for such a small project, think about the differences you have seen between the first and the second component: the more component and binding handler code you write, the less you will have to write in the future. The most important point about components and custom binding handlers is that you should build them with future reuse in mind; the more good code you write now, the better off you will be later. The core point of this article was AMD and RequireJS: how to use them inside a KnockoutJS project, and why you should do it. Resources for Article: Further resources on this subject: Components [article] Web Application Testing [article] Top features of KnockoutJS [article]
article-image-quick-start-guide-flume
Packt
02 Mar 2015
15 min read

A Quick Start Guide to Flume

In this article by Steve Hoffman, the author of the book, Apache Flume: Distributed Log Collection for Hadoop - Second Edition, we will learn about the basics that are required to be known before we start working with Apache Flume. This article will help you get started with Flume. So, let's start with the first step: downloading and configuring Flume. (For more resources related to this topic, see here.) Downloading Flume Let's download Flume from http://flume.apache.org/. Look for the download link in the side navigation. You'll see two compressed .tar archives available along with the checksum and GPG signature files used to verify the archives. Instructions to verify the download are on the website, so I won't cover them here. Checking the checksum file contents against the actual checksum verifies that the download was not corrupted. Checking the signature file validates that all the files you are downloading (including the checksum and signature) came from Apache and not some nefarious location. Do you really need to verify your downloads? In general, it is a good idea and it is recommended by Apache that you do so. If you choose not to, I won't tell. The binary distribution archive has bin in the name, and the source archive is marked with src. The source archive contains just the Flume source code. The binary distribution is much larger because it contains not only the Flume source and the compiled Flume components (jars, javadocs, and so on), but also all the dependent Java libraries. The binary package contains the same Maven POM file as the source archive, so you can always recompile the code even if you start with the binary distribution. Go ahead, download and verify the binary distribution to save us some time in getting started. Flume in Hadoop distributions Flume is available with some Hadoop distributions. The distributions supposedly provide bundles of Hadoop's core components and satellite projects (such as Flume) in a way that ensures things such as version compatibility and additional bug fixes are taken into account. These distributions aren't better or worse; they're just different. There are benefits to using a distribution. Someone else has already done the work of pulling together all the version-compatible components. Today, this is less of an issue since the Apache BigTop project started (http://bigtop.apache.org/). Nevertheless, having prebuilt standard OS packages, such as RPMs and DEBs, ease installation as well as provide startup/shutdown scripts. Each distribution has different levels of free and paid options, including paid professional services if you really get into a situation you just can't handle. There are downsides, of course. The version of Flume bundled in a distribution will often lag quite a bit behind the Apache releases. If there is a new or bleeding-edge feature you are interested in using, you'll either be waiting for your distribution's provider to backport it for you, or you'll be stuck patching it yourself. Furthermore, while the distribution providers do a fair amount of testing, such as any general-purpose platform, you will most likely encounter something that their testing didn't cover, in which case, you are still on the hook to come up with a workaround or dive into the code, fix it, and hopefully, submit that patch back to the open source community (where, at a future point, it'll make it into an update of your distribution or the next version). So, things move slower in a Hadoop distribution world. You can see that as good or bad. 
Usually, large companies don't like the instability of bleeding-edge technology or making changes often, as change can be the most common cause of unplanned outages. You'd be hard pressed to find such a company using the bleeding-edge Linux kernel rather than something like Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu LTS, or any of the other distributions whose target is stability and compatibility. If you are a startup building the next Internet fad, you might need that bleeding-edge feature to get a leg up on the established competition. If you are considering a distribution, do the research and see what you are getting (or not getting) with each. Remember that each of these offerings is hoping that you'll eventually want and/or need their Enterprise offering, which usually doesn't come cheap. Do your homework. Here's a short, nondefinitive list of some of the more established players. For more information, refer to the following links: Cloudera: http://cloudera.com/ Hortonworks: http://hortonworks.com/ MapR: http://mapr.com/ An overview of the Flume configuration file Now that we've downloaded Flume, let's spend some time going over how to configure an agent. A Flume agent's default configuration provider uses a simple Java property file of key/value pairs that you pass as an argument to the agent upon startup. As you can configure more than one agent in a single file, you will need to additionally pass an agent identifier (called a name) so that it knows which configurations to use. In my examples where I'm only specifying one agent, I'm going to use the name agent. By default, the configuration property file is monitored for changes every 30 seconds. If a change is detected, Flume will attempt to reconfigure itself. In practice, many of the configuration settings cannot be changed after the agent has started. Save yourself some trouble and pass the undocumented --no-reload-conf argument when starting the agent (except in development situations perhaps). If you use the Cloudera distribution, the passing of this flag is currently not possible. I've opened a ticket to fix that at https://issues.cloudera.org/browse/DISTRO-648. If this is important to you, please vote it up. Each agent is configured, starting with three parameters: agent.sources=<list of sources>agent.channels=<list of channels>agent.sinks=<list of sinks> Each source, channel, and sink also has a unique name within the context of that agent. For example, if I'm going to transport my Apache access logs, I might define a channel named access. The configurations for this channel would all start with the agent.channels.access prefix. Each configuration item has a type property that tells Flume what kind of source, channel, or sink it is. In this case, we are going to use an in-memory channel whose type is memory. The complete configuration for the channel named access in the agent named agent would be: agent.channels.access.type=memory Any arguments to a source, channel, or sink are added as additional properties using the same prefix. The memory channel has a capacity parameter to indicate the maximum number of Flume events it can hold. Let's say we didn't want to use the default value of 100; our configuration would now look like this: agent.channels.access.type=memoryagent.channels.access.capacity=200 Finally, we need to add the access channel name to the agent.channels property so that the agent knows to load it: agent.channels=access Let's look at a complete example using the canonical "Hello, World!" example. 
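Before we do, here is how the fragments above fit together: an agent that declared only this access channel (and nothing else yet) would need just these three properties:

agent.channels=access
agent.channels.access.type=memory
agent.channels.access.capacity=200

A runnable agent will, of course, also need at least one source and one sink wired to that channel, which is exactly what the following example adds.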
Starting up with "Hello, World!" No technical article would be complete without a "Hello, World!" example. Here is the configuration file we'll be using:

agent.sources=s1
agent.channels=c1
agent.sinks=k1

agent.sources.s1.type=netcat
agent.sources.s1.channels=c1
agent.sources.s1.bind=0.0.0.0
agent.sources.s1.port=12345

agent.channels.c1.type=memory

agent.sinks.k1.type=logger
agent.sinks.k1.channel=c1

Here, I've defined one agent (called agent) that has a source named s1, a channel named c1, and a sink named k1. The s1 source's type is netcat, which simply opens a socket listening for events (one line of text per event). It requires two parameters: a bind IP and a port number. In this example, we are using 0.0.0.0 as the bind address (the Java convention to specify "listen on any address") and port 12345. The source configuration also has a parameter called channels (plural), which is the name of the channel(s) the source will append events to, in this case, c1. It is plural because you can configure a source to write to more than one channel; we just aren't doing that in this simple example. The channel named c1 is a memory channel with a default configuration. The sink named k1 is of the logger type. This is a sink that is mostly used for debugging and testing. It will log all events at the INFO level using Log4j, which it receives from the configured channel, in this case, c1. Here, the channel keyword is singular because a sink can only be fed data from one channel. Using this configuration, let's run the agent and connect to it using the Linux netcat utility to send an event. First, explode the .tar archive of the binary distribution we downloaded earlier:

$ tar -zxf apache-flume-1.5.2-bin.tar.gz
$ cd apache-flume-1.5.2-bin

Next, let's briefly look at the help. Run the flume-ng command with the help command:

$ ./bin/flume-ng help
Usage: ./bin/flume-ng <command> [options]...

commands:
  help                   display this help text
  agent                  run a Flume agent
  avro-client            run an avro Flume client
  version                show Flume version info

global options:
  --conf,-c <conf>       use configs in <conf> directory
  --classpath,-C <cp>    append to the classpath
  --dryrun,-d            do not actually start Flume, just print the command
  --plugins-path <dirs>  colon-separated list of plugins.d directories. See the
                         plugins.d section in the user guide for more details.
                         Default: $FLUME_HOME/plugins.d
  -Dproperty=value       sets a Java system property value
  -Xproperty=value       sets a Java -X option

agent options:
  --conf-file,-f <file>  specify a config file (required)
  --name,-n <name>       the name of this agent (required)
  --help,-h              display help text

avro-client options:
  --rpcProps,-P <file>   RPC client properties file with server connection params
  --host,-H <host>       hostname to which events will be sent
  --port,-p <port>       port of the avro source
  --dirname <dir>        directory to stream to avro source
  --filename,-F <file>   text file to stream to avro source (default: std input)
  --headerFile,-R <file> File containing event headers as key/value pairs on each new line
  --help,-h              display help text

Either --rpcProps or both --host and --port must be specified.

Note that if <conf> directory is specified, then it is always included first in the classpath.

As you can see, there are two ways with which you can invoke the command (other than the simple help and version commands). We will be using the agent command.
The use of avro-client will be covered later. The agent command has two required parameters: a configuration file to use and the agent name (in case your configuration contains multiple agents). Let's take our sample configuration and open an editor (vi in my case, but use whatever you like): $ vi conf/hw.conf Next, place the contents of the preceding configuration into the editor, save, and exit back to the shell. Now you can start the agent: $ ./bin/flume-ng agent -n agent -c conf -f conf/hw.conf -Dflume.root.logger=INFO,console The -Dflume.root.logger property overrides the root logger in conf/log4j.properties to use the console appender. If we didn't override the root logger, everything would still work, but the output would go to the log/flume.log file instead of being based on the contents of the default configuration file. Of course, you can edit the conf/log4j.properties file and change the flume.root.logger property (or anything else you like). To change just the path or filename, you can set the flume.log.dir and flume.log.file properties in the configuration file or pass additional flags on the command line as follows: $ ./bin/flume-ng agent -n agent -c conf -f conf/hw.conf -Dflume.root.logger=INFO,console -Dflume.log.dir=/tmp -Dflume.log.file=flume-agent.log You might ask why you need to specify the -c parameter, as the -f parameter contains the complete relative path to the configuration. The reason for this is that the Log4j configuration file should be included on the class path. If you left the -c parameter off the command, you'll see this error: Warning: No configuration directory set! Use --conf <dir> to override.log4j:WARN No appenders could be found for logger (org.apache.flume.lifecycle.LifecycleSupervisor).log4j:WARN Please initialize the log4j system properly.log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info But you didn't do that so you should see these key log lines: 2014-10-05 15:39:06,109 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration foragents: [agent] This line tells you that your agent starts with the name agent. Usually you'd look for this line only to be sure you started the right configuration when you have multiple configurations defined in your configuration file. 2014-10-05 15:39:06,076 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloadingconfiguration file:conf/hw.conf This is another sanity check to make sure you are loading the correct file, in this case our hw.conf file. 2014-10-05 15:39:06,221 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)]Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:s1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@442fbe47 counterGroup:{ name:null counters:{} } }}channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} } Once all the configurations have been parsed, you will see this message, which shows you everything that was configured. You can see s1, c1, and k1, and which Java classes are actually doing the work. As you probably guessed, netcat is a convenience for org.apache.flume.source.NetcatSource. We could have used the class name if we wanted. 
In fact, if I had my own custom source written, I would use its class name for the source's type parameter. You cannot define your own short names without patching the Flume distribution. 2014-10-05 15:39:06,427 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:164)] CreatedserverSocket:sun.nio.ch.ServerSocketChannelImpl[/0.0.0.0:12345] Here, we see that our source is now listening on port 12345 for the input. So, let's send some data to it. Finally, open a second terminal. We'll use the nc command (you can use Telnet or anything else similar) to send the Hello World string and press the Return (Enter) key to mark the end of the event: % nc localhost 12345Hello WorldOK The OK message came from the agent after we pressed the Return key, signifying that it accepted the line of text as a single Flume event. If you look at the agent log, you will see the following: 2014-10-05 15:44:11,215 (SinkRunner-PollingRunner-DefaultSinkProcessor)[INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 48 65 6C 6C 6F 20 57 6F 72 6C 64Hello World } This log message shows you that the Flume event contains no headers (NetcatSource doesn't add any itself). The body is shown in hexadecimal along with a string representation (for us humans to read, in this case, our Hello World message). If I send the following line and then press the Enter key, you'll get an OK message: The quick brown fox jumped over the lazy dog. You'll see this in the agent's log: 2014-10-05 15:44:57,232 (SinkRunner-PollingRunner-DefaultSinkProcessor)[INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)]Event: { headers:{} body: 54 68 65 20 71 75 69 63 6B 20 62 72 6F 77 6E 20The quick brown } The event appears to have been truncated. The logger sink, by design, limits the body content to 16 bytes to keep your screen from being filled with more than what you'd need in a debugging context. If you need to see the full contents for debugging, you should use a different sink, perhaps the file_roll sink, which would write to the local filesystem. Summary In this article, we covered how to download the Flume binary distribution. We created a simple configuration file that included one source writing to one channel, feeding one sink. The source listened on a socket for network clients to connect to and to send it event data. These events were written to an in-memory channel and then fed to a Log4j sink to become the output. We then connected to our listening agent using the Linux netcat utility and sent some string events to our Flume agent's source. Finally, we verified that our Log4j-based sink wrote the events out. Resources for Article: Further resources on this subject: About Cassandra [article] Introducing Kafka [article] Transformation [article]

article-image-controlling-dc-motors-using-shield
Packt
27 Feb 2015
4 min read

Controlling DC motors using a shield

 In this article by Richard Grimmett, author of the book Intel Galileo Essentials,let's graduate from a simple DC motor to a wheeled platform. There are several simple, two-wheeled robotics platforms. In this example, you'll use one that is available on several online electronics stores. It is called the Magician Chassis, sourced by SparkFun. The following image shows this: (For more resources related to this topic, see here.) To make this wheeled robotic platform work, you're going to control the two DC motors connected directly to the two wheels. You'll want to control both the direction and the speed of the two wheels to control the direction of the robot. You'll do this with an Arduino shield designed for this purpose. The Galileo is designed to accommodate many of these shields. The following image shows the shield: Specifically, you'll be interested in the connections on the front corner of the shield, which is where you will connect the two DC motors. Here is a close-up of that part of the board: It is these three connections that you will use in this example. First, however, place the board on top of the Galileo. Then mount the two boards to the top of your two-wheeled robotic platform, like this: In this case, I used a large cable tie to mount the boards to the platform, using the foam that came with the motor shield between the Galileo and plastic platform. This particular platform comes with a 4 AA battery holder, so you'll need to connect this power source, or whatever power source you are going to use, to the motor shield. The positive and negative terminals are inserted into the motor shield by loosening the screws, inserting the wires, and then tightening the screws, like this: The final step is to connect the motor wires to the motor controller shield. There are two sets of connections, one for each motor like this: Insert some batteries, and then connect the Galileo to the computer via the USB cable, and you are now ready to start programming in order to control the motors. Galileo code for the DC motor shield Now that the Hardware is in place, bring up the IDE, make sure that the proper port and device are selected, and enter the following code: The code is straightforward. It consists of the following three blocks: The declaration of the six variables that connect to the proper Galileo pins: int pwmA = 3; int pwmB = 11; int brakeA = 9; int brakeB = 8; int directionA = 12; int directionB = 13; The setup() function, which sets the directionA, directionB, brakeA, and brakeB digital output pins: pinMode(directionA, OUTPUT); pinMode(brakeA, OUTPUT); pinMode(directionB, OUTPUT); pinMode(brakeB, OUTPUT); The loop() function. This is an example of how to make the wheeled robot go forward, then turn to the right. 
At each of these steps, you use the brake to stop the robot: // Move Forward digitalWrite(directionA, HIGH); digitalWrite(brakeA, LOW); analogWrite(pwmA, 255); digitalWrite(directionB, HIGH); digitalWrite(brakeB, LOW); analogWrite(pwmB, 255); delay(2000); digitalWrite(brakeA, HIGH); digitalWrite(brakeB, HIGH); delay(1000); //Turn Right digitalWrite(directionA, LOW); //Establishes backward direction of Channel A digitalWrite(brakeA, LOW); //Disengage the Brake for Channel A analogWrite(pwmA, 128); //Spins the motor on Channel A at half speed digitalWrite(directionB, HIGH); //Establishes forward direction of Channel B digitalWrite(brakeB, LOW); //Disengage the Brake for Channel B analogWrite(pwmB, 128); //Spins the motor on Channel B at full speed delay(2000); digitalWrite(brakeA, HIGH); digitalWrite(brakeB, HIGH); delay(1000); Once you have uploaded the code, the program should run in a loop. If you want to run your robot without connecting to the computer, you'll need to add a battery to power the Galileo. The Galileo will need at least 2 Amps, but you might want to consider providing 3 Amps or more based on your project. To supply this from a battery, you can use one of several different choices. My personal favorite is to use an emergency cell phone charging battery, like this: If you are going to use this, you'll need a USB-to-2.1 mm DC plug cable, available at most online stores. Once you have uploaded the code, you can disconnect the computer, then press the reset button. Your robot can move all by itself! Summary By now, you should be feeling a bit more comfortable with configuring Hardware and writing code for the Galileo. This example is fun, and provides you with a moving platform. Resources for Article: Further resources on this subject: The Raspberry Pi And Raspbian? [article] Raspberry Pi Gaming Operating Systems [article] Clusters Parallel Computing And Raspberry Pi- Brief Background [article]

article-image-yarn-and-hadoop
Packt
27 Feb 2015
8 min read

YARN and Hadoop

In this article, by the authors, Amol Fasale and Nirmal Kumar, of the book, YARN Essentials, you will learn about what YARN is and how it's implemented with Hadoop. YARN. YARN stands for Yet Another Resource Negotiator. YARN is a generic resource platform to manage resources in a typical cluster. YARN was introduced with Hadoop 2.0, which is an open source distributed processing framework from the Apache Software Foundation. In 2012, YARN became one of the subprojects of the larger Apache Hadoop project. YARN is also coined by the name of MapReduce 2.0. This is since Apache Hadoop MapReduce has been re-architectured from the ground up to Apache Hadoop YARN. Think of YARN as a generic computing fabric to support MapReduce and other application paradigms within the same Hadoop cluster; earlier, this was limited to batch processing using MapReduce. This really changed the game to recast Apache Hadoop as a much more powerful data processing system. With the advent of YARN, Hadoop now looks very different compared to the way it was only a year ago. YARN enables multiple applications to run simultaneously on the same shared cluster and allows applications to negotiate resources based on need. Therefore, resource allocation/management is central to YARN. YARN has been thoroughly tested at Yahoo! since September 2012. It has been in production across 30,000 nodes and 325 PB of data since January 2013. Recently, Apache Hadoop YARN won the Best Paper Award at ACM Symposium on Cloud Computing (SoCC) in 2013! (For more resources related to this topic, see here.) The redesign idea Initially, Hadoop was written solely as a MapReduce engine. Since it runs on a cluster, its cluster management components were also tightly coupled with the MapReduce programming paradigm. The concepts of MapReduce and its programming paradigm were so deeply ingrained in Hadoop that one could not use it for anything else except MapReduce. MapReduce therefore became the base for Hadoop, and as a result, the only thing that could be run on Hadoop was a MapReduce job, batch processing. In Hadoop 1.x, there was a single JobTracker service that was overloaded with many things such as cluster resource management, scheduling jobs, managing computational resources, restarting failed tasks, monitoring TaskTrackers, and so on. There was definitely a need to separate the MapReduce (specific programming model) part and the resource management infrastructure in Hadoop. YARN was the first attempt to perform this separation. Limitations of the classical MapReduce or Hadoop 1.x The main limitations of Hadoop 1.x can be categorized into the following areas: Limited scalability: Large Hadoop clusters reported some serious limitations on scalability. This is caused mainly by a single JobTracker service, which ultimately results in a serious deterioration of the overall cluster performance because of attempts to re-replicate data and overload live nodes, thus causing a network flood. According to Yahoo!, the practical limits of such a design are reached with a cluster of ~5,000 nodes and 40,000 tasks running concurrently. Therefore, it is recommended that you create smaller and less powerful clusters for such a design. Low cluster resource utilization: The resources in Hadoop 1.x on each slave node (data node), are divided in terms of a fixed number of map and reduce slots. Consider the scenario where a MapReduce job has already taken up all the available map slots and now wants more new map tasks to run. 
In this case, it cannot run new map tasks, even though all the reduce slots are still empty. This notion of a fixed number of slots has a serious drawback and results in poor cluster utilization. Lack of support for alternative frameworks/paradigms: The main focus of Hadoop right from the beginning was to perform computation on large datasets using parallel processing. Therefore, the only programming model it supported was MapReduce. With the current industry needs in terms of new use cases in the world of big data, many new and alternative programming models (such Apache Giraph, Apache Spark, Storm, Tez, and so on) are coming into the picture each day. There is definitely an increasing demand to support multiple programming paradigms besides MapReduce, to support the varied use cases that the big data world is facing. YARN as the modern operating system of Hadoop The MapReduce programming model is, no doubt, great for many applications, but not for everything in the world of computation. There are use cases that are best suited for MapReduce, but not all. MapReduce is essentially batch-oriented, but support for real-time and near real-time processing are the emerging requirements in the field of big data. YARN took cluster resource management capabilities from the MapReduce system so that new engines could use these generic cluster resource management capabilities. This lightened up the MapReduce system to focus on the data processing part, which it is good at and will ideally continue to be so. YARN therefore turns into a data operating system for Hadoop 2.0, as it enables multiple applications to coexist in the same shared cluster. Refer to the following figure: YARN as a modern OS for Hadoop What are the design goals for YARN This section talks about the core design goals of YARN: Scalability: Scalability is a key requirement for big data. Hadoop was primarily meant to work on a cluster of thousands of nodes with commodity hardware. Also, the cost of hardware is reducing year-on-year. YARN is therefore designed to perform efficiently on this network of a myriad of nodes. High cluster utilization: In Hadoop 1.x, the cluster resources were divided in terms of fixed size slots for both map and reduce tasks. This means that there could be a scenario where map slots might be full while reduce slots are empty, or vice versa. This was definitely not an optimal utilization of resources, and it needed further optimization. YARN fine-grained resources in terms of RAM, CPU, and disk (containers), leading to an optimal utilization of the available resources. Locality awareness: This is a key requirement for YARN when dealing with big data; moving computation is cheaper than moving data. This helps to minimize network congestion and increase the overall throughput of the system. Multitenancy: With the core development of Hadoop at Yahoo, primarily to support large-scale computation, HDFS also acquired a permission model, quotas, and other features to improve its multitenant operation. YARN was therefore designed to support multitenancy in its core architecture. Since cluster resource allocation/management is at the heart of YARN, sharing processing and storage capacity across clusters was central to the design. YARN has the notion of pluggable schedulers and the Capacity Scheduler with YARN has been enhanced to provide a flexible resource model, elastic computing, application limits, and other necessary features that enable multiple tenants to securely share the cluster in an optimized way. 
Support for programming model: The MapReduce programming model is no doubt great for many applications, but not for everything in the world of computation. As the world of big data is still in its inception phase, organizations are heavily investing in R&D to develop new and evolving frameworks to solve a variety of problems that big data brings. A flexible resource model: Besides mismatch with the emerging frameworks’ requirements, the fixed number of slots for resources had serious problems. It was straightforward for YARN to come up with a flexible and generic resource management model. A secure and auditable operation: As Hadoop continued to grow to manage more tenants with a myriad of use cases across different industries, the requirements for isolation became more demanding. Also, the authorization model lacked strong and scalable authentication. This is because Hadoop was designed with parallel processing in mind, with no comprehensive security. Security was an afterthought. YARN understands this and adds security-related requirements into its design. Reliability/availability: Although fault tolerance is in the core design, in reality maintaining a large Hadoop cluster is a tedious task. All issues related to high availability, failures, failures on restart, and reliability were therefore a core requirement for YARN. Backward compatibility: Hadoop 1.x has been in the picture for a while, with many successful production deployments across many industries. This massive installation base of MapReduce applications and the ecosystem of related projects, such as Hive, Pig, and so on, would not tolerate a radical redesign. Therefore, the new architecture reused as much code from the existing framework as possible, and no major surgery was conducted on it. This made MRv2 able to ensure satisfactory compatibility with MRv1 applications. Summary In this article, you learned what YARN is and how it has turned out to be the modern operating system for Hadoop, making it a multiapplication platform. Resources for Article: Further resources on this subject: Sizing and Configuring your Hadoop Cluster [article] Hive in Hadoop [article] Evolution of Hadoop [article]

Packt
27 Feb 2015
25 min read

Putting It All Together – Community Radio

In this article by Andy Matthews, author of the book Creating Mobile Apps with jQuery Mobile, Second Edition, we will see a website where listeners will be greeted with music from local, independent bands across several genres and geographic regions. Building this will take many of the skills, and we'll pepper in some new techniques that can be used in this new service. Let's see what technology and techniques we could bring to bear on this venture. In this article, we will cover: A taste of Balsamiq Organizing your code An introduction to the Web Audio API Prompting the user to install your app New device-level hardware access To app or not to app Three good reasons for compiling an app (For more resources related to this topic, see here.) A taste of Balsamiq Balsamiq (http://www.balsamiq.com/) is a very popular User Experience (UX) tool for rapid prototyping. It is perfect for creating and sharing interactive mockups: When I say very popular, I mean lots of major names that you're used to seeing. Over 80,000 companies create their software with the help of Balsamiq Mockups. So, let's take a look at what the creators of a community radio station might have in mind. They might start with a screen which looks like this; a pretty standard implementation. It features an icon toolbar at the bottom and a listview element in the content: Ideally, we'd like to keep this particular implementation as pure HTML/JavaScript/CSS. That way, we could compile it into a native app at some point, using PhoneGap. However, we'd like to stay true to the Don't Repeat Yourself (DRY) principle. That means, that we're going to want to inject this footer onto every page without using a server-side process. To that end, let's set up a hidden part of our app to contain all the global elements that we may want: <div id="globalComponents">   <div data-role="navbar" class="bottomNavBar">       <ul>           <li><a data-icon="music" href="#stations_by_region" data-transition="slideup">stations</a></li>         <li><a data-icon="search" href="#search_by_artist" data-transition="slideup">discover</a></li>           <li><a data-icon="calendar" href="#events_by_location" data-transition="slideup">events</a></li>           <li><a data-icon="gear" href="#settings" data-transition="slideup">settings</a></li>       </ul>   </div></div> We'll keep this code at the bottom of the page and hide it with a simple CSS rule in the stylesheet, #globalComponents{display:none;}. Now, we'll insert this global footer into each page, just before they are created. Using the clone() method (shown in the next code snippet) ensures that not only are we pulling over a copy of the footer, but also any data attached with it. In this way, each page is built with the exact same footer, just like it is in a server-side include. When the page goes through its normal initialization process, the footer will receive the same markup treatment as the rest of the page: /************************* The App************************/var radioApp = {universalPageBeforeCreate:function(){   var $page = $(this);   if($page.find(".bottomNavBar").length == 0){     $page.append($("#globalComponents .bottomNavBar").clone());   }}}/************************* The Events************************///Interface Events$(document).on("pagebeforecreate", "[data- "role="page"]",radioApp.universalPageBeforeCreate); Look at what we've done here in this piece of JavaScript code. We're actually organizing our code a little more effectively. 
Organizing your code I believe in a very pragmatic approach to coding, which leads me to use more simple structures and a bare minimum of libraries. However, there are values and lessons to be learned out there. MVC, MVVM, MV* For the last couple of years, serious JavaScript developers have been bringing backend development structures to the web, as the size and scope of their project demanded a more regimented approach. For highly ambitious, long-lasting, in-browser apps, this kind of structured approach can help. This is even truer if you're on a larger team. MVC stands for Model-View-Controller ( see http://en.wikipedia.org/wiki/Model–view–controller), MVVM is for Model View ViewModel (see http://en.wikipedia.org/wiki/Model_View_ViewModel), and MV* is shorthand for Model View Whatever and is the general term used to sum up this entire movement of bringing these kinds of structures to the frontend. Some of the more popular libraries include: Backbone.JS (http://backbonejs.org/): An adapter and sample of how to make Backbone play nicely with jQuery Mobile can be found at http://demos.jquerymobile.com/1.4.5/backbone-requirejs. Ember (http://emberjs.com/): An example for Ember can be found at https://github.com/LuisSala/emberjs-jqm. AngularJS (https://angularjs.org/): Angular also has adapters for jQM in progress. There are several examples at https://github.com/tigbro/jquery-mobile-angular-adapter. Knockout: (http://knockoutjs.com/). A very nice comparison of these, and more, is at http://readwrite.com/2014/02/06/angular-backbone-ember-best-javascript-framework-for-you. MV* and jQuery Mobile Yes, you can do it!! You can add any one of these MV* frameworks to jQuery Mobile and make as complex an app as you like. Of them all, I lean toward the Ember platform for desktop and Angular for jQuery Mobile. However, I'd like to propose another alternative. I'm not going to go in-depth into the concepts behind MVC frameworks. Ember, Angular, and Backbone, all help you to separate the concerns of your application into manageable pieces, offering small building blocks with which to create your application. But, we don't need yet another library/framework to do this. It is simple enough to write code in a more organized fashion. Let's create a structure similar to what I've started before: //JavaScript Document/******************** The Application*******************//******************** The Events*******************//******************** The Model*******************/ The application Under the application section, let's fill in some of our app code and give it a namespace. Essentially, namespacing is taking your application-specific code and putting it into its own named object, so that the functions and variables won't collide with other potential global variables and functions. It keeps you from polluting the global space and helps preserve your code from those who are ignorant regarding your work. Granted, this is JavaScript and people can override anything they wish. However, this also makes it a whole lot more intentional to override something like the radioApp.getStarted function, than simply creating your own function called getStarted. Nobody is going to accidentally override a namespaced function. 
/******************** The application*******************/var radioApp = {settings:{   initialized:false,   geolocation:{     latitude:null,     longitude:null,   },   regionalChoice:null,   lastStation:null},getStarted:function(){   location.replace("#initialize");},fireCustomEvent:function(){   var $clicked = $(this);   var eventValue = $clicked.attr("data-appEventValue");   var event = new jQuery.Event($(this).attr("data-appEvent"));   if(eventValue){ event.val = eventValue; }   $(window).trigger(event);},otherMethodsBlahBlahBlah:function(){}} Pay attention, in particular, to the fireCustomEvent. function With that, we can now set up an event management system. At its core, the idea is pretty simple. We'd like to be able to simply put tag attributes on our clickable objects and have them fire events, such as all the MV* systems. This fits the bill perfectly. It would be quite common to set up a click event handler on a link, or something, to catch the activity. This is far simpler. Just an attribute here and there and you're wired in. The HTML code becomes more readable too. It's easy to see how declarative this makes your code: <a href="javascript://" data-appEvent="playStation" data- appEventValue="country">Country</a> The events Now, instead of watching for clicks, we're listening for events. You can have as many parts of your app as you like registering themselves to listen for the event, and then execute appropriately. As we fill out more of our application, we'll start collecting a lot of events. Instead of letting them get scattered throughout multiple nested callbacks and such, we'll be keeping them all in one handy spot. In most JavaScript MV* frameworks, this part of the code is referred to as the Router. Hooked to each event, you will see nothing but namespaced application calls: /******************** The events*******************///Interface events$(document).on("click", "[data-appEvent]",radioApp.fireCustomEvent);"$(document).on("pagecontainerbeforeshow","[data-role="page"]",radioApp.universalPageBeforeShow);"$(document).on("pagebeforecreate","[data-role="page"]",radioApp.universalPageBeforeCreate);"$(document).on("pagecontainershow", "#initialize",radioApp.getLocation);"$(document).on("pagecontainerbeforeshow", "#welcome",radioApp.initialize);//Application events$(window).on("getStarted",radioApp.getStarted);$(window).on("setHomeLocation",radioApp.setHomeLocation);$(window).on("setNotHomeLocation",radioApp.setNotHomeLocation);$(window).on("playStation",radioApp.playStation); Notice the separation of concerns into interface events and application events. We're using this as a point of distinction between events that are fired as a result of natural jQuery Mobile events (interface events), and events that we have thrown (application events). This may be an arbitrary distinction, but for someone who comes along later to maintain your code, this could come in handy. The model The model section contains the data for your application. This is typically the kind of data that is pulled in from your backend APIs. It's probably not as important here, but it never hurts to namespace what's yours. Here, we have labeled our data as the modelData label. 
Any information we pull in from the APIs can be dumped right into this object, like we've done here with the station data:

/********************
The Model
*******************/
var modelData = {
  station:{
    genres:[
      {
        display:"Seattle Grunge",
        genreId:12,
        genreParentId:1
      }
    ],
    metroIds:[14,33,22,31],
    audioIds:[55,43,26,23,11]
  }
}

Pair this style of programming with client-side templating, and you'll be looking at some highly maintainable, well-structured code. However, there are some features that are still missing. Typically, these frameworks will also provide bindings for your templates. This means that you only have to render the templates once. After that, simply updating your model object will be enough to cause the UI to update itself. The problem with these bound templates is that they update the HTML in a way that would be perfect for a desktop application. But remember, jQuery Mobile does a lot of DOM manipulation to make things happen. In jQuery Mobile, a listview element starts like this:

<ul data-role="listview" data-inset="true">
  <li><a href="#stations">Local Stations</a></li>
</ul>

After the normal DOM manipulation, you get this:

<ul data-role="listview" data-inset="true" data-theme="b" style="margin-top:0" class="ui-listview ui-listview-inset ui-corner-all ui-shadow">
  <li data-corners="false" data-shadow="false" data-iconshadow="true" data-wrapperels="div" data-icon="arrow-r" data-iconpos="right" data-theme="b" class="ui-btn ui-btn-icon-right ui-li-has-arrow ui-li ui-corner-top ui-btn-up-b">
    <div class="ui-btn-inner ui-li ui-corner-top">
      <div class="ui-btn-text">
        <a href="#stations" class="ui-link-inherit">Local Stations</a>
      </div>
      <span class="ui-icon ui-icon-arrow-r ui-icon-shadow">&nbsp;</span>
    </div>
  </li>
</ul>

And that's just a single list item. You really don't want to include all that junk in your templates; so what you need to do is just add your usual items to the listview element and then call the .listview("refresh") function. Even if you're using one of the MV* systems, you'll still have to either find, or write, an adapter that will refresh the listviews when something is added or deleted. With any luck, these kinds of things will be solved at the platform level soon. Until then, using a real MV* system with jQM will be a pain in the posterior. Introduction to the Web Audio API The Web Audio API is a fairly new development and, at the time of writing this, only existed within the mobile space on Mobile Safari and Chrome for Android (http://caniuse.com/#feat=audio-api). The Web Audio API is available on the latest versions of desktop Chrome, Safari, and Firefox, so you can still do your initial test coding there. It's only a matter of time before this is built into other major platforms. Most of the code for this part of the project, and the full explanation of the API, can be found at http://tinyurl.com/webaudioapi2014. Let's use feature detection to branch our capabilities:

function init() {
  if("webkitAudioContext" in window) {
    context = new webkitAudioContext();
    // an analyser node is used for the spectrum
    analyzer = context.createAnalyser();
    analyzer.smoothingTimeConstant = 0.85;
    analyzer.connect(context.destination);
    fetchNextSong();
  } else {
    //do the old stuff
  }
}

The original code for this page was designed to kick off simultaneous downloads for every song in the queue. With a fat connection, this would probably be OK. Not so much on mobile.
Because of the limited connectivity and bandwidth, it would be better to just chain downloads to ensure a better experience and a more respectful use of bandwidth: function fetchNextSong() {var request = new XMLHttpRequest();var nextSong = songs.pop();if(nextSong){   request = new XMLHttpRequest();   // the underscore prefix is a common naming convention   // to remind us that the variable is developer-supplied   request._soundName = nextSong;   request.open("GET", PATH + request._soundName + ".mp3", " true);   request.responseType = "arraybuffer";   request.addEventListener("load", bufferSound, false);   request.send();}} Now, the bufferSound function just needs to call the fetchNextSong function after buffering, as shown in the following code snippet: function bufferSound(event) {   var request = event.target;   context.decodeAudioData(request.response, function onSuccess(decodedBuffer) {       myBuffers.push(decodedBuffer);       fetchNextSong();   }, function onFailure() {       alert("Decoding the audio buffer failed");   });} One last thing we need to change from the original, is telling the buffer to pull the songs in the order that they were inserted: function playSound() {   // create a new AudioBufferSourceNode   var source = context.createBufferSource();   source.buffer = myBuffers.shift();  source.loop = false;   source = routeSound(source);   // play right now (0 seconds from now)   // can also pass context.currentTime   source.start(0); mySpectrum = setInterval(drawSpectrum, 30);   mySource = source;} For anyone on iOS, this solution is pretty nice. There is a lot more to this API for those who want to dig in. With this out-of-the-box example, you get a nice canvas-based audio analyzer that gives you a very nice professional look, as the audio levels bounce to the music. Slider controls are used to change the volume, the left-right balance, and the high-pass filter. If you don't know what a high-pass filter is, don't worry; I think that filter's usefulness went the way of the cassette deck. Regardless, it's fun to play with: The Web Audio API is a very serious piece of business. This example was adapted from the example on Apple's site. It only plays one sound. However, the Web Audio API was designed with the idea of making it possible to play multiple sounds, alter them in multiple ways, and even dynamically generate sounds using JavaScript. In the meantime, if you want to see this proof of concept in jQuery Mobile, you will find it in the example source in the webaudioapi.html file. For an even deeper look at what is coming, you can check the docs at https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html. Prompting the user to install your app Now, let's take a look at how we can prompt our users to download the Community Radio app to their home screens. It is very likely that you've seen it before; it's the little bubble that pops up and instructs the user with the steps to install the app. There are many different projects out there, but the best one that I have seen is a derivative of the one started by Google. Much thanks and respect to Okamototk on GitHub (https://github.com/okamototk) for taking and improving it. Okamototk evolved the bubble to include several versions of Android, legacy iOS, and even BlackBerry. You can find his original work at https://github.com/okamototk/jqm-mobile-bookmark-bubble. However, unless you can read Japanese, or enjoy translating it. Don't worry about annoying your customers too much. 
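The politeness comes from a little LocalStorage bookkeeping. The snippet below is only a sketch of that idea — the key name and helper functions are assumptions, not the plugin's actual internals:

// Sketch of a dismissal counter; not the plugin's real code.
function bubbleDismissCount(){
    return parseInt(localStorage.getItem("bookmarkBubbleDismissed") || "0", 10);
}
function recordBubbleDismissal(){
    localStorage.setItem("bookmarkBubbleDismissed", bubbleDismissCount() + 1);
}
function shouldShowBubble(){
    return bubbleDismissCount() < 3;
}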
With this version, if they dismiss the bookmarking bubble three times, they won't see it again. The count is stored in HTML5 LocalStorage; so, if they clear out the storage, they'll see the bubble again. Thankfully, most people out there don't even know that can be done, so it won't happen very often. Usually, it's geeks like us that clear things like LocalStorage and cookies, and we know what we're getting into when we do it. In my edition of the code, I've combined all the JavaScript into a single file meant to be placed between your import of jQuery and jQuery Mobile. At the top, the first non-commented line is: page_popup_bubble="#welcome"; This is what you would change to be your own first page, or where you want the bubble to pop up. In my version, I have hardcoded the font color and text shadow properties into the bubble. This was needed, because in jQM the font color and text shadow color vary, based on the theme you're using. Consequently, in jQuery Mobile's original default A theme (white text on a black background), the font was showing up as white with a dark shadow on top of a white bubble. With my modified version, for older jQuery Mobile versions, it will always look right. We just need to be sure we've set up our page with the proper links in the head, and that our images are in place: <link rel="apple-touch-icon-precomposed" sizes="144x144" href="images/album144.png"><link rel="apple-touch-icon-precomposed" sizes="114x114" href="images/album114.png"><link rel="apple-touch-icon-precomposed" sizes="72x72" href="images/album72.png"><link rel="apple-touch-icon-precomposed" href="images/album57.png"><link rel="shortcut icon" href="img/images/album144.png"> Note the Community Radio logo here. The logo is pulled from our link tags marked with rel="apple-touch-icon-precomposed" and injected into the bubble. So, really, the only thing in the jqm_bookmark_bubble.js file that you would need to alter is the page_popup_bubble function. New device-level hardware access New kinds of hardware-level access are coming to our mobile browsers every year. Here is a look at some of what you can start doing now, and what's on the horizon. Not all of these are applicable to every project, but if you think creatively, you can probably find innovative ways to use them. Accelerometers Accelerometers are the little doo-dads inside your phone that measure the phone's orientation in space. To geek out on this, read http://en.wikipedia.org/wiki/Accelerometer. This goes beyond the simple orientation we've been using. This is true access to the accelerometers, in detail. Think about the user being able to shake their device, or tilting it as a method of interaction with your app. Maybe, Community Radio is playing something they don't like and we can give them a fun way to rage against the song. Something such as, shake a song to never hear it again. Here is a simple marble rolling game somebody made as a proof of concept. See http://menscher.com/teaching/woaa/examples/html5_accelerometer.html. Camera Apple's iOS 8 and Android's Lollipop can both access photos on their filesystems as well as the cameras. Granted, these are the latest and greatest versions of these two platforms. 
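Going back to the accelerometer idea for a moment, the shake-to-skip interaction described above boils down to listening for devicemotion events. This is only a sketch — the threshold value and the skipCurrentSong method are assumptions, not code from the Community Radio app:

// Sketch: skip the current song when the device is shaken hard enough.
var lastMagnitude = 0;
window.addEventListener("devicemotion", function(e){
    var a = e.accelerationIncludingGravity;
    var magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    if (Math.abs(magnitude - lastMagnitude) > 25) { // arbitrary threshold
        radioApp.skipCurrentSong(); // hypothetical method
    }
    lastMagnitude = magnitude;
}, false);

Now, back to the camera.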
If you intend to support the many woefully out of date Android devices (2.3, 2.4) that are still being sold off the shelves as if brand new, then you're going to want to go with a native compilation such as PhoneGap or Apache Cordova to get that capability: <input type="file" accept="image/*"><input type="file" accept="video/*"> The following screenshot has iOS to the left and Android to the right: APIs on the horizon Mozilla is doing a lot to push the mobile web API envelope. You can check out what's on the horizon here: https://wiki.mozilla.org/WebAPI. To app or not to app, that is the question Should you or should you not compile your project into a native app? Here are some things to consider. Raining on the parade (take this seriously) When you compile your first project into an app, there is a certain thrill that you get. You did it! You made a real app! It is at this point that we need to remember the words of Dr. Ian Malcolm from the movie Jurassic Park (Go watch it again. I'll wait): "You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now [bangs on the table] you're selling it, you wanna sell it. Well... your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."                                                                                                  – Dr. Ian Malcolm These words are very close to prophetic for us. In the end, their own creation ate most of the guests for lunch. According to this report from August 2012 http://www.webpronews.com/over-two-thirds-of-the-app-store-has-never-been-downloaded-2012-08 (and several others like it that I've seen before), over two-thirds of all apps on the app stores have never been downloaded. Not even once! So, realistically, app stores are where most projects go to die. Even if your app is discovered, the likelihood that anyone will use it for any significant period of time is astonishingly small. According to this article in Forbes (http://tech.fortune.cnn.com/2009/02/20/the-half-life-of-an-iphone-app/), most apps are abandoned in the space of minutes and never opened again. Paid apps last about twice as long, before either being forgotten or removed. Games have some staying power, but let's be honest, jQuery Mobile isn't exactly a compelling gaming platform, is it?? The Android world is in terrible shape. Devices can still be purchased running ancient versions of the OS, and carriers and hardware partners are not providing updates to them in anything even resembling a timely fashion. If you want to monitor the trouble you could be bringing upon yourself by embracing a native strategy, look here: http://developer.android.com/about/dashboards/index.html: You can see how fractured the Android landscape is, as well as how many older versions you'll probably have to support. On the flip side, if you're publishing strictly to the web, then every time your users visit your site, they'll be on the latest edition using the latest APIs, and you'll never have to worry about somebody using some out-of-date version. Do you have a security patch you need to apply? You can do it in seconds. If you're on the Apple app store, this patch could take days or even weeks. Three good reasons for compiling an app Yes, I know I just finished telling you about your slim chances of success and the fire and brimstone you will face for supporting apps. 
However, here are a few good reasons to make a real app. In fact, in my opinion, they're the only acceptable reasons. The project itself is the product This is the first and only sure sign that you need to package your project as an app. I'm not talking about selling things through your project. I'm talking about the project itself. It should be made into an app. May the force be with you. Access to native only hardware capabilities GPS and camera are reliably available for the two major platforms in their latest editions. iOS even supports accelerometers. However, if you're looking for more than this, you'll need to compile down to an app to get access to these APIs. Push notifications Do you like them? I don't know about you, but I get way too many push notifications; any app that gets too pushy either gets uninstalled or its notifications are completely turned off. I'm not alone in this. However, if you simply must have push notifications and can't wait for the web-based implementation, you'll have to compile an app. Supporting current customers Ok, this one is a stretch, but if you work in corporate America, you're going to hear it. The idea is that you're an established business and you want to give mobile support to your clients. You or someone above you has read a few whitepapers and/or case studies that show that almost 50 percent of people search in the app stores first. Even if that were true (which I'm still not sold on), you're talking to a businessperson. They understand money, expenses, and escalated maintenance. Once you explain to them the cost, complexity, and potential ongoing headaches of building and testing for all the platforms and their OS versions in the wild, it becomes a very appealing alternative to simply put out a marketing push to your current customers that you're now supporting mobile, and all they have to do is go to your site on their mobile device. Marketing folks are always looking for reasons to toot their horns at customers anyway. Marketing might still prefer to have the company icon on the customer's device to reinforce brand loyalty, but this is simply a matter of educating them that it can be done without an app. You still may not be able to convince all the right people that apps are the wrong way to go when it comes to customer support. If you can't do it on your own, slap them on their heads with a little Jakob Nielson. If they won't listen to you, maybe they'll listen to him. I would defy anyone who says that the Nielsen Norman Group doesn't know what they're saying. See http://www.nngroup.com/articles/mobile-sites-vs-apps-strategy-shift/ for the following quote: "Summary: Mobile apps currently have better usability than mobile sites, but forthcoming changes will eventually make a mobile site the superior strategy." So the $64,000 question becomes: are we making something for right now or for the future? If we're making it for right now, what are the criteria that should mark the retirement of the native strategy? Or do we intend to stay locked on it forever? Don't go into that war without an exit strategy. Summary I don't know about you, but I'm exhausted. I really don't think there's any more that can be said about jQuery Mobile, or its supporting technologies at this time. You've got examples on how to build things for a whole host of industries, and ways to deploy it through the web. At this point, you should be quoting Bob the Builder. Can we build it? Yes, we can! 
I hope this article has assisted and/or inspired you to go and make something great. I hope you change the world and get filthy stinking rich doing it. I'd love to hear your success stories as you move forward. To let me know how you're doing, or to let me know of any errata, or even if you just have some questions, please don't hesitate to email us directly at http://andy@commadelimited.com. Now, go be awesome! Resources for Article: Further resources on this subject: Tips and Tricks for Working with jQuery and WordPress [article] Building a Custom Version of jQuery [article] jQuery UI 1.8: The Accordion Widget [article]

Applications of WebRTC

Packt
27 Feb 2015
20 min read
This article is by Andrii Sergiienko, the author of the book WebRTC Cookbook. WebRTC is a relatively new and revolutionary technology that opens new horizons in the area of interactive applications and services. Most of the popular web browsers support it natively (such as Chrome and Firefox) or via extensions (such as Safari). Mobile platforms such as Android and iOS allow you to develop native WebRTC applications. In this article, we will cover the following recipes: Creating a multiuser conference using WebRTCO Taking a screenshot using WebRTC Compiling and running a demo for Android (For more resources related to this topic, see here.) Creating a multiuser conference using WebRTCO In this recipe, we will create a simple application that supports a multiuser videoconference. We will do it using WebRTCO—an open source JavaScript framework for developing WebRTC applications. Getting ready For this recipe, you should have a web server installed and configured. The application we will create can work while running on the local filesystem, but it is more convenient to use it via the web server. To create the application, we will use the signaling server located on the framework's homepage. The framework is open source, so you can download the signaling server from GitHub and install it locally on your machine. GitHub's page for the project can be found at https://github.com/Oslikas/WebRTCO. How to do it… The following recipe is built on the framework's infrastructure. We will use the framework's signaling server. What we need to do is include the framework's code and do some initialization procedure: Create an HTML file and add common HTML heads: <!DOCTYPE html> <html lang="en"> <head>     <meta charset="utf-8"> Add some style definitions to make the web page looking nicer:     <style type="text/css">         video {             width: 384px;             height: 288px;             border: 1px solid black;             text-align: center;         }         .container {             width: 780px;             margin: 0 auto;         }     </style> Include the framework in your project: <script type="text/javascript" src ="https://cdn.oslikas.com/js/WebRTCO-1.0.0-beta-min.js"charset="utf-8"></script></head> Define the onLoad function—it will be called after the web page is loaded. In this function, we will make some preliminary initializing work: <body onload="onLoad();"> Define HTML containers where the local video will be placed: <div class="container">     <video id="localVideo"></video> </div> Define a place where the remote video will be added. Note that we don't create HTML video objects, and we just define a separate div. 
Further, video objects will be created and added to the page by the framework automatically: <div class="container" id="remoteVideos"></div> <div class="container"> Create the controls for the chat area: <div id="chat_area" style="width:100%; height:250px;overflow: auto; margin:0 auto 0 auto; border:1px solidrgb(200,200,200); background: rgb(250,250,250);"></div></div><div class="container" id="div_chat_input"><input type="text" class="search-query"placeholder="chat here" name="msgline" id="chat_input"><input type="submit" class="btn" id="chat_submit_btn"onclick="sendChatTxt();"/></div> Initialize a few variables: <script type="text/javascript">     var videoCount = 0;     var webrtco = null;     var parent = document.getElementById('remoteVideos');     var chatArea = document.getElementById("chat_area");     var chatColorLocal = "#468847";     var chatColorRemote = "#3a87ad"; Define a function that will be called by the framework when a new remote peer is connected. This function creates a new video object and puts it on the page:     function getRemoteVideo(remPid) {         var video = document.createElement('video');         var id = 'remoteVideo_' + remPid;         video.setAttribute('id',id);         parent.appendChild(video);         return video;     } Create the onLoad function. It initializes some variables and resizes the controls on the web page. Note that this is not mandatory, and we do it just to make the demo page look nicer:     function onLoad() {         var divChatInput =         document.getElementById("div_chat_input");         var divChatInputWidth = divChatInput.offsetWidth;         var chatSubmitButton =         document.getElementById("chat_submit_btn");         var chatSubmitButtonWidth =         chatSubmitButton.offsetWidth;         var chatInput =         document.getElementById("chat_input");         var chatInputWidth = divChatInputWidth -         chatSubmitButtonWidth - 40;         chatInput.setAttribute("style","width:" +         chatInputWidth + "px");         chatInput.style.width = chatInputWidth + 'px';         var lv = document.getElementById("localVideo"); Create a new WebRTCO object and start the application. After this point, the framework will start signaling connection, get access to the user's media, and will be ready for income connections from remote peers: webrtco = new WebRTCO('wss://www.webrtcexample.com/signalling',lv, OnRoomReceived, onChatMsgReceived, getRemoteVideo, OnBye);}; Here, the first parameter of the function is the URL of the signaling server. In this example, we used the signaling server provided by the framework. However, you can install your own signaling server and use an appropriate URL. The second parameter is the local video object ID. Then, we will supply functions to process messages of received room, received message, and received remote video stream. The last parameter is the function that will be called when some of the remote peers have been disconnected. The following function will be called when the remote peer has closed the connection. It will remove video objects that became outdated:     function OnBye(pid) {         var video = document.getElementById("remoteVideo_"         + pid);         if (null !== video) video.remove();     }; We also need a function that will create a URL to share with other peers in order to make them able to connect to the virtual room. 
The following piece of code represents such a function: function OnRoomReceived(room) {addChatTxt("Now, if somebody wants to join you,should use this link: <ahref=""+window.location.href+"?room="+room+"">"+window.location.href+"?room="+room+"</a>",chatColorRemote);}; The following function prints some text in the chat area. We will also use it to print the URL to share with remote peers:     function addChatTxt(msg, msgColor) {         var txt = "<font color=" + msgColor + ">" +         getTime() + msg + "</font><br/>";         chatArea.innerHTML = chatArea.innerHTML + txt;         chatArea.scrollTop = chatArea.scrollHeight;     }; The next function is a callback that is called by the framework when a peer has sent us a message. This function will print the message in the chat area:     function onChatMsgReceived(msg) {         addChatTxt(msg, chatColorRemote);     }; To send messages to remote peers, we will create another function, which is represented in the following code:     function sendChatTxt() {         var msgline =         document.getElementById("chat_input");         var msg = msgline.value;         addChatTxt(msg, chatColorLocal);         msgline.value = '';         webrtco.API_sendPutChatMsg(msg);     }; We also want to print the time while printing messages; so we have a special function that formats time data appropriately:     function getTime() {         var d = new Date();         var c_h = d.getHours();         var c_m = d.getMinutes();         var c_s = d.getSeconds();           if (c_h < 10) { c_h = "0" + c_h; }         if (c_m < 10) { c_m = "0" + c_m; }         if (c_s < 10) { c_s = "0" + c_s; }         return c_h + ":" + c_m + ":" + c_s + ": ";     }; We have some helper code to make our life easier. We will use it while removing obsolete video objects after remote peers are disconnected:     Element.prototype.remove = function() {         this.parentElement.removeChild(this);     }     NodeList.prototype.remove =     HTMLCollection.prototype.remove = function() {         for(var i = 0, len = this.length; i < len; i++) {             if(this[i] && this[i].parentElement) {                 this[i].parentElement.removeChild(this[i]);             }         }     } </script> </body> </html> Now, save the file and put it on the web server, where it could be accessible from web browser. How it works… Open a web browser and navigate to the place where the file is located on the web server. You will see an image from the web camera and a chat area beneath it. At this stage, the application has created the WebRTCO object and initiated the signaling connection. If everything is good, you will see an URL in the chat area. Open this URL in a new browser window or on another machine—the framework will create a new video object for every new peer and will add it to the web page. The number of peers is not limited by the application. In the following screenshot, I have used three peers: two web browser windows on the same machine and a notebook as the third peer: Taking a screenshot using WebRTC Sometimes, it can be useful to be able to take screenshots from a video during videoconferencing. In this recipe, we will implement such a feature. Getting ready No specific preparation is necessary for this recipe. You can take any basic WebRTC videoconferencing application. We will add some code to the HTML and JavaScript parts of the application. How to do it… Follow these steps: First of all, add image and canvas objects to the web page of the application. 
We will use these objects to take screenshots and display them on the page: <img id="localScreenshot" src=""> <canvas style="display:none;" id="localCanvas"></canvas> Next, you have to add a button to the web page. After clicking on this button, the appropriate function will be called to take the screenshot from the local stream video: <button onclick="btn_screenshot()" id="btn_screenshot">Make a screenshot</button> Finally, we need to implement the screenshot taking function: function btn_screenshot() { var v = document.getElementById("localVideo"); var s = document.getElementById("localScreenshot"); var c = document.getElementById("localCanvas"); var ctx = c.getContext("2d"); Draw an image on the canvas object—the image will be taken from the video object: ctx.drawImage(v,0,0); Now, take reference of the canvas, convert it to the DataURL object, and insert the value into the src option of the image object. As a result, the image object will show us the taken screenshot: s.src = c.toDataURL('image/png'); } That is it. Save the file and open the application in a web browser. Now, when you click on the Make a screenshot button, you will see the screenshot in the appropriate image object on the web page. You can save the screenshot to the disk using right-click and the pop-up menu. How it works… We use the canvas object to take a frame of the video object. Then, we will convert the canvas' data to DataURL and assign this value to the src parameter of the image object. After that, an image object is referred to the video frame, which is stored in the canvas. Compiling and running a demo for Android Here, you will learn how to build a native demo WebRTC application for Android. Unfortunately, the supplied demo application from Google doesn't contain any IDE-specific project files, so you will have to deal with console scripts and commands during all the building process. Getting ready We will need to check whether we have all the necessary libraries and packages installed on the work machine. For this recipe, I used a Linux box—Ubuntu 14.04.1 x64. So all the commands that might be specific for OS will be relevant to Ubuntu. Nevertheless, using Linux is not mandatory and you can take Windows or Mac OS X. If you're using Linux, it should be 64-bit based. Otherwise, you most likely won't be able to compile Android code. Preparing the system First of all, you need to install the necessary system packages: sudo apt-get install git git-svn subversion g++ pkg-config gtk+-2.0libnss3-dev libudev-dev ant gcc-multilib lib32z1 lib32stdc++6 Installing Oracle JDK By default, Ubuntu is supplied with OpenJDK, but it is highly recommended that you install an Oracle JDK. Otherwise, you can face issues while building WebRTC applications for Android. One another thing that you should keep in mind is that you should probably use Oracle JDK version 1.6—other versions (in particular, 1.7 and 1.8) might not be compatible with the WebRTC code base. This will probably be fixed in the future, but in my case, only Oracle JDK 1.6 was able to build the demo successfully. Download the Oracle JDK from its home page at http://www.oracle.com/technetwork/java/javase/downloads/index.html. In case there is no download link on such an old JDK, you can try another URL: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html. Oracle will probably ask you to sign in or register first. You will be able to download anything from their archive. 
Install the downloaded JDK: sudo mkdir –p /usr/lib/jvmcd /usr/lib/jvm && sudo /bin/sh ~/jdk-6u45-linux-x64.bin --noregister Here, I assume that you downloaded the JDK package into the home directory. Register the JDK in the system: sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_45/bin/javac 50000 sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_45/bin/java 50000 sudo update-alternatives --config javac sudo update-alternatives --config java cd /usr/lib sudo ln -s /usr/lib/jvm/jdk1.6.0_45 java-6-sun export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45/ Test the Java version: java -version You should see something like Java HotSpot on the screen—it means that the correct JVM is installed. Getting the WebRTC source code Perform the following steps to get the WebRTC source code: Download and prepare Google Developer Tools:Getting the WebRTC source code mkdir –p ~/dev && cd ~/dev git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git export PATH=`pwd`/depot_tools:"$PATH" Download the WebRTC source code: gclient config http://webrtc.googlecode.com/svn/trunk echo "target_os = ['android', 'unix']" >> .gclient gclient sync The last command can take a couple of minutes (actually, it depends on your Internet connection speed), as you will be downloading several gigabytes of source code. Installing Android Developer Tools To develop Android applications, you should have Android Developer Tools (ADT) installed. This SDK contains Android-specific libraries and tools that are necessary to build and develop native software for Android. Perform the following steps to install ADT: Download ADT from its home page http://developer.android.com/sdk/index.html#download. Unpack ADT to a folder: cd ~/dev unzip ~/adt-bundle-linux-x86_64-20140702.zip Set up the ANDROID_HOME environment variable: export ANDROID_HOME=`pwd`/adt-bundle-linux-x86_64-20140702/sdk How to do it… After you've prepared the environment and installed the necessary system components and packages, you can continue to build the demo application: Prepare Android-specific build dependencies: cd ~/dev/trunk source ./build/android/envsetup.sh Configure the build scripts: export GYP_DEFINES="$GYP_DEFINES build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android"gclient runhooks Build the WebRTC code with the demo application: ninja -C out/Debug -j 5 AppRTCDemo After the last command, you can find the compiled Android packet with the demo application at ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk. Running on the Android simulator Follow these steps to run an application on the Android simulator: Run Android SDK manager and install the necessary Android components: $ANDROID_HOME/tools/android sdk Choose at least Android 4.x—lower versions don't have WebRTC support. In the following screenshot, I've chosen Android SDK 4.4 and 4.2: Create an Android virtual device: cd $ANDROID_HOME/tools ./android avd & The last command executes the Android SDK tool to create and maintain virtual devices. Create a new virtual device using this tool. 
You can see an example in the following screenshot: Start the emulator using just the created virtual device: ./emulator –avd emu1 & This can take a couple of seconds (or even minutes), after that you should see a typical Android device home screen, like in the following screenshot: Check whether the virtual device is simulated and running: cd $ANDROID_HOME/platform-tools ./adb devices You should see something like the following: List of devices attached emulator-5554   device This means that your just created virtual device is OK and running; so we can use it to test our demo application. Install the demo application on the virtual device: ./adb install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk You should see something like the following: 636 KB/s (2507985 bytes in 3.848s) pkg: /data/local/tmp/AppRTCDemo-debug.apk Success This means that the application is transferred to the virtual device and is ready to be started. Switch to the simulator window; you should see the demo application's icon. Execute it like it is a real Android device. In the following screenshot, you can see the installed demo application AppRTC: While trying to launch the application, you might see an error message with a Java runtime exception referring to GLSurfaceView. In this case, you probably need to switch to the Use Host GPU option while creating the virtual device with Android Virtual Device (AVD) tool. Fixing a bug with GLSurfaceView Sometimes if you're using an Android simulator with a virtual device on the ARM architecture, you can be faced with an issue when the application says No config chosen, throws an exception, and exits. This is a known defect in the Android WebRTC code and its status can be tracked at https://code.google.com/p/android/issues/detail?id=43209. The following steps can help you fix this bug in the original demo application: Go to the ~/dev/trunk/talk/examples/android/src/org/appspot/apprtc folder and edit the AppRTCDemoActivity.java file. Look for the following line of code: vsv = new AppRTCGLView(this, displaySize); Right after this line, add the following line of code: vsv.setEGLConfigChooser(8,8,8,8,16,16); You will need to recompile the application: cd ~/dev/trunk ninja -C out/Debug AppRTCDemo  Now you can deploy your application and the issue will not appear anymore. Running on a physical Android device For deploying applications on an Android device, you don't need to have any developer certificates (like in the case of iOS devices). So if you have an Android physical device, it probably would be easier to debug and run the demo application on the device rather than on the simulator. Connect the Android device to the machine using a USB cable. On the Android device, switch the USB debug mode on. Check whether your machine sees your device: cd $ANDROID_HOME/platform-tools ./adb devices If device is connected and the machine sees it, you should see the device's name in the result print of the preceding command: List of devices attached QO4721C35410   device Deploy the application onto the device: cd $ANDROID_HOME/platform-tools ./adb -d install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk You will get the following output: 3016 KB/s (2508031 bytes in 0.812s) pkg: /data/local/tmp/AppRTCDemo-debug.apk Success After that you should see the AppRTC demo application's icon on the device: After you have started the application, you should see a prompt to enter a room number. 
At this stage, go to http://apprtc.webrtc.org in your web browser on another machine; you will see an image from your camera. Copy the room number from the URL string and enter it in the demo application on the Android device. Your Android device and another machine will try to establish a peer-to-peer connection, and might take some time. In the following screenshot, you can see the image on the desktop after the connection with Android smartphone has been established: Here, the big image represents what is translated from the frontal camera of the Android smartphone; the small image depicts the image from the notebook's web camera. So both the devices have established direct connection and translate audio and video to each other. The following screenshot represents what was seen on the Android device: There's more… The original demo doesn't contain any ready-to-use IDE project files; so you have to deal with console commands and scripts during all the development process. You can make your life a bit easier if you use some third-party tools that simplify the building process. Such tools can be found at http://tech.pristine.io/build-android-apprtc. Summary In this article, we have learned to create a multiuser conference using WebRTCO, take a screenshot using WebRTC, and compile and run a demo for Android. Resources for Article: Further resources on this subject: Webrtc with Sip and Ims? [article] Using the Webrtc Data Api [article] Applying Webrtc for Education and E Learning [article]

Postgres Add-on

Packt
27 Feb 2015
7 min read
In this article by Patrick Espake, author of the book Learning Heroku Postgres, you will learn how to install and set up PostgreSQL and how to create an app using Postgres. (For more resources related to this topic, see here.) Local setup You need to install PostgreSQL on your computer; this installation is recommended because some commands of the Postgres add-on require PostgreSQL to be installed. Besides that, it's a good idea for your development database to be similar to your production database; this avoids problems between these environments. Next, you will learn how to set up PostgreSQL on Mac OS X, Windows, and Linux. In addition to pgAdmin, this is the most popular and rich feature in PostgreSQL's administration and development platform. The versions recommended for installation are PostgreSQL 9.4.0 and pgAdmin 1.20.0, or the latest available versions. Setting up PostgreSQL on Mac OS X The Postgres.app application is the simplest way to get started with PostgreSQL on Mac OS X, it contains many features in a single installation package: PostgreSQL 9.4.0 PostGIS 2.1.4 Procedural languages: PL/pgSQL, PL/Perl, PL/Python, and PLV8 (JavaScript) Popular extensions such as hstore, uuid-ossp, and others Many command-line tools for managing PostgreSQL and convenient tools for GIS The following screenshot displays the postgresapp website: For installation, visit the address http://postgresapp.com/, carry out the appropriate download, drag it to the applications directory, and then double-click to open. The other alternatives for installing PostgreSQL are to use the default graphic installer, Fink, MacPorts, or Homebrew. All of them are available at http://www.postgresql.org/download/macosx. To install pgAdmin, you should visit http://www.pgadmin.org/download/macosx.php, download the latest available version, and follow the installer instructions. Setting up PostgreSQL on Windows PostgreSQL on Windows is provided using a graphical installer that includes the PostgreSQL server, pgAdmin, and the package manager that is used to download and install additional applications and drivers for PostgreSQL. To install PostgreSQL, visit http://www.postgresql.org/download/windows, click on the download link, and select the the appropriate Windows version: 32 bit or 64 bit. Follow the instructions provided by the installer. After installing PostgreSQL on Windows, you need to set the PATH environment variable so that the psql, pg_dump and pg_restore commands can work through the Command Prompt. Perform the following steps: Open My Computer. Right-click on My Computer and select Properties. Click on Advanced System Settings. Click on the Environment Variables button. From the System variables box, select the Path variable. Click on Edit. At the end of the line, add the bin directory of PostgreSQL: c:Program FilesPostgreSQL9.4bin;c:Program FilesPostgreSQL9.4lib. Click on the OK button to save. The directory follows the pattern c:Program FilesPostgreSQLVERSION..., check your PostgreSQL version. Setting up PostgreSQL on Linux The great majority of Linux distributions already have PostgreSQL in their package manager. You can search the appropriate package for your distribution and install it. 
If your distribution is Debian or Ubuntu, you can install it with the following command: $ sudo apt-get install postgresql If your Linux distribution is Fedora, Red Hat, CentOS, Scientific Linux, or Oracle Enterprise Linux, you can use the YUM package manager to install PostgreSQL: $ sudo yum install postgresql94-server$ sudo service postgresql-9.4 initdb$ sudo chkconfig postgresql-9.4 on$ sudo service postgresql-9.4 start If your Linux distribution doesn't have PostgreSQL in your package manager, you can install it using the Linux installer. Just visit the website http://www.postgresql.org/download/linux, choose the appropriate installer, 32-bit or 64-bits, and follow the install instructions. You can install pgAdmin through the package manager of your Linux distribution; for Debian or Ubuntu you can use the following command: $ sudo apt-get install pgadmin3 For Linux distributions that use the YUM package manager, you can install through the following command: $ sudo yum install pgadmin3 If your Linux distribution doesn't have pgAdmin in its package manager, you can download and install it following the instructions provided at http://www.pgadmin.org/download/. Creating a local database For the examples in this article, you will need to have a local database created. You will create a new database called my_local_database through pgAdmin. To create the new database, perform the following steps: Open pgAdmin. Connect to the database server through the access credentials that you chose in the installation process. Click on the Databases item in the tree view. Click on the menu Edit -> New Object -> New database. Type the name my_local_database for the database. Click on the OK button to save. Creating a new local database called my_local_database Creating a new app Many features in Heroku can be implemented in two different ways; the first is via the Heroku client, which is installed through the Heroku Toolbelt, and the other is through the web Heroku dashboard. In this section, you will learn how to use both of them. Via the Heroku dashboard Access the website https://dashboard.heroku.com and login. After that, click on the plus sign at the top of the dashboard to create a new app and the following screen will be shown: Creating an app In this step, you should provide the name of your application. In the preceding example, it's learning-heroku-postgres-app. You can choose a name you prefer. Select which region you want to host it on; two options are available: United States or Europe. Heroku doesn't allow duplicated names for applications; each application name supplied is global and, after it has been used once, it will not be available for another person. It can happen that you choose a name that is already being used. In this case, you should choose another name. Choose the best option for you, it is usually recommended you select the region that is closest to you to decrease server response time. Click on the Create App button. Then Heroku will provide some information to perform the first deploy of your application. The website URL and Git repository are created using the following addresses: http://your-app-name.herokuapp.com and git@heroku.com/your-app-name.git. learning-heroku-postgres-app created Next you will create a directory in your computer and link it with Heroku to perform future deployments of your source code. 
Open your terminal and type the following commands: $ mkdir your-app-name$ cd your-app-name$ git init$ heroku git:remote -a your-app-nameGit remote heroku added Finally, you are able to deploy your source code at any time through these commands: $ git add .$ git commit –am "My updates"$ git push heroku master Via the Heroku client Creating a new application via the Heroku client is very simple. The first step is to create the application directory on your computer. For that, open the Terminal and type the following commands: $ mkdir your-app-name$ cd your-app-name$ git init After that you need to create a new Heroku application through the command: $ heroku apps:create your-app-nameCreating your-app-name... done, stack is cedar-14https://your-app-name.herokuapp.com/ | HYPERLINK "https://git.heroku.com/your-app-name.git" https://git.heroku.com/your-app-name.gitGit remote heroku added Finally, you are able to deploy your source code at any time through these commands: $ git add .$ git commit –am "My updates"$ git push heroku master Another very common case is when you already have a Git repository on your computer with the application's source code and you want to deploy it on Heroku. In this case, you must run the heroku apps:create your-app-name command inside the application directory and the link with Heroku will be created. Summary In this article, you learned how to configure your local environment to work with PostgreSQL and pgAdmin. Besides that, you have also understood how to install Heroku Postgres in your application. In addition, you have understood that the first database is created automatically when the Heroku Postgres add-on is installed in your application and there are several PostgreSQL databases as well. You also learned that the great majority of tasks can be performed in two ways: via the Heroku Client and via the Heroku dashboard. Resources for Article: Further resources on this subject: Building Mobile Apps [article] Managing Heroku from the Command Line [article] Securing the WAL Stream [article]

Booting the System

Packt
27 Feb 2015
12 min read
In this article by William Confer and William Roberts, author of the book, Exploring SE for Android, we will learn once we have an SE for Android system, we need to see how we can make use of it, and get it into a usable state. In this article, we will: Modify the log level to gain more details while debugging Follow the boot process relative to the policy loader Investigate SELinux APIs and SELinuxFS Correct issues with the maximum policy version number Apply patches to load and verify an NSA policy (For more resources related to this topic, see here.) You might have noticed some disturbing error messages in dmesg. To refresh your memory, here are some of them: # dmesg | grep –i selinux <6>SELinux: Initializing. <7>SELinux: Starting in permissive mode <7>SELinux: Registering netfilter hooks <3>SELinux: policydb version 26 does not match my version range 15-23 ... It would appear that even though SELinux is enabled, we don't quite have an error-free system. At this point, we need to understand what causes this error, and what we can do to rectify it. At the end of this article, we should be able to identify the boot process of an SE for Android device with respect to policy loading, and how that policy is loaded into the kernel. We will then address the policy version error. Policy load An Android device follows a boot sequence similar to that of the *NIX booting sequence. The boot loader boots the kernel, and the kernel finally executes the init process. The init process is responsible for managing the boot process of the device through init scripts and some hard coded logic in the daemon. Like all processes, init has an entry point at the main function. This is where the first userspace process begins. The code can be found by navigating to system/core/init/init.c. When the init process enters main (refer to the following code excerpt), it processes cmdline, mounts some tmpfs filesystems such as /dev, and some pseudo-filesystems such as procfs. For SE for Android devices, init was modified to load the policy into the kernel as early in the boot process as possible. The policy in an SELinux system is not built into the kernel; it resides in a separate file. In Android, the only filesystem mounted in early boot is the root filesystem, a ramdisk built into boot.img. The policy can be found in this root filesystem at /sepolicy on the UDOO or target device. At this point, the init process calls a function to load the policy from the disk and sends it to the kernel, as follows: int main(int argc, char *argv[]) { ...   process_kernel_cmdline();   unionselinux_callback cb;   cb.func_log = klog_write;   selinux_set_callback(SELINUX_CB_LOG, cb);     cb.func_audit = audit_callback;   selinux_set_callback(SELINUX_CB_AUDIT, cb);     INFO(“loading selinux policyn”);   if (selinux_enabled) {     if (selinux_android_load_policy() < 0) {       selinux_enabled = 0;       INFO(“SELinux: Disabled due to failed policy loadn”);     } else {       selinux_init_all_handles();     }   } else {     INFO(“SELinux:  Disabled by command line optionn”);   } … In the preceding code, you will notice the very nice log message, SELinux: Disabled due to failed policy load, and wonder why we didn't see this when we ran dmesg before. This code executes before setlevel in init.rc is executed. The default init log level is set by the definition of KLOG_DEFAULT_LEVEL in system/core/include/cutils/klog.h. If we really wanted to, we could change that, rebuild, and actually see that message. 
Now that we have identified the initial path of the policy load, let's follow it on its course through the system. The selinux_android_load_policy() function can be found in the Android fork of libselinux, which is in the UDOO Android source tree. The library can be found at external/libselinux, and all of the Android modifications can be found in src/android.c. The function starts by mounting a pseudo-filesystem called SELinuxFS. In systems that do not have sysfs mounted, the mount point is /selinux; on systems that have sysfs mounted, the mount point is /sys/fs/selinux. You can check mountpoints on a running system using the following command: # mount | grep selinuxfs selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0 SELinuxFS is an important filesystem as it provides the interface between the kernel and userspace for controlling and manipulating SELinux. As such, it has to be mounted for the policy load to work. The policy load uses the filesystem to send the policy file bytes to the kernel. This happens in the selinux_android_load_policy() function: int selinux_android_load_policy(void) {   char *mnt = SELINUXMNT;   int rc;   rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);   if (rc < 0) {     if (errno == ENODEV) {       /* SELinux not enabled in kernel */       return -1;     }     if (errno == ENOENT) {       /* Fall back to legacy mountpoint. */       mnt = OLDSELINUXMNT;       rc = mkdir(mnt, 0755);       if (rc == -1 && errno != EEXIST) {         selinux_log(SELINUX_ERROR,”SELinux:           Could not mkdir:  %sn”,         strerror(errno));         return -1;       }       rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);     }   }   if (rc < 0) {     selinux_log(SELINUX_ERROR,”SELinux:  Could not mount selinuxfs:  %sn”,     strerror(errno));     return -1;   }   set_selinuxmnt(mnt);     return selinux_android_reload_policy(); } The set_selinuxmnt(car *mnt) function changes a global variable in libselinux so that other routines can find the location of this vital interface. From there it calls another helper function, selinux_android_reload_policy(), which is located in the same libselinux android.c file. It loops through an array of possible policy locations in priority order. This array is defined as follows: Static const char *const sepolicy_file[] = {   “/data/security/current/sepolicy”,   “/sepolicy”,   0 }; Since only the root filesystem is mounted, it chooses /sepolicy at this time. The other path is for dynamic runtime reloads of policy. After acquiring a valid file descriptor to the policy file, the system is memory mapped into its address space, and calls security_load_policy(map, size) to load it to the kernel. This function is defined in load_policy.c. 
Here, the map parameter is the pointer to the beginning of the policy file, and the size parameter is the size of the file in bytes: int selinux_android_reload_policy(void) {   int fd = -1, rc;   struct stat sb;   void *map = NULL;   int i = 0;     while (fd < 0 && sepolicy_file[i]) {     fd = open(sepolicy_file[i], O_RDONLY | O_NOFOLLOW);     i++;   }   if (fd < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not open sepolicy:  %sn”,     strerror(errno));     return -1;   }   if (fstat(fd, &sb) < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not stat %s:  %sn”,     sepolicy_file[i], strerror(errno));     close(fd);     return -1;   }   map = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);   if (map == MAP_FAILED) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not map %s:  %sn”,     sepolicy_file[i], strerror(errno));     close(fd);     return -1;   }     rc = security_load_policy(map, sb.st_size);   if (rc < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not load policy:  %sn”,     strerror(errno));     munmap(map, sb.st_size);     close(fd);     return -1;   }     munmap(map, sb.st_size);   close(fd);   selinux_log(SELINUX_INFO, “SELinux: Loaded policy from %sn”, sepolicy_file[i]);     return 0; } The security load policy opens the <selinuxmnt>/load file, which in our case is /sys/fs/selinux/load. At this point, the policy is written to the kernel via this pseudo file: int security_load_policy(void *data, size_t len) {   char path[PATH_MAX];   int fd, ret;     if (!selinux_mnt) {     errno = ENOENT;     return -1;   }     snprintf(path, sizeof path, “%s/load”, selinux_mnt);   fd = open(path, O_RDWR);   if (fd < 0)   return -1;     ret = write(fd, data, len);   close(fd);   if (ret < 0)   return -1;   return 0; } Fixing the policy version At this point, we have a clear idea of how the policy is loaded into the kernel. This is very important. SELinux integration with Android began in Android 4.0, so when porting to various forks and fragments, this breaks, and code is often missing. Understanding all parts of the system, however cursory, will help us to correct issues as they appear in the wild and develop. This information is also useful to understand the system as a whole, so when modifications need to be made, you'll know where to look and how things work. At this point, we're ready to correct the policy versions. The logs and kernel config are clear; only policy versions up to 23 are supported, and we're trying to load policy version 26. This will probably be a common problem with Android considering kernels are often out of date. There is also an issue with the 4.3 sepolicy shipped by Google. Some changes by Google made it a bit more difficult to configure devices as they tailored the policy to meet their release goals. Essentially, the policy allows nearly everything and therefore generates very few denial logs. Some domains in the policy are completely permissive via a per-domain permissive statement, and those domains also have rules to allow everything so denial logs do not get generated. To correct this, we can use a more complete policy from the NSA. Replace external/sepolicy with the download from https://bitbucket.org/seandroid/external-sepolicy/get/seandroid-4.3.tar.bz2. After we extract the NSA's policy, we need to correct the policy version. The policy is located in external/sepolicy and is compiled with a tool called check_policy. The Android.mk file for sepolicy will have to pass this version number to the compiler, so we can adjust this here. 
On the top of the file, we find the culprit: ... # Must be <= /selinux/policyvers reported by the Android kernel. # Must be within the compatibility range reported by checkpolicy -V. POLICYVERS ?= 26 ... Since the variable is overridable by the ?= assignment. We can override this in BoardConfig.mk. Edit device/fsl/imx6/BoardConfigCommon.mk, adding the following POLICYVERS line to the bottom of the file: ... BOARD_FLASH_BLOCK_SIZE := 4096 TARGET_RECOVERY_UI_LIB := librecovery_ui_imx # SELinux Settings POLICYVERS := 23 -include device/google/gapps/gapps_config.mk Since the policy is on the boot.img image, build the policy and bootimage: $ mmm -B external/sepolicy/ $ make –j4 bootimage 2>&1 | tee logz !!!!!!!!! WARNING !!!!!!!!! VERIFY BLOCK DEVICE !!!!!!!!! $ sudo chmod 666 /dev/sdd1 $ dd if=$OUT/boot.img of=/dev/sdd1 bs=8192 conv=fsync Eject the SD card, place it into the UDOO, and boot. The first of the preceding commands should produce the following log output: out/host/linux-x86/bin/checkpolicy: writing binary representation (version 23) to out/target/product/udoo/obj/ETC/sepolicy_intermediates/sepolicy At this point, by checking the SELinux logs using dmesg, we can see the following: # dmesg | grep –i selinux <6>init: loading selinux policy <7>SELinux: 128 avtab hash slots, 490 rules. <7>SELinux: 128 avtab hash slots, 490 rules. <7>SELinux: 1 users, 2 roles, 274 types, 0 bools, 1 sens, 1024 cats <7>SELinux: 84 classes, 490 rules <7>SELinux: Completing initialization. Another command we need to run is getenforce. The getenforce command gets the SELinux enforcing status. It can be in one of three states: Disabled: No policy is loaded or there is no kernel support Permissive: Policy is loaded and the device logs denials (but is not in enforcing mode) Enforcing: This state is similar to the permissive state except that policy violations result in EACCESS being returned to userspace One of the goals while booting an SELinux system is to get to the enforcing state. Permissive is used for debugging, as follows: # getenforce Permissive Summary In this article, we covered the important policy load flow through the init process. We also changed the policy version to suit our development efforts and kernel version. From there, we were able to load the NSA policy and verify that the system loaded it. This article additionally showcased some of the SELinux APIs and their interactions with SELinuxFS. Resources for Article: Further resources on this subject: Android And Udoo Home Automation? [article] Sound Recorder For Android [article] Android Virtual Device Manager [article]

Getting Up and Running with Cassandra

Packt
27 Feb 2015
20 min read
As an application developer, you have almost certainly worked with databases extensively. You must have built products using relational databases like MySQL and PostgreSQL, and perhaps experimented with a document store like MongoDB or a key-value database like Redis. While each of these tools has its strengths, you will now consider whether a distributed database like Cassandra might be the best choice for the task at hand. In this article by Mat Brown, author of the book Learning Apache Cassandra, we'll talk about the major reasons to choose Cassandra from among the many database options available to you. Having established that Cassandra is a great choice, we'll go through the nuts and bolts of getting a local Cassandra installation up and running. By the end of this article, you'll know: When and why Cassandra is a good choice for your application How to install Cassandra on your development machine How to interact with Cassandra using cqlsh How to create a keyspace (For more resources related to this topic, see here.) What Cassandra offers, and what it doesn't Cassandra is a fully distributed, masterless database, offering superior scalability and fault tolerance to traditional single master databases. Compared with other popular distributed databases like Riak, HBase, and Voldemort, Cassandra offers a uniquely robust and expressive interface for modeling and querying data. What follows is an overview of several desirable database capabilities, with accompanying discussions of what Cassandra has to offer in each category. Horizontal scalability Horizontal scalability refers to the ability to expand the storage and processing capacity of a database by adding more servers to a database cluster. A traditional single-master database's storage capacity is limited by the capacity of the server that hosts the master instance. If the data set outgrows this capacity, and a more powerful server isn't available, the data set must be sharded among multiple independent database instances that know nothing of each other. Your application bears responsibility for knowing to which instance a given piece of data belongs. Cassandra, on the other hand, is deployed as a cluster of instances that are all aware of each other. From the client application's standpoint, the cluster is a single entity; the application need not know, nor care, which machine a piece of data belongs to. Instead, data can be read or written to any instance in the cluster, referred to as a node; this node will forward the request to the instance where the data actually belongs. The result is that Cassandra deployments have an almost limitless capacity to store and process data; when additional capacity is required, more machines can simply be added to the cluster. When new machines join the cluster, Cassandra takes care of rebalancing the existing data so that each node in the expanded cluster has a roughly equal share. Cassandra is one of the several popular distributed databases inspired by the Dynamo architecture, originally published in a paper by Amazon. Other widely used implementations of Dynamo include Riak and Voldemort. You can read the original paper at http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf. High availability The simplest database deployments are run as a single instance on a single server. 
This sort of configuration is highly vulnerable to interruption: if the server is affected by a hardware failure or network connection outage, the application's ability to read and write data is completely lost until the server is restored. If the failure is catastrophic, the data on that server might be lost completely. A master-follower architecture improves this picture a bit. The master instance receives all write operations, and then these operations are replicated to follower instances. The application can read data from the master or any of the follower instances, so a single host becoming unavailable will not prevent the application from continuing to read data. A failure of the master, however, will still prevent the application from performing any write operations, so while this configuration provides high read availability, it doesn't completely provide high availability. Cassandra, on the other hand, has no single point of failure for reading or writing data. Each piece of data is replicated to multiple nodes, but none of these nodes holds the authoritative master copy. If a machine becomes unavailable, Cassandra will continue writing data to the other nodes that share data with that machine, and will queue the operations and update the failed node when it rejoins the cluster. This means in a typical configuration, two nodes must fail simultaneously for there to be any application-visible interruption in Cassandra's availability.

How many copies? When you create a keyspace - Cassandra's version of a database - you specify how many copies of each piece of data should be stored; this is called the replication factor. A replication factor of 3 is a common and good choice for many use cases.

Write optimization

Traditional relational and document databases are optimized for read performance. Writing data to a relational database will typically involve making in-place updates to complicated data structures on disk, in order to maintain a data structure that can be read efficiently and flexibly. Updating these data structures is a very expensive operation from a standpoint of disk I/O, which is often the limiting factor for database performance. Since writes are more expensive than reads, you'll typically avoid any unnecessary updates to a relational database, even at the expense of extra read operations. Cassandra, on the other hand, is highly optimized for write throughput, and in fact never modifies data on disk; it only appends to existing files or creates new ones. This is much easier on disk I/O and means that Cassandra can provide astonishingly high write throughput. Since both writing data to Cassandra, and storing data in Cassandra, are inexpensive, denormalization carries little cost and is a good way to ensure that data can be efficiently read in various access scenarios. Because Cassandra is optimized for write volume, you shouldn't shy away from writing data to the database. In fact, it's most efficient to write without reading whenever possible, even if doing so might result in redundant updates. Just because Cassandra is optimized for writes doesn't make it bad at reads; in fact, a well-designed Cassandra database can handle very heavy read loads with no problem.

Structured records

The first three database features we looked at are commonly found in distributed data stores. However, databases like Riak and Voldemort are purely key-value stores; these databases have no knowledge of the internal structure of a record that's stored at a particular key.
This means useful functions like updating only part of a record, reading only certain fields from a record, or retrieving records that contain a particular value in a given field are not possible. Relational databases like PostgreSQL, document stores like MongoDB, and, to a limited extent, newer key-value stores like Redis do have a concept of the internal structure of their records, and most application developers are accustomed to taking advantage of the possibilities this allows. None of these databases, however, offer the advantages of a masterless distributed architecture. In Cassandra, records are structured much in the same way as they are in a relational database—using tables, rows, and columns. Thus, applications using Cassandra can enjoy all the benefits of masterless distributed storage while also getting all the advanced data modeling and access features associated with structured records. Secondary indexes A secondary index, commonly referred to as an index in the context of a relational database, is a structure allowing efficient lookup of records by some attribute other than their primary key. This is a widely useful capability: for instance, when developing a blog application, you would want to be able to easily retrieve all of the posts written by a particular author. Cassandra supports secondary indexes; while Cassandra's version is not as versatile as indexes in a typical relational database, it's a powerful feature in the right circumstances. Efficient result ordering It's quite common to want to retrieve a record set ordered by a particular field; for instance, a photo sharing service will want to retrieve the most recent photographs in descending order of creation. Since sorting data on the fly is a fundamentally expensive operation, databases must keep information about record ordering persisted on disk in order to efficiently return results in order. In a relational database, this is one of the jobs of a secondary index. In Cassandra, secondary indexes can't be used for result ordering, but tables can be structured such that rows are always kept sorted by a given column or columns, called clustering columns. Sorting by arbitrary columns at read time is not possible, but the capacity to efficiently order records in any way, and to retrieve ranges of records based on this ordering, is an unusually powerful capability for a distributed database. Immediate consistency When we write a piece of data to a database, it is our hope that that data is immediately available to any other process that may wish to read it. From another point of view, when we read some data from a database, we would like to be guaranteed that the data we retrieve is the most recently updated version. This guarantee is called immediate consistency, and it's a property of most common single-master databases like MySQL and PostgreSQL. Distributed systems like Cassandra typically do not provide an immediate consistency guarantee. Instead, developers must be willing to accept eventual consistency, which means when data is updated, the system will reflect that update at some point in the future. Developers are willing to give up immediate consistency precisely because it is a direct tradeoff with high availability. In the case of Cassandra, that tradeoff is made explicit through tunable consistency. Each time you design a write or read path for data, you have the option of immediate consistency with less resilient availability, or eventual consistency with extremely resilient availability. 
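Tunable consistency is easiest to see in code, because the trade-off is made per statement rather than per cluster. The following is a minimal sketch using the DataStax Python driver (the cassandra-driver package that the installation steps later in this article pull in with pip); the demo keyspace and users table here are hypothetical placeholders, not objects this article defines:

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Connect to a local node; 'demo' is a placeholder keyspace assumed to exist,
# with a table: users (id int PRIMARY KEY, name text).
cluster = Cluster(['127.0.0.1'])
session = cluster.connect('demo')

# A write that must be acknowledged by a majority of replicas before it is
# reported as successful (stronger consistency, less resilient availability).
insert = SimpleStatement(
    "INSERT INTO users (id, name) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM)
session.execute(insert, (1, "alice"))

# A read satisfied by a single replica (higher availability, possibly stale).
select = SimpleStatement(
    "SELECT name FROM users WHERE id = %s",
    consistency_level=ConsistencyLevel.ONE)
for row in session.execute(select, (1,)):
    print(row.name)

cluster.shutdown()

Nothing about the schema changes between the two statements; only the consistency level does, which is exactly the per-operation trade-off described above.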
Discretely writable collections While it's useful for records to be internally structured into discrete fields, a given property of a record isn't always a single value like a string or an integer. One simple way to handle fields that contain collections of values is to serialize them using a format like JSON, and then save the serialized collection into a text field. However, in order to update collections stored in this way, the serialized data must be read from the database, decoded, modified, and then written back to the database in its entirety. If two clients try to perform this kind of modification to the same record concurrently, one of the updates will be overwritten by the other. For this reason, many databases offer built-in collection structures that can be discretely updated: values can be added to, and removed from collections, without reading and rewriting the entire collection. Cassandra is no exception, offering list, set, and map collections, and supporting operations like "append the number 3 to the end of this list". Neither the client nor Cassandra itself needs to read the current state of the collection in order to update it, meaning collection updates are also blazingly efficient. Relational joins In real-world applications, different pieces of data relate to each other in a variety of ways. Relational databases allow us to perform queries that make these relationships explicit, for instance, to retrieve a set of events whose location is in the state of New York (this is assuming events and locations are different record types). Cassandra, however, is not a relational database, and does not support anything like joins. Instead, applications using Cassandra typically denormalize data and make clever use of clustering in order to perform the sorts of data access that would use a join in a relational database. For data sets that aren't already denormalized, applications can also perform client-side joins, which mimic the behavior of a relational database by performing multiple queries and joining the results at the application level. Client-side joins are less efficient than reading data that has been denormalized in advance, but offer more flexibility. MapReduce MapReduce is a technique for performing aggregate processing on large amounts of data in parallel; it's a particularly common technique in data analytics applications. Cassandra does not offer built-in MapReduce capabilities, but it can be integrated with Hadoop in order to perform MapReduce operations across Cassandra data sets, or Spark for real-time data analysis. The DataStax Enterprise product provides integration with both of these tools out-of-the-box. Comparing Cassandra to the alternatives Now that you've got an in-depth understanding of the feature set that Cassandra offers, it's time to figure out which features are most important to you, and which database is the best fit. 
The following table lists a handful of commonly used databases, and key features that they do or don't have:

Feature | Cassandra | PostgreSQL | MongoDB | Redis | Riak
Structured records | Yes | Yes | Yes | Limited | No
Secondary indexes | Yes | Yes | Yes | No | Yes
Discretely writable collections | Yes | Yes | Yes | Yes | No
Relational joins | No | Yes | No | No | No
Built-in MapReduce | No | No | Yes | No | Yes
Fast result ordering | Yes | Yes | Yes | Yes | No
Immediate consistency | Configurable at query level | Yes | Yes | Yes | Configurable at cluster level
Transparent sharding | Yes | No | Yes | No | Yes
No single point of failure | Yes | No | No | No | Yes
High throughput writes | Yes | No | No | Yes | Yes

As you can see, Cassandra offers a unique combination of scalability, availability, and a rich set of features for modeling and accessing data.

Installing Cassandra

Now that you're acquainted with Cassandra's substantial powers, you're no doubt chomping at the bit to try it out. Happily, Cassandra is free, open source, and very easy to get running.

Installing on Mac OS X

First, we need to make sure that we have an up-to-date installation of the Java Runtime Environment. Open the Terminal application, and type the following into the command prompt:

$ java -version

You will see an output that looks something like the following:

java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)

Pay particular attention to the java version listed: if it's lower than 1.7.0_25, you'll need to install a new version. If you have an older version of Java or if Java isn't installed at all, head to https://www.java.com/en/download/mac_download.jsp and follow the download instructions on the page. You'll need to set up your environment so that Cassandra knows where to find the latest version of Java. To do this, set your JAVA_HOME environment variable to the install location, and your PATH to include the executable in your new Java installation, as follows:

$ export JAVA_HOME="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home"
$ export PATH="$JAVA_HOME/bin":$PATH

You should put these two lines at the bottom of your .bashrc file to ensure that things still work when you open a new terminal. The installation instructions given earlier assume that you're using the latest version of Mac OS X (at the time of writing this, 10.10 Yosemite). If you're running a different version of OS X, installing Java might require different steps. Check out https://www.java.com/en/download/faq/java_mac.xml for detailed installation information. Once you've got the right version of Java, you're ready to install Cassandra. It's very easy to install Cassandra using Homebrew; simply type the following:

$ brew install cassandra
$ pip install cassandra-driver cql
$ cassandra

Here's what we just did:
Installed Cassandra using the Homebrew package manager
Installed the CQL shell and its dependency, the Python Cassandra driver
Started the Cassandra server

Installing on Ubuntu

First, we need to make sure that we have an up-to-date installation of the Java Runtime Environment. Open the Terminal application, and type the following into the command prompt:

$ java -version

You will see an output that looks similar to the following:

java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-bit Server VM (build 24.65-b04, mixed mode)

Pay particular attention to the java version listed: it should start with 1.7.
If you have an older version of Java, or if Java isn't installed at all, you can install the correct version using the following command:

$ sudo apt-get install openjdk-7-jre-headless

Once you've got the right version of Java, you're ready to install Cassandra. First, you need to add Apache's Debian repositories to your sources list. Add the following lines to the file /etc/apt/sources.list:

deb http://www.apache.org/dist/cassandra/debian 21x main
deb-src http://www.apache.org/dist/cassandra/debian 21x main

In the Terminal application, type the following into the command prompt:

$ gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D
$ gpg --export --armor F758CE318D77295D | sudo apt-key add -
$ gpg --keyserver pgp.mit.edu --recv-keys 2B5C1B00
$ gpg --export --armor 2B5C1B00 | sudo apt-key add -
$ gpg --keyserver pgp.mit.edu --recv-keys 0353B12C
$ gpg --export --armor 0353B12C | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install cassandra
$ cassandra

Here's what we just did:
Added the Apache repositories for Cassandra 2.1 to our sources list
Added the public keys for the Apache repo to our system and updated our repository cache
Installed Cassandra
Started the Cassandra server

Installing on Windows

The easiest way to install Cassandra on Windows is to use the DataStax Community Edition. DataStax is a company that provides enterprise-level support for Cassandra; they also release Cassandra packages at both free and paid tiers. DataStax Community Edition is free, and does not differ from the Apache package in any meaningful way. DataStax offers a graphical installer for Cassandra on Windows, which is available for download at planetcassandra.org/cassandra. On this page, locate Windows Server 2008/Windows 7 or Later (32-Bit) from the Operating System menu (you might also want to look for 64-bit if you run a 64-bit version of Windows), and choose MSI Installer (2.x) from the version columns. Download and run the MSI file, and follow the instructions, accepting the defaults. Once the installer completes this task, you should have an installation of Cassandra running on your machine.

Bootstrapping the project

We will build an application called MyStatus, which allows users to post status updates for their friends to read.

CQL – the Cassandra Query Language

Since this article is about Cassandra, and not targeted at users of any particular programming language or application framework, we will focus entirely on the database interactions that MyStatus will require. Code examples will be in Cassandra Query Language (CQL). Specifically, we'll use version 3.1.1 of CQL, which is available in Cassandra 2.0.6 and later versions. As the name implies, CQL is heavily inspired by SQL; in fact, many CQL statements are equally valid SQL statements. However, CQL and SQL are not interchangeable. CQL lacks a grammar for relational features such as JOIN statements, which are not possible in Cassandra. Conversely, CQL is not a subset of SQL; constructs for retrieving the update time of a given column, or performing an update in a lightweight transaction, which are available in CQL, do not have an SQL equivalent.

Interacting with Cassandra

Most common programming languages have drivers for interacting with Cassandra. When selecting a driver, you should look for libraries that support the CQL binary protocol, which is the latest and most efficient way to communicate with Cassandra. The CQL binary protocol is a relatively new introduction; older versions of Cassandra used the Thrift protocol as a transport layer.
Although Cassandra continues to support Thrift, avoid Thrift-based drivers, as they are less performant than the binary protocol. Here are CQL binary drivers available for some popular programming languages:

Language | Driver | Available at
Java | DataStax Java Driver | github.com/datastax/java-driver
Python | DataStax Python Driver | github.com/datastax/python-driver
Ruby | DataStax Ruby Driver | github.com/datastax/ruby-driver
C++ | DataStax C++ Driver | github.com/datastax/cpp-driver
C# | DataStax C# Driver | github.com/datastax/csharp-driver
JavaScript (Node.js) | node-cassandra-cql | github.com/jorgebay/node-cassandra-cql
PHP | phpbinarycql | github.com/rmcfrazier/phpbinarycql

While you will likely use one of these drivers in your applications, here we can simply use the cqlsh tool, which is a command-line interface for executing CQL queries and viewing the results. To start cqlsh on OS X or Linux, simply type cqlsh into your command line; you should see something like this:

$ cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.7 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh>

On Windows, you can start cqlsh by finding the Cassandra CQL Shell application in the DataStax Community Edition group in your applications. Once you open it, you should see the same output we just saw.

Creating a keyspace

A keyspace is a collection of related tables, equivalent to a database in a relational system. To create the keyspace for our MyStatus application, issue the following statement in the CQL shell:

CREATE KEYSPACE "my_status"
WITH REPLICATION = {
  'class': 'SimpleStrategy', 'replication_factor': 1
};

Here we created a keyspace called my_status. When we create a keyspace, we have to specify replication options. Cassandra provides several strategies for managing replication of data; SimpleStrategy is a good choice as long as your Cassandra deployment does not span multiple data centers. The replication_factor value tells Cassandra how many copies of each piece of data are to be kept in the cluster; since we are only running a single instance of Cassandra, there is no point in keeping more than one copy of the data. In a production deployment, you would certainly want a higher replication factor; 3 is a good place to start. A few things at this point are worth noting about CQL's syntax:
It's syntactically very similar to SQL; as we further explore CQL, the impression of similarity will not diminish.
Double quotes are used for identifiers such as keyspace, table, and column names. As in SQL, quoting identifier names is usually optional, unless the identifier is a keyword or contains a space or another character that will trip up the parser.
Single quotes are used for string literals; the key-value structure we use for replication is a map literal, which is syntactically similar to an object literal in JSON.
As in SQL, CQL statements in the CQL shell must terminate with a semicolon.

Selecting a keyspace

Once you've created a keyspace, you'll want to use it. In order to do this, employ the USE command:

USE "my_status";

This tells Cassandra that all future commands will implicitly refer to tables inside the my_status keyspace. If you close the CQL shell and reopen it, you'll need to reissue this command.

Summary

In this article, you explored the reasons to choose Cassandra from among the many databases available, and having determined that Cassandra is a great choice, you installed it on your development machine.
You had your first taste of the Cassandra Query Language when you issued your first command via the CQL shell in order to create a keyspace. You're now poised to begin working with Cassandra in earnest. Resources for Article: Further resources on this subject: Getting Started with Apache Cassandra [article] Basic Concepts and Architecture of Cassandra [article] About Cassandra [article]

RESTful Web Service Mocking and Testing with SoapUI, RAML, and a JSON Slurper Script Assertion

Packt
26 Feb 2015
15 min read
In this article written by Rupert Anderson, the author of SoapUI Cookbook, we will cover the following topics: Installing the SoapUI RAML plugin Generating a SoapUI REST project and mock service using the RAML plugin Testing response values using JSON Slurper As you might already know, despite being called SoapUI, the product actually has an excellent RESTful web service mock and testing functionality. Also, SoapUI is very open and extensible with a great plugin framework. This makes it relatively easy to use and develop plugins to support APIs defined by other technologies, for example RAML (http://raml.org/) and Swagger (http://swagger.io/). If you haven't seen it before, RESTful API Modeling Language or RAML is a modern way to describe RESTful web services that use YAML and JSON standards. As a brief demonstration, this article uses the excellent SoapUI RAML plugin to: Generate a SoapUI REST service definition automatically from the RAML definition Generate a SoapUI REST mock with an example response automatically from the RAML definition Create a SoapUI TestSuite, TestCase, and TestStep to call the mock Assert that the response contains the values we expect using a Script Assertion and JSON Slurper to parse and inspect the JSON content This article assumes that you have used SoapUI before, but not RAML or the RAML plugin. If you haven't used SoapUI before, then you can still give it a shot, but it might help to first take a look at Chapters 3 and 4 of the SoapUI Cookbook or the Getting Started, REST, and REST Mocking sections at http://www.soapui.org/. (For more resources related to this topic, see here.) Installing the SoapUI RAML plugin This recipe shows how to get the SoapUI RAML plugin installed and checked. Getting ready We'll use SoapUI open source version 5.0.0 here. You can download it from http://www.soapui.org/ if you need it. We'll also need the RAML plugin. You can download the latest version (0.4) from http://sourceforge.net/projects/soapui-plugins/files/soapui-raml-plugin/. How to do it... First, we'll download the plugin, install it, restart SoapUI, and check whether it's available: Download the RAML plugin zip file from sourceforge using the preceding link. It should be called soapui-raml-plugin-0.4-plugin-dist.zip. To install the plugin, if you're happy to, you can simply unzip the plugin zip file to <SoapUI Home>/java/app/bin; this will have the same effect as manually performing the following steps:     Create a plugins folder if one doesn't already exist in <SoapUI Home>/java/app/bin/plugins.     Copy the soapui-raml-plugin-0.4-plugin.jar file from the plugins folder of the expanded zip file into the plugins folder.     Copy the raml-parser-0.9-SNAPSHOT.jar and snakeyaml-1.13.jar files from the ext folder of the expanded zip file into the <SoapUI Home>/java/app/bin/ext folder.     The resulting folder structure under <SoapUI Home>/java/app/bin should now be something like: If SoapUI is running, we need to restart it so that the plugin and dependencies are added to its classpath and can be used, and check whether it's available. 
When SoapUI has started/restarted, we can confirm whether the plugin and dependencies have loaded successfully by checking the SoapUI tab log:

INFO:Adding [/Applications/SoapUI-5.0.0.app/Contents/Resources/app/bin/ext/raml-parser-0.9-SNAPSHOT.jar] to extensions classpath
INFO:Adding [/Applications/SoapUI-5.0.0.app/Contents/Resources/app/bin/ext/snakeyaml-1.13.jar] to extensions classpath
INFO:Adding plugin from [/Applications/SoapUI-5.0.0.app/Contents/java/app/bin/plugins/soapui-raml-plugin-0.4-plugin.jar]

To check whether the new RAML Action is available in SoapUI, we'll need a workspace and a project:
Create a new SoapUI Workspace if you don't already have one, and call it RESTWithRAML.
In the new Workspace, create New Generic Project; just enter Project Name of Invoice and click on OK.
Finally, if you right-click on the created Invoice project, you should see a menu option of Import RAML Definition as shown in the following screenshot:

How it works...

In order to concentrate on using RAML with SoapUI, we won't go into how the plugin actually works. In very simple terms, the SoapUI plugin framework uses the plugin jar file (soapui-raml-plugin-0.4-plugin.jar) to load the standard Java RAML Parser (raml-parser-0.9-SNAPSHOT.jar) and the SnakeYAML parser (snakeyaml-1.13.jar) onto SoapUI's classpath and run them from a custom menu Action. If you have Java skills and understand the basics of SoapUI extensions, then many plugins are quite straightforward to understand and develop. If you would like to understand how SoapUI plugins work and how to develop them, then please refer to Chapters 10 and 11 of the SoapUI Cookbook. You can also take a look at Ole Lensmar's (plugin creator and co-founder of SoapUI) RAML plugin blog and plugin source code from the following links.

There's more...

If you read more on SoapUI plugins, one thing to be aware of is that open source plugins are now termed "old-style" plugins in the SoapUI online documentation. This is because the commercial SoapUI Pro and newer SoapUI NG versions of SoapUI feature an enhanced plugin framework with Plugin Manager to install plugins from a plugin repository (see http://www.soapui.org/extension-plugins/plugin-manager.html). The new Plugin Manager plugins are not compatible with open source SoapUI, and open source or "old-style" plugins will not load using the Plugin Manager.

See also

More information on using and understanding various SoapUI plugins can be found in Chapter 10, Using Plugins of the SoapUI Cookbook
More information on how to develop SoapUI extensions and plugins can be found in Chapter 11, Taking SoapUI Further of the SoapUI Cookbook
The other open source SoapUI plugins can also be found in Ole Lensmar's blog, http://olensmar.blogspot.se/p/soapui-plugins.html

Generating a SoapUI REST service definition and mock service using the RAML plugin

In this section, we'll use the SoapUI RAML plugin to set up a SoapUI REST service definition, mock service, and example response using an RAML definition file. This recipe assumes you've followed the previous one to install the RAML plugin and create the SoapUI Workspace and Project.

Getting ready

For this recipe, we'll use the following simple invoice RAML definition:

#%RAML 0.8
title: Invoice API
baseUri: http://localhost:8080/{version}
version: v1.0
/invoice:
  /{id}:
    get:
      description: Retrieves an invoice by id.
      responses:
        200:
          body:
            application/json:
              example: |
                {
                  "invoice": {
                    "id": "12345",
                    "companyName": "Test Company",
                    "amount": 100.0
                  }
                }

This RAML definition describes the following RESTful service endpoint and resource: http://localhost:8080/v1.0/invoice/{id}. The definition also provides example JSON invoice content (shown highlighted in the preceding code).

How to do it...

First, we'll use the SoapUI RAML plugin to generate the REST service, resource, and mock for the Invoice project created in the previous recipe. Then, we'll get the mock running and fire a test REST request to make sure it returns the example JSON invoice content provided by the preceding RAML definition:

To generate the REST service, resource, and mock using the preceding RAML definition:
Right-click on the Invoice project created in the previous recipe and select Import RAML Definition.
Create a file that contains the preceding RAML definition, for example, invoicev1.raml.
Set RAML Definition to the location of invoicev1.raml.
Check the Generate MockService checkbox.
The Add RAML Definition window should look something like the following screenshot; when satisfied, click on OK to generate.

If everything goes well, you should see the following log messages in the SoapUI tab:

Importing RAML from [file:/work/RAML/invoicev1.raml]
CWD:/Applications/SoapUI-5.0.0.app/Contents/java/app/bin

Also, the Invoice Project should now contain the following:
An Invoice API service definition.
An invoice resource.
A sample GET request (Request 1).
An Invoice API MockService with a sample response (Response 200) that contains the JSON invoice content from the RAML definition. See the following screenshot:

Before using the mock, we need to tweak Resource Path to remove the {id}, which is not a placeholder and will cause the mock to only respond to requests to /v1.0/invoice/{id} rather than /v1.0/invoice/12345 and so on. To do this:
Double-click on Invoice API MockService.
Double-click on the GET /v1.0/invoice/{id} action and edit the Resource Path as shown in the following screenshot:

Start the mock. It should now publish the endpoint at http://localhost:8080/v1.0/invoice/ and respond to HTTP GET requests. Now, let's set up Request 1 and fire it at the mock to give it a quick test:
Double-click on Request 1 to edit it.
Under the Request tab, set Value of the id parameter to 12345.
Click on the green arrow to dispatch the request to the mock.
All being well, you should see a successful response, and under the JSON or RAW tab, you should see the JSON invoice content from the RAML definition, as shown in the following screenshot:

How it works...

Assuming that you're familiar with the basics of SoapUI projects, mocks, and test requests, the interesting part will probably be the RAML plugin itself. To understand exactly what's going on in the RAML plugin, we really need to take a look at the source code on GitHub. A very simplified explanation of the plugin functionality is:
The plugin contains a custom SoapUI Action, ImportRamlAction.java.
SoapUI Actions define menu-driven functionality, so when Import RAML Definition is clicked, the custom Action invokes the main RAML import functionality in RamlImporter.groovy.
The RamlImporter.groovy class: Loads the RAML definition and uses the Java RAML Parser (see previous recipe) to build a Java representation of the definition. This Java representation is then traversed and used to create and configure the SoapUI framework objects (or Model items), that is, service definition, resource, mock, and its sample response. Once the plugin has done its work, everything else is standard SoapUI functionality! There's more... As you might have noticed under the REST service's menu or in the plugin source code, there are two other RAML menu options or Actions: Export RAML: This generates an RAML file from an existing SoapUI REST service; for example, you can design your RESTful API manually in a SoapUI REST project and then export an RAML definition for it. Update from RAML definition: Similar to the standard SOAP service's Update Definition, you could use this to update SoapUI project artifacts automatically using a newer version of the service's RAML definition. To understand more about developing custom SoapUI Actions, Model items, and plugins, Chapters 10 and 11 of the SoapUI Cookbook should hopefully help, as also the SoapUI online docs: Custom Actions: http://www.soapui.org/extension-plugins/old-style-extensions/developing-old-style-extensions.html Model Items: http://www.soapui.org/scripting---properties/the-soapui-object-model.html See also The blog article of the creator of the RAML plugin and the cofounder of SoapUI can be found at http://olensmar.blogspot.se/2013/12/a-raml-apihub-plugin-for-soapui.html The RAML plugin source code can be found at https://github.com/olensmar/soapui-raml-plugin There are many excellent RAML tools available for download at http://raml.org/projects.html For more information on SoapUI Mocks, see Chapter 3, Developing and Deploying Dynamic REST and SOAP Mocks of the SoapUI Cookbook Testing response values using JsonSlurper Now that we have a JSON invoice response, let's look at some options of testing it: XPath: Because of the way SoapUI stores the response, it is possible to use XPath Assertions, for example, to check whether the company is "Test Company": We could certainly use this approach, but to me, it doesn't seem completely appropriate to test JSON values! JSONPath Assertions (SoapUI Pro/NG only): The commercial versions of SoapUI have several types of JSONPath Assertion. This is a nice option if you've got it, but we'll only use open source SoapUI for this article. Script Assertion: When you want to assert something in a way that isn't available, there's always the (Groovy) Script Assertion option! In this recipe, we'll use option 3 and create a Script Assertion using JsonSlurper (http://groovy.codehaus.org/gapi/groovy/json/JsonSlurper.html) to parse and assert the invoice values that we expect to see in the response. How to do it.. First, we'll need to add a SoapUI TestSuite, TestCase, and REST Test Request TestStep to our invoice project. Then, we'll add a new Script Assertion to TestStep to parse and assert that the invoice response values are according to what we expect. Right-click on the Invoice API service and select New TestSuite and enter a name, for example, TestSuite. Right-click on the TestSuite and select New TestCase and enter a name, for example, TestCase. Right-click on TestCase:     Select Add TestStep | REST Test Request.     Enter a name, for example, Get Invoice.     When prompted to select REST method to invoke for request, choose Invoice API -> /{id} -> get -> Request 1.     
This should create a preconfigured REST Test Request TestStep, like the one in the following screenshot. Open the Assertions tab:
Right-click on the Assertions tab.
Select Add Assertion | Script Assertion.
Paste the following Groovy script:

import groovy.json.JsonSlurper

def responseJson = messageExchange.response.contentAsString
def slurper = new JsonSlurper().parseText(responseJson)
assert slurper.invoice.id=='12345'
assert slurper.invoice.companyName=='Test Company'
assert slurper.invoice.amount==100.0

Start Invoice API MockService if it isn't running. Now, if you run the REST Test Request TestStep, the Script Assertion should pass, and you should see something like this:

Optionally, to make sure the Script Assertion is checking the values, do the following:
Edit Response 200 in Invoice API MockService.
Change the id value from 12345 to 54321.
Rerun the Get Invoice TestStep, and you should now see an Assertion failure like this:

com.eviware.soapui.impl.wsdl.teststeps.assertions.basic.GroovyScriptAssertion@6fe0c5f3
assert slurper.invoice.id=='12345'
       |       |       | |
       |       |       | false
       |       |       54321
       |       [amount:100.0, id:54321, companyName:Test Company]
       [invoice:[amount:100.0, id:54321, companyName:Test Company]]

How it works...

JsonSlurper is packaged as a part of the standard groovy-all-x.x.x.jar that SoapUI uses, so there is no need to add any additional libraries to use it in Groovy scripts. However, we do need an import statement for the JsonSlurper class to use it in our Script Assertion, as it is not part of the Groovy language itself. Similar to Groovy TestSteps, Script Assertions have access to special variables that SoapUI populates and passes in when the Assertion is executed. In this case, we use the messageExchange variable to obtain the response content as a JSON String. Then, JsonSlurper is used to parse the JSON String into a convenient object model for us to query and use in standard Groovy assert statements.

There's more...

Another very similar option would have been to create a Script Assertion that uses JsonPath (https://code.google.com/p/json-path/). However, since JsonPath is not a standard Groovy library, we would have needed to add its JAR files to the <SoapUI home>/java/app/bin/ext folder, like the RAML Parser in the previous recipe.

See also

You may also want to check the response for JSON schema compliance. For an example of how to do this, see the Testing response compliance using JSON schemas recipe in Chapter 4, Web Service Test Scenarios of the SoapUI Cookbook.
To understand more about SoapUI Groovy scripting, the SoapUI Cookbook has numerous examples explained throughout its chapters, and explores many common use cases when scripting the SoapUI framework.
Some interesting SoapUI Groovy scripting examples can also be found at http://www.soapui.org/scripting---properties/tips---tricks.html.

Summary

In this article, you learned about installing the SoapUI RAML plugin, generating a SoapUI REST project and mock service using the RAML plugin, and testing response values using JSON Slurper.

Resources for Article:
Further resources on this subject:
SOAP and PHP 5 [article]
Web Services Testing and soapUI [article]
ADempiere 3.6: Building and Configuring Web Services [article]