KnockoutJS Templates

Packt
04 Mar 2015
38 min read
In this article by Jorge Ferrando, author of the book KnockoutJS Essentials, we are going to talk about how to design our templates with the native engine, and then we will look at mechanisms and external libraries we can use to improve the Knockout template engine.

When our code begins to grow, it's necessary to split it into several parts to keep it maintainable. When we split JavaScript code, we talk about modules, classes, functions, libraries, and so on. When we talk about HTML, we call these parts templates. KnockoutJS has a native template engine that we can use to manage our HTML. It is very simple, but it also has a big inconvenience: templates must be loaded in the current HTML page. This is not a problem if our app is small, but it can become one if our application begins to need more and more templates.

Preparing the project

First of all, we are going to add some style to the page. Add a file called style.css into the css folder and add a reference to it in the index.html file, just below the Bootstrap reference. The following is the content of the file:

    .container-fluid { margin-top: 20px; }
    .row { margin-bottom: 20px; }
    .cart-unit { width: 80px; }
    .btn-xs { font-size: 8px; }
    .list-group-item { overflow: hidden; }
    .list-group-item h4 { float: left; width: 100px; }
    .list-group-item .input-group-addon { padding: 0; }
    .btn-group-vertical > .btn-default { border-color: transparent; }
    .form-control[disabled], .form-control[readonly] {
      background-color: transparent !important;
    }

Now remove all the content from the body tag except for the script tags and paste in these lines:

    <div class="container-fluid">
      <div class="row" id="catalogContainer">
        <div class="col-xs-12" data-bind="template:{name:'header'}"></div>
        <div class="col-xs-6" data-bind="template:{name:'catalog'}"></div>
        <div id="cartContainer" class="col-xs-6 well hidden"
             data-bind="template:{name:'cart'}"></div>
      </div>
      <div class="row hidden" id="orderContainer"
           data-bind="template:{name:'order'}">
      </div>
      <div data-bind="template: {name:'add-to-catalog-modal'}"></div>
      <div data-bind="template: {name:'finish-order-modal'}"></div>
    </div>

Let's review this code. We have two row classes; they will be our containers. The first container has the id value catalogContainer and it will contain the catalog view and the cart. The second one is referenced by the id value orderContainer, and we will set our final order there. We also have two more <div> tags at the bottom: one will contain the modal dialog that shows the form to add products to our catalog, and the other will contain a modal message telling the user that the order is finished.

Along with this code you can see a template binding inside the data-bind attribute. This is the binding that Knockout uses to bind templates to the element. It contains a name parameter that represents the ID of a template:

    <div class="col-xs-12" data-bind="template:{name:'header'}"></div>

In this example, this <div> element will contain the HTML that is inside the <script> tag with the ID header.

Creating templates

Template elements are commonly declared at the bottom of the body, just above the <script> tags that have references to our external libraries.
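As a minimal illustration of the native engine (the template name and the userName property here are made up for the example; they are not part of the shopping-cart code), a named template and the element that renders it look like this:

    <script type="text/html" id="greeting">
      <strong data-bind="text: userName"></strong> says hello!
    </script>

    <div data-bind="template: { name: 'greeting' }"></div>

Knockout takes the markup inside the <script> tag whose ID matches the name parameter and renders it inside the bound element, evaluating its bindings against the current view-model (which would need a userName property in this sketch).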
We are going to define some templates and then we will talk about each one of them: <!-- templates --> <script type="text/html" id="header"></script> <script type="text/html" id="catalog"></script> <script type="text/html" id="add-to-catalog-modal"></script> <script type="text/html" id="cart-widget"></script> <script type="text/html" id="cart-item"></script> <script type="text/html" id="cart"></script> <script type="text/html" id="order"></script> <script type="text/html" id="finish-order-modal"></script> Each template name is descriptive enough by itself, so it's easy to know what we are going to set inside them. Let's see a diagram showing where we dispose each template on the screen:   Notice that the cart-item template will be repeated for each item in the cart collection. Modal templates will appear only when a modal dialog is displayed. Finally, the order template is hidden until we click to confirm the order. In the header template, we will have the title and the menu of the page. The add-to-catalog-modal template will contain the modal that shows the form to add a product to our catalog. The cart-widget template will show a summary of our cart. The cart-item template will contain the template of each item in the cart. The cart template will have the layout of the cart. The order template will show the final list of products we want to buy and a button to confirm our order. The header template Let's begin with the HTML markup that should contain the header template: <script type="text/html" id="header"> <h1>    Catalog </h1>   <button class="btn btn-primary btn-sm" data-toggle="modal"     data-target="#addToCatalogModal">    Add New Product </button> <button class="btn btn-primary btn-sm" data-bind="click:     showCartDetails, css:{ disabled: cart().length < 1}">    Show Cart Details </button> <hr/> </script> We define a <h1> tag, and two <button> tags. The first button tag is attached to the modal that has the ID #addToCatalogModal. Since we are using Bootstrap as the CSS framework, we can attach modals by ID using the data-target attribute, and activate the modal using the data-toggle attribute. The second button will show the full cart view and it will be available only if the cart has items. To achieve this, there are a number of different ways. The first one is to use the CSS-disabled class that comes with Twitter Bootstrap. This is the way we have used in the example. CSS binding allows us to activate or deactivate a class in the element depending on the result of the expression that is attached to the class. The other method is to use the enable binding. This binding enables an element if the expression evaluates to true. We can use the opposite binding, which is named disable. There is a complete documentation on the Knockout website http://knockoutjs.com/documentation/enable-binding.html: <button class="btn btn-primary btn-sm" data-bind="click:   showCartDetails, enable: cart().length > 0"> Show Cart Details </button>   <button class="btn btn-primary btn-sm" data-bind="click:   showCartDetails, disable: cart().length < 1"> Show Cart Details </button> The first method uses CSS classes to enable and disable the button. The second method uses the HTML attribute, disabled. We can use a third option, which is to use a computed observable. We can create a computed observable variable in our view-model that returns true or false depending on the length of the cart: //in the viewmodel. 
Remember to expose it var cartHasProducts = ko.computed(function(){ return (cart().length > 0); }); //HTML <button class="btn btn-primary btn-sm" data-bind="click:   showCartDetails, enable: cartHasProducts"> Show Cart Details </button> To show the cart, we will use the click binding. Now we should go to our viewmodel.js file and add all the information we need to make this template work: var cart = ko.observableArray([]); var showCartDetails = function () { if (cart().length > 0) {    $("#cartContainer").removeClass("hidden"); } }; And you should expose these two objects in the view-model: return {    searchTerm: searchTerm,    catalog: filteredCatalog,    newProduct: newProduct,    totalItems:totalItems,    addProduct: addProduct,    cart: cart,    showCartDetails: showCartDetails, }; The catalog template The next step is to define the catalog template just below the header template: <script type="text/html" id="catalog"> <div class="input-group">    <span class="input-group-addon">      <i class="glyphicon glyphicon-search"></i> Search    </span>    <input type="text" class="form-control" data-bind="textInput:       searchTerm"> </div> <table class="table">    <thead>    <tr>      <th>Name</th>      <th>Price</th>      <th>Stock</th>      <th></th>    </tr>    </thead>    <tbody data-bind="foreach:catalog">    <tr data-bind="style:color:stock() < 5?'red':'black'">      <td data-bind="text:name"></td>      <td data-bind="text:price"></td>      <td data-bind="text:stock"></td>      <td>        <button class="btn btn-primary"          data-bind="click:$parent.addToCart">          <i class="glyphicon glyphicon-plus-sign"></i> Add        </button>      </td>    </tr>    </tbody>    <tfoot>    <tr>      <td colspan="3">        <strong>Items:</strong><span           data-bind="text:catalog().length"></span>      </td>      <td colspan="1">        <span data-bind="template:{name:'cart-widget'}"></span>      </td>    </tr>    </tfoot> </table> </script> Now, each line uses the style binding to alert the user, while they are shopping, that the stock is reaching the maximum limit. The style binding works the same way that CSS binding does with classes. It allows us to add style attributes depending on the value of the expression. In this case, the color of the text in the line must be black if the stock is higher than five, and red if it is four or less. We can use other CSS attributes, so feel free to try other behaviors. For example, set the line of the catalog to green if the element is inside the cart. We should remember that if an attribute has dashes, you should wrap it in single quotes. For example, background-color will throw an error, so you should write 'background-color'. When we work with bindings that are activated depending on the values of the viewmodel, it is good practice to use computed observables. Therefore, we can create a computed value in our product model that returns the value of the color that should be displayed: //In the Product.js var _lineColor = ko.computed(function(){ return (_stock() < 5)? 'red' : 'black'; }); return { lineColor:_lineColor }; //In the template <tr data-bind="style:lineColor"> ... </tr> It would be even better if we create a class in our style.css file that is called stock-alert and we use the CSS binding: //In the style file .stock-alert { color: #f00; } //In the Product.js var _hasStock = ko.computed(function(){ return (_stock() < 5);   }); return { hasStock: _hasStock }; //In the template <tr data-bind="css: hasStock"> ... 
    </tr>

Now, look inside the <tfoot> tag:

    <td colspan="1">
      <span data-bind="template:{name:'cart-widget'}"></span>
    </td>

As you can see, we can have nested templates. In this case, we have the cart-widget template inside our catalog template. This gives us the possibility of having very complex templates, splitting them into very small pieces, and combining them, to keep our code clean and maintainable.

Finally, look at the last cell of each row:

    <td>
      <button class="btn btn-primary" data-bind="click:$parent.addToCart">
        <i class="glyphicon glyphicon-plus-sign"></i> Add
      </button>
    </td>

Look at how we call the addToCart method using the magic variable $parent. Knockout gives us some magic words to navigate through the different contexts we have in our app. In this case, we are in the catalog context and we want to call a method that lies one level up, so we can use the magic variable called $parent.

There are other variables we can use when we are inside a Knockout context. There is complete documentation on the Knockout website at http://knockoutjs.com/documentation/binding-context.html. In this project, we are not going to use all of them, but we are going to quickly explain these binding context variables, just to understand them better.

If we don't know how many levels deep we are, we can navigate to the top of the view-model using the magic word $root. When we have many parents, we can use the magic array $parents and access each parent using indexes, for example, $parents[0], $parents[1]. Imagine that you have a list of categories where each category contains a list of products. These products are a list of IDs, and the category has a method to get the name of its products. We can use the $parents array to obtain the reference to the category:

    <ul data-bind="foreach: {data: categories}">
      <li data-bind="text: $data.name"></li>
      <ul data-bind="foreach: {data: $data.products, as: 'prod'}">
        <li data-bind="text: $parents[0].getProductName(prod.ID)"></li>
      </ul>
    </ul>

Look how helpful the as attribute is inside the foreach binding; it makes the code more readable. If you are inside a foreach loop, you can also access each item using the $data magic variable, and you can access the position index that each element has in the collection using the $index magic variable. For example, if we have a list of products, we can do this:

    <ul data-bind="foreach: cart">
      <li>
        <span data-bind="text:$index"></span> -
        <span data-bind="text:$data.name"></span>
      </li>
    </ul>

This should display:

    0 - Product 1
    1 - Product 2
    2 - Product 3
    ...

KnockoutJS magic variables to navigate through contexts

Now that we know more about what binding context variables are, let's go back to our code. We are now going to write the addToCart method, and we are going to define the cart items in our js/models folder.
Create a file called CartProduct.js and insert the following code in it: //js/models/CartProduct.js var CartProduct = function (product, units) { "use strict";   var _product = product,    _units = ko.observable(units);   var subtotal = ko.computed(function(){    return _product.price() * _units(); });   var addUnit = function () {    var u = _units();    var _stock = _product.stock();    if (_stock === 0) {      return;    } _units(u+1);    _product.stock(--_stock); };   var removeUnit = function () {    var u = _units();    var _stock = _product.stock();    if (u === 0) {      return;    }    _units(u-1);    _product.stock(++_stock); };   return {    product: _product,    units: _units,    subtotal: subtotal,    addUnit : addUnit,    removeUnit: removeUnit, }; }; Each cart product is composed of the product itself and the units of the product we want to buy. We will also have a computed field that contains the subtotal of the line. We should give the object the responsibility for managing its units and the stock of the product. For this reason, we have added the addUnit and removeUnit methods. These methods add one unit or remove one unit of the product if they are called. We should reference this JavaScript file into our index.html file with the other <script> tags. In the viewmodel, we should create a cart array and expose it in the return statement, as we have done earlier: var cart = ko.observableArray([]); It's time to write the addToCart method: var addToCart = function(data) { var item = null; var tmpCart = cart(); var n = tmpCart.length; while(n--) {    if (tmpCart[n].product.id() === data.id()) {      item = tmpCart[n];    } } if (item) {    item.addUnit(); } else {    item = new CartProduct(data,0);    item.addUnit();    tmpCart.push(item);       } cart(tmpCart); }; This method searches the product in the cart. If it exists, it updates its units, and if not, it creates a new one. Since the cart is an observable array, we need to get it, manipulate it, and overwrite it, because we need to access the product object to know if the product is in the cart. Remember that observable arrays do not observe the objects they contain, just the array properties. The add-to-cart-modal template This is a very simple template. 
We just wrap the code to add a product to a Bootstrap modal: <script type="text/html" id="add-to-catalog-modal"> <div class="modal fade" id="addToCatalogModal">    <div class="modal-dialog">      <div class="modal-content">        <form class="form-horizontal" role="form"           data-bind="with:newProduct">          <div class="modal-header">            <button type="button" class="close"               data-dismiss="modal">              <span aria-hidden="true">&times;</span>              <span class="sr-only">Close</span>            </button><h3>Add New Product to the Catalog</h3>          </div>          <div class="modal-body">            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                  placeholder="Name" data-bind="textInput:name">              </div>            </div>            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                   placeholder="Price" data-bind="textInput:price">              </div>            </div>            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                   placeholder="Stock" data-bind="textInput:stock">              </div>            </div>          </div>          <div class="modal-footer">            <div class="form-group">              <div class="col-sm-12">                <button type="submit" class="btn btn-default"                  data-bind="{click:$parent.addProduct}">                  <i class="glyphicon glyphicon-plus-sign">                  </i> Add Product                </button>              </div>            </div>          </div>        </form>      </div><!-- /.modal-content -->    </div><!-- /.modal-dialog --> </div><!-- /.modal --> </script> The cart-widget template This template gives the user information quickly about how many items are in the cart and how much all of them cost: <script type="text/html" id="cart-widget"> Total Items: <span data-bind="text:totalItems"></span> Price: <span data-bind="text:grandTotal"></span> </script> We should define totalItems and grandTotal in our viewmodel: var totalItems = ko.computed(function(){ var tmpCart = cart(); var total = 0; tmpCart.forEach(function(item){    total += parseInt(item.units(),10); }); return total; }); var grandTotal = ko.computed(function(){ var tmpCart = cart(); var total = 0; tmpCart.forEach(function(item){    total += (item.units() * item.product.price()); }); return total; }); Now you should expose them in the return statement, as we always do. Don't worry about the format now, you will learn how to format currency or any kind of data in the future. Now you must focus on learning how to manage information and how to show it to the user. 
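If you do want a quick preview of formatting, one possible approach (a sketch, not the book's code; the '$' prefix and toFixed(2) are simple assumptions rather than a locale-aware solution) is to expose an extra computed that formats grandTotal for display:

    // A formatted version of grandTotal, for display purposes only.
    var grandTotalFormatted = ko.computed(function () {
      return '$' + grandTotal().toFixed(2);
    });

You would expose grandTotalFormatted in the return statement and bind to it instead of grandTotal wherever you want the formatted value.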
The cart-item template

The cart-item template displays each line in the cart:

    <script type="text/html" id="cart-item">
      <div class="list-group-item" style="overflow: hidden">
        <button type="button" class="close pull-right"
                data-bind="click:$root.removeFromCart"><span>&times;</span></button>
        <h4 class="" data-bind="text:product.name"></h4>
        <div class="input-group cart-unit">
          <input type="text" class="form-control" data-bind="textInput:units" readonly/>
          <span class="input-group-addon">
            <div class="btn-group-vertical">
              <button class="btn btn-default btn-xs" data-bind="click:addUnit">
                <i class="glyphicon glyphicon-chevron-up"></i>
              </button>
              <button class="btn btn-default btn-xs" data-bind="click:removeUnit">
                <i class="glyphicon glyphicon-chevron-down"></i>
              </button>
            </div>
          </span>
        </div>
      </div>
    </script>

We set an x button in the top-right corner of each line to easily remove that line from the cart. As you can see, we have used the $root magic variable to navigate to the top context, because we are going to use this template inside a foreach loop, which means the template will be in the loop context. If we consider this template as an isolated element, we can't be sure how deep we are in the context navigation, so to make sure we reach the right context to call the removeFromCart method, it's better to use $root instead of $parent in this case.

The code for removeFromCart should live in the viewmodel context and should look like this:

    var removeFromCart = function (data) {
      var units = data.units();
      var stock = data.product.stock();
      data.product.stock(units + stock);
      cart.remove(data);
    };

Notice that in the addToCart method, we got the array that is inside the observable because we needed to navigate through the elements of the array. In this case, Knockout observable arrays have a method called remove that removes the object we pass as a parameter; if the object is in the array, it is removed. Remember that the data context is always passed as the first parameter to the function we use in click events.

The cart template

The cart template should display the layout of the cart:

    <script type="text/html" id="cart">
      <button type="button" class="close pull-right" data-bind="click:hideCartDetails">
        <span>&times;</span>
      </button>
      <h1>Cart</h1>
      <div data-bind="template: {name: 'cart-item', foreach:cart}" class="list-group"></div>
      <div data-bind="template:{name:'cart-widget'}"></div>
      <button class="btn btn-primary btn-sm" data-bind="click:showOrder">
        Confirm Order
      </button>
    </script>

It's important that you notice the template binding just below <h1>Cart</h1>. We are binding a template with an array using the foreach argument. With this binding, Knockout renders the cart-item template for each element inside the cart collection. This considerably reduces the code we write in each template and also makes templates more readable. We have once again used the cart-widget template to show the total items and the total amount; this is one of the good features of templates, we can reuse content over and over. Observe that we have put one button at the top-right of the cart to close it when we don't need to see its details, and another one to confirm the order when we are done.
The code in our viewmodel should be as follows: var hideCartDetails = function () { $("#cartContainer").addClass("hidden"); }; var showOrder = function () { $("#catalogContainer").addClass("hidden"); $("#orderContainer").removeClass("hidden"); }; As you can see, to show and hide elements we use jQuery and CSS classes from the Bootstrap framework. The hidden class just adds the display: none style to the elements. We just need to toggle this class to show or hide elements in our view. Expose these two methods in the return statement of your view-model. We will come back to this when we need to display the order template. This is the result once we have our catalog and our cart:   The order template Once we have clicked on the Confirm Order button, the order should be shown to us, to review and confirm if we agree. <script type="text/html" id="order"> <div class="col-xs-12">    <button class="btn btn-sm btn-primary"       data-bind="click:showCatalog">      Back to catalog    </button>    <button class="btn btn-sm btn-primary"       data-bind="click:finishOrder">      Buy & finish    </button> </div> <div class="col-xs-6">    <table class="table">      <thead>      <tr>        <th>Name</th>        <th>Price</th>        <th>Units</th>        <th>Subtotal</th>      </tr>      </thead>      <tbody data-bind="foreach:cart">      <tr>        <td data-bind="text:product.name"></td>        <td data-bind="text:product.price"></td>        <td data-bind="text:units"></td>        <td data-bind="text:subtotal"></td>      </tr>      </tbody>      <tfoot>      <tr>        <td colspan="3"></td>        <td>Total:<span data-bind="text:grandTotal"></span></td>      </tr>      </tfoot>    </table> </div> </script> Here we have a read-only table with all cart lines and two buttons. One is to confirm, which will show the modal dialog saying the order is completed, and the other gives us the option to go back to the catalog and keep on shopping. There is some code we need to add to our viewmodel and expose to the user: var showCatalog = function () { $("#catalogContainer").removeClass("hidden"); $("#orderContainer").addClass("hidden"); }; var finishOrder = function() { cart([]); hideCartDetails(); showCatalog(); $("#finishOrderModal").modal('show'); }; As we have done in previous methods, we add and remove the hidden class from the elements we want to show and hide. The finishOrder method removes all the items of the cart because our order is complete; hides the cart and shows the catalog. It also displays a modal that gives confirmation to the user that the order is done.  
Order details template The finish-order-modal template The last template is the modal that tells the user that the order is complete: <script type="text/html" id="finish-order-modal"> <div class="modal fade" id="finishOrderModal">    <div class="modal-dialog">            <div class="modal-content">        <div class="modal-body">        <h2>Your order has been completed!</h2>        </div>        <div class="modal-footer">          <div class="form-group">            <div class="col-sm-12">              <button type="submit" class="btn btn-success"                 data-dismiss="modal">Continue Shopping              </button>            </div>          </div>        </div>      </div><!-- /.modal-content -->    </div><!-- /.modal-dialog --> </div><!-- /.modal --> </script> The following screenshot displays the output:   Handling templates with if and ifnot bindings You have learned how to show and hide templates with the power of jQuery and Bootstrap. This is quite good because you can use this technique with any framework you want. The problem with this type of code is that since jQuery is a DOM manipulation library, you need to reference elements to manipulate them. This means you need to know over which element you want to apply the action. Knockout gives us some bindings to hide and show elements depending on the values of our view-model. Let's update the show and hide methods and the templates. Add both the control variables to your viewmodel and expose them in the return statement. var visibleCatalog = ko.observable(true); var visibleCart = ko.observable(false); Now update the show and hide methods: var showCartDetails = function () { if (cart().length > 0) {    visibleCart(true); } };   var hideCartDetails = function () { visibleCart(false); };   var showOrder = function () { visibleCatalog(false); };   var showCatalog = function () { visibleCatalog(true); }; We can appreciate how the code becomes more readable and meaningful. Now, update the cart template, the catalog template, and the order template. In index.html, consider this line: <div class="row" id="catalogContainer"> Replace it with the following line: <div class="row" data-bind="if: visibleCatalog"> Then consider the following line: <div id="cartContainer" class="col-xs-6 well hidden"   data-bind="template:{name:'cart'}"></div> Replace it with this one: <div class="col-xs-6" data-bind="if: visibleCart"> <div class="well" data-bind="template:{name:'cart'}"></div> </div> It is important to know that the if binding and the template binding can't share the same data-bind attribute. This is why we go from one element to two nested elements in this template. In other words, this example is not allowed: <div class="col-xs-6" data-bind="if:visibleCart,   template:{name:'cart'}"></div> Finally, consider this line: <div class="row hidden" id="orderContainer"   data-bind="template:{name:'order'}"> Replace it with this one: <div class="row" data-bind="ifnot: visibleCatalog"> <div data-bind="template:{name:'order'}"></div> </div> With the changes we have made, showing or hiding elements now depends on our data and not on our CSS. This is much better because now we can show and hide any element we want using the if and ifnot binding. 
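A related option, not used in this example, is the visible binding. Unlike if, it only toggles the element's CSS display property, so the template stays rendered in the DOM, and it can share a data-bind attribute with the template binding. A sketch of what that would look like for the cart:

    <div class="col-xs-6 well" data-bind="visible: visibleCart, template: {name: 'cart'}"></div>

Whether if or visible is the better choice depends on whether you want the hidden markup and its bindings to keep existing while the element is not shown.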
Let's review, roughly speaking, how our files look now. We have our index.html file that has the main container, templates, and libraries:

    <!DOCTYPE html>
    <html>
    <head>
      <title>KO Shopping Cart</title>
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <link rel="stylesheet" type="text/css" href="css/bootstrap.min.css">
      <link rel="stylesheet" type="text/css" href="css/style.css">
    </head>
    <body>

    <div class="container-fluid">
      <div class="row" data-bind="if: visibleCatalog">
        <div class="col-xs-12" data-bind="template:{name:'header'}"></div>
        <div class="col-xs-6" data-bind="template:{name:'catalog'}"></div>
        <div class="col-xs-6" data-bind="if: visibleCart">
          <div class="well" data-bind="template:{name:'cart'}"></div>
        </div>
      </div>
      <div class="row" data-bind="ifnot: visibleCatalog">
        <div data-bind="template:{name:'order'}"></div>
      </div>
      <div data-bind="template: {name:'add-to-catalog-modal'}"></div>
      <div data-bind="template: {name:'finish-order-modal'}"></div>
    </div>

    <!-- templates -->
    <script type="text/html" id="header"> ... </script>
    <script type="text/html" id="catalog"> ... </script>
    <script type="text/html" id="add-to-catalog-modal"> ... </script>
    <script type="text/html" id="cart-widget"> ... </script>
    <script type="text/html" id="cart-item"> ... </script>
    <script type="text/html" id="cart"> ... </script>
    <script type="text/html" id="order"> ... </script>
    <script type="text/html" id="finish-order-modal"> ... </script>

    <!-- libraries -->
    <script type="text/javascript" src="js/vendors/jquery.min.js"></script>
    <script type="text/javascript" src="js/vendors/bootstrap.min.js"></script>
    <script type="text/javascript" src="js/vendors/knockout.debug.js"></script>
    <script type="text/javascript" src="js/models/product.js"></script>
    <script type="text/javascript" src="js/models/cartProduct.js"></script>
    <script type="text/javascript" src="js/viewmodel.js"></script>

    </body>
    </html>

We also have our viewmodel.js file:

    var vm = (function () {
      "use strict";
      var visibleCatalog = ko.observable(true);
      var visibleCart = ko.observable(false);
      var catalog = ko.observableArray([...]);
      var cart = ko.observableArray([]);
      var newProduct = {...};
      var totalItems = ko.computed(function(){...});
      var grandTotal = ko.computed(function(){...});
      var searchTerm = ko.observable("");
      var filteredCatalog = ko.computed(function () {...});
      var addProduct = function (data) {...};
      var addToCart = function(data) {...};
      var removeFromCart = function (data) {...};
      var showCartDetails = function () {...};
      var hideCartDetails = function () {...};
      var showOrder = function () {...};
      var showCatalog = function () {...};
      var finishOrder = function() {...};
      return {
        searchTerm: searchTerm,
        catalog: filteredCatalog,
        cart: cart,
        newProduct: newProduct,
        totalItems: totalItems,
        grandTotal: grandTotal,
        addProduct: addProduct,
        addToCart: addToCart,
        removeFromCart: removeFromCart,
        visibleCatalog: visibleCatalog,
        visibleCart: visibleCart,
        showCartDetails: showCartDetails,
        hideCartDetails: hideCartDetails,
        showOrder: showOrder,
        showCatalog: showCatalog,
        finishOrder: finishOrder
      };
    })();

    ko.applyBindings(vm);

While debugging, it is useful to globalize the view-model. It is not good practice in production environments, but it is handy when you are debugging your application:

    window.vm = vm;

Now you have easy access to your view-model from the browser debugger or from your IDE debugger.
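If you want to keep that convenience without shipping the global to production, one option (a sketch, not from the book, and it assumes you serve the app from localhost during development) is to expose it conditionally:

    // Expose the view-model for debugging only when running locally.
    if (/^(localhost|127\.0\.0\.1)$/.test(window.location.hostname)) {
      window.vm = vm;
    }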
In addition to the product model, we have created a new model called CartProduct: var CartProduct = function (product, units) { "use strict"; var _product = product,    _units = ko.observable(units); var subtotal = ko.computed(function(){...}); var addUnit = function () {...}; var removeUnit = function () {...}; return {    product: _product,    units: _units,    subtotal: subtotal,    addUnit : addUnit,    removeUnit: removeUnit }; }; You have learned how to manage templates with Knockout, but maybe you have noticed that having all templates in the index.html file is not the best approach. We are going to talk about two mechanisms. The first one is more home-made and the second one is an external library used by lots of Knockout developers, created by Jim Cowart, called Knockout.js-External-Template-Engine (https://github.com/ifandelse/Knockout.js-External-Template-Engine). Managing templates with jQuery Since we want to load templates from different files, let's move all our templates to a folder called views and make one file per template. Each file will have the same name the template has as an ID. So if the template has the ID, cart-item, the file should be called cart-item.html and will contain the full cart-item template: <script type="text/html" id="cart-item"></script>  The views folder with all templates Now in the viewmodel.js file, remove the last line (ko.applyBindings(vm)) and add this code: var templates = [ 'header', 'catalog', 'cart', 'cart-item', 'cart-widget', 'order', 'add-to-catalog-modal', 'finish-order-modal' ];   var busy = templates.length; templates.forEach(function(tpl){ "use strict"; $.get('views/'+ tpl + '.html').then(function(data){    $('body').append(data);    busy--;    if (!busy) {      ko.applyBindings(vm);    } }); }); This code gets all the templates we need and appends them to the body. Once all the templates are loaded, we call the applyBindings method. We should do it this way because we are loading templates asynchronously and we need to make sure that we bind our view-model when all templates are loaded. This is good enough to make our code more maintainable and readable, but is still problematic if we need to handle lots of templates. Further more, if we have nested folders, it becomes a headache listing all our templates in one array. There should be a better approach. Managing templates with koExternalTemplateEngine We have seen two ways of loading templates, both of them are good enough to manage a low number of templates, but when lines of code begin to grow, we need something that allows us to forget about template management. We just want to call a template and get the content. For this purpose, Jim Cowart's library, koExternalTemplateEngine, is perfect. This project was abandoned by the author in 2014, but it is still a good library that we can use when we develop simple projects. We just need to download the library in the js/vendors folder and then link it in our index.html file just below the Knockout library. <script type="text/javascript" src="js/vendors/knockout.debug.js"></script> <script type="text/javascript"   src="js/vendors/koExternalTemplateEngine_all.min.js"></script> Now you should configure it in the viewmodel.js file. Remove the templates array and the foreach statement, and add these three lines of code: infuser.defaults.templateSuffix = ".html"; infuser.defaults.templateUrl = "views"; ko.applyBindings(vm); Here, infuser is a global variable that we use to configure the template engine. 
We should indicate which suffix our templates will have and in which folder they will be located. We don't need the <script type="text/html" id="template-id"></script> tags any more, so we should remove them from each file.

So now everything should be working, and the code we needed to make it happen was not much. KnockoutJS has its own template engine, but you can see that adding new ones is not difficult. If you have experience with other template engines such as jQuery Templates, Underscore, or Handlebars, just load them in your index.html file and use them; there is no problem with that. This is why Knockout is beautiful: you can use any tool you like with it.

You have learned a lot of things in this article, haven't you?

- Knockout gives us the CSS binding to activate and deactivate CSS classes according to an expression.
- We can use the style binding to add CSS rules to elements.
- The template binding helps us manage templates that are already loaded in the DOM.
- We can iterate over collections with the foreach binding.
- Inside a foreach, Knockout gives us some magic variables such as $parent, $parents, $index, $data, and $root.
- We can use the as binding along with the foreach binding to get an alias for each element.
- We can show and hide content using just jQuery and CSS.
- We can show and hide content using the if, ifnot, and visible bindings.
- jQuery helps us to load Knockout templates asynchronously.
- You can use the koExternalTemplateEngine plugin to manage templates in a more efficient way. The project is abandoned, but it is still a good solution.

Summary

In this article, you have learned how to split an application using templates that share the same view-model. Now that we know the basics, it would be interesting to extend the application. Maybe we can try to create a detailed view of the product, or maybe we can give the user the option to register where to send the order.

Our App and Tool Stack

Packt
04 Mar 2015
33 min read
In this article by Zachariah Moreno, author of the book AngularJS Deployment Essentials, you will learn how to do the following: Minimize efforts and maximize results using a tool stack optimized for AngularJS development Access the krakn app via GitHub Scaffold an Angular app with Yeoman, Grunt, and Bower Set up a local Node.js development server Read through krakn's source code Before NASA or Space X launches a vessel into the cosmos, there is a tremendous amount of planning and preparation involved. The guiding principle when planning for any successful mission is similar to minimizing efforts and resources while retaining maximum return on the mission. Our principles for development and deployment are no exception to this axiom, and you will gain a firmer working knowledge of how to do so in this article. (For more resources related to this topic, see here.) The right tools for the job Web applications can be compared to buildings; without tools, neither would be a pleasure to build. This makes tools an indispensable factor in both development and construction. When tools are combined, they form a workflow that can be repeated across any project built with the same stack, facilitating the practices of design, development, and deployment. The argument can be made that it is just as paramount to document workflow as an application's source code or API. Along with grouping tools into categories based on the phases of building applications, it is also useful to group tools based on the opinions of a respective project—in our case, Angular, Ionic, and Firebase. I call tools grouped into opinionated workflows tool stacks. For example, the remainder of this article discusses the tool stack used to build the application that we will deploy across environments in this book. In contrast, if you were to build a Ruby on Rails application, the tool stack would be completely different because the project's opinions are different. Our app is called krakn, and it functions as a real-time chat application built on top of the opinions of Angular, the Ionic Framework, and Firebase. You can find all of krakn's source code at https://github.com/zachmoreno/krakn. Version control with Git and GitHub Git is a command-line interface (CLI) developed by Linus Torvalds, to use on the famed Linux kernel. Git is mostly popular due to its distributed architecture making it nearly impossible for corruption to occur. Git's distributed architecture means that any remote repository has all of the same information as your local repository. It is useful to think of Git as a free insurance policy for my code. You will need to install Git using the instructions provided at www.git-scm.com/ for your development workstation's operating system. GitHub.com has played a notable role in Git's popularization, turning its functionality into a social network focused on open source code contributions. With a pricing model that incentivizes Open Source contributions and licensing for private, GitHub elevated the use of Git to heights never seen before. If you don't already have an account on GitHub, now is the perfect time to visit github.com to provision a free account. I mentioned earlier that krakn's code is available for forking at github.com/ZachMoreno/krakn. This means that any person with a GitHub account has the ability to view my version of krakn, and clone a copy of their own for further modifications or contributions. 
In GitHub's web application, forking manifests itself as a button located to the right of the repository's title, which in this case is XachMoreno/krakn. When you click on the button, you will see an animation that simulates the hardcore forking action. This results in a cloned repository under your account that will have a title to the tune of YourName/krakn. Node.js Node.js, commonly known as Node, is a community-driven server environment built on Google Chrome's V8 JavaScript runtime that is entirely event driven and facilitates a nonblocking I/O model. According to www.nodejs.org, it is best suited for: "Data-intensive real-time applications that run across distributed devices." So what does all this boil down to? Node empowers web developers to write JavaScript both on the client and server with bidirectional real-time I/O. The advent of Node has empowered developers to take their skills from the client to the server, evolving from frontend to full stack (like a caterpillar evolving into a butterfly). Not only do these skills facilitate a pay increase, they also advance the Web towards the same functionality as the traditional desktop or native application. For our purposes, we use Node as a tool; a tool to build real-time applications in the fewest number of keystrokes, videos watched, and words read as possible. Node is, in fact, a modular tool through its extensible package interface, called Node Package Manager (NPM). You will use NPM as a means to install the remainder of our tool stack. NPM The NPM is a means to install Node packages on your local or remote server. NPM is how we will install the majority of the tools and software used in this book. This is achieved by running the $ npm install –g [PackageName] command in your command line or terminal. To search the full list of Node packages, visit www.npmjs.org or run $ npm search [Search Term] in your command line or terminal as shown in the following screenshot: Yeoman's workflow Yeoman is a CLI that is the glue that holds your tools into your opinionated workflow. Although the term opinionated might sound off-putting, you must first consider the wisdom and experience of the developers and community before you who maintain Yeoman. In this context, opinionated means a little more than a collection of the best practices that are all aimed at improving your developer's experience of building static websites, single page applications, and everything in between. Opinionated does not mean that you are locked into what someone else feels is best for you, nor does it mean that you must strictly adhere to the opinions or best practices included. Yeoman is general enough to help you build nearly anything for the Web as well as improving your workflow while developing it. The tools that make up Yeoman's workflow are Yo, Grunt.js, Bower, and a few others that are more-or-less optional, but are probably worth your time. Yo Apart from having one of the hippest namespaces, Yo is a powerful code generator that is intelligent enough to scaffold most sites and applications. By default, instantiating a yo command assumes that you mean to scaffold something at a project level, but yo can also be scoped more granularly by means of sub-generators. For example, the command for instantiating a new vanilla Angular project is as follows: $ yo angular radicalApp Yo will not finish your request until you provide some further information about your desired Angular project. 
This is achieved by asking you a series of relevant questions, and based on your answers, yo will scaffold a familiar application folder/file structure, along with all the boilerplate code. Note that if you have worked with the angular-seed project, then the Angular application that yo generates will look very familiar to you. Once you have an Angular app scaffolded, you can begin using sub-generator commands. The following command scaffolds a new route, radicalRoute, within radicalApp: $ yo angular:route radicalRoute The :route sub-generator is a very powerful command, as it automates all of the following key tasks: It creates a new file, radicalApp/scripts/controllers/radicalRoute.js, that contains the controller logic for the radicalRoute view It creates another new file, radicalApp/views/radicalRoute.html, that contains the associated view markup and directives Lastly, it adds an additional route within, radicalApp/scripts/app.js, that connects the view to the controller Additionally, the sub-generators for yo angular include the following: :controller :directive :filter :service :provider :factory :value :constant :decorator :view All the sub-generators allow you to execute finer detailed commands for scaffolding smaller components when compared to :route, which executes a combination of sub-generators. Installing Yo Within your workstation's terminal or command-line application type, insert the following command, followed by a return: $ npm install -g yo If you are a Linux or Mac user, you might want to prefix the command with sudo, as follows: $ sudo npm install –g yo Grunt Grunt.js is a task runner that enhances your existing and/or Yeoman's workflow by automating repetitive tasks. Each time you generate a new project with yo, it creates a /Gruntfile.js file that wires up all of the curated tasks. You might have noticed that installing Yo also installs all of Yo's dependencies. Reading through /Gruntfile.js should incite a fair amount of awe, as it gives you a snapshot of what is going on under the hood of Yeoman's curated Grunt tasks and its dependencies. Generating a vanilla Angular app produces a /Gruntfile.js file, as it is responsible for performing the following tasks: It defines where Yo places Bower packages, which is covered in the next section It defines the path where the grunt build command places the production-ready code It initializes the watch task to run: JSHint when JavaScript files are saved Karma's test runner when JavaScript files are saved Compass when SCSS or SASS files are saved The saved /Gruntfile.js file It initializes LiveReload when any HTML or CSS files are saved It configures the grunt server command to run a Node.js server on localhost:9000, or to show test results on localhost:9001 It autoprefixes CSS rules on LiveReload and grunt build It renames files for optimizing browser caching It configures the grunt build command to minify images, SVG, HTML, and CSS files or to safely minify Angular files Let us pause for a moment to reflect on the amount of time it would take to find, learn, and implement each dependency into our existing workflow for each project we undertake. Ok, we should now have a greater appreciation for Yeoman and its community. For the vast majority of the time, you will likely only use a few Grunt commands, which include the following: $ grunt server $ grunt test $ grunt build Bower If Yo scaffolds our application's structure and files, and Grunt automates repetitive tasks for us, then what does Bower bring to the party? 
Bower is web development's missing package manager. Its functionality parallels that of Ruby Gems for the Ruby on Rails MVC framework, but is not limited to any single framework or technology stack. The explicit use of Bower is not required by the Yeoman workflow, but as I mentioned previously, the use of Bower is configured automatically for you in your project's /Gruntfile.js file. How does managing packages improve our development workflow? With all of the time we've been spending in our command lines and terminals, it is handy to have the ability to automate the management of third-party dependencies within our application. This ability manifests itself in a few simple commands, the most ubiquitous being the following command: $ bower install [PackageName] --save With this command, Bower will automate the following steps: First, search its packages for the specified package name Download the latest stable version of the package if found Move the package to the location defined in your project's /Gruntfile.js file, typically a folder named /bower_components Insert dependencies in the form of <link> elements for CSS files in the document's <head> element, and <script> elements for JavaScript files right above the document's closing </body> tag, to the package's files within your project's /index.html file This process is one that web developers are more than familiar with because adding a JavaScript library or new dependency happens multiple times within every project. Bower speeds up our existing manual process through automation and improves it by providing the latest stable version of a package and then notifying us of an update if one is available. This last part, "notifying us of an update if … available", is important because as a web developer advances from one project to the next, it is easy to overlook keeping dependencies as up to date as possible. This is achieved by running the following command: $ bower update This command returns all the available updates, if available, and will go through the same process of inserting new references where applicable. Bower.io includes all of the documentation on how to use Bower to its fullest potential along with the ability to search through all of the available Bower packages. Searching for available Bower packages can also be achieved by running the following command: $ bower search [SearchTerm] If you cannot find the specific dependency for which you search, and the project is on GitHub, consider contributing a bower.json file to the project's root and inviting the owner to register it by running the following command: $ bower register [ThePackageName] [GitEndpoint] Registration allows you to install your dependency by running the next command: $ bower install [ThePackageName] The Ionic framework The Ionic framework is a truly remarkable advancement in bridging the gap between web applications and native mobile applications. In some ways, Ionic parallels Yeoman where it assembles tools that were already available to developers into a neat package, and structures a workflow around them, inherently improving our experience as developers. If Ionic is analogous to Yeoman, then what are the tools that make up Ionic's workflow? The tools that, when combined, make Ionic noteworthy are Apache Cordova, Angular, Ionic's suite of Angular directives, and Ionic's mobile UI framework. Batarang An invaluable piece to our Angular tool stack is the Google Chrome Developer Tools extension, Batarang, by Brian Ford. 
Batarang adds a third-party panel (on the right-hand side of Console) to DevTools that facilitates Angular's specific inspection in the event of debugging. We can view data in the scopes of each model, analyze each expression's performance, and view a beautiful visualization of service dependencies all from within Batarang. Because Angular augments the DOM with ng- attributes, it also provides a Properties pane within the Elements panel, to inspect the models attached to a given element's scope. The extension is easy to install from either the Chrome Web Store or the project's GitHub repository and inspection can be enabled by performing the following steps: Firstly, open the Chrome Developer Tools. You should then navigate to the AngularJS panel. Finally, select the Enable checkbox on the far right tab. Your active Chrome tab will then be reloaded automatically, and the AngularJS panel will begin populating the inspection data. In addition, you can leverage the Angular pane with the Elements panel to view Angular-specific properties at an elemental level, and observe the $scope variable from within the Console panel. Sublime Text and Editor Integration While developing any Angular app, it is helpful to augment our workflow further with Angular-specific syntax completion, snippets, go to definition, and quick panel search in the form of a Sublime Text package. Perform the following steps: If you haven't installed Sublime Text already, you need to first install Package Control. Otherwise, continue with the next step. Once installed, press command + Shift + P in Sublime. Then, you need to select the Package Control: Install Package option. Finally, type angularjs and press Enter on your keyboard. In addition to support within Sublime, Angular enhancements exist for lots of popular editors, including WebStorm, Coda, and TextMate. Krakn As a quick refresher, krakn was constructed using all of the tools that are covered in this article. These include Git, GitHub, Node.js, NPM, Yeoman's workflow, Yo, Grunt, Bower, Batarang, and Sublime Text. The application builds on Angular, Firebase, the Ionic Framework, and a few other minor dependencies. The workflow I used to develop krakn went something like the following. Follow these steps to achieve the same thing. Note that you can skip the remainder of this section if you'd like to get straight to the deployment action, and feel free to rename things where necessary. Setting up Git and GitHub The workflow I followed while developing krakn begins with initializing our local Git repository and connecting it to our remote master repository on GitHub. In order to install and set up both, perform the following steps: Firstly, install all the tool stack dependencies, and create a folder called krakn. Following this, run $ git init, and you will create a README.md file. You should then run $ git add README.md and commit README.md to the local master branch. You then need to create a new remote repository on GitHub called XachMoreno/krakn. Following this, run the following command: $ git remote add origin git@github.com:[YourGitHubUserName] /krakn.git Conclude the setup by running $ git push –u origin master. Scaffolding the app with Yo Scaffolding our app couldn't be easier with the yo ionic generator. To do this, perform the following steps: Firstly, install Yo by running $ npm install -g yo. After this, install generator-ionicjs by running $ npm install -g generator-ionicjs. To conclude the scaffolding of your application, run the yo ionic command. 
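Under the hood, the generator wires up an Angular module and routing boilerplate for you. As a rough, hypothetical sketch (the module name, state name, and file paths here are illustrative and not krakn's actual generated code), the scaffolded app.js ends up looking something like this:

    // Hypothetical skeleton of a freshly scaffolded Ionic app module.
    angular.module('krakn', ['ionic'])
      .config(function ($stateProvider, $urlRouterProvider) {
        $stateProvider.state('chat', {
          url: '/krakn/chat',
          templateUrl: 'views/chat.html',
          controller: 'ChatCtrl'
        });
        // Fall back to the chat view for unknown URLs.
        $urlRouterProvider.otherwise('/krakn/chat');
      });

From there, each new view/controller pair you add follows the same pattern of a state entry pointing at a template and a controller.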
Development After scaffolding the folder structure and boilerplate code, our workflow advances to the development phase, which is encompassed in the following steps: To begin, run grunt server. You are now in a position to make changes, for example, these being deletions or additions. Once these are saved, LiveReload will automatically reload your browser. You can then review the changes in the browser. Repeat steps 2-4 until you are ready to advance to the predeployment phase. Views, controllers, and routes Being a simple chat application, krakn has only a handful of views/routes. They are login, chat, account, menu, and about. The menu view is present in all the other views in the form of an off-canvas menu. The login view The default view/route/controller is named login. The login view utilizes the Firebase's Simple Login feature to authenticate users before proceeding to the rest of the application. Apart from logging into krakn, users can register a new account by entering their desired credentials. An interesting part of the login view is the use of the ng-show directive to toggle the second password field if the user selects the register button. However, the ng-model directive is the first step here, as it is used to pass the input text from the view to the controller and ultimately, the Firebase Simple Login. Other than the Angular magic, this view uses the ion-view directive, grid, and buttons that are all core to Ionic. Each view within an Ionic app is wrapped within an ion-view directive that contains a title attribute as follows: <ion-view title="Login"> The login view uses the standard input elements that contain a ng-model attribute to bind the input's value back to the controller's $scope as follows:   <input type="text" placeholder="you@email.com" ng-model= "data.email" />     <input type="password" placeholder=  "embody strength" ng-model="data.pass" />     <input type="password" placeholder=  "embody strength" ng-model="data.confirm" /> The Log In and Register buttons call their respective functions using the ng-click attribute, with the value set to the function's name as follows:   <button class="button button-block button-positive" ng-  click="login()" ng-hide="createMode">Log In</button> The Register and Cancel buttons set the value of $scope.createMode to true or false to show or hide the correct buttons for either action:   <button class="button button-block button-calm" ng-  click="createMode = true" ng-hide=  "createMode">Register</button>   <button class="button button-block button-calm" ng-  show="createMode" ng-click=  "createAccount()">Create Account</button>     <button class="button button-block button-  assertive" ng-show="createMode" ng-click="createMode =   false">Cancel</button> $scope.err is displayed only when you want to show the feedback to the user:   <p ng-show="err" class="assertive text-center">{{err}}</p>   </ion-view> The login controller is dependent on Firebase's loginService module and Angular's core $location module: controller('LoginCtrl', ['$scope', 'loginService', '$location',   function($scope, loginService, $location) { Ionic's directives tend to create isolated scopes, so it was useful here to wrap our controller's variables within a $scope.data object to avoid issues within the isolated scope as follows:     $scope.data = {       "email"   : null,       "pass"   : null,       "confirm"  : null,       "createMode" : false     } The login() function easily checks the credentials before authentication and sends feedback to the user if 
needed:     $scope.login = function(cb) {       $scope.err = null;       if( !$scope.data.email ) {         $scope.err = 'Please enter an email address';       }       else if( !$scope.data.pass ) {         $scope.err = 'Please enter a password';       } If the credentials are sound, we send them to Firebase for authentication, and when we receive a success callback, we route the user to the chat view using $location.path() as follows:       else {         loginService.login($scope.data.email,         $scope.data.pass, function(err, user) {          $scope.err = err? err + '' : null;          if( !err ) {           cb && cb(user);           $location.path('krakn/chat');          }        });       }     }; The createAccount() function works in much the same way as login(), except that it ensures that the users don't already exist before adding them to your Firebase and logging them in:     $scope.createAccount = function() {       $scope.err = null;       if( assertValidLoginAttempt() ) {        loginService.createAccount($scope.data.email,    $scope.data.pass,          function(err, user) {           if( err ) {             $scope.err = err? err + '' : null;           }           else {             // must be logged in before I can write to     my profile             $scope.login(function() {              loginService.createProfile(user.uid,     user.email);              $location.path('krakn/account');             });           }          });       }     }; The assertValidLoginAttempt() function is a function used to ensure that no errors are received through the account creation and authentication flows:     function assertValidLoginAttempt() {       if( !$scope.data.email ) {        $scope.err = 'Please enter an email address';       }       else if( !$scope.data.pass ) {        $scope.err = 'Please enter a password';       }       else if( $scope.data.pass !== $scope.data.confirm ) {        $scope.err = 'Passwords do not match';       }       return !$scope.err;     }    }]) The chat view Keeping vegan practices aside, the meat and potatoes of krakn's functionality lives within the chat view/controller/route. The design is similar to most SMS clients, with the input in the footer of the view and messages listed chronologically in the main content area. The ng-repeat directive is used to display a message every time a message is added to the messages collection in Firebase. If you submit a message successfully, unsuccessfully, or without any text, feedback is provided via the placeholder attribute of the message input. There are two filters being utilized within the chat view: orderByPriority and timeAgo. The orderByPriority filter is defined within the firebase module that uses the Firebase object IDs that ensure objects are always chronological. The timeAgo filter is an open source Angular module that I found. You can access it at JS Fiddle. The ion-view directive is used once again to contain our chat view: <ion-view title="Chat"> Our list of messages is composed using the ion-list and ion-item directives, in addition to a couple of key attributes. The ion-list directive gives us some nice interactive controls using the option-buttons and can-swipe attributes. 
This results in each list item being swipable to the left, revealing our option-buttons as follows:    <ion-list option-buttons="itemButtons" can-swipe=     "true" ng-show="messages"> Our workhorse in the chat view is the trusty ng-repeat directive, responsible for persisting our data from Firebase to our service to our controller and into our view and back again:    <ion-item ng-repeat="message in messages |      orderByPriority" item="item" can-swipe="true"> Then, we bind our data into vanilla HTML elements that have some custom styles applied to them:     <h2 class="user">{{ message.user }}</h2> The third-party timeago filter converts the time into something such as, "5 min ago", similar to Instagram or Facebook:     <small class="time">{{ message.receivedTime |       timeago }}</small>     <p class="message">{{ message.text }}</p>    </ion-item>   </ion-list> A vanilla input element is used to accept chat messages from our users. The input data is bound to $scope.data.newMessage for sending data to Firebase and $scope.feedback is used to keep our users informed:   <input type="text" class="{{ feeling }}" placeholder=    "{{ feedback }}" ng-model="data.newMessage" /> When you click on the send/submit button, the addMessage() function sends the message to your Firebase, and adds it to the list of chat messages, in real time:   <button type="submit" id="chat-send" class="button button-small button-clear" ng-click="addMessage()"><span class="ion-android-send"></span></button> </ion-view> The ChatCtrl controller is dependant on a few more modules other than our LoginCtrl, including syncData, $ionicScrollDelegate, $ionicLoading, and $rootScope: controller('ChatCtrl', ['$scope', 'syncData', '$ionicScrollDelegate', '$ionicLoading', '$rootScope',    function($scope, syncData, $ionicScrollDelegate, $ionicLoading, $rootScope) { The userName variable is derived from the authenticated user's e-mail address (saved within the application's $rootScope) by splitting the e-mail and using everything before the @ symbol: var userEmail = $rootScope.auth.user.e-mail       userName = userEmail.split('@'); Avoid isolated scope issue in the same fashion, as we did in LoginCtrl:     $scope.data = {       newMessage   : null,       user      : userName[0]     } Our view will only contain the latest 20 messages that have been synced from Firebase:     $scope.messages = syncData('messages', 20); When a new message is saved/synced, it is added to the bottom of the ng-repeated list, so we use the $ionicScrollDeligate variable to automatically scroll the new message into view on the display as follows: $ionicScrollDelegate.scrollBottom(true); Our default chat input placeholder text is something on your mind?:     $scope.feedback = 'something on your mind?';     // displays as class on chat input placeholder     $scope.feeling = 'stable'; If we have a new message and a valid username (shortened), then we can call the $add() function, which syncs the new message to Firebase and our view is as follows:     $scope.addMessage = function() {       if(  $scope.data.newMessage         && $scope.data.user ) {        // new data elements cannot be synced without adding          them to FB Security Rules        $scope.messages.$add({                    text    : $scope.data.newMessage,                    user    : $scope.data.user,                    receivedTime : Number(new Date())                  });        // clean up        $scope.data.newMessage = null; On a successful sync, the feedback updates say Done! 
What's next?, as shown in the following code snippet:        $scope.feedback = 'Done! What's next?';        $scope.feeling = 'stable';       }       else {        $scope.feedback = 'Please write a message before sending';        $scope.feeling = 'assertive';       }     };       $ionicScrollDelegate.scrollBottom(true); ]) The account view The account view allows the logged in users to view their current name and e-mail address along with providing them with the ability to update their password and e-mail address. The input fields interact with Firebase in the same way as the chat view does using the syncData method defined in the firebase module: <ion-view title="'Account'" left-buttons="leftButtons"> The $scope.user object contains our logged in user's account credentials, and we bind them into our view as follows:   <p>{{ user.name }}</p>  …   <p>{{ user.email }}</p> The basic account management functionality is provided within this view; so users can update their e-mail address and or password if they choose to, using the following code snippet:   <input type="password" ng-keypress=    "reset()" ng-model="oldpass"/>  …   <input type="password" ng-keypress=    "reset()" ng-model="newpass"/>  …   <input type="password" ng-keypress=    "reset()" ng-model="confirm"/> Both the updatePassword() and updateEmail() functions work in much the same fashion as our createAccount() function within the LoginCtrl controller. They check whether the new e-mail or password is not the same as the old, and if all is well, it syncs them to Firebase and back again:   <button class="button button-block button-calm" ng-click=    "updatePassword()">update password</button>  …    <p class="error" ng-show="err">{{err}}</p>   <p class="good" ng-show="msg">{{msg}}</p>  …   <input type="text" ng-keypress="reset()" ng-model="newemail"/>  …   <input type="password" ng-keypress="reset()" ng-model="pass"/>  …   <button class="button button-block button-calm" ng-click=    "updateEmail()">update email</button>  …   <p class="error" ng-show="emailerr">{{emailerr}}</p>   <p class="good" ng-show="emailmsg">{{emailmsg}}</p>  … </ion-view> The menu view Within krakn/app/scripts/app.js, the menu route is defined as the only abstract state. Because of its abstract state, it can be presented in the app along with the other views by the ion-side-menus directive provided by Ionic. You might have noticed that only two menu options are available before signing into the application and that the rest appear only after authenticating. This is achieved using the ng-show-auth directive on the chat, account, and log out menu items. The majority of the options for Ionic's directives are available through attributes making them simple to use. For example, take a look at the animation="slide-left-right" attribute. You will find Ionic's use of custom attributes within the directives as one of the ways that the Ionic Framework is setting itself apart from other options within this space. 
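The ng-show-auth attribute used on these menu items is not a core Angular directive; it comes from the Firebase-related seed code that krakn builds on. As a purely illustrative sketch of how such a directive can be written (this is not krakn's actual implementation, and the $firebaseSimpleLogin event names are assumptions based on the login service used earlier), it boils down to watching the broadcast authentication state and toggling the element:

```js
// Illustrative ng-show-auth sketch: show the element only while the current
// auth state matches the state name(s) in the attribute value, for example
// ng-show-auth="'login'" or ng-show-auth="['logout','error']".
angular.module('krakn.directives', [])
  .directive('ngShowAuth', function ($rootScope) {
    return {
      restrict: 'A',
      link: function (scope, element, attrs) {
        var expected = scope.$eval(attrs.ngShowAuth);
        if (!angular.isArray(expected)) { expected = [expected]; }

        function update(state) {
          // hide the element unless the broadcast state is one we expect
          element.toggleClass('ng-hide', expected.indexOf(state) === -1);
        }

        update('logout'); // assume logged out until an event says otherwise
        $rootScope.$on('$firebaseSimpleLogin:login',  function () { update('login');  });
        $rootScope.$on('$firebaseSimpleLogin:logout', function () { update('logout'); });
        $rootScope.$on('$firebaseSimpleLogin:error',  function () { update('error');  });
      }
    };
  });
```

Registering the directive under the ng prefix mirrors the markup shown here, although prefixing your own custom directives with ng is generally discouraged.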
The ion-side-menu directive contains our menu list similarly to the one we previously covered, the ion-view directive, as follows: <ion-side-menus>  <ion-pane ion-side-menu-content>   <ion-nav-bar class="bar-positive"> Our back button is displayed by including the ion-nav-back-button directive within the ion-nav-bar directive:    <ion-nav-back-button class="button-clear"><i class=     "icon ion-chevron-left"></i> Back</ion-nav-back-button>   </ion-nav-bar> Animations within Ionic are exposed and used through the animation attribute, which is built atop the ngAnimate module. In this case, we are doing a simple animation that replicates the experience of a native mobile app:   <ion-nav-view name="menuContent" animation="slide-left-right"></ion-nav-view>  </ion-pane>    <ion-side-menu side="left">   <header class="bar bar-header bar-positive">    <h1 class="title">Menu</h1>   </header>   <ion-content class="has-header"> A simple ion-list directive/element is used to display our navigation items in a vertical list. The ng-show attribute handles the display of menu items before and after a user has authenticated. Before a user logs in, they can access the navigation, but only the About and Log In views are available until after successful authentication.    <ion-list>     <ion-item nav-clear menu-close href=      "#/app/chat" ng-show-auth="'login'">      Chat     </ion-item>       <ion-item nav-clear menu-close href="#/app/about">      About     </ion-item>       <ion-item nav-clear menu-close href=      "#/app/login" ng-show-auth="['logout','error']">      Log In     </ion-item> The Log Out navigation item is only displayed once logged in, and upon a click, it calls the logout() function in addition to navigating to the login view:     <ion-item nav-clear menu-close href="#/app/login" ng-click=      "logout()" ng-show-auth="'login'">      Log Out     </ion-item>    </ion-list>   </ion-content>  </ion-side-menu> </ion-side-menus> The MenuCtrl controller is the simplest controller in this application, as all it contains is the toggleMenu() and logout() functions: controller("MenuCtrl", ['$scope', 'loginService', '$location',   '$ionicScrollDelegate', function($scope, loginService,   $location, $ionicScrollDelegate) {   $scope.toggleMenu = function() {    $scope.sideMenuController.toggleLeft();   };     $scope.logout = function() {     loginService.logout();     $scope.toggleMenu();  };  }]) The about view The about view is 100 percent static, and its only real purpose is to present the credits for all the open source projects used in the application. Global controller constants All of krakn's controllers share only two dependencies: ionic and ngAnimate. Because Firebase's modules are defined within /app/scripts/app.js, they are available for consumption by all the controllers without the need to define them as dependencies. Therefore, the firebase service's syncData and loginService are available to ChatCtrl and LoginCtrl for use. The syncData service is how krakn utilizes three-way data binding provided by krakenjs.com. For example, within the ChatCtrl controller, we use syncData( 'messages', 20 ) to bind the latest twenty messages within the messages collection to $scope for consumption by the chat view. Conversely, when a ng-click user clicks the submit button, we write the data to the messages collection by use of the syncData.$add() method inside the $scope.addMessage() function: $scope.addMessage = function() {   if(...) { $scope.messages.$add({ ... 
});   } }; Models and services The model for krakn is www.krakn.firebaseio.com. The services that consume krakn's Firebase API are as follows: The firebase service in krakn/app/scripts/service.firebase.js The login service in krakn/app/scripts/service.login.js The changeEmail service in krakn/app/scripts/changeEmail.firebase.js The firebase service defines the syncData service that is responsible for routing data bidirectionally between krakn/app/bower_components/angularfire.js and our controllers. Please note that the reason I have not mentioned angularfire.js until this point is that it is basically an abstract data translation layer between firebaseio.com and Angular applications that intend on consuming data as a service. Predeployment Once the majority of an application's development phase has been completed, at least for the initial launch, it is important to run all of the code through a build process that optimizes the file size through compression of images and minification of text files. This piece of the workflow was not overlooked by Yeoman and is available through the use of the $ grunt build command. As mentioned in the section on Grunt, the /Gruntfile.js file defines where built code is placed once it is optimized for deployment. Yeoman's default location for built code is the /dist folder, which might or might not exist depending on whether you have run the grunt build command before. Summary In this article, we discussed the tool stack and workflow used to build the app. Together, Git and Yeoman formed a solid foundation for building krakn. Git and GitHub provided us with distributed version control and a platform for sharing the application's source code with you and the world. Yeoman facilitated the remainder of the workflow: scaffolding with Yo, automation with Grunt, and package management with Bower. With our app fully scaffolded, we were able to build our interface with the directives provided by the Ionic Framework, and wire up the real-time data synchronization forged by our Firebase instance. With a few key tools, we were able to minimize our development time while maximizing our return. Resources for Article: Further resources on this subject: Role of AngularJS? [article] AngularJS Project [article] Creating Our First Animation AngularJS [article]

Deployment Scenarios

Packt
04 Mar 2015
10 min read
In this article by Andrea Gazzarini, author of the book Apache Solr Essentials, contains information on the various ways in which you can deploy Solr, including key features and pros and cons for each scenario. Solr has a wide range of deployment alternatives, from monolithic to distributed indexes and standalone to clustered instances. We will organize this article by deployment scenarios, with a growing level of complexity. This article will cover the following topics: Sharding Replication: master, slave, and repeaters (For more resources related to this topic, see here.) Standalone instance All the examples use a standalone instance of Solr, that is, one or more cores managed by a Solr deployment hosted in a standalone servlet container (for example, Jetty, Tomcat, and so on). This kind of deployment is useful for development because, as you learned, it is very easy to start and debug. Besides, it can also be suitable for a production context if you don't have strict non-functional requirements and have a small or medium amount of data. I have used a standalone instance to provide autocomplete services for small and medium intranet systems. Anyway, the main features of this kind of deployment are simplicity and maintainability; one simple node acts as both an indexer and a searcher. The following diagram depicts a standalone instance with two cores: Shards When a monolithic index becomes too large for a single node or when additions, deletions, or queries take too long to execute, the index can be split into multiple pieces called shards. The previous sentence highlights a logical and theoretical evolution path of a Solr index. However, this (in general) is valid for all scenarios we will describe. It is strongly recommended that you perform a preliminary analysis of your data and the estimated growth factor in order to decide from the beginning the right configuration that suits your requirements. Although it is possible to split an existing index into shards (https://lucene.apache.org/core/4_10_3/misc/org/apache/lucene/index/PKIndexSplitter.html), things definitely become easier if you start directly with a distributed index (if you need it, of course). The index is split vertically so that each shard contains a disjoint set of the entire index. Solr will query and merge results across those shards. The following diagram illustrates a Solr deployment with 3 nodes; this deployment consists of two cores (C1 and C2) divided into three shards (S1, S2, and S3): When using shards, only query requests are distributed. This means that it's up to the indexer to add and distribute the data across nodes, and to subsequently forward a change request (that is, delete, replace, and commit) for a given document to the appropriate shard (the shard that owns the document). The Solr Wiki recommends a simple, hash-based algorithm to determine the shard where a given document should be indexed: documentId.hashCode() % numServers Using this approach is also useful in order to know in advance where to send delete or update requests for a given document. On the opposite side, a searcher client will send a query request to any node, but it has to specify an additional shards parameter that declares the target shards that will be queried. 
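To make that routing rule concrete, here is a small JavaScript sketch of both sides: the indexer picking the shard that owns a document using the hash-based rule quoted above, and a searcher building the shards parameter. The hosts and core names match the two-shard example that follows and are placeholders for your own deployment; the hash function only stands in for Java's hashCode().

```js
// Client-side sketch of the hash-based shard routing described above.
// Shard addresses are placeholders; adapt them to your own deployment.
var shards = [
  'localhost:8080/solr/c1',
  'localhost:8081/solr/c2'
];

// Stand-in for Java's documentId.hashCode()
function hashCode(id) {
  var h = 0;
  for (var i = 0; i < id.length; i++) {
    h = (h * 31 + id.charCodeAt(i)) | 0;
  }
  return h;
}

// Indexer side: the shard that owns a document also receives its updates and deletes
function shardFor(documentId) {
  return shards[Math.abs(hashCode(documentId)) % shards.length];
}

// Searcher side: any node can receive the query, but it must list all target shards
function queryUrl(entryPoint, q) {
  return 'http://' + entryPoint + '/query?q=' + encodeURIComponent(q) +
         '&shards=' + shards.join(',');
}

console.log(shardFor('9920'));                        // for example, localhost:8081/solr/c2
console.log(queryUrl('localhost:8080/solr/c1', '*:*'));
```

Note that Java's hashCode() can return negative values, which is why the sketch takes the absolute value before applying the modulo.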
In the following example, assuming that two shards are hosted in two servers listening to ports 8080 and 8081, the same request when sent to both nodes will produce the same result: http://localhost:8080/solr/c1/query?q=*:*&shards=localhost:8080/solr/c1,localhost:8081/solr/c2 http://localhost:8081/solr/c2/query?q=*:*&shards=localhost:8080/solr/c1,localhost:8081/solr/c2 When sending a query request, a client can optionally include a pseudofield associated with the [shard] transformer. In this case, as a part of each returned document, there will be additional information indicating the owning shard. This is an example of such a request: http://localhost:8080/solr/c1/query?q=*:*&shards=localhost:8080/solr/c1,localhost:8081/solr/c2&src_shard:[shard] Here is the corresponding response (note the pseudofield aliased as src_shard): <result name="response" numFound="192" start="0"> <doc>    <str name="id">9920</str>    <str name="brand">Fender</str>    <str name="model">Jazz Bass</str>    <arr name="artist">    <str>Marcus Miller</str>    </arr><str name="series">Marcus Miller signature</str>    <str name="src_shard">localhost:8080/solr/shard1</str> </doc> … <doc>    <str name="id">4392</str>    <str name="brand">Music Man</str>    <str name="model">Sting Ray</str>    <arr name="artist"><str>Tony Levin</str></arr>    <str name="series">5 strings DeLuxe</str>    <str name="src_shard">localhost:8081/solr/shard2</str> </doc> </result> The following are a few things to keep in mind when using this deployment scenario: The schema must have a uniqueKey field. This field must be declared as stored and indexed; in addition, it is supposed to be unique across all shards. Inverse Document Frequency (IDF) calculations cannot be distributed. IDF is computed per shard. Joins between documents belonging to different shards are not supported. If a shard receives both index and query requests, the index may change during a query execution, thus compromising the outgoing results (for example, a matching document that has been deleted). Master/slaves scenario In a master/slaves scenario, there are two types of Solr servers: an indexer (the master) and one or more searchers (the slaves). The master is the server that manages the index. It receives update requests and applies those changes. A searcher, on the other hand, is a Solr server that exposes search services to external clients. The index, in terms of data files, is replicated from the indexer to the searcher through HTTP by means of a built-in RequestHandler that must be configured on both the indexer side and searcher side (within the solrconfig.xml configuration file). On the indexer (master), a replication configuration looks like this: <requestHandler    name="/replication"  class="solr.ReplicationHandler">    <lst name="master">      <str name="replicateAfter">startup</str>      <str name="replicateAfter">optimize</str>      <str name="confFiles">schema.xml,stopwords.txt</str>    </lst> </requestHandler> The replication mechanism can be configured to be triggered after one of the following events: Commit: A commit has been applied Optimize: The index has been optimized Startup: The Solr instance has started In the preceding example, we want the index to be replicated after startup and optimize commands. Using the confFiles parameter, we can also indicate a set of configuration files (schema.xml and stopwords.txt, in the example) that must be replicated together with the index. Remember that changes on those files don't trigger any replication. 
Only a change in the index, in conjunction with one of the events we defined in the replicateAfter parameter, will mark the index (and the configuration files) as replicable. On the searcher side, the configuration looks like the following: <requestHandler name="/replication" class="solr.ReplicationHandler"> <lst name="slave">    <str name="masterUrl">http://<localhost>:<port>/solrmaster</str>    <str name="pollInterval">00:00:10</str> </lst> </requestHandler> You can see that a searcher periodically keeps polling the master (the pollInterval parameter) to check whether a newer version of the index is available. If it is, the searcher will start the replication mechanism by issuing a request to the master, which is completely unaware of the searchers. The replicability status of the index is actually indicated by a version number. If the searcher has the same version as the master, it means the index is the same. If the versions are different, it means that a newer version of the index is available on the master, and replication can start. Other than separating responsibilities, this deployment configuration allows us to have a so-called diamond architecture, consisting of one indexer and several searchers. When the replication is triggered, each searcher in the ring will receive a whole copy of the index. This allows the following: Load balancing of the incoming (query) requests. An increment to the availability of the whole system. In the event of a server crash, the other searchers will continue to serve the incoming requests. The following diagram illustrates a master/slave deployment scenario with one indexer, three searchers, and two cores: If the searchers are in several geographically dislocated data centers, an additional role called repeater can be configured in each data center in order to rationalize the replication data traffic flow between nodes. A repeater is simply a node that acts as both a master and a slave. It is a slave of the main master, and at the same time, it acts as master of the searchers within the same data center, as shown in this diagram: Shards with replication This scenario combines shards and replication in order to have a scalable system with high throughput and availability. There is one indexer and one or more searchers for each shard, allowing load balancing between (query) shard requests. The following diagram illustrates a scenario with two cores, three shards, one indexer, and (due to problems with available space), only one searcher for each shard: The drawback of this approach is undoubtedly the overall growing complexity of the system that requires more effort in terms of maintainability, manageability, and system administration. In addition to this, each searcher is an independent node, and we don't have a central administration console where a system administrator can get a quick overview of system health. Summary In this article, we described various ways in which you can deploy Solr. Each deployment scenario has specific features, advantages, and drawbacks that make a choice ideal for one context and bad for another. A good thing is that the different scenarios are not strictly exclusive; they follow an incremental approach. In an ideal context, things should start immediately with the perfect scenario that fits your needs. However, unless your requirements are clear right from the start, you can begin with a simple configuration and then change it, depending on how your application evolves. 
Resources for Article: Further resources on this subject: Tuning Solr JVM and Container [article] Boost Your search [article] In the Cloud [article]

Working with VMware Infrastructure

Packt
04 Mar 2015
21 min read
In this article by Daniel Langenhan, the author of VMware vRealize Orchestrator Cookbook, we will take a closer look at how Orchestrator interacts with vCenter Server and vRealize Automation (vRA—formerly known as vCloud Automation Center, vCAC). vRA uses Orchestrator to access and automate infrastructure using Orchestrator plugins. We will take a look at how to make Orchestrator workflows available to vRA. We will investigate the following recipes: Unmounting all the CD-ROMs of all VMs in a cluster Provisioning a VM from a template An approval process for VM provisioning (For more resources related to this topic, see here.) There are quite a lot of plugins for Orchestrator to interact with VMware infrastructure and programs: vCenter Server vCloud Director (vCD) vRealize Automation (vRA—formally known as vCloud Automation Center, vCAC) Site Recovery Manager (SRM) VMware Auto Deploy Horizon (View and Virtual Desktops) vRealize Configuration Manager (earlier known as vCenter Configuration Manager) vCenter Update Manager vCenter Operation Manager, vCOPS (only example packages) VMware, as of writing of this article, is still renaming its products. An overview of all plugins and their names and download links can be found at http://www.vcoteam.info/links/plug-ins.html. There are quite a lot of plugins, and we will not be able to cover all of them, so we will focus on the one that is most used, vCenter. Sadly, vCloud Director is earmarked by VMware to disappear for everyone but service providers, so there is no real need to show any workflow for it. We will also work with vRA and see how it interacts with Orchestrator. vSphere automation The interaction between Orchestrator and vCenter is done using the vCenter API. Here is the explanation of the interaction, which you can refer to in the following figure. A user starts an Orchestrator workflow (1) either in an interactive way via the vSphere Web Client, the Orchestrator Web Operator, the Orchestrator Client, or via the API. The workflow in Orchestrator will then send a job (2) to vCenter and receive a task ID back (type VC:Task). vCenter will then start enacting the job (3). Using the vim3WaitTaskEnd action (4), Orchestrator pauses until the task has been completed. If we do not use the wait task, we can't be certain whether the task has ended or failed. It is extremely important to use the vim3WaitTaskEnd action whenever we send a job to vCenter. When the wait task reports that the job has finished, the workflow will be marked as finished. The vCenter MoRef The MoRef (Managed Object Reference) is a unique ID for every object inside vCenter. MoRefs are basically strings; some examples are shown here: VM Network Datastore ESXi host Data center Cluster vm-301 network-312 dvportgroup-242 datastore-101 host-44 data center-21 domain-c41 The MoRefs are typically stored in the attribute .id or .key of the Orchestrator API object. For example, the MoRef of a vSwitch Network is VC:Network.id. To browse for MoRefs, you can use the Managed Object Browser (MOB), documented at https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.pg.doc/PG_Appx_Using_MOB.20.1.html. The vim3WaitTaskEnd action As already said, vim3WaitTaskEnd is one of the most central actions while interacting with vCenter. 
The action has the following variables: Category Name Type Usage IN vcTask VC:Task Carries the reconfiguration task from the script to the wait task IN progress Boolean Write to the logs the progress of a task in percentage IN pollRate Number How often the action should be checked for task completion in vCenter OUT ActionResult Any Returns the task's result The wait task will check in regular intervals (pollRate) the status of a task that has been submitted to vCenter. The task can have the following states: State Meaning Queued The task is queued and will be executed as soon as possible. Running The task is currently running. If the progress is set to true, the progress in percentage will be displayed in the logs. Success The task is finished successfully. Error The task has failed and an error will be thrown. Other vCenter wait actions There are actually five waiting tasks that come with the vCenter Server plugin. Here's an overview of the other four: Task Description vim3WaitToolsStarted This task waits until the VMware tools are started on a VM or until a timeout is reached. Vim3WaitForPrincipalIP This task waits until the VMware tools report the primary IP of a VM or until a timeout is reached. This typically indicates that the operating system is ready to receive network traffic. The action will return the primary IP. Vim3WaitDnsNameInTools This task waits until the VMware tools report a given DNS name of a VM or until a timeout is reached. The in-parameter addNumberToName is not used and can be set to Null. WaitTaskEndOrVMQuestion This task waits until a task is finished or if a VM develops a question. A vCenter question is related to user interaction. vRealize Automation (vRA) Automation has changed since the beginning of Orchestrator. Before, tools such as vCloud Director or vCloud Automation Center (vCAC)/vRealize Automation (vRA), Orchestrator was the main tool for automating vCenter resources. With version 6.2 of vCloud Automation Center (vCAC), the product has been renamed vRealize Automation. Now vRA is deemed to become the central cornerstone in the VMware automation effort. vRealize Orchestrator (vRO), is used by vRA to interact with and automate VMware and non-VMware products and infrastructure elements. Throughout the various vCAC/vRA interactions, the role of Orchestrator has changed substantially. Orchestrator started off as an extension to vCAC and became a central part of vRA. In vCAC 5.x, Orchestrator was only an extension of the IaaS life cycle. Orchestrator was tied in using the stubs vCAC 6.0 integrated Orchestrator as an XaaS service (Everything as a Service) using the Advanced Service Designer (ASD) In vCAC 6.1, Orchestrator is used to perform all VMware NSX operations (VMware's new network virtualization and automation), meaning that it became even more of a central part of the IaaS services. With vCAC 6.2, the Advance Service Designer (ASD) was enhanced to allow more complex form of designs, allowing better leverage of Orchestrator workflows. As you can see in the following figure, vRA connects to the vCenter Server using an infrastructure endpoint that allows vRA to conduct basic infrastructure actions, such as power operations, cloning, and so on. It doesn't allow any complex interactions with the vSphere infrastructure, such as HA configurations. Using the Advanced Service Endpoints, vRA integrates the Orchestrator (vRO) plugins as additional services. This allows vRA to offer the entire plugin infrastructure as services to vRA. 
The vCenter Server, AD, and PowerShell plugins are typical integrations that are used with vRA. Using Advance Service Designer (ASD), you can create integrations that use Orchestrator workflows. ASD allows you to offer Orchestrator workflows as vRA catalog items, making it possible for tenants to access any IT service that can be configured with Orchestrator via its plugins. The following diagram shows an example using the Active Directory plugin. The Orchestrator Plugin provides access to the AD services. By creating a custom resource using the exposed AD infrastructure, we can create a service blueprint and resource actions, both of which are based on Orchestrator workflows that use the AD plugin. The other method of integrating Orchestrator into the IaaS life cycle, which was predominately used in vCAC 5.x was to use the stubs. The build process of a VM has several steps; each step can be assigned a customizable workflow (called a stub). You can configure vRA to run an Orchestrator workflow at these stubs in order to facilitate a few customized actions. Such actions could be taken to change the VMs HA or DRS configuration, or to use the guest integration to install or configure a program on a VM. Installation How to install and configure vRA is out of the scope of this article, but take a look at http://www.kendrickcoleman.com/index.php/Tech-Blog/how-to-install-vcloud-automation-center-vcac-60-part-1-identity-appliance.html for more information. If you don't have the hardware or the time to install vRA yourself, you can use the VMware Hands-on Labs, which can be accessed after clicking on Try for Free at http://hol.vmware.com. The vRA Orchestrator plugin Due to the renaming, the vRA plugin is called vRealize Orchestrator vRA Plug-in 6.2.0, however the file you download and use is named o11nplugin-vcac-6.2.0-2287231.vmoapp. The plugin currently creates a workflow folder called vCloud Automation Center. vRA-integrated Orchestrator The vRA appliance comes with an installed and configured vRO instance; however, the best practice for a production environment is to use a dedicated Orchestrator installation, even better would be an Orchestrator cluster. Dynamic Types or XaaS XaaS means Everything (X) as a Service. The introduction of Dynamic Types in Orchestrator Version 5.5.1 does exactly that; it allows you to build your own plugins and interact with infrastructure that has not yet received its own plugin. Take a look at this article by Christophe Decanini; it integrates Twitter with Orchestrator using Dynamic Types at http://www.vcoteam.info/articles/learn-vco/282-dynamic-types-tutorial-implement-your-own-twitter-plug-in-without-any-scripting.html. Read more… To read more about Orchestrator integration with vRA, please take a look at the official VMware documentation. Please note that the official documentation you need to look at is about vRealize Automation, and not about vCloud Automation Center, but, as of writing this article, the documentation can be found at https://www.vmware.com/support/pubs/vrealize-automation-pubs.html. The document called Advanced Service Design deals with vRO and Advanced Service Designer The document called Machine Extensibility discusses customization using subs Unmounting all the CD-ROMs of all VMs in a cluster This is an easy recipe to start with, but one you can really make it work for your existing infrastructure. The workflow will unmount all CD-ROMs from a running VM. A mounted CD-ROM may block a VM from being vMotioned. 
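Before building this recipe graphically, it can help to see the same idea condensed into a single scriptable task. The following is only a sketch, not the 7.01 example workflow: it assumes an in-parameter named cluster of type VC:ClusterComputeResource, and the module path used to call vim3WaitTaskEnd (as well as the exact type and property names) may differ between plugin versions, so verify them against your own Orchestrator inventory.

```js
// Sketch: disconnect every connected CD-ROM of every powered-on VM in a cluster.
// Assumes an in-parameter "cluster" (VC:ClusterComputeResource). The module path
// for vim3WaitTaskEnd is an assumption; check it in your Orchestrator inventory.
var waitModule = System.getModule("com.vmware.library.vc.basic");

for each (var host in cluster.host) {
  for each (var vm in host.vm) {
    if (vm.runtime.powerState.value != "poweredOn") {
      continue; // only running VMs can block vMotion with a mounted ISO
    }
    var changes = [];
    for each (var device in vm.config.hardware.device) {
      if (device instanceof VcVirtualCdrom && device.connectable.connected) {
        var spec = new VcVirtualDeviceConfigSpec();
        spec.operation = VcVirtualDeviceConfigSpecOperation.edit;
        spec.device = device;
        spec.device.connectable.connected = false;
        spec.device.connectable.startConnected = false;
        changes.push(spec);
      }
    }
    if (changes.length > 0) {
      var configSpec = new VcVirtualMachineConfigSpec();
      configSpec.deviceChange = changes;
      var vcTask = vm.reconfigVM_Task(configSpec);
      // always wait for the vCenter task to end before moving on
      waitModule.vim3WaitTaskEnd(vcTask, false, 5);
      System.log("Disconnected CD-ROM(s) on " + vm.name);
    }
  }
}
```

In the recipe itself, the same result is achieved without any scripting by combining the getAllVMsOfCluster action with the library workflow Disconnect all detachable devices from a running virtual machine, as described next.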
Getting ready We need a VM that can mount a CD-ROM either as an ISO from a host or from the client. Before you start the workflow, make sure that the VM is powered on and has an ISO connected to it. How to do it... Create a new workflow with the following variables: Name Type Section Use cluster VC:ClusterComputerResource IN Used to input the cluster clusterVMs Array of VC:VirtualMachine Attribute Use to capture all VMs in a cluster Add the getAllVMsOfCluster action to the schema and assign the cluster in-parameter and the clusterVMs attribute to it as actionResult. Now, add a Foreach element to the schema and assign the workflow Disconnect all detachable devices from a running virtual machine. Assign the Foreach element clusterVMs as a parameter. Save and run the workflow. How it works... This recipe shows how fast and easily you can design solutions that help you with everyday vCenter problems. The problem is that VMs that have CD-ROMs or floppies mounted may experience problems using vMotion, making it impossible for them to be used with DRS. The reality is that a lot of admins mount CD-ROMs and then forget to disconnect them. Scheduling this script every evening just before the nighttime backups will make sure that a production cluster is able to make full use of DRS and is therefore better load-balanced. You can improve this workflow by integrating an exclusion list. See also Refer to the example workflow, 7.01 UnMount CD-ROM from Cluster. Provisioning a VM from a template In this recipe, we will build a deployment workflow for Windows and Linux VMs. We will learn how to create workflows and reduce the amount of input variables. Getting ready We need a Linux or Windows template that we can clone and provision. How to do it… We have split this recipe in two sections. In the first section, we will create a configuration element, and in the second, we will create the workflow. Creating a configuration We will use a configuration for all reusable variables. Build a configuration element that contains the following items: Name Type Use productId String This is the Windows product ID—the licensing code joinDomain String This is the Windows domain FQDN to join domainAdmin Credential These are the credentials to join the domain licenseMode VC:CustomizationLicenseDataMode Example, perServer licenseUsers Number This denotes the number of licensed concurrent users inTimezone Enums:MSTimeZone Time zone fullName String Full name of the user orgName String Organization name newAdminPassword String New admin password dnsServerList Array of String List of DNS servers dnsDomain String DNS domain gateway Array of String List of gateways Creating the base workflow Now we will create the base workflow: Create the workflow as shown in the following figure by adding the given elements:      Clone, Windows with single NIC and credential      Clone, Linux with single NIC      Custom decision Use the Clone, Windows… workflow to create all variables. Link up the ones that you have defined in the configuration as attributes. 
The rest are defined as follows: Name Type Section Use vmName String IN This is the new virtual machine's name vm VC:VirtualMachine IN Virtual machine to clone folder VC:VmFolder IN This is the virtual machine folder datastore VC:Datastore IN This is the datastore in which you store the virtual machine pool VC:ResourcePool IN This is the resource pool in which you create the virtual machine network VC:Network IN This is the network to which you attach the virtual network interface ipAddress String IN This is the fixed valid IP address subnetMask String IN This is the subnet mask template Boolean Attribute For value No, mark new VM as template powerOn Boolean Attribute For value Yes, power on the VM after creation doSysprep Boolean Attribute For value Yes, run Windows Sysprep dhcp Boolean Attribute For value No, use DHCP newVM VC:VirtualMachine OUT This is the newly-created VM The following sub-workflow in-parameters will be set to special values: Workflow In-parameter value Clone, Windows with single NIC and credential host Null joinWorkgroup Null macAddress Null netBIOS Null primaryWINS Null secondaryWINS Null name vmName clientName vmName Clone, Linux with single NIC host Null macAddress Null name vmName clientName vmName Define the in-parameter VM as input for the Custom decision and add the following script. The script will check whether the name of the OS contains the word Microsoft: guestOS=vm.config.guestFullName; System.log(guestOS);if (guestOS.indexOf("Microsoft") >=0){return true;} else {return false} Save and run the workflow. This workflow will now create a new VM from an existing VM and customize it with a fixed IP. How it works… As you can see, creating workflows to automate vCenter deployments is pretty straightforward. Dealing with the various in-parameters of workflows can be quite overwhelming. The best way to deal with this problem is to hide away variables by defining them centrally using a configuration, or define them locally as attributes. Using configurations has the advantage that you can create them once and reuse them as needed. You can even push the concept a bit further by defining multiple configurations for multiple purposes, such as different environments. While creating a new workflow for automation, a typical approach is as follows: Look for a workflow that you need. Run the workflow normally to check out what it actually does. Either create a new workflow that uses the original or duplicate and edit the one you tried, modifying it until it does what you want. A fast way to deal with a lot of variables is to drag every element you need into the schema and then use the binding to create the variables as needed. You may have noticed that this workflow only lets you select vSwitch networks, not distributed vSwitch networks. You can improve this workflow with the following features: Read the existing Sysprep information stored in your vCenter Server Generate different predefined configurations (for example DEV or Prod) There's more... We can improve the workflow by implementing the ability to change the vCPU and the memory of the VM. Follow these steps to implement it: Move the out-parameter newVM to be an attribute. 
Add the following variables: Name Type Section Use vCPU Number IN This variable denotes the amount of vCPUs Memory Number IN This variable denotes the amount of VM memory vcTask VC:Task Attribute This variable will carry the reconfiguration task from the script to the wait task progress Boolean Attribute Value NO, vim3WaitTaskEnd pollRate Number Attribute Value 5, vim3WaitTaskEnd ActionResult Any Attribute vim3WaitTaskEnd Add the following actions and workflows according to the next figure:      shutdownVMAndForce      changeVMvCPU      vim3WaitTaskEnd      changeVMRAM      Start virtual machine Bind newVM to all the appropriate input parameters of the added actions and workflows. Bind actionResults (VC:tasks) of the change actions to vim3WaitTasks. See also Refer to the example workflows, 7.02.1 Provision VM (Base), 7.02.2 Provision VM (HW custom), as well as the configuration element, 7 VM provisioning. An approval process for VM provisioning In this recipe, we will see how to create a workflow that waits for an approver to approve the VM creation before provisioning it. We will learn how to combine mail and external events in a workflow to make it interact with different users. Getting ready For this recipe, we first need the provisioning workflow that we have created in the Provisioning a VM from a template recipe. You can use the example workflow, 7.02.1 Provision VM (Base). Additionally, we need a functional e-mail system as well as a workflow to send e-mails. You can use the example workflow, 4.02.1 SendMail as well as its configuration item, 4.2.1 Working with e-mail. How to do it… We will split this recipe in three parts. First, we will create a configuration element then, we will create the workflow, and lastly, we will use a presentation to make the workflow usable. Creating a configuration element We will use a configuration for all reusable variables. Build a configuration element that contains the following items: Name Type Use templates Array/VC:VirtualMachine This contains all the VMs that serve as templates folders Array/VC:VmFolder This contains all the VM folders that are targets for VM provisioning networks Array/VC:Network This contains all VM networks that are targets for VM provisioning resourcePools Array/VC:ResourcePool This contains all resource pools that are targets for VM provisioning datastores Array/VC:Datastore This contains all datastores that are targets for VM provisioning daysToApproval Number These are the number of days the approval should be available for approver String This is the e-mail of the approver Please note that you also have to define or use the configuration elements for SendMail, as well as the Provision VM workflows. You can use the examples contained in the example package. 
Creating a workflow Create a new workflow and add the following variables: Name Type Section Use mailRequester String IN This is the e-mail address of the requester vmName String IN This is the name of the new virtual machine vm VC:VirtualMachine IN This is the virtual machine to be cloned folder VC:VmFolder IN This is the virtual machine folder datastore VC:Datastore IN This is the datastore in which you store the virtual machine pool VC:ResourcePool IN This is the resource pool in which you create the virtual machine network VC:Network IN This is the network to which you attach the virtual network interface ipAddress String IN This is the fixed valid IP address subnetMask String IN This is the subnet mask isExternalEvent Boolean Attribute A value of true defines this event as external mailApproverSubject String Attribute This is the subject line of the mail sent to the approver mailApproverContent String Attribute This is the content of the mail that is sent to the approver mailRequesterSubject String Attribute This is the subject line of the mail sent to the requester when the VM is provisioned mailRequesterContent String Attribute This is the content of the mail that is sent to the requester when the VM is provisioned mailRequesterDeclinedSubject String Attribute This is the subject line of the mail sent to the requester when the VM is declined mailRequesterDeclinedContent String Attribute This is the content of the mail that is sent to the requester when the VM is declined eventName String Attribute This is the name of the external event endDate Date Attribute This is the end date for the wait of external event approvalSuccess Boolean Attribute This checks whether the VM has been approved Now add all the attributes we defined in the configuration element and link them to the configuration. Create the workflow as shown in the following figure by adding the given elements:      Scriptable task      4.02.1 SendMail (example workflow)       Wait for custom event       Decision       Provision VM (example workflow) Edit the scriptable task and bind the following variables to it: In Out vmName ipAddress mailRequester template approver days to approval mailApproverSubject mailApproverContent mailRequesterSubject mailRequesterContent mailRequesterDeclinedSubject mailRequesterDeclinedContent eventName endDate Add the following script to the scriptable task: //construct event name eventName="provision-"+vmName; //add days to today for approval var today = new Date(); var endDate = new Date(today); endDate.setDate(today.getDate()+daysToApproval); //construct external URL for approval var myURL = new URL() ; myURL=System.customEventUrl(eventName, false); externalURL=myURL.url; //mail to approver mailApproverSubject="Approval needed: "+vmName; mailApproverContent="Dear Approver,n the user "+mailRequester+" would like to provision a VM from template "+template.name+".n To approve please click here: "+externalURL; //VM provisioned mailRequesterSubject="VM ready :"+vmName; mailRequesterContent="Dear Requester,n the VM "+vmName+" has been provisioned and is now available under IP :"+ipAddress; //declined mailRequesterDeclinedSubject="Declined :"+vmName; mailRequesterDeclinedContent="Dear Requester,n the VM "+vmName+" has been declined by "+approver; Bind the out-parameter of Wait for customer event to approvalSuccess. Configure the Decision element with approvalSuccess as true. Bind all the other variables to the workflow elements. 
Improving with the presentation We will now edit the workflow's presentation in order to make it workable for the requester. To do so, follow the given steps: Click on Presentation and follow the steps to alter the presentation, as seen in the following screenshot: Add the following properties to the in-parameters: In-parameter Property Value template Predefined list of elements #templates folder Predefined list of elements #folders datastore Predefined list of elements #datastores pool Predefined list of elements #resourcePools network Predefined list of elements #networks You can now use the General tab of each in-parameter to change the displayed text. Save and close the workflow. How it works… This is a very simplified example of an approval workflow to create VMs. The aim of this recipe is to introduce you to the method and ideas of how to build such a workflow. This workflow will only give a requester the choices that are configured in the configuration element, making the workflow quite safe for users that have only limited knowhow of the IT environment. When the requester submits the workflow, an e-mail is sent to the approver. The e-mail contains a link, which when clicked, triggers the external event and approves the VM. If the VM is approved it will get provisioned, and when the provisioning has finished an e-mail is sent to the requester stating that the VM is now available. If the VM is not approved within a certain timeframe, the requester will receive an e-mail that the VM was not approved. To make this workflow fully functional, you can add permissions for a requester group to the workflow and Orchestrator so that the user can use the vCenter to request a VM. Things you can do to improve the workflow are as follows: Schedule the provisioning to a future date. Use the resources for the e-mail and replace the content. Add an error workflow in case the provisioning fails. Use AD to read out the current user's e-mail and full name to improve the workflow. Create a workflow that lets an approver configure the configuration elements that a requester can chose from. Reduce the selections by creating, for instance, a development and production configuration that contains the correct folders, datastores, networks, and so on. Create a decommissioning workflow that is automatically scheduled so that the VM is destroyed automatically after a given period of time. See also Refer to the example workflow, 7.03 Approval and the configuration element, 7 approval. Summary In this article, we discussed one of the important aspects of the interaction of Orchestrator with vCenter Server and vRealize Automation, that is VM provisioning. Resources for Article: Further resources on this subject: Importance of Windows RDS in Horizon View [article] Metrics in vRealize Operations [article] Designing and Building a Horizon View 6.0 Infrastructure [article]

iOS Security Overview

Packt
04 Mar 2015
20 min read
In this article by Allister Banks and Charles S. Edge, the authors of the book, Learning iOS Security, we will go through an overview of the basic security measures followed in an iOS. Out of the box, iOS is one of the most secure operating systems available. There are a number of factors that contribute to the elevated security level. These include the fact that users cannot access the underlying operating system. Apps also have data in a silo (sandbox), so instead of accessing the system's internals they can access the silo. App developers choose whether to store settings such as passwords in the app or on iCloud Keychain, which is a secure location for such data on a device. Finally, Apple has a number of controls in place on devices to help protect users while providing an elegant user experience. However, devices can be made even more secure than they are now. In this article, we're going to get some basic security tasks under our belt in order to get some basic best practices of security. Where we feel more explanation is needed about what we did on devices, we'll explore a part of the technology itself in this article. This article will cover the following topics: Pairing Backing up your device Initial security checklist Safari and built-in app protection Predictive search and spotlight (For more resources related to this topic, see here.) To kick off the overview of iOS security, we'll quickly secure our systems by initially providing a simple checklist of tasks, where we'll configure a few device protections that we feel everyone should use. Then, we'll look at how to take a backup of our devices and finally, at how to use a built-in web browser and protections around a browser. Pairing When you connect a device to a computer that runs iTunes for the first time, you are prompted to enter a password. Doing so allows you to synchronize the device to a computer. Applications that can communicate over this channel include iTunes, iPhoto, Xcode, and others. To pair a device to a Mac, simply plug the device in (if you have a passcode, you'll need to enter that in order to pair the device.) When the device is plugged in, you'll be prompted on both the device and the computer to establish a trust. Simply tap on Trust on the iOS device, as shown in the following screenshot: Trusting a computer For the computer to communicate with the iOS device, you'll also need to accept the pairing on your computer (although, when you use libimobiledevice, which is the command to pair, does not require doing so, because you use the command line to accept). When prompted, click on Continue to establish the pairing, as seen in the following screenshot (the screenshot is the same in Windows): Trusting a device When a device is paired, a file is created in /var/db/lockdown, which is the UDID of the device with a property list (plist) extension. A property list is an Apple XML file that stores a variety of attributes. In Windows, iOS data is stored in the MobileSync folder, which you can access by navigating to Users(username)AppDataRoamingApple ComputerMobileSync. The information in this file sets up a trust between the computers and includes the following attributes: DeviceCertificate: This certificate is unique to each device. EscrowBag: The key bag of EscrowBag contains class keys used to decrypt the device. HostCertificate: This certificate is for the host who's paired with iOS devices (usually, the same for all files that you've paired devices with, on your computer). 
HostID: This is a generated ID for the host. HostPrivateKey: This is the private key for your Mac (should be the same in all files on a given computer). RootCertificate: This is the certificate used to generate keys (should be the same in all files on a given computer). RootPrivateKey: This is the private key of the computer that runs iTunes for that device. SystemBUID: This refers to the ID of the computer that runs iTunes. WiFiMACAddress: This is the Mac address of the Wi-Fi interface of the device that is paired to the computer. If you do not have an active Wi-Fi interface, MAC is still used while pairing. Why does this matter? It's important to know how a device interfaces with a computer. These files can be moved between computers and contain a variety of information about a device, including private keys. Having keys isn't all that is required for a computer to communicate with a device. When the devices are interfacing with a computer over USB, if you have a passcode enabled on the device, you will be required to enter that passcode in order to unlock the device. Once a computer is able to communicate with a device, you need to be careful as the backups of a device, apps that get synchronized to a device, and other data that gets exchanged with a device can be exposed while at rest on devices. Backing up your device What do most people do to maximize the security of iOS devices? Before we do anything, we need to take a backup of our devices. This protects the device from us by providing a restore point. This also secures the data from the possibility of losing it through a silly mistake. There are two ways, which are most commonly used to take backups: iCloud and iTunes. As the names imply, the first makes backups for the data on Apple's cloud service and the second on desktop computers. We'll cover how to take a backup on iCloud first. iCloud backups An iCloud account comes with free storage, to back up your Apple devices. An iOS device takes a backup to Apple servers and can be restored when a new device is set up from those same servers (it's a screen that appears during the activation process of a new device. Also, it appears as an option in iTunes if you back up to iTunes over USB—covered later in this article). Setting up and checking the status of iCloud backups is a straightforward process. From the Settings app, tap on iCloud and then Backup. As you can see from the Backup screen, you have two options, iCloud Backup, which enables automatic backups of the device to your iCloud account, and Back Up Now, which runs an immediate backup of the device. iCloud backups Allowing iCloud to take backups on devices is optional. You can disable access to iCloud and iCloud backups. However, doing so is rarely a good idea as you are limiting the functionality of the device and putting the data on your device at risk, if that data isn't backed up another way such as through iTunes. Many people have reservations about storing data on public clouds; especially, data as private as phone data (texts, phone call history, and so on). For more information on Apple's security and privacy around iCloud, refer to http://support.apple.com/en-us/HT202303. If you do not trust Apple or it's cloud, then you can also take a backup of your device using iTunes, described in the next section. Taking backups using iTunes Originally, iTunes was used to take backups for iOS devices. 
You can still use iTunes and it's likely you will have a second backup even if you are using iCloud, simply for a quick restore if nothing else. Backups are usually pretty small. The reason is that the operating system is not part of backups, since users can't edit any of those files. Therefore, you can use an ipsw file (the operating system) to restore a device. These are accessed through Apple Configurator or through iTunes if you have a restore file waiting to be installed. These can be seen in ~/Library/iTunes, and the name of the device and its software updates, as can be seen in the following screenshot: IPSW files Backups are stored in the ~/Library/Application Support/MobileSync/Backup directory. Here, you'll see a number of directories that are associated with the UDID of the devices, and within those, you'll see a number of files that make up the modular incremental backups beyond the initial backup. It's a pretty smart system and allows you to restore a device at different points in time without taking too long to perform each backup. Backups are stored in the Documents and SettingsUSERNAMEApplication DataApple ComputerMobileSyncBackup directory on Windows XP and in the UsersUSERNAMEAppDataRoamingApple ComputerMobileSyncBackup directory for newer operating systems. To enable an iTunes back up, plug a device into a computer, and then open iTunes. Click on the device for it to show the device details screen. The top section of the screen is for Backups (in the following screenshot, you can set a back up to This computer, which takes a backup on the computer you are on). I would recommend you to always choose the Encrypt iPhone backup option as it forces you to save a password in order to restore the back up. Additionally, you can use the Back Up Now button to kick off the first back up, as shown in the following screenshot: iTunes Viewing iOS data in iTunes To show why it's important to encrypt backups, let's look at what can be pulled out of those backups. There are a few tools that can extract backups, provided you have a password. Here, we'll look at iBackup Extractor to view the backup of your browsing history, calendars, call history, contacts, iMessages, notes, photos, and voicemails. To get started, download iBackup Extractor from http://www.wideanglesoftware.com/ibackupextractor. When you open iBackup Extractor for the first time, simply choose the device backup you wish to extract in iBackup Extractor. As you can see in following screenshot, you will be prompted for a password in order to unlock the Backup key bag. Enter the password to unlock the system. Unlock the backups Note that the file tree in the following screenshot gives away some information on the structure of the iOS filesystem, or at least, the data stored in the backups of the iOS device. For now, simply click on Browser to see a list of files that can be extracted from the backup, as you can see in the next screenshot: View Device Contents Using iBackup Extractor Note the prevalence of SQL databases in the files. Most apps use these types of databases to store data on devices. Also, check out the other options such as extracting notes (many that were possibly deleted), texts (some that have been deleted from devices), and other types of data from devices. Now that we've exhausted backups and proven that you should really put a password in place for your back ups, let's finally get to some basic security tasks to be performed on these devices! 
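To see where those backups actually live, the following rough Python sketch (not from the book) walks the Mac backup directory and prints the device name recorded in each backup's Info.plist; the presence of Info.plist and the exact key name are assumptions that may differ between iTunes versions:

# Hypothetical helper: enumerate local iTunes backups on a Mac and print the
# device name stored in each backup's Info.plist (key name is an assumption).
import os
import plistlib

backup_root = os.path.expanduser("~/Library/Application Support/MobileSync/Backup")

for udid in sorted(os.listdir(backup_root)):
    info_path = os.path.join(backup_root, udid, "Info.plist")
    if not os.path.isfile(info_path):
        continue  # skip stray files that are not backup folders
    with open(info_path, "rb") as handle:
        info = plistlib.load(handle)  # Python 3.4+; older versions use plistlib.readPlist()
    print(udid, "->", info.get("Device Name", "unknown device"))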
Initial security checklist Apple has built iOS to be one of the most secure operating systems in the world. This has been made possible by restricting access to much of the operating system by end users, unless you jailbreak a device. In this article, we won't cover jail-breaking devices much due to the fact that securing the devices then becomes a whole new topic. Instead, we have focused on what you need to do, how you can do those tasks, what the impacts are, and, how to manage security settings based on a policy. The basic steps required to secure an iOS device start with encrypting devices, which is done by assigning a passcode to a device. We will then configure how much inactive time before a device requires a PIN and accordingly manage the privacy settings. These settings allow us to get some very basic security features under our belt, and set the stage to explain what some of the features actually do. Configuring a passcode The first thing most of us need to do on an iOS device is configure a passcode for the device. Several things happen when a passcode is enabled, as shown in the following steps: The device is encrypted. The device then requires a passcode to wake up. An idle timeout is automatically set that puts the device to sleep after a few minutes of inactivity. This means that three of the most important things you can do to secure a device are enabled when you set up a passcode. Best of all, Apple recommends setting up a passcode during the initial set up of new devices. You can manage passcode settings using policies (or profiles as Apple likes to call them in iOS). Best of all—you can set a passcode and then use your fingerprint on the Home button instead of that passcode. We have found that by the time our phone is out of our pocket and if our finger is on the home button, the device is unlocked by the time we check it. With iPhone 6 and higher versions, you can now use that same fingerprint to secure payment information. Check whether a passcode has been configured, and if needed, configure a passcode using the Settings app. The Settings app is by default on the Home screen where many settings on the device, including Wi-Fi networks the device has been joined to, app preferences, mail accounts, and other settings are configured. To set a passcode, open the Settings app and tap on Touch ID & Passcode If a passcode has been set, you will see the Turn Passcode Off (as seen in the following screenshot) option If a passcode has not been set, then you can do so at this screen as well Additionally, you can change a passcode that has been set using the Change Passcode button and define a fingerprint or additional fingerprints that can be used with a touch ID There are two options in the USE TOUCH ID FOR section of the screen. You can choose whether, or not, you need to enter the passcode in order to unlock a phone, which you should use unless the device is also used by small children or as a kiosk. In these cases, you don't need to encrypt or take a backup of the device anyway. The second option is to force the entering of a passcode while using the App Store and iTunes. This can cost you money if someone else is using your device, so let the default value remain, which requires you to enter a passcode to unlock the options. Configure a Passcode The passcode settings are very easy to configure; so, they should be configured when possible. Scroll down on this screen and you'll see several other features, as shown in the next screenshot. 
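For managed deployments, the same passcode rules can also be pushed in a configuration profile instead of being set by hand on each device. The excerpt below is a trimmed, hypothetical payload; verify the key names against Apple's Configuration Profile Reference before using it:

<dict>
    <key>PayloadType</key>
    <string>com.apple.mobiledevice.passwordpolicy</string>
    <key>forcePIN</key>
    <true/>
    <key>allowSimple</key>
    <true/>
    <key>minLength</key>
    <integer>4</integer>
    <!-- minutes of inactivity before the passcode is required again -->
    <key>maxInactivity</key>
    <integer>5</integer>
    <key>maxFailedAttempts</key>
    <integer>10</integer>
</dict>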
The first option on the screen is Simple Passcode. Most users want to use a simple pin with an iOS device. Trying to use alphanumeric and long passcodes simply causes most users to try to circumvent the requirement. To add a fingerprint as a passcode, simply tap on Add a Fingerprint…, which you can see in the preceding screenshot, and follow the onscreen instructions. Additionally, the following can be accessed when the device is locked, and you can choose to turn them off: Today: This shows an overview of upcoming calendar items Notifications View: This shows you the recent push notifications (apps that have updates on the device) Siri: This represents the voice control of the device Passbook: This tool is used to make payments and display tickets for concert venues and meetups Reply with Message: This tool allows you to send a text reply to an incoming call (useful if you're on the treadmill) Each organization can decide whether it considers these options to be a security risk and direct users how to deal with them, or they can implement a policy around these options. Passcode Settings There aren't a lot of security options around passcodes and encryption, because by and large, Apple secures the device by giving you fewer options than you'll actually use. Under the hood, (for example, through Apple Configurator and Mobile Device Management) there are a lot of other options, but these aren't exposed to end users of devices. For the most part, a simple four-character passcode will suffice for most environments. When you complicate passcodes, devices become much more difficult to unlock, and users tend to look for ways around passcode enforcement policies. The passcode is only used on the device, so complicating the passcode will only reduce the likelihood that a passcode would be guessed before swiping open a device, which typically occurs within 10 tries. Finally, to disable a passcode and therefore encryption, simply go to the Touch ID & Passcode option in the Settings app and tap on Turn Passcode Off. Configuring privacy settings Once a passcode is set and the device is encrypted, it's time to configure the privacy settings. Third-party apps cannot communicate with one another by default in iOS. Therefore, you must enable communication between them (also between third-party apps and built-in iOS apps that have APIs). This is a fundamental concept when it comes to securing iOS devices. To configure privacy options, open the Settings app and tap on the entry for Privacy. On the Privacy screen, you'll see a list of each app that can be communicated with by other apps, as shown in the following screenshot: Privacy Options As an example, tap on the Location Services entry, as shown in the next screenshot. Here, you can set which apps can communicate with Location Services and when. If an app is set to While Using, the app can communicate with Location Services when the app is open. If an app is set to Always, then the app can only communicate with Location Services when the app is open and not when it runs in the background. Configure Location Services On the Privacy screen, tap on Photos. Here, you have fewer options because unlike the location of a device, you can't access photos when the app is running in the background. 
Here, you can enable or disable an app by communicating with the photo library on a device, as seen in the next screenshot: Configure What Apps Can Access Your Camera Roll Each app should be configured in such a way that it can communicate with the features of iOS or other apps that are absolutely necessary. Other privacy options which you can consider disabling include Siri and Handoff. Siri has the voice controls of an iOS. Because Siri can be used even when your phone is locked, consider to disable it by opening the Settings app, tapping on General and then on Siri, and you will be able disable the voice controls. To disable Handoff, you should use the General System Preference pane in any OS X computer paired to an iOS device. There, uncheck the Allow Handoff between this Mac and your iCloud devices option. Safari and built-in App protections Web browsers have access to a lot of data. One of the most popular targets on other platforms has been web browsers. The default browser on an iOS device is Safari. Open the Settings app and then tap on Safari. The Safari preferences to secure iOS devices include the following: Passwords & AutoFill: This is a screen that includes contact information, a list of saved passwords and credit cards used in web browsers. This data is stored in an iCloud Keychain if iCloud Keychain has been enabled in your phone. Favorites: This performs the function of bookmark management. This shows bookmarks in iOS. Open Links: This configures how links are managed. Block Pop-ups: This enables a pop-up blocker. Scroll down and you'll see the Privacy & Security options (as seen in the next screenshot). Here, you can do the following: Do Not Track: By this, you can block the tracking of browsing activity by websites. Block Cookies: A cookie is a small piece of data sent from a website to a visitor's browser. Many sites will send cookies to third-party sites, so the management of cookies becomes an obstacle to the privacy of many. By default, Safari only allows cookies from websites that you visit (Allow from Websites I Visit). Set the Cookies option to Always Block in order to disable its ability to accept any cookies; set the option to Always Allow to accept cookies from any source; and set the option to Allow from Current Website Only to only allow cookies from certain websites. Fraudulent Website Warning: This blocks phishing attacks (sites that only exist to steal personal information). Clear History and Website Data: This clears any cached history, web files, and passwords from the Safari browser. Use Cellular Data: When this option is turned off, it disables web traffic over cellular connections (so web traffic will only work when the phone is connected to a Wi-Fi network). Configure Privacy Settings for Safari There are also a number of advanced options that can be accessed by clicking on the Advanced button, as shown in the following screenshot: Configure the Advanced Safari Options These advanced options include the following: Website Data: This option (as you can see in the next screenshot) shows the amount of data stored from each site that caches files on the device, and allows you to swipe left on these entries to access any files saved for the site. Tap on Remove All Website Data to remove data for all the sites at once. JavaScript: This allows you to disable any JavaScripts from running on sites the device browses. Web Inspector: This shows the device in the Develop menu on a computer connected to the device. 
If the Web Inspector option has been disabled, you can re-enable it from the Advanced preferences in Safari's Preferences.
View Website Data On Devices
Browser security is an important aspect of any operating system.
Predictive search and spotlight
The final aspect of securing the settings on an iOS device that we'll cover in this article is predictive search and Spotlight. When you use the Spotlight feature in iOS, usage data is sent to Apple along with information from Location Services. Additionally, you can search for anything on a device, including items previously blocked from being accessed; the ability to search for blocked content warrants its inclusion when locking down a device. That data is then used to generate future searches. This feature can be disabled by opening the Settings app, tapping on Privacy, then Location Services, and then System Services. Simply slide Spotlight Suggestions to Off to stop the location data from going over that connection. To limit the type of data that Spotlight sends, open the Settings app, tap on General, and then on Spotlight Search. Uncheck each item you don't want indexed in the Spotlight database. The following screenshot shows the mentioned options:
Configure What Spotlight Indexes
These were some of the basic tactical tasks that secure devices.
Summary
This article was a whirlwind of quick changes that secure a device. Here, we paired devices, took a backup, set a passcode, and secured app data and Safari. We also showed how to manually perform some tasks that are usually set via policies.
Resources for Article:
Further resources on this subject: Creating a Brick Breaking Game [article] New iPad Features in iOS 6 [article] Sparrow iOS Game Framework - The Basics of Our Game [article]


Test Driving UITableViews with Cedar

Joe Masilotti
04 Mar 2015
8 min read
One of the first things a developer does when learning iOS development is to display a list of items to the user. In iOS we use UITableViews to show one-dimensional tables of information. In practice they look like a long list of data and should be used in that way. UITableViews get their information from a UITableViewDataSource, which responds to a few delegate methods for a number of cells and what information the cells contain. This post will follow a step-by-step guide to test driving UITableViews in iOS. All code samples will use the behavior-driven testing framework Cedar. Cedar can be installed as a Cocoapod by adding the following to your Podfile: target Specs do pod Cedar end Follow this guide for installation and configuration instructions if you are having trouble or want a crash course on the framework. Unit-Style Approach One way to test table views is to follow a unit-style approach on the data source. The goal there is to call single public methods and assert that the correct state was altered or the return value was configured correctly. The target for unit testing a UITableView is its UITableViewDataSource property. The tests for this are fairly straightforward as they call -tableView:cellForRowAtIndexPath: and -tableView:numberOfCellsInSection: directly. For example, let's say we want our controller to display a table with the current list of iPhones. Our mental assertions are that this table should show a single section with nine items, one for each of the iPhone, iPhone 3G, iPhone 3GS, iPhone 4, iPhone 4s, iPhone 5, iPhone 5s, iPhone 6, and iPhone 6 Plus. The unit tests will follow a very similar pattern. Since a table defaults to one section we don't need to write a test asserting the number of sections. We can just go about testing that there are nine cells and assuming that the first and last cells text is correct, everything is working. describe(@"ViewController", ^{ __block ViewController *subject; beforeEach(^{ subject = [[ViewController alloc] init]; }); describe(@"-tableView:numberOfRowsInSection:", ^{ it(@"should have nine cells", ^{ [subject tableView:subject.tableView numberOfRowsInSection:0] should equal(9); }); }); describe(@"-tableView:cellForRowAtIndexPath:", ^{ __block UITableViewCell *cell; context(@"the first cell", ^{ beforeEach(^{ NSIndexPath *indexPath = [NSIndexPath indexPathForRow:0 inSection:0]; cell = [subject tableView:subject.tableView cellForRowAtIndexPath:indexPath]; }); it(@"should display 'iPhone'", ^{ cell.textLabel.text should equal(@"iPhone"); }); }); context(@"the last cell", ^{ beforeEach(^{ NSIndexPath *indexPath = [NSIndexPath indexPathForRow:8 inSection:0]; cell = [subject tableView:subject.tableView cellForRowAtIndexPath:indexPath]; }); it(@"should display 'iPhone 6 Plus'", ^{ cell.textLabel.text should equal(@"iPhone 6 Plus"); }); }); }); }); Now the good part about these tests is that they are easy to follow and straight to the point. When we ask how many items there are we expect the right amount. And when we want to ensure the first cell is set up correctly we test just that. Issues Unfortunately there are a few problems with this approach. The biggest issue is that we can get these tests to pass without actually displaying anything on the screen. A simple implementation of these two methods in our controller will make everything green but has no guarantee that a table view is on the screen (or that one even exists!). The first step in remedying this is to write a test asserting that the table view is a subview. 
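A minimal sketch of that spec might look like the following; it assumes, as the behavior-driven specs later in this post do, that the controller exposes its table view to the test target:

describe(@"when the view loads", ^{
    beforeEach(^{
        subject.view should_not be_nil; // forces -loadView and -viewDidLoad to run
    });

    it(@"should add the table view as a subview", ^{
        subject.view.subviews should contain(subject.tableView);
    });
});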
Another, albeit minor, issue is we are breaking encapsulation; we are exposing that our controller conforms to the UITableViewDataSource protocol. Let's see what we can do about these two problems. Benefits Don't think that unit-style is bad, it just has different uses. If you have an app that uses multiple instances you will see benefits from this approach. This is because all you would need in your controller is to ensure the right type of data source was configured. You could take this one step farther by injecting the array of items to display and unit testing that. Then you have a repeatable unit of code that shows a list of data conforming to your app's specifications, which is quite powerful. Behavior-Driven Approach Let's take a more behavioral approach to our problem. Our goal is to display to the user the list of iPhones. If we care about what the user sees what is the closest way of replicating that? How about what cells are visible to the user? From Apple's documentation, -visibleCells on UITableView: Returns the table cells that are visible in the receiver. This sounds interesting. Let's restructure our tests to run assertions on the cells that the user sees, not some made up world of delegates and data sources. describe(@"when the view loads", ^{ beforeEach(^{ subject.view should_not be_nil; [subject.view layoutIfNeeded]; }); it(@"should display the first iPhone, first", ^{ UITableViewCell *firstCell = subject.tableView.visibleCells.firstObject; firstCell.textLabel.text should equal(@"iPhone"); }); it(@"display the iPhone 6 Plus, last", ^{ UITableViewCell *lastCell = subject.tableView.visibleCells.lastObject; lastCell.textLabel.text should equal(@"iPhone 6 Plus"); }); }); Note that in the beforeEach we assert that the view should exist. This is to kick off the controller's view lifecycle methods, namely -loadView and -viewDidLoad. We then tell its view to layout its subviews if need be. This ensures that anything we add as subviews have their layout constraints configured and applied. To get this to pass we have a few things to take care of. Create the backing array of iPhones Create the table view and add it as a subview Become the data source and respond to the calls The first one is easy so let's knock that out first. @interface ViewController () <UITableViewDataSource> @property (nonatomic) UITableView *tableView; @property (nonatomic, strong) NSArray *iPhones; @end @implementation ViewController - (instancetype)init { if (self = [super init]) { self.iPhones = @[ @"iPhone", @"iPhone 3G", @"iPhone 3GS", @"iPhone 4", @"iPhone 4s", @"iPhone 5", @"iPhone 5s", @"iPhone 6", @"iPhone 6 Plus" ]; } return self; } Note the opening up of the -tableView property in the interface extension. This allows us to keep it private in the header and the outside world while still being able to modify it internally. Next let's add the table view and its auto layout constraints. 
- (void)viewDidLoad { [super viewDidLoad]; self.tableView = [[UITableView alloc] init]; [self.view addSubview:self.tableView]; [self addTableViewConstraints]; } #pragma mark - Private - (void)addTableViewConstraints { self.tableView.translatesAutoresizingMaskIntoConstraints = NO; NSDictionary *views = @{ @"tableView": self.tableView }; [self.view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"V:|[tableView]|" options:kNilOptions metrics:nil views:views]]; [self.view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"H:|[tableView]|" options:kNilOptions metrics:nil views:views]]; } Since we aren't working with Storyboards or xibs/nibs we create the table view manually and add it as a subview. We also will need to add some simple auto layout constraints to have it fill the screen. Check out Apple's Auto Layout by Example guide if you would like a deeper explanation. Finally let's get to the meat of the issue and respond to the data source methods. #pragma mark - <UITableViewDataSource> - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return self.iPhones.count; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:kCellIdentifier forIndexPath:indexPath]; cell.textLabel.text = self.iPhones[indexPath.row]; return cell; } We also need to become the data source of the table so do that and register the cell in -viewDidLoad. [self.tableView registerClass:[UITableViewCell class] forCellReuseIdentifier:kCellIdentifier]; self.tableView.dataSource = self; Finally add the constant to the top of the file. NSString * const kCellIdentifier = @"CellIdentifier"; What's interesting with this approach is that not until you have every line correct with the tests pass. This helps ensure that what is happening under spec is closer to the real experience of the app. For example, having a table view on the screen, responding to the delegate calls, but not assigning the delegate won't get you anywhere. In the unit approach you could have done just that but still seen your tests go green. Benefits of Behavior Testing When testing behavior you put yourself in a world that more closely represents the state when a user is interacting with it. It also enables you to test collaboration between objects without having to single very simple piece of the architecture out. This means it can be easy to get carried away and start writing full integration tests from controllers. If you keep to only testing one or two layers of abstraction, in this case the table view through the delegate, your code and specs remain easy to read and understand. A side effect of this approach enabled us to hide some implementation details in the production code. This means we are more freely to do a green-to-green refactor without having to change our specs. For example, we could extract the UITableViewDataSource into its own object and know that it works correctly when all of the existing tests still pass. If we wanted to then reuse that collaborator we could then extract the specs and have it stand on its own. Or if our backing array turned into an NSDictionary and found everything by key nothing in our tests would have to change. There are many styles of testing and even more ways to test Objective-C code and the Cocoa Touch framework. Behavior testing is just one approach that has proved to be the most flexible and easy to understand for me. 
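As an aside, the data source extraction mentioned above could start out as small as the sketch below; the class and its initializer are invented for illustration and are not part of the original post. Note that the controller would need to keep a strong reference to the instance, since a table view does not retain its data source:

@interface SimpleListDataSource : NSObject <UITableViewDataSource>
- (instancetype)initWithItems:(NSArray *)items cellIdentifier:(NSString *)cellIdentifier;
@end

@implementation SimpleListDataSource {
    NSArray *_items;
    NSString *_cellIdentifier;
}

- (instancetype)initWithItems:(NSArray *)items cellIdentifier:(NSString *)cellIdentifier {
    if (self = [super init]) {
        _items = [items copy];
        _cellIdentifier = [cellIdentifier copy];
    }
    return self;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return _items.count;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // Reuses the cell registration the owning controller set up in -viewDidLoad.
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:_cellIdentifier forIndexPath:indexPath];
    cell.textLabel.text = _items[indexPath.row];
    return cell;
}

@end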
What other techniques and methods have you implemented to ensure code coverage on your own iOS apps?
About the author
Joe Masilotti is a test-driven iOS developer living in Brooklyn, NY. He contributes to open-source testing tools on GitHub and talks about development, cooking, and craft beer on Twitter.

Python functions – Avoid repeating code

Packt
04 Mar 2015
22 min read
In this article by Silas Toms, author of the book ArcPy and ArcGIS – Geospatial Analysis with Python we will see how programming languages share a concept that has aided programmers for decades: functions. The idea of a function, loosely speaking, is to create blocks of code that will perform an action on a piece of data, transforming it as required by the programmer and returning the transformed data back to the main body of code. Functions are used because they solve many different needs within programming. Functions reduce the need to write repetitive code, which in turn reduces the time needed to create a script. They can be used to create ranges of numbers (the range() function), or to determine the maximum value of a list (the max function), or to create a SQL statement to select a set of rows from a feature class. They can even be copied and used in another script or included as part of a module that can be imported into scripts. Function reuse has the added bonus of making programming more useful and less of a chore. When a scripter starts writing functions, it is a major step towards making programming part of a GIS workflow. (For more resources related to this topic, see here.) Technical definition of functions Functions, also called subroutines or procedures in other programming languages, are blocks of code that have been designed to either accept input data and transform it, or provide data to the main program when called without any input required. In theory, functions will only transform data that has been provided to the function as a parameter; it should not change any other part of the script that has not been included in the function. To make this possible, the concept of namespaces is invoked. Namespaces make it possible to use a variable name within a function, and allow it to represent a value, while also using the same variable name in another part of the script. This becomes especially important when importing modules from other programmers; within that module and its functions, the variables that it contains might have a variable name that is the same as a variable name within the main script. In a high-level programming language such as Python, there is built-in support for functions, including the ability to define function names and the data inputs (also known as parameters). Functions are created using the keyword def plus a function name, along with parentheses that may or may not contain parameters. Parameters can also be defined with default values, so parameters only need to be passed to the function when they differ from the default. The values that are returned from the function are also easily defined. A first function Let's create a function to get a feel for what is possible when writing functions. First, we need to invoke the function by providing the def keyword and providing a name along with the parentheses. 
The firstFunction() will return a string when called: def firstFunction():    'a simple function returning a string'    return "My First Function" >>>firstFunction() The output is as follows: 'My First Function' Notice that this function has a documentation string or doc string (a simple function returning a string) that describes what the function does; this string can be called later to find out what the function does, using the __doc__ internal function: >>>print firstFunction.__doc__ The output is as follows: 'a simple function returning a string' The function is defined and given a name, and then the parentheses are added followed by a colon. The following lines must then be indented (a good IDE will add the indention automatically). The function does not have any parameters, so the parentheses are empty. The function then uses the keyword return to return a value, in this case a string, from the function. Next, the function is called by adding parentheses to the function name. When it is called, it will do what it has been instructed to do: return the string. Functions with parameters Now let's create a function that accepts parameters and transforms them as needed. This function will accept a number and multiply it by 3: def secondFunction(number):    'this function multiples numbers by 3'    return number *3 >>> secondFunction(4) The output is as follows: 12 The function has one flaw, however; there is no assurance that the value passed to the function is a number. We need to add a conditional to the function to make sure it does not throw an exception: def secondFunction(number):    'this function multiples numbers by 3'    if type(number) == type(1) or type(number) == type(1.0):        return number *3 >>> secondFunction(4.0) The output is as follows: 12.0 >>>secondFunction(4) The output is as follows: 12 >>>secondFunction("String") >>> The function now accepts a parameter, checks what type of data it is, and returns a multiple of the parameter whether it is an integer or a function. If it is a string or some other data type, as shown in the last example, no value is returned. There is one more adjustment to the simple function that we should discuss: parameter defaults. By including default values in the definition of the function, we avoid having to provide parameters that rarely change. If, for instance, we wanted a different multiplier than 3 in the simple function, we would define it like this: def thirdFunction(number, multiplier=3):    'this function multiples numbers by 3'    if type(number) == type(1) or type(number) == type(1.0):        return number *multiplier >>>thirdFunction(4) The output is as follows: 12 >>>thirdFunction(4,5) The output is as follows: 20 The function will work when only the number to be multiplied is supplied, as the multiplier has a default value of 3. However, if we need another multiplier, the value can be adjusted by adding another value when calling the function. Note that the second value doesn't have to be a number as there is no type checking on it. Also, the default value(s) in a function must follow the parameters with no defaults (or all parameters can have a default value and the parameters can be supplied to the function in order or by name). Using functions to replace repetitive code One of the main uses of functions is to ensure that the same code does not have to be written over and over. The first portion of the script that we could convert into a function is the three ArcPy functions. 
Doing so will allow the script to be applicable to any of the stops in the Bus Stop feature class and have an adjustable buffer distance: bufferDist = 400 buffDistUnit = "Feet" lineName = '71 IB' busSignage = 'Ferry Plaza' sqlStatement = "NAME = '{0}' AND BUS_SIGNAG = '{1}'" def selectBufferIntersect(selectIn,selectOut,bufferOut,     intersectIn, intersectOut, sqlStatement,   bufferDist, buffDistUnit, lineName, busSignage):    'a function to perform a bus stop analysis'    arcpy.Select_analysis(selectIn, selectOut, sqlStatement.format(lineName, busSignage))    arcpy.Buffer_analysis(selectOut, bufferOut, "{0} {1}".format(bufferDist), "FULL", "ROUND", "NONE", "")    arcpy.Intersect_analysis("{0} #;{1} #".format(bufferOut, intersectIn), intersectOut, "ALL", "", "INPUT")    return intersectOut This function demonstrates how the analysis can be adjusted to accept the input and output feature class variables as parameters, along with some new variables. The function adds a variable to replace the SQL statement and variables to adjust the bus stop, and also tweaks the buffer distance statement so that both the distance and the unit can be adjusted. The feature class name variables, defined earlier in the script, have all been replaced with local variable names; while the global variable names could have been retained, it reduces the portability of the function. The next function will accept the result of the selectBufferIntersect() function and search it using the Search Cursor, passing the results into a dictionary. The dictionary will then be returned from the function for later use: def createResultDic(resultFC):    'search results of analysis and create results dictionary' dataDictionary = {}      with arcpy.da.SearchCursor(resultFC, ["STOPID","POP10"]) as cursor:        for row in cursor:            busStopID = row[0]            pop10 = row[1]            if busStopID not in dataDictionary.keys():                dataDictionary[busStopID] = [pop10]            else:                dataDictionary[busStopID].append(pop10)    return dataDictionary This function only requires one parameter: the feature class returned from the searchBufferIntersect() function. The results holding dictionary is first created, then populated by the search cursor, with the busStopid attribute used as a key, and the census block population attribute added to a list assigned to the key. The dictionary, having been populated with sorted data, is returned from the function for use in the final function, createCSV(). This function accepts the dictionary and the name of the output CSV file as a string: def createCSV(dictionary, csvname): 'a function takes a dictionary and creates a CSV file'    with open(csvname, 'wb') as csvfile:        csvwriter = csv.writer(csvfile, delimiter=',')        for busStopID in dictionary.keys():            popList = dictionary[busStopID]            averagePop = sum(popList)/len(popList)            data = [busStopID, averagePop]            csvwriter.writerow(data) The final function creates the CSV using the csv module. The name of the file, a string, is now a customizable parameter (meaning the script name can be any valid file path and text file with the extension .csv). The csvfile parameter is passed to the CSV module's writer method and assigned to the variable csvwriter, and the dictionary is accessed and processed, and passed as a list to csvwriter to be written to the CSV file. The csv.writer() method processes each item in the list into the CSV format and saves the final result. 
Open the CSV file with Excel or a text editor such as Notepad. To run the functions, we will call them in the script following the function definitions: analysisResult = selectBufferIntersect(Bus_Stops,Inbound71, Inbound71_400ft_buffer, CensusBlocks2010, Intersect71Census, bufferDist, lineName,                busSignage ) dictionary = createResultDic(analysisResult) createCSV(dictionary,r'C:\Projects\Output\Averages.csv') Now, the script has been divided into three functions, which replace the code of the first modified script. The modified script looks like this: # -*- coding: utf-8 -*- # --------------------------------------------------------------------------- # 8662_Chapter4Modified1.py # Created on: 2014-04-22 21:59:31.00000 #   (generated by ArcGIS/ModelBuilder) # Description: # Adjusted by Silas Toms # 2014 05 05 # ---------------------------------------------------------------------------   # Import arcpy module import arcpy import csv   # Local variables: Bus_Stops = r"C:\Projects\PacktDB.gdb\SanFrancisco\Bus_Stops" CensusBlocks2010 = r"C:\Projects\PacktDB.gdb\SanFrancisco\CensusBlocks2010" Inbound71 = r"C:\Projects\PacktDB.gdb\Chapter3Results\Inbound71" Inbound71_400ft_buffer = r"C:\Projects\PacktDB.gdb\Chapter3Results\Inbound71_400ft_buffer" Intersect71Census = r"C:\Projects\PacktDB.gdb\Chapter3Results\Intersect71Census" bufferDist = 400 lineName = '71 IB' busSignage = 'Ferry Plaza' def selectBufferIntersect(selectIn,selectOut,bufferOut,intersectIn,                          intersectOut, bufferDist,lineName, busSignage ):    arcpy.Select_analysis(selectIn,                          selectOut,                           "NAME = '{0}' AND BUS_SIGNAG = '{1}'".format(lineName, busSignage))    arcpy.Buffer_analysis(selectOut,                          bufferOut,                          "{0} Feet".format(bufferDist),                          "FULL", "ROUND", "NONE", "")    arcpy.Intersect_analysis("{0} #;{1} #".format(bufferOut,intersectIn),                              intersectOut, "ALL", "", "INPUT")    return intersectOut   def createResultDic(resultFC):    dataDictionary = {}       with arcpy.da.SearchCursor(resultFC,                                ["STOPID","POP10"]) as cursor:        for row in cursor:            busStopID = row[0]            pop10 = row[1]            if busStopID not in dataDictionary.keys():                dataDictionary[busStopID] = [pop10]            else:                dataDictionary[busStopID].append(pop10)    return dataDictionary   def createCSV(dictionary, csvname):    with open(csvname, 'wb') as csvfile:        csvwriter = csv.writer(csvfile, delimiter=',')        for busStopID in dictionary.keys():            popList = dictionary[busStopID]            averagePop = sum(popList)/len(popList)            data = [busStopID, averagePop]            csvwriter.writerow(data) analysisResult = selectBufferIntersect(Bus_Stops,Inbound71, Inbound71_400ft_buffer,CensusBlocks2010,Intersect71Census, bufferDist,lineName, busSignage ) dictionary = createResultDic(analysisResult) createCSV(dictionary,r'C:\Projects\Output\Averages.csv') print "Data Analysis Complete" Further generalization of the functions, while we have created functions from the original script that can be used to extract more data about bus stops in San Francisco, our new functions are still very specific to the dataset and analysis for which they were created. This can be very useful for long and laborious analysis for which creating reusable functions is not necessary. 
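As a side note, once helpers such as createResultDic() and createCSV() stop being one-offs, the easiest way to reuse them across scripts is to move them into their own file and import that file as a module. A hypothetical sketch (the file and output names are invented) follows:

# bus_analysis_utils.py would hold the generic helpers: createResultDic(), createCSV(), ...
# Any other script in the same folder (or on the Python path) can then import them:
from bus_analysis_utils import createResultDic, createCSV

dictionary = createResultDic(r"C:\Projects\PacktDB.gdb\Chapter3Results\Intersect71Census")
createCSV(dictionary, r"C:\Projects\Output\Averages2.csv")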
The first use of functions is to get rid of the need to repeat code. The next goal is to then make that code reusable. Let's discuss some ways in which we can convert the functions from one-offs into reusable functions or even modules. First, let's examine the first function: def selectBufferIntersect(selectIn,selectOut,bufferOut,intersectIn,                          intersectOut, bufferDist,lineName, busSignage ):    arcpy.Select_analysis(selectIn,                          selectOut,                          "NAME = '{0}' AND BUS_SIGNAG = '{1}'".format(lineName, busSignage))    arcpy.Buffer_analysis(selectOut,                          bufferOut,                          "{0} Feet".format(bufferDist),                          "FULL", "ROUND", "NONE", "")    arcpy.Intersect_analysis("{0} #;{1} #".format(bufferOut,intersectIn),                              intersectOut, "ALL", "", "INPUT")    return intersectOut This function appears to be pretty specific to the bus stop analysis. It's so specific, in fact, that while there are a few ways in which we can tweak it to make it more general (that is, useful in other scripts that might not have the same steps involved), we should not convert it into a separate function. When we create a separate function, we introduce too many variables into the script in an effort to simplify it, which is a counterproductive effort. Instead, let's focus on ways to generalize the ArcPy tools themselves. The first step will be to split the three ArcPy tools and examine what can be adjusted with each of them. The Select tool should be adjusted to accept a string as the SQL select statement. The SQL statement can then be generated by another function or by parameters accepted at runtime. For instance, if we wanted to make the script accept multiple bus stops for each run of the script (for example, the inbound and outbound stops for each line), we could create a function that would accept a list of the desired stops and a SQL template, and would return a SQL statement to plug into the Select tool. Here is an example of how it would look: def formatSQLIN(dataList, sqlTemplate):    'a function to generate a SQL statement'    sql = sqlTemplate #"OBJECTID IN "    step = "("    for data in dataList:        step += str(data)    sql += step + ")"    return sql   def formatSQL(dataList, sqlTemplate):    'a function to generate a SQL statement'    sql = ''    for count, data in enumerate(dataList):        if count != len(dataList)-1:            sql += sqlTemplate.format(data) + ' OR '        else:            sql += sqlTemplate.format(data)    return sql   >>> dataVals = [1,2,3,4] >>> sqlOID = "OBJECTID = {0}" >>> sql = formatSQL(dataVals, sqlOID) >>> print sql The output is as follows: OBJECTID = 1 OR OBJECTID = 2 OR OBJECTID = 3 OR OBJECTID = 4 This new function, formatSQL(), is a very useful function. Let's review what it does by comparing the function to the results following it. The function is defined to accept two parameters: a list of values and a SQL template. The first local variable is the empty string sql, which will be added to using string addition. The function is designed to insert the values into the variable sql, creating a SQL statement by taking the SQL template and using string formatting to add them to the template, which in turn is added to the SQL statement string (note that sql += is equivelent to sql = sql +). Also, an operator (OR) is used to make the SQL statement inclusive of all data rows that match the pattern. 
This function uses the built-in enumerate function to count the iterations of the list; once it has reached the last value in the list, the operator is not added to the SQL statement. Note that we could also add one more parameter to the function to make it possible to use an AND operator instead of OR, while still keeping OR as the default: def formatSQL2(dataList, sqlTemplate, operator=" OR "):    'a function to generate a SQL statement'    sql = ''    for count, data in enumerate(dataList):        if count != len(dataList)-1:            sql += sqlTemplate.format(data) + operator        else:            sql += sqlTemplate.format(data)    return sql   >>> sql = formatSQL2(dataVals, sqlOID," AND ") >>> print sql The output is as follows: OBJECTID = 1 AND OBJECTID = 2 AND OBJECTID = 3 AND OBJECTID = 4 While it would make no sense to use an AND operator on ObjectIDs, there are other cases where it would make sense, hence leaving OR as the default while allowing for AND. Either way, this function can now be used to generate our bus stop SQL statement for multiple stops (ignoring, for now, the bus signage field): >>> sqlTemplate = "NAME = '{0}'" >>> lineNames = ['71 IB','71 OB'] >>> sql = formatSQL2(lineNames, sqlTemplate) >>> print sql The output is as follows: NAME = '71 IB' OR NAME = '71 OB' However, we can't ignore the Bus Signage field for the inbound line, as there are two starting points for the line, so we will need to adjust the function to accept multiple values: def formatSQLMultiple(dataList, sqlTemplate, operator=" OR "):    'a function to generate a SQL statement'    sql = ''    for count, data in enumerate(dataList):        if count != len(dataList)-1:            sql += sqlTemplate.format(*data) + operator        else:            sql += sqlTemplate.format(*data)    return sql   >>> sqlTemplate = "(NAME = '{0}' AND BUS_SIGNAG = '{1}')" >>> lineNames = [('71 IB', 'Ferry Plaza'),('71 OB','48th Avenue')] >>> sql = formatSQLMultiple(lineNames, sqlTemplate) >>> print sql The output is as follows: (NAME = '71 IB' AND BUS_SIGNAG = 'Ferry Plaza') OR (NAME = '71 OB' AND BUS_SIGNAG = '48th Avenue') The slight difference in this function, the asterisk before the data variable, allows the values inside the data variable to be correctly formatted into the SQL template by exploding the values within the tuple. Notice that the SQL template has been created to segregate each conditional by using parentheses. The function(s) are now ready for reuse, and the SQL statement is now ready for insertion into the Select tool: sql = formatSQLMultiple(lineNames, sqlTemplate) arcpy.Select_analysis(Bus_Stops, Inbound71, sql) Next up is the Buffer tool. We have already taken steps towards making it generalized by adding a variable for the distance. In this case, we will only add one more variable to it, a unit variable that will make it possible to adjust the buffer unit from feet to meter or any other allowed unit. We will leave the other defaults alone. Here is an adjusted version of the Buffer tool: bufferDist = 400 bufferUnit = "Feet" arcpy.Buffer_analysis(Inbound71,                      Inbound71_400ft_buffer,                      "{0} {1}".format(bufferDist, bufferUnit),                      "FULL", "ROUND", "NONE", "") Now, both the buffer distance and buffer unit are controlled by a variable defined in the previous script, and this will make it easily adjustable if it is decided that the distance was not sufficient and the variables might need to be adjusted. 
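If the buffer step needs to be reused in other scripts, it can be wrapped in a small helper in the same spirit as the SQL functions. The following is a hypothetical sketch (the function name is invented) built around the exact Buffer call shown above:

def bufferFeatureClass(inFC, outFC, bufferDist=400, bufferUnit="Feet"):
    'buffer a feature class with an adjustable distance and unit'
    arcpy.Buffer_analysis(inFC, outFC,
                          "{0} {1}".format(bufferDist, bufferUnit),
                          "FULL", "ROUND", "NONE", "")
    return outFC

# Example call, mirroring the variables used above:
# bufferFeatureClass(Inbound71, Inbound71_400ft_buffer, bufferDist, bufferUnit)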
The next step towards adjusting the ArcPy tools is to write a function, which will allow for any number of feature classes to be intersected together using the Intersect tool. This new function will be similar to the formatSQL functions as previous, as they will use string formatting and addition to allow for a list of feature classes to be processed into the correct string format for the Intersect tool to accept them. However, as this function will be built to be as general as possible, it must be designed to accept any number of feature classes to be intersected: def formatIntersect(features):    'a function to generate an intersect string'    formatString = ''    for count, feature in enumerate(features):        if count != len(features)-1:            formatString += feature + " #;"        else:            formatString += feature + " #"        return formatString >>> shpNames = ["example.shp","example2.shp"] >>> iString = formatIntersect(shpNames) >>> print iString The output is as follows: example.shp #;example2.shp # Now that we have written the formatIntersect() function, all that needs to be created is a list of the feature classes to be passed to the function. The string returned by the function can then be passed to the Intersect tool: intersected = [Inbound71_400ft_buffer, CensusBlocks2010] iString = formatIntersect(intersected) # Process: Intersect arcpy.Intersect_analysis(iString,                          Intersect71Census, "ALL", "", "INPUT") Because we avoided creating a function that only fits this script or analysis, we now have two (or more) useful functions that can be applied in later analyses, and we know how to manipulate the ArcPy tools to accept the data that we want to supply to them. Summary In this article, we discussed how to take autogenerated code and make it generalized, while adding functions that can be reused in other scripts and will make the generation of the necessary code components, such as SQL statements, much easier. Resources for Article: Further resources on this subject: Enterprise Geodatabase [article] Adding Graphics to the Map [article] Image classification and feature extraction from images [article]


Your first FuelPHP application in 7 easy steps

Packt
04 Mar 2015
12 min read
In this article by Sébastien Drouyer, author of the book FuelPHP Application Development Blueprints we will see that FuelPHP is an open source PHP framework using the latest technologies. Its large community regularly creates and improves packages and extensions, and the framework’s core is constantly evolving. As a result, FuelPHP is a very complete solution for developing web applications. (For more resources related to this topic, see here.) In this article, we will also see how easy it is for developers to create their first website using the PHP oil utility. The target application Suppose you are a zoo manager and you want to keep track of the monkeys you are looking after. For each monkey, you want to save: Its name If it is still in the zoo Its height A description input where you can enter custom information You want a very simple interface with five major features. You want to be able to: Create new monkeys Edit existing ones List all monkeys View a detailed file for each monkey Delete monkeys These preceding five major features, very common in computer applications, are part of the Create, Read, Update and Delete (CRUD) basic operations. Installing the environment The FuelPHP framework needs the three following components: Webserver: The most common solution is Apache PHP interpreter: The 5.3 version or above Database: We will use the most popular one, MySQL The installation and configuration procedures of these components will depend on the operating system you use. We will provide here some directions to get you started in case you are not used to install your development environment. Please note though that these are very generic guidelines. Feel free to search the web for more information, as there are countless resources on the topic. Windows A complete and very popular solution is to install WAMP. This will install Apache, MySQL and PHP, in other words everything you need to get started. It can be accessed at the following URL: http://www.wampserver.com/en/ Mac PHP and Apache are generally installed on the latest version of the OS, so you just have to install MySQL. To do that, you are recommended to read the official documentation: http://dev.mysql.com/doc/refman/5.1/en/macosx-installation.html A very convenient solution for those of you who have the least system administration skills is to install MAMP, the equivalent of WAMP but for the Mac operating system. It can be downloaded through the following URL: http://www.mamp.info/en/downloads/ Ubuntu As this is the most popular Linux distribution, we will limit our instructions to Ubuntu. You can install a complete environment by executing the following command lines: # Apache, MySQL, PHP sudo apt-get install lamp-server^   # PHPMyAdmin allows you to handle the administration of MySQL DB sudo apt-get install phpmyadmin   # Curl is useful for doing web requests sudo apt-get install curl libcurl3 libcurl3-dev php5-curl   # Enabling the rewrite module as it is needed by FuelPHP sudo a2enmod rewrite   # Restarting Apache to apply the new configuration sudo service apache2 restart Getting the FuelPHP framework There are four common ways to download FuelPHP: Downloading and unzipping the compressed package which can be found on the FuelPHP website. Executing the FuelPHP quick command-line installer. Downloading and installing FuelPHP using Composer. Cloning the FuelPHP GitHub repository. It is a little bit more complicated but allows you to select exactly the version (or even the commit) you want to install. 
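For reference, the second and third options boil down to a couple of command lines. The following is only a rough sketch (the project name zoo_manager is invented), so double-check the exact commands against the installation instructions page linked below before running them:

# Quick command-line installer: installs the oil utility, then scaffolds a new project
curl get.fuelphp.com/oil | sh
oil create zoo_manager

# Composer-based installation into a directory named zoo_manager
composer create-project fuel/fuel zoo_manager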
The easiest way is to download and unzip the compressed package located at: http://fuelphp.com/files/download/28 You can get more information about this step in Chapter 1 of FuelPHP Application Development Blueprints, which can be accessed freely. It is also well-documented on the website installation instructions page: http://fuelphp.com/docs/installation/instructions.html Installation directory and apache configuration Now that you know how to install FuelPHP in a given directory, we will explain where to install it and how to configure Apache. The simplest way The simplest way is to install FuelPHP in the root folder of your web server (generally the /var/www directory on Linux systems). If you install fuel in the DIR directory inside the root folder (/var/www/DIR), you will be able to access your project on the following URL: http://localhost/DIR/public/ However, be warned that fuel has not been implemented to support this, and if you publish your project this way in the production server, it will introduce security issues you will have to handle. In such cases, you are recommended to use the second way we explained in the section below, although, for instance if you plan to use a shared host to publish your project, you might not have the choice. A complete and up to date documentation about this issue can be found in the Fuel installation instruction page: http://fuelphp.com/docs/installation/instructions.html By setting up a virtual host Another way is to create a virtual host to access your application. You will need a *nix environment and a little bit more apache and system administration skills, but the benefit is that it is more secured and you will be able to choose your working directory. You will need to change two files: Your apache virtual host file(s) in order to link a virtual host to your application Your system host file, in order redirect the wanted URL to your virtual host In both cases, the files location will be very dependent on your operating system and the server environment you are using, so you will have to figure their location yourself (if you are using a common configuration, you won’t have any problem to find instructions on the web). In the following example, we will set up your system to call your application when requesting the my.app URL on your local environment. Let’s first edit the virtual host file(s); add the following code at the end: <VirtualHost *:80>    ServerName my.app    DocumentRoot YOUR_APP_PATH/public    SetEnv FUEL_ENV "development"    <Directory YOUR_APP_PATH/public>        DirectoryIndex index.php        AllowOverride All        Order allow,deny        Allow from all    </Directory> </VirtualHost> Then, open your system host files and add the following line at the end: 127.0.0.1 my.app Depending on your environment, you might need to restart Apache after that. You can now access your website on the following URL: http://my.app/ Checking that everything works Whether you used a virtual host or not, the following should now appear when accessing your website: Congratulation! You just have successfully installed the FuelPHP framework. The welcome page shows some recommended directions to continue your project. Database configuration As we will store our monkeys into a MySQL database, it is time to configure FuelPHP to use our local database. 
If you open fuel/app/config/db.php, all you will see is an empty array but this configuration file is merged to fuel/app/config/ENV/db.php, ENV being the current Fuel’s environment, which in that case is development. You should therefore open fuel/app/config/development/db.php: <?php //... return array( 'default' => array(    'connection' => array(      'dsn'       => 'mysql:host=localhost;dbname=fuel_dev',      'username'   => 'root',      'password'   => 'root',    ), ), ); You should adapt this array to your local configuration, particularly the database name (currently set to fuel_dev), the username, and password. You must create your project’s database manually. Scaffolding Now that the database configuration is set, we will be able to generate a scaffold. We will use for that the generate feature of the oil utility. Open the command-line utility and go to your website root directory. To generate a scaffold for a new model, you will need to enter the following line: php oil generate scaffold/crud MODEL ATTR_1:TYPE_1 ATTR_2:TYPE_2 ... Where: MODEL is the model name ATTR_1, ATTR_2… are the model’s attributes names TYPE_1, TYPE_2… are each attribute type In our case, it should be: php oil generate scaffold/crud monkey name:string still_here:bool height:float description:text Here we are telling oil to generate a scaffold for the monkey model with the following attributes: name: The name of the monkey. Its type is string and the associated MySQL column type will be VARCHAR(255). still_here: Whether or not the monkey is still in the facility. Its type is boolean and the associated MySQL column type will be TINYINT(1). height: Height of the monkey. Its type is float and its associated MySQL column type will be FLOAT. description: Description of the monkey. Its type is text and its associated MySQL column type will be TEXT. You can do much more using the oil generate feature, as generating models, controllers, migrations, tasks, package and so on. We will see some of these in the FuelPHP Application Development Blueprints book and you are also recommended to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/generate.html When you press Enter, you will see the following lines appear: Creating migration: APPPATH/migrations/001_create_monkeys.php Creating model: APPPATH/classes/model/monkey.php Creating controller: APPPATH/classes/controller/monkey.php Creating view: APPPATH/views/monkey/index.php Creating view: APPPATH/views/monkey/view.php Creating view: APPPATH/views/monkey/create.php Creating view: APPPATH/views/monkey/edit.php Creating view: APPPATH/views/monkey/_form.php Creating view: APPPATH/views/template.php Where APPPATH is your website directory/fuel/app. Oil has generated for us nine files: A migration file, containing all the necessary information to create the model’s associated table The model A controller Five view files and a template file More explanation about these files and how they interact with each other can be accessed in Chapter 1 of the FuelPHP Application Development Blueprints book, freely available. For those of you who are not yet familiar with MVC and HMVC frameworks, don’t worry; the chapter contains an introduction to the most important concepts. Migrating One of the generated files was APPPATH/migrations/001_create_monkeys.php. It is a migration file and contains the required information to create our monkey table. Notice the name is structured as VER_NAME where VER is the version number and NAME is the name of the migration. 
If you execute the following command line: php oil refine migrate All migration files that have not yet been executed will be run from the oldest version to the latest (001, 002, 003, and so on). Once all files are executed, oil will display the latest version number. If you then take a look at your database, you will observe that not one, but two tables have been created: monkeys: As expected, a table has been created to handle your monkeys. Notice that the table name is the plural version of the word we typed when generating the scaffold; this transformation was done internally using the Inflector::pluralize method. The table contains the specified columns (name, still_here, height, and description), the id column, and also created_at and updated_at. These columns respectively store the time an object was created and updated, and are added by default each time you generate your models. You can skip them by passing the --no-timestamp argument. migration: This second table was created automatically. It keeps track of the migrations that have been executed. If you look into its content, you will see that it already contains one row: the migration you just executed. Notice that the row does not only indicate the migration that was run, but also a type and a name; this is because migration files can be located in several places, such as modules or packages. The oil utility allows you to do much more. Don't hesitate to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/intro.html Or, again, read FuelPHP Application Development Blueprints' Chapter 1, which is available for free. Using your application Now that we have generated the code and migrated the database, our application is ready to be used. Request the following URL: If you created a virtual host: http://my.app/monkey Otherwise (don't forget to replace DIR): http://localhost/DIR/public/monkey As you can see, this webpage is intended to display the list of all monkeys, but since none have been added yet, the list is empty. Let's add a new monkey by clicking on the Add new Monkey button. The following webpage should appear: You can enter your monkey's information here. The form is certainly not perfect - for instance, the Still here field uses a standard text input although a checkbox would be more appropriate - but it is a great start. All we will have to do is refine the code a little. Once you have added several monkeys, you can again take a look at the listing page: Again, this is a great start, though we might want to refine it. Each item on the list has three associated actions: View, Edit, and Delete. Let's first click on View: Again, a great start, though we will refine this webpage. You can return to the listing by clicking on Back, or edit the monkey by clicking on Edit. Whether accessed from the listing page or the view page, Edit displays the same form as when creating a new monkey, except that the form is prefilled, of course. Finally, if you click on Delete, a confirmation box will appear to prevent accidental deletions. Want to learn more? Don't hesitate to check out FuelPHP Application Development Blueprints' Chapter 1, which is freely available on Packt Publishing's website. In this chapter, you will find a more thorough introduction to FuelPHP and we will show how to improve this first application. 
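Before wrapping up this first tour, note that the generated pages are backed by a model class you can also call directly from your own controllers or tasks. The snippet below is a hedged sketch that assumes the scaffold/crud generator produced a Model_Crud-based class named Model_Monkey, as suggested by the generator output above; method names can vary slightly between FuelPHP versions, so double-check against your generated fuel/app/classes/model/monkey.php:

<?php
// Illustrative usage of the generated model (assumes Model_Monkey extends \Model_Crud).

// Create and persist a new monkey.
$monkey = Model_Monkey::forge(array(
    'name'        => 'Julian',
    'still_here'  => 1,
    'height'      => 0.65,
    'description' => 'Our newest arrival.',
));
$monkey->save();

// Retrieve all monkeys, or a single one by primary key.
$all_monkeys = Model_Monkey::find_all();
$first       = Model_Monkey::find_by_pk(1);

// Update or delete a loaded instance.
if ($first) {
    $first->still_here = 0;
    $first->save();
    // $first->delete();
}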
You are also encouraged to explore the FuelPHP website, which contains a lot of useful information and excellent documentation: http://www.fuelphp.com There is much more to discover about this framework. Summary In this article, we learned how to install the FuelPHP framework, choose an installation directory and configure Apache, set up the database, and generate and use a first scaffolded application. Resources for Article: Further resources on this subject: PHP Magic Features [Article] FuelPHP [Article] Building a To-do List with Ajax [Article]

Prototyping Arduino Projects using Python

Packt
04 Mar 2015
18 min read
In this article by Pratik Desai, the author of Python Programming for Arduino, we will cover the following topics: Working with pyFirmata methods Servomotor – moving the motor to a certain angle The Button() widget – interfacing GUI with Arduino and LEDs (For more resources related to this topic, see here.) Working with pyFirmata methods The pyFirmata package provides useful methods to bridge the gap between Python and Arduino's Firmata protocol. Although these methods are described with specific examples, you can use them in various different ways. This section also provides detailed description of a few additional methods. Setting up the Arduino board To set up your Arduino board in a Python program using pyFirmata, you need to specifically follow the steps that we have written down. We have distributed the entire code that is required for the setup process into small code snippets in each step. While writing your code, you will have to carefully use the code snippets that are appropriate for your application. You can always refer to the example Python files containing the complete code. Before we go ahead, let's first make sure that your Arduino board is equipped with the latest version of the StandardFirmata program and is connected to your computer: Depending upon the Arduino board that is being utilized, start by importing the appropriate pyFirmata classes to the Python code. Currently, the inbuilt pyFirmata classes only support the Arduino Uno and Arduino Mega boards: from pyfirmata import Arduino In case of Arduino Mega, use the following line of code: from pyfirmata import ArduinoMega Before we start executing any methods that is associated with handling pins, it is required to properly set the Arduino board. To perform this task, we have to first identify the USB port to which the Arduino board is connected and assign this location to a variable in the form of a string object. For Mac OS X, the port string should approximately look like this: port = '/dev/cu.usbmodemfa1331' For Windows, use the following string structure: port = 'COM3' In the case of the Linux operating system, use the following line of code: port = '/dev/ttyACM0' The port's location might be different according to your computer configuration. You can identify the correct location of your Arduino USB port by using the Arduino IDE. Once you have imported the Arduino class and assigned the port to a variable object, it's time to engage Arduino with pyFirmata and associate this relationship to another variable: board = Arduino(port) Similarly, for Arduino Mega, use this: board = ArduinoMega(port) The synchronization between the Arduino board and pyFirmata requires some time. Adding sleep time between the preceding assignment and the next set of instructions can help to avoid any issues that are related to serial port buffering. The easiest way to add sleep time is to use the inbuilt Python method, sleep(time): from time import sleep sleep(1) The sleep() method takes seconds as the parameter and a floating-point number can be used to provide the specific sleep time. For example, for 200 milliseconds, it will be sleep(0.2). At this point, you have successfully synchronized your Arduino Uno or Arduino Mega board to the computer using pyFirmata. What if you want to use a different variant (other than Arduino Uno or ArduinoMega) of the Arduino board? Any board layout in pyFirmata is defined as a dictionary object. 
The following is a sample of the dictionary object for the Arduino board: arduino = {     'digital' : tuple(x for x in range(14)),     'analog' : tuple(x for x in range(6)),     'pwm' : (3, 5, 6, 9, 10, 11),     'use_ports' : True,     'disabled' : (0, 1) # Rx, Tx, Crystal     } For your variant of the Arduino board, you have to first create a custom dictionary object. To create this object, you need to know the hardware layout of your board. For example, an Arduino Nano board has a layout similar to a regular Arduino board, but it has eight instead of six analog ports. Therefore, the preceding dictionary object can be customized as follows: nano = {     'digital' : tuple(x for x in range(14)),     'analog' : tuple(x for x in range(8)),     'pwm' : (3, 5, 6, 9, 10, 11),     'use_ports' : True,     'disabled' : (0, 1) # Rx, Tx, Crystal     } As you have already synchronized the Arduino board earlier, modify the layout of the board using the setup_layout(layout) method: board.setup_layout(nano) This command will modify the default layout of the synchronized Arduino board to the Arduino Nano layout or any other variant for which you have customized the dictionary object. Configuring Arduino pins Once your Arduino board is synchronized, it is time to configure the digital and analog pins that are going to be used as part of your program. Arduino board has digital I/O pins and analog input pins that can be utilized to perform various operations. As we already know, some of these digital pins are also capable of PWM. The direct method Now before we start writing or reading any data to these pins, we have to first assign modes to these pins. In the Arduino sketch-based, we use the pinMode function, that is, pinMode(11, INPUT) for this operation. Similarly, in pyFirmata, this assignment operation is performed using the mode method on the board object as shown in the following code snippet: from pyfirmata import Arduino from pyfirmata import INPUT, OUTPUT, PWM   # Setting up Arduino board port = '/dev/cu.usbmodemfa1331' board = Arduino(port)   # Assigning modes to digital pins board.digital[13].mode = OUTPUT board.analog[0].mode = INPUT The pyFirmata library includes classes for the INPUT and OUTPUT modes, which are required to be imported before you utilized them. The preceding example shows the delegation of digital pin 13 as an output and the analog pin 0 as an input. The mode method is performed on the variable assigned to the configured Arduino board using the digital[] and analog[] array index assignment. The pyFirmata library also supports additional modes such as PWM and SERVO. The PWM mode is used to get analog results from digital pins, while SERVO mode helps a digital pin to set the angle of the shaft between 0 to 180 degrees. If you are using any of these modes, import their appropriate classes from the pyFirmata library. Once these classes are imported from the pyFirmata package, the modes for the appropriate pins can be assigned using the following lines of code: board.digital[3].mode = PWM board.digital[10].mode = SERVO Assigning pin modes The direct method of configuring pin is mostly used for a single line of execution calls. In a project containing a large code and complex logic, it is convenient to assign a pin with its role to a variable object. With an assignment like this, you can later utilize the assigned variable throughout the program for various actions, instead of calling the direct method every time you need to use that pin. 
In pyFirmata, this assignment can be performed using the get_pin(pin_def) method: from pyfirmata import Arduino port = '/dev/cu.usbmodemfa1311' board = Arduino(port)   # pin mode assignment ledPin = board.get_pin('d:13:o') The get_pin() method lets you assign pin modes using the pin_def string parameter, 'd:13:o'. The three components of pin_def are pin type, pin number, and pin mode separated by a colon (:) operator. The pin types ( analog and digital) are denoted with a and d respectively. The get_pin() method supports three modes, i for input, o for output, and p for PWM. In the previous code sample, 'd:13:o' specifies the digital pin 13 as an output. In another example, if you want to set up the analog pin 1 as an input, the parameter string will be 'a:1:i'. Working with pins As you have configured your Arduino pins, it's time to start performing actions using them. Two different types of methods are supported while working with pins: reporting methods and I/O operation methods. Reporting data When pins get configured in a program as analog input pins, they start sending input values to the serial port. If the program does not utilize this incoming data, the data starts getting buffered at the serial port and quickly overflows. The pyFirmata library provides the reporting and iterator methods to deal with this phenomenon. The enable_reporting() method is used to set the input pin to start reporting. This method needs to be utilized before performing a reading operation on the pin: board.analog[3].enable_reporting() Once the reading operation is complete, the pin can be set to disable reporting: board.analog[3].disable_reporting() In the preceding example, we assumed that you have already set up the Arduino board and configured the mode of the analog pin 3 as INPUT. The pyFirmata library also provides the Iterator() class to read and handle data over the serial port. While working with analog pins, we recommend that you start an iterator thread in the main loop to update the pin value to the latest one. If the iterator method is not used, the buffered data might overflow your serial port. This class is defined in the util module of the pyFirmata package and needs to be imported before it is utilized in the code: from pyfirmata import Arduino, util # Setting up the Arduino board port = 'COM3' board = Arduino(port) sleep(5)   # Start Iterator to avoid serial overflow it = util.Iterator(board) it.start() Manual operations As we have configured the Arduino pins to suitable modes and their reporting characteristic, we can start monitoring them. The pyFirmata provides the write() and read() methods for the configured pins. The write() method The write() method is used to write a value to the pin. If the pin's mode is set to OUTPUT, the value parameter is a Boolean, that is, 0 or 1: board.digital[pin].mode = OUTPUT board.digital[pin].write(1) If you have used an alternative method of assigning the pin's mode, you can use the write() method as follows: ledPin = board.get_pin('d:13:o') ledPin.write(1) In case of the PWM signal, the Arduino accepts a value between 0 and 255 that represents the length of the duty cycle between 0 and 100 percent. The PyFiramta library provides a simplified method to deal with the PWM values as instead of values between 0 and 255, as you can just provide a float value between 0 and 1.0. For example, if you want a 50 percent duty cycle (2.5V analog value), you can specify 0.5 with the write() method. 
The pyFirmata library will take care of the translation and send the appropriate value, that is, 127, to the Arduino board via the Firmata protocol: board.digital[pin].mode = PWM board.digital[pin].write(0.5) Similarly, for the indirect method of assignment, you can use code similar to the following one: pwmPin = board.get_pin('d:13:p') pwmPin.write(0.5) If you are using the SERVO mode, you need to provide the value in degrees between 0 and 180. Unfortunately, the SERVO mode is only applicable for direct assignment of the pins and will be available in future for indirect assignments: board.digital[pin].mode = SERVO board.digital[pin].write(90) The read() method The read() method provides an output value at the specified Arduino pin. When the Iterator() class is being used, the value received using this method is the latest updated value at the serial port. When you read a digital pin, you can get only one of the two inputs, HIGH or LOW, which will translate to 1 or 0 in Python: board.digital[pin].read() The analog pins of Arduino linearly translate the input voltages between 0 and +5V to 0 and 1023. However, in pyFirmata, the values between 0 and +5V are linearly translated into the float values of 0 and 1.0. For example, if the voltage at the analog pin is 1V, an Arduino program will measure a value somewhere around 204, but you will receive the float value as 0.2 while using pyFirmata's read() method in Python. Servomotor – moving the motor to certain angle Servomotors are widely used electronic components in applications such as pan-tilt camera control, robotics arm, mobile robot movements, and so on where precise movement of the motor shaft is required. This precise control of the motor shaft is possible because of the position sensing decoder, which is an integral part of the servomotor assembly. A standard servomotor allows the angle of the shaft to be set between 0 and 180 degrees. The pyFirmata provides the SERVO mode that can be implemented on every digital pin. This prototyping exercise provides a template and guidelines to interface a servomotor with Python. Connections Typically, a servomotor has wires that are color-coded red, black and yellow, respectively to connect with the power, ground, and signal of the Arduino board. Connect the power and the ground of the servomotor to the 5V and the ground of the Arduino board. As displayed in the following diagram, connect the yellow signal wire to the digital pin 13: If you want to use any other digital pin, make sure that you change the pin number in the Python program in the next section. Once you have made the appropriate connections, let's move on to the Python program. The Python code The Python file consisting this code is named servoCustomAngle.py and is located in the code bundle of this book, which can be downloaded from https://www.packtpub.com/books/content/support/19610. Open this file in your Python editor. Like other examples, the starting section of the program contains the code to import the libraries and set up the Arduino board: from pyfirmata import Arduino, SERVO from time import sleep   # Setting up the Arduino board port = 'COM5' board = Arduino(port) # Need to give some time to pyFirmata and Arduino to synchronize sleep(5) Now that you have Python ready to communicate with the Arduino board, let's configure the digital pin that is going to be used to connect the servomotor to the Arduino board. 
We will complete this task by setting the mode of pin 13 to SERVO: # Set mode of the pin 13 as SERVO pin = 13 board.digital[pin].mode = SERVO The setServoAngle(pin,angle) custom function takes the pins on which the servomotor is connected and the custom angle as input parameters. This function can be used as a part of various large projects that involve servos: # Custom angle to set Servo motor angle def setServoAngle(pin, angle):   board.digital[pin].write(angle)   sleep(0.015) In the main logic of this template, we want to incrementally move the motor shaft in one direction until it achieves the maximum achievable angle (180 degrees) and then move it back to the original position with the same incremental speed. In the while loop, we will ask the user to provide inputs to continue this routine, which will be captured using the raw_input() function. The user can enter character y to continue this routine or enter any other character to abort the loop: # Testing the function by rotating motor in both direction while True:   for i in range(0, 180):     setServoAngle(pin, i)   for i in range(180, 1, -1):     setServoAngle(pin, i)     # Continue or break the testing process   i = raw_input("Enter 'y' to continue or Enter to quit): ")   if i == 'y':     pass   else:     board.exit()     break While working with all these prototyping examples, we used the direct communication method by using digital and analog pins to connect the sensor with Arduino. Now, let's get familiar with another widely used communication method between Arduino and the sensors. This is called I2C communication. The Button() widget – interfacing GUI with Arduino and LEDs Now that you have had your first hands-on experience in creating a Python graphical interface, let's integrate Arduino with it. Python makes it easy to interface various heterogeneous packages within each other and that is what you are going to do. In the next coding exercise, we will use Tkinter and pyFirmata to make the GUI work with Arduino. In this exercise, we are going to use the Button() widget to control the LEDs interfaced with the Arduino board. Before we jump to the exercises, let's build the circuit that we will need for all upcoming programs. The following is a Fritzing diagram of the circuit where we use two different colored LEDs with pull up resistors. Connect these LEDs to digital pins 10 and 11 on your Arduino Uno board, as displayed in the following diagram: While working with the code provided in this section, you will have to replace the Arduino port that is used to define the board variable according to your operating system. Also, make sure that you provide the correct pin number in the code if you are planning to use any pins other than 10 and 11. For some exercises, you will have to use the PWM pins, so make sure that you have correct pins. You can use the entire code snippet as a Python file and run it. But, this might not be possible in the upcoming exercises due to the length of the program and the complexity involved. For the Button() widget exercise, open the exampleButton.py file. The code contains three main components: pyFirmata and Arduino configurations Tkinter widget definitions for a button The LED blink function that gets executed when you press the button As you can see in the following code snippet, we have first imported libraries and initialized the Arduino board using the pyFirmata methods. 
For this exercise, we are only going to work with one LED and we have initialized only the ledPin variable for it: import Tkinter import pyfirmata from time import sleep port = '/dev/cu.usbmodemfa1331' board = pyfirmata.Arduino(port) sleep(5) ledPin = board.get_pin('d:11:o') As we are using the pyFirmata library for all the exercises in this article, make sure that you have uploaded the latest version of the standard Firmata sketch on your Arduino board. In the second part of the code, we have initialized the root Tkinter widget as top and provided a title string. We have also fixed the size of this window using the minsize() method. In order to get more familiar with the root widget, you can play around with the minimum and maximum size of the window: top = Tkinter.Tk() top.title("Blink LED using button") top.minsize(300,30) The Button() widget is a standard Tkinter widget that is mostly used to obtain the manual, external input stimulus from the user. Like the Label() widget, the Button() widget can be used to display text or images. Unlike the Label() widget, it can be associated with actions or methods when it is pressed. When the button is pressed, Tkinter executes the methods or commands specified by the command option: startButton = Tkinter.Button(top,                              text="Start",                              command=onStartButtonPress) startButton.pack() In this initialization, the function associated with the button is onStartButtonPress and the "Start" string is displayed as the title of the button. Similarly, the top object specifies the parent or the root widget. Once the button is instantiated, you will need to use the pack() method to make it available in the main window. In the preceding lines of code, the onStartButonPress() function includes the scripts that are required to blink the LEDs and change the state of the button. A button state can have the state as NORMAL, ACTIVE, or DISABLED. If it is not specified, the default state of any button is NORMAL. The ACTIVE and DISABLED states are useful in applications when repeated pressing of the button needs to be avoided. After turning the LED on using the write(1) method, we will add a time delay of 5 seconds using the sleep(5) function before turning it off with the write(0) method: def onStartButtonPress():   startButton.config(state=Tkinter.DISABLED)   ledPin.write(1)   # LED is on for fix amount of time specified below   sleep(5)   ledPin.write(0)   startButton.config(state=Tkinter.ACTIVE) At the end of the program, we will execute the mainloop() method to initiate the Tkinter loop. Until this function is executed, the main window won't appear. To run the code, make appropriate changes to the Arduino board variable and execute the program. The following screenshot with a button and title bar will appear as the output of the program. Clicking on the Start button will turn on the LED on the Arduino board for the specified time delay. Meanwhile, when the LED is on, you will not be able to click on the Start button again. Now, in this particular program, we haven't provided sufficient code to safely disengage the Arduino board and it will be covered in upcoming exercises. Summary In this article, we learned about the Python library pyFirmata to interface Arduino to your computer using the Firmata protocol. We build a prototype using pyFirmata and Arduino to control servomotor and also developed another one with GUI, based on the Tkinter library, to control LEDs. 
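As a compact recap of the pyFirmata workflow covered in this article (board setup, the iterator thread, reading an input pin, and writing a PWM value), here is a small self-contained sketch. The port string and pin numbers are assumptions for illustration; adjust them to your own operating system and wiring:

# Recap sketch, not one of the book's example files.
# Assumes an LED on PWM-capable digital pin 11 and a potentiometer on analog pin 0.
from time import sleep
from pyfirmata import Arduino, util

port = '/dev/ttyACM0'            # for example 'COM3' on Windows
board = Arduino(port)
sleep(5)                         # let pyFirmata and Arduino synchronize

it = util.Iterator(board)        # avoid serial buffer overflow
it.start()

pot = board.get_pin('a:0:i')     # analog pin 0 as input
led = board.get_pin('d:11:p')    # digital pin 11 in PWM mode
pot.enable_reporting()

try:
    for _ in range(100):
        value = pot.read()       # float between 0.0 and 1.0, or None at startup
        if value is not None:
            led.write(value)     # reuse the reading as the PWM duty cycle
        sleep(0.1)
finally:
    led.write(0)
    board.exit()                 # safely disengage the board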
Resources for Article: Further resources on this subject: Python Functions : Avoid Repeating Code? [article] Python 3 Designing Tasklist Application [article] The Five Kinds Of Python Functions Python 3.4 Edition [article]

AngularJS Performance

Packt
04 Mar 2015
20 min read
In this article by Chandermani, the author of AngularJS by Example, we focus our discussion on the performance aspect of AngularJS. For most scenarios, we can all agree that AngularJS is insanely fast. For standard size views, we rarely see any performance bottlenecks. But many views start small and then grow over time. And sometimes the requirement dictates we build large pages/views with a sizable amount of HTML and data. In such a case, there are things that we need to keep in mind to provide an optimal user experience. Take any framework and the performance discussion on the framework always requires one to understand the internal working of the framework. When it comes to Angular, we need to understand how Angular detects model changes. What are watches? What is a digest cycle? What roles do scope objects play? Without a conceptual understanding of these subjects, any performance guidance is merely a checklist that we follow without understanding the why part. Let's look at some pointers before we begin our discussion on performance of AngularJS: The live binding between the view elements and model data is set up using watches. When a model changes, one or many watches linked to the model are triggered. Angular's view binding infrastructure uses these watches to synchronize the view with the updated model value. Model change detection only happens when a digest cycle is triggered. Angular does not track model changes in real time; instead, on every digest cycle, it runs through every watch to compare the previous and new values of the model to detect changes. A digest cycle is triggered when $scope.$apply is invoked. A number of directives and services internally invoke $scope.$apply: Directives such as ng-click, ng-mouse* do it on user action Services such as $http and $resource do it when a response is received from server $timeout or $interval call $scope.$apply when they lapse A digest cycle tracks the old value of the watched expression and compares it with the new value to detect if the model has changed. Simply put, the digest cycle is a workflow used to detect model changes. A digest cycle runs multiple times till the model data is stable and no watch is triggered. Once you have a clear understanding of the digest cycle, watches, and scopes, we can look at some performance guidelines that can help us manage views as they start to grow. (For more resources related to this topic, see here.) Performance guidelines When building any Angular app, any performance optimization boils down to: Minimizing the number of binding expressions and hence watches Making sure that binding expression evaluation is quick Optimizing the number of digest cycles that take place The next few sections provide some useful pointers in this direction. Remember, a lot of these optimization may only be necessary if the view is large. Keeping the page/view small The sanest advice is to keep the amount of content available on a page small. The user cannot interact/process too much data on the page, so remember that screen real estate is at a premium and only keep necessary details on a page. The lesser the content, the lesser the number of binding expressions; hence, fewer watches and less processing are required during the digest cycle. Remember, each watch adds to the overall execution time of the digest cycle. The time required for a single watch can be insignificant but, after combining hundreds and maybe thousands of them, they start to matter. 
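If you want a rough idea of how many watches a given page carries, a quick diagnostic can be run from the browser console. The sketch below relies on Angular's internal $$watchers array and on debug info being enabled (which it is unless you have explicitly disabled it), so treat the number it reports as an estimate rather than an exact figure and keep it out of production code:

// Approximate watcher count for the current page (paste into the browser console).
function countWatchers() {
  var total = 0;
  var elements = document.querySelectorAll('.ng-scope, .ng-isolate-scope');
  for (var i = 0; i < elements.length; i++) {
    var element = angular.element(elements[i]);
    var scope = element.data('$scope') || element.data('$isolateScope');
    if (scope && scope.$$watchers) {
      total += scope.$$watchers.length;
    }
  }
  return total;
}
console.log('Approximate watcher count: ' + countWatchers());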
Angular's data binding infrastructure is insanely fast and relies on a rudimentary dirty check that compares the old and the new values. Check out the stack overflow (SO) post (http://stackoverflow.com/questions/9682092/databinding-in-angularjs), where Misko Hevery (creator of Angular) talks about how data binding works in Angular. Data binding also adds to the memory footprint of the application. Each watch has to track the current and previous value of a data-binding expression to compare and verify if data has changed. Keeping a page/view small may not always be possible, and the view may grow. In such a case, we need to make sure that the number of bindings does not grow exponentially (linear growth is OK) with the page size. The next two tips can help minimize the number of bindings in the page and should be seriously considered for large views. Optimizing watches for read-once data In any Angular view, there is always content that, once bound, does not change. Any read-only data on the view can fall into this category. This implies that once the data is bound to the view, we no longer need watches to track model changes, as we don't expect the model to update. Is it possible to remove the watch after one-time binding? Angular itself does not have something inbuilt, but a community project bindonce (https://github.com/Pasvaz/bindonce) is there to fill this gap. Angular 1.3 has added support for bind and forget in the native framework. Using the syntax {{::title}}, we can achieve one-time binding. If you are on Angular 1.3, use it! Hiding (ng-show) versus conditional rendering (ng-if/ng-switch) content You have learned two ways to conditionally render content in Angular. The ng-show/ng-hide directive shows/hides the DOM element based on the expression provided and ng-if/ng-switch creates and destroys the DOM based on an expression. For some scenarios, ng-if can be really beneficial as it can reduce the number of binding expressions/watches for the DOM content not rendered. Consider the following example: <div ng-if='user.isAdmin'>   <div ng-include="'admin-panel.html'"></div></div> The snippet renders an admin panel if the user is an admin. With ng-if, if the user is not an admin, the ng-include directive template is neither requested nor rendered saving us of all the bindings and watches that are part of the admin-panel.html view. From the preceding discussion, it may seem that we should get rid of all ng-show/ng-hide directives and use ng-if. Well, not really! It again depends; for small size pages, ng-show/ng-hide works just fine. Also, remember that there is a cost to creating and destroying the DOM. If the expression to show/hide flips too often, this will mean too many DOMs create-and-destroy cycles, which are detrimental to the overall performance of the app. Expressions being watched should not be slow Since watches are evaluated too often, the expression being watched should return results fast. The first way we can make sure of this is by using properties instead of functions to bind expressions. These expressions are as follows: {{user.name}}ng-show='user.Authorized' The preceding code is always better than this: {{getUserName()}}ng-show = 'isUserAuthorized(user)' Try to minimize function expressions in bindings. If a function expression is required, make sure that the function returns a result quickly. 
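As a small, hedged illustration of this advice (the module, controller, and service names below are made up for the example), compute values once in the controller and let the view bind to plain properties:

// Compute once in the controller instead of binding the view to a function
// expression such as {{getUserName()}} that is re-evaluated on every digest.
angular.module('app').controller('UserCtrl', ['$scope', 'userService',
  function ($scope, userService) {
    userService.getCurrentUser().then(function (user) {
      $scope.user = user;
      // The view now binds to {{userName}} and ng-show="isAuthorized".
      $scope.userName = (user.firstName + ' ' + user.lastName).trim();
      $scope.isAuthorized = user.roles.indexOf('admin') !== -1;
    });
  }]);

The watched expressions are now simple property lookups, which is about as cheap as a watch can get.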
Make sure a function being watched does not: Make any remote calls Use $timeout/$interval Perform sorting/filtering Perform DOM manipulation (this can happen inside directive implementation) Or perform any other time-consuming operation Be sure to avoid such operations inside a bound function. To reiterate, Angular will evaluate a watched expression multiple times during every digest cycle just to know if the return value (a model) has changed and the view needs to be synchronized. Minimizing the deep model watch When using $scope.$watch to watch for model changes in controllers, be careful while setting the third $watch function parameter to true. The general syntax of watch looks like this: $watch(watchExpression, listener, [objectEquality]); In the standard scenario, Angular does an object comparison based on the reference only. But if objectEquality is true, Angular does a deep comparison between the last value and new value of the watched expression. This can have an adverse memory and performance impact if the object is large. Handling large datasets with ng-repeat The ng-repeat directive undoubtedly is the most useful directive Angular has. But it can cause the most performance-related headaches. The reason is not because of the directive design, but because it is the only directive that allows us to generate HTML on the fly. There is always the possibility of generating enormous HTML just by binding ng-repeat to a big model list. Some tips that can help us when working with ng-repeat are: Page data and use limitTo: Implement a server-side paging mechanism when a number of items returned are large. Also use the limitTo filter to limit the number of items rendered. Its syntax is as follows: <tr ng-repeat="user in users |limitTo:pageSize">…</tr> Look at modules such as ngInfiniteScroll (http://binarymuse.github.io/ngInfiniteScroll/) that provide an alternate mechanism to render large lists. Use the track by expression: The ng-repeat directive for performance tries to make sure it does not unnecessarily create or delete HTML nodes when items are added, updated, deleted, or moved in the list. To achieve this, it adds a $$hashKey property to every model item allowing it to associate the DOM node with the model item. We can override this behavior and provide our own item key using the track by expression such as: <tr ng-repeat="user in users track by user.id">…</tr> This allows us to use our own mechanism to identify an item. Using your own track by expression has a distinct advantage over the default hash key approach. Consider an example where you make an initial AJAX call to get users: $scope.getUsers().then(function(users){ $scope.users = users;}) Later again, refresh the data from the server and call something similar again: $scope.users = users; With user.id as a key, Angular is able to determine what elements were added/deleted and moved; it can also determine created/deleted DOM nodes for such elements. Remaining elements are not touched by ng-repeat (internal bindings are still evaluated). This saves a lot of CPU cycles for the browser as fewer DOM elements are created and destroyed. Do not bind ng-repeat to a function expression: Using a function's return value for ng-repeat can also be problematic, depending upon how the function is implemented. 
Consider a repeat with this: <tr ng-repeat="user in getUsers()">…</tr> And consider the controller getUsers function with this: $scope.getUser = function() {   var orderBy = $filter('orderBy');   return orderBy($scope.users, predicate);} Angular is going to evaluate this expression and hence call this function every time the digest cycle takes place. A lot of CPU cycles were wasted sorting user data again and again. It is better to use scope properties and presort the data before binding. Minimize filters in views, use filter elements in the controller: Filters defined on ng-repeat are also evaluated every time the digest cycle takes place. For large lists, if the same filtering can be implemented in the controller, we can avoid constant filter evaluation. This holds true for any filter function that is used with arrays including filter and orderBy. Avoiding mouse-movement tracking events The ng-mousemove, ng-mouseenter, ng-mouseleave, and ng-mouseover directives can just kill performance. If an expression is attached to any of these event directives, Angular triggers a digest cycle every time the corresponding event occurs and for events like mouse move, this can be a lot. We have already seen this behavior when working with 7 Minute Workout, when we tried to show a pause overlay on the exercise image when the mouse hovers over it. Avoid them at all cost. If we just want to trigger some style changes on mouse events, CSS is a better tool. Avoiding calling $scope.$apply Angular is smart enough to call $scope.$apply at appropriate times without us explicitly calling it. This can be confirmed from the fact that the only place we have seen and used $scope.$apply is within directives. The ng-click and updateOnBlur directives use $scope.$apply to transition from a DOM event handler execution to an Angular execution context. Even when wrapping the jQuery plugin, we may require to do a similar transition for an event raised by the JQuery plugin. Other than this, there is no reason to use $scope.$apply. Remember, every invocation of $apply results in the execution of a complete digest cycle. The $timeout and $interval services take a Boolean argument invokeApply. If set to false, the lapsed $timeout/$interval services does not call $scope.$apply or trigger a digest cycle. Therefore, if you are going to perform background operations that do not require $scope and the view to be updated, set the last argument to false. Always use Angular wrappers over standard JavaScript objects/functions such as $timeout and $interval to avoid manually calling $scope.$apply. These wrapper functions internally call $scope.$apply. Also, understand the difference between $scope.$apply and $scope.$digest. $scope.$apply triggers $rootScope.$digest that evaluates all application watches whereas, $scope.$digest only performs dirty checks on the current scope and its children. If we are sure that the model changes are not going to affect anything other than the child scopes, we can use $scope.$digest instead of $scope.$apply. Lazy-loading, minification, and creating multiple SPAs I hope you are not assuming that the apps that we have built will continue to use the numerous small script files that we have created to separate modules and module artefacts (controllers, directives, filters, and services). Any modern build system has the capability to concatenate and minify these files and replace the original file reference with a unified and minified version. 
Therefore, like any JavaScript library, use minified script files for production. The problem with the Angular bootstrapping process is that it expects all Angular application scripts to be loaded before the application can bootstrap. We cannot load modules, controllers, or in fact, any of the other Angular constructs on demand. This means we need to provide every artefact required by our app, upfront. For small applications, this is not a problem as the content is concatenated and minified; also, the Angular application code itself is far more compact as compared to the traditional JavaScript of jQuery-based apps. But, as the size of the application starts to grow, it may start to hurt when we need to load everything upfront. There are at least two possible solutions to this problem; the first one is about breaking our application into multiple SPAs. Breaking applications into multiple SPAs This advice may seem counterintuitive as the whole point of SPAs is to get rid of full page loads. By creating multiple SPAs, we break the app into multiple small SPAs, each supporting parts of the overall app functionality. When we say app, it implies a combination of the main (such as index.html) page with ng-app and all the scripts/libraries and partial views that the app loads over time. For example, we can break the Personal Trainer application into a Workout Builder app and a Workout Runner app. Both have their own start up page and scripts. Common scripts such as the Angular framework scripts and any third-party libraries can be referenced in both the applications. On similar lines, common controllers, directives, services, and filters too can be referenced in both the apps. The way we have designed Personal Trainer makes it easy to achieve our objective. The segregation into what belongs where has already been done. The advantage of breaking an app into multiple SPAs is that only relevant scripts related to the app are loaded. For a small app, this may be an overkill but for large apps, it can improve the app performance. The challenge with this approach is to identify what parts of an application can be created as independent SPAs; it totally depends upon the usage pattern of the application. For example, assume an application has an admin module and an end consumer/user module. Creating two SPAs, one for admin and the other for the end customer, is a great way to keep user-specific features and admin-specific features separate. A standard user may never transition to the admin section/area, whereas an admin user can still work on both areas; but transitioning from the admin area to a user-specific area will require a full page refresh. If breaking the application into multiple SPAs is not possible, the other option is to perform the lazy loading of a module. Lazy-loading modules Lazy-loading modules or loading module on demand is a viable option for large Angular apps. But unfortunately, Angular itself does not have any in-built support for lazy-loading modules. Furthermore, the additional complexity of lazy loading may be unwarranted as Angular produces far less code as compared to other JavaScript framework implementations. Also once we gzip and minify the code, the amount of code that is transferred over the wire is minimal. 
If we still want to try our hands on lazy loading, there are two libraries that can help: ocLazyLoad (https://github.com/ocombe/ocLazyLoad): This is a library that uses script.js to load modules on the fly angularAMD (http://marcoslin.github.io/angularAMD): This is a library that uses require.js to lazy load modules With lazy loading in place, we can delay the loading of a controller, directive, filter, or service script, until the page that requires them is loaded. The overall concept of lazy loading seems to be great but I'm still not sold on this idea. Before we adopt a lazy-load solution, there are things that we need to evaluate: Loading multiple script files lazily: When scripts are concatenated and minified, we load the complete app at once. Contrast it to lazy loading where we do not concatenate but load them on demand. What we gain in terms of lazy-load module flexibility we lose in terms of performance. We now have to make a number of network requests to load individual files. Given these facts, the ideal approach is to combine lazy loading with concatenation and minification. In this approach, we identify those feature modules that can be concatenated and minified together and served on demand using lazy loading. For example, Personal Trainer scripts can be divided into three categories: The common app modules: This consists of any script that has common code used across the app and can be combined together and loaded upfront The Workout Runner module(s): Scripts that support workout execution can be concatenated and minified together but are loaded only when the Workout Runner pages are loaded. The Workout Builder module(s): On similar lines to the preceding categories, scripts that support workout building can be combined together and served only when the Workout Builder pages are loaded. As we can see, there is a decent amount of effort required to refactor the app in a manner that makes module segregation, concatenation, and lazy loading possible. The effect on unit and integration testing: We also need to evaluate the effect of lazy-loading modules in unit and integration testing. The way we test is also affected with lazy loading in place. This implies that, if lazy loading is added as an afterthought, the test setup may require tweaking to make sure existing tests still run. Given these facts, we should evaluate our options and check whether we really need lazy loading or we can manage by breaking a monolithic SPA into multiple smaller SPAs. Caching remote data wherever appropriate Caching data is the one of the oldest tricks to improve any webpage/application performance. Analyze your GET requests and determine what data can be cached. Once such data is identified, it can be cached from a number of locations. Data cached outside the app can be cached in: Servers: The server can cache repeated GET requests to resources that do not change very often. This whole process is transparent to the client and the implementation depends on the server stack used. Browsers: In this case, the browser caches the response. Browser caching depends upon the server sending HTTP cache headers such as ETag and cache-control to guide the browser about how long a particular resource can be cached. Browsers can honor these cache headers and cache data appropriately for future use. 
If server and browser caching is not available or if we also want to incorporate any amount of caching in the client app, we do have some choices: Cache data in memory: A simple Angular service can cache the HTTP response in the memory. Since Angular is SPA, the data is not lost unless the page refreshes. This is how a service function looks when it caches data: var workouts;service.getWorkouts = function () {   if (workouts) return $q.resolve(workouts);   return $http.get("/workouts").then(function (response){       workouts = response.data;       return workouts;   });}; The implementation caches a list of workouts into the workouts variable for future use. The first request makes a HTTP call to retrieve data, but subsequent requests just return the cached data as promised. The usage of $q.resolve makes sure that the function always returns a promise. Angular $http cache: Angular's $http service comes with a configuration option cache. When set to true, $http caches the response of the particular GET request into a local cache (again an in-memory cache). Here is how we cache a GET request: $http.get(url, { cache: true}); Angular caches this cache for the lifetime of the app, and clearing it is not easy. We need to get hold of the cache dedicated to caching HTTP responses and clear the cache key manually. The caching strategy of an application is never complete without a cache invalidation strategy. With cache, there is always a possibility that caches are out of sync with respect to the actual data store. We cannot affect the server-side caching behavior from the client; consequently, let's focus on how to perform cache invalidation (clearing) for the two client-side caching mechanisms described earlier. If we use the first approach to cache data, we are responsible for clearing cache ourselves. In the case of the second approach, the default $http service does not support clearing cache. We either need to get hold of the underlying $http cache store and clear the cache key manually (as shown here) or implement our own cache that manages cache data and invalidates cache based on some criteria: var cache = $cacheFactory.get('$http');cache.remove("http://myserver/workouts"); //full url Using Batarang to measure performance Batarang (a Chrome extension), as we have already seen, is an extremely handy tool for Angular applications. Using Batarang to visualize app usage is like looking at an X-Ray of the app. It allows us to: View the scope data, scope hierarchy, and how the scopes are linked to HTML elements Evaluate the performance of the application Check the application dependency graph, helping us understand how components are linked to each other, and with other framework components. If we enable Batarang and then play around with our application, Batarang captures performance metrics for all watched expressions in the app. This data is nicely presented as a graph available on the Performance tab inside Batarang: That is pretty sweet! When building an app, use Batarang to gauge the most expensive watches and take corrective measures, if required. Play around with Batarang and see what other features it has. This is a very handy tool for Angular applications. This brings us to the end of the performance guidelines that we wanted to share in this article. Some of these guidelines are preventive measures that we should take to make sure we get optimal app performance whereas others are there to help when the performance is not up to the mark. 
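Before we wrap up, here is a hedged sketch that ties the client-side caching discussion together: a small dedicated service that caches a GET response in memory and exposes an explicit invalidation hook. The module name and the /workouts URL are placeholders to adapt to your own application:

// Minimal in-memory caching service with explicit invalidation.
angular.module('app').factory('workoutCache', ['$http', '$q',
  function ($http, $q) {
    var workouts = null;
    return {
      getWorkouts: function () {
        if (workouts) {
          return $q.when(workouts);        // serve the cached copy as a promise
        }
        return $http.get('/workouts').then(function (response) {
          workouts = response.data;        // cache for subsequent calls
          return workouts;
        });
      },
      invalidate: function () {
        workouts = null;                   // the next call will hit the server again
      }
    };
  }]);

Calling workoutCache.invalidate() after any operation that changes the underlying data is a simple, explicit way to keep the cache from going stale.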
Summary In this article, we looked at the ever-so-important topic of performance, where you learned ways to optimize an Angular app performance. Resources for Article: Further resources on this subject: Role of AngularJS [article] The First Step [article] Recursive directives [article]

Going beyond Zabbix agents

Packt
03 Mar 2015
17 min read
In this article by Andrea Dalle Vacche and Stefano Kewan Lee, author of Zabbix Network Monitoring Essentials, we will learn the different possibilities Zabbix offers to the enterprising network administrator. There are certainly many advantages in using Zabbix's own agents and protocol when it comes to monitoring Windows and Unix operating systems or the applications that run on them. However, when it comes to network monitoring, the vast majority of monitored objects are network appliances of various kinds, where it's often impossible to install and run a dedicated agent of any type. This by no means implies that you'll be unable to fully leverage Zabbix's power to monitor your network. Whether it's a simple ICMP echo request, an SNMP query, an SNMP trap, netflow logging, or a custom script, there are many possibilities to extract meaningful data from your network. This section will show you how to set up these different methods of gathering data, and give you a few examples on how to use them. (For more resources related to this topic, see here.) Simple checks An interesting use case is using one or more net.tcp.service items to make sure that some services are not running on a given interface. Take for example, the case of a border router or firewall. Unless you have some very special and specific needs, you'll typically want to make sure that no admin consoles are available on the external interfaces. You might have double-checked the appliance's initial configuration, but a system update, a careless admin, or a security bug might change the aforesaid configuration and open your appliance's admin interfaces to a far wider audience than intended. A security breach like this one could pass unobserved for a long time unless you configure a few simple TCP/IP checks on your appliance's external interfaces and then set up some triggers that will report a problem if those checks report an open and responsive port. Let's take the example of the router with two production interfaces and a management interface shown in the section about host interfaces. If the router's HTTPS admin console is available on TCP port 8000, you'll want to configure a simple check item for every interface: Item name Item key management_https_console net.tcp.service[https,192.168.1.254,8000] zoneA_https_console net.tcp.service[https,10.10.1.254,8000] zoneB_https_console net.tcp.service[https,172.16.7.254,8000] All these checks will return 1 if the service is available, and 0 if the service is not available. What changes is how you implement the triggers on these items. For the management item, you'll have a problem if the service is not available, while for the other two, you'll have a problem if the service is indeed available, as shown in the following table: Trigger name Trigger expression Management console down {it-1759-r1:net.tcp.service[http,192.168.1.254,8000].last()}=0 Console available from zone A {it-1759-r1:net.tcp.service[http,10.10.1.254,8000].last()}=1 Console available from zone B {it-1759-r1:net.tcp.service[http,172.16.7.254,8000].last()}=1 This way, you'll always be able to make sure that your device's configuration when it comes to open or closed ports will always match your expected setup and be notified when it diverges from the standard you set. To summarize, simple checks are great for all cases where you don't need complex monitoring data from your network as they are quite fast and lightweight. 
For the same reason, they could be the preferred solution if you have to monitor availability for hundreds to thousands of hosts as they will impart a relatively low overhead on your overall network traffic. When you do need more structure and more detail in your monitoring data, it's time to move to the bread and butter of all network monitoring solutions: SNMP. Keeping SNMP simple The Simple Network Monitoring Protocol (SNMP) is an excellent, general purpose protocol that has become widely used beyond its original purpose. When it comes to network monitoring though, it's also often the only protocol supported by many appliances, so it's often a forced, albeit natural and sensible, choice to integrate it into your monitoring scenarios. As a network administrator, you probably already know all there is to know about SNMP and how it works, so let's focus on how it's integrated into Zabbix and what you can do with it. Mapping SNMP OIDs to Zabbix items An SNMP value is composed of three different parts: the OID, the data type, and the value itself. When you use snmpwalk or snmpget to get values from an SNMP agent, the output looks like this: SNMPv2-MIB::sysObjectID.0 = OID: CISCO-PRODUCTS-MIB::cisco3640DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (83414) 0:13:54.14SNMPv2-MIB::sysContact.0 = STRING:SNMPv2-MIB::sysName.0 = STRING: R1SNMPv2-MIB::sysLocation.0 = STRING: Upper floor room 13SNMPv2-MIB::sysServices.0 = INTEGER: 78SNMPv2-MIB::sysORLastChange.0 = Timeticks: (0) 0:00:00.00...IF-MIB::ifPhysAddress.24 = STRING: c4:1:22:4:f2:fIF-MIB::ifPhysAddress.26 = STRING:IF-MIB::ifPhysAddress.27 = STRING: c4:1:1e:c8:0:0IF-MIB::ifAdminStatus.1 = INTEGER: up(1)IF-MIB::ifAdminStatus.2 = INTEGER: down(2)… And so on. The first part, the one before the = sign is, naturally, the OID. This will go into the SNMP OID field in the Zabbix item creation page and is the unique identifier for the metric you are interested in. Some OIDs represent a single and unique metric for the device, so they are easy to identify and address. In the above excerpt, one such OID is DISMAN-EVENT-MIB::sysUpTimeInstance. If you are interested in monitoring that OID, you'd only have to fill out the item creation form with the OID itself and then define an item name, a data type, and a retention policy, and you are ready to start monitoring it. In the case of an uptime value, time-ticks are expressed in seconds, so you'll choose a numeric decimal data type. We'll see in the next section how to choose Zabbix item data types and how to store values based on SNMP data types. You'll also want to store the value as is and optionally specify a unit of measure. This is because an uptime is already a relative value as it expresses the time elapsed since a device's latest boot. There would be no point in calculating a further delta when getting this measurement. Finally, you'll define a polling interval and choose a retention policy. In the following example, the polling interval is shown to be 5 minutes (300 seconds), the history retention policy as 3 days, and the trend storage period as one year. These should be sensible values as you don't normally need to store the detailed history of a value that either resets to zero, or, by definition, grows linearly by one tick every second. 
The following screenshot encapsulates what has been discussed in this paragraph: Remember that the item's key value still has to be unique at the host/template level as it will be referenced to by all other Zabbix components, from calculated items to triggers, maps, screens, and so on. Don't forget to put the right credentials for SNMPv3 if you are using this version of the protocol. Many of the more interesting OIDs, though, are a bit more complex: multiple OIDs can be related to one another by means of the same index. Let's look at another snmpwalk output excerpt: IF-MIB::ifNumber.0 = INTEGER: 26IF-MIB::ifIndex.1 = INTEGER: 1IF-MIB::ifIndex.2 = INTEGER: 2IF-MIB::ifIndex.3 = INTEGER: 3…IF-MIB::ifDescr.1 = STRING: FastEthernet0/0IF-MIB::ifDescr.2 = STRING: Serial0/0IF-MIB::ifDescr.3 = STRING: FastEthernet0/1…IF-MIB::ifType.1 = INTEGER: ethernetCsmacd(6)IF-MIB::ifType.2 = INTEGER: propPointToPointSerial(22)IF-MIB::ifType.3 = INTEGER: ethernetCsmacd(6)…IF-MIB::ifMtu.1 = INTEGER: 1500IF-MIB::ifMtu.2 = INTEGER: 1500IF-MIB::ifMtu.3 = INTEGER: 1500…IF-MIB::ifSpeed.1 = Gauge32: 10000000IF-MIB::ifSpeed.2 = Gauge32: 1544000IF-MIB::ifSpeed.3 = Gauge32: 10000000…IF-MIB::ifPhysAddress.1 = STRING: c4:1:1e:c8:0:0IF-MIB::ifPhysAddress.2 = STRING:IF-MIB::ifPhysAddress.3 = STRING: c4:1:1e:c8:0:1…IF-MIB::ifAdminStatus.1 = INTEGER: up(1)IF-MIB::ifAdminStatus.2 = INTEGER: down(2)IF-MIB::ifAdminStatus.3 = INTEGER: down(2)…IF-MIB::ifOperStatus.1 = INTEGER: up(1)IF-MIB::ifOperStatus.2 = INTEGER: down(2)IF-MIB::ifOperStatus.3 = INTEGER: down(2)…IF-MIB::ifLastChange.1 = Timeticks: (1738) 0:00:17.38IF-MIB::ifLastChange.2 = Timeticks: (1696) 0:00:16.96IF-MIB::ifLastChange.3 = Timeticks: (1559) 0:00:15.59…IF-MIB::ifInOctets.1 = Counter32: 305255IF-MIB::ifInOctets.2 = Counter32: 0IF-MIB::ifInOctets.3 = Counter32: 0…IF-MIB::ifInDiscards.1 = Counter32: 0IF-MIB::ifInDiscards.2 = Counter32: 0IF-MIB::ifInDiscards.3 = Counter32: 0…IF-MIB::ifInErrors.1 = Counter32: 0IF-MIB::ifInErrors.2 = Counter32: 0IF-MIB::ifInErrors.3 = Counter32: 0…IF-MIB::ifOutOctets.1 = Counter32: 347968IF-MIB::ifOutOctets.2 = Counter32: 0IF-MIB::ifOutOctets.3 = Counter32: 0 As you can see, for every network interface, there are several OIDs, each one detailing a specific aspect of the interface: its name, its type, whether it's up or down, the amount of traffic coming in or going out, and so on. The different OIDs are related through their last number, the actual index of the OID. Looking at the preceding excerpt, we know that the device has 26 interfaces, of which we are showing some values for just the first three. By correlating the index numbers, we also know that interface 1 is called FastEthernet0/0, its MAC address is c4:1:1e:c8:0:0, the interface is up and has been up for just 17 seconds, and some traffic already went through it. Now, one way to monitor several of these metrics for the same interface is to manually correlate these values when creating the items, putting the complete OID in the SNMP OID field, and making sure that both the item key and its name reflect the right interface. This process is not only prone to errors during the setup phase, but it could also introduce some inconsistencies down the road. There is no guarantee, in fact, that the index will remain consistent across hardware or software upgrades or even across configurations when it comes to more volatile states like the number of VLANs or routing tables instead of network interfaces. 
Fortunately, you don't have to maintain this kind of correlation yourself: Zabbix provides a feature, called dynamic indexes, that allows you to correlate different OIDs in the same SNMP OID field, so that you can define an index based on the index exposed by another OID. This means that if you want to know the admin status of FastEthernet0/0, you don't need to find the index associated with FastEthernet0/0 (in this case it would be 1) and then add that index to the IF-MIB::ifAdminStatus base OID, hoping that it won't ever change in the future. You can instead use the following code:

IF-MIB::ifAdminStatus["index", "IF-MIB::ifDescr", "FastEthernet0/0"]

Upon using the preceding code in the SNMP OID field of your item, the item will dynamically find the index of the IF-MIB::ifDescr OID where the value is FastEthernet0/0 and append it to IF-MIB::ifAdminStatus in order to get the right status for the right interface. If you organize your items this way, you'll always be sure that related items actually show the right related values for the component you are interested in, and not those of another one because things changed on the device's side without your knowledge. Moreover, we'll build on this technique to develop low-level discovery of a device. You can use the same technique to get other interesting information out of a device. Consider, for example, the following excerpt:

ENTITY-MIB::entPhysicalVendorType.1 = OID: CISCO-ENTITY-VENDORTYPEOID-MIB::cevChassis3640
ENTITY-MIB::entPhysicalVendorType.2 = OID: CISCO-ENTITY-VENDORTYPEOID-MIB::cevContainerSlot
ENTITY-MIB::entPhysicalVendorType.3 = OID: CISCO-ENTITY-VENDORTYPEOID-MIB::cevCpu37452fe
ENTITY-MIB::entPhysicalClass.1 = INTEGER: chassis(3)
ENTITY-MIB::entPhysicalClass.2 = INTEGER: container(5)
ENTITY-MIB::entPhysicalClass.3 = INTEGER: module(9)
ENTITY-MIB::entPhysicalName.1 = STRING: 3745 chassis
ENTITY-MIB::entPhysicalName.2 = STRING: 3640 Chassis Slot 0
ENTITY-MIB::entPhysicalName.3 = STRING: c3745 Motherboard with FastEthernet on Slot 0
ENTITY-MIB::entPhysicalHardwareRev.1 = STRING: 2.0
ENTITY-MIB::entPhysicalHardwareRev.2 = STRING:
ENTITY-MIB::entPhysicalHardwareRev.3 = STRING: 2.0
ENTITY-MIB::entPhysicalSerialNum.1 = STRING: FTX0945W0MY
ENTITY-MIB::entPhysicalSerialNum.2 = STRING:
ENTITY-MIB::entPhysicalSerialNum.3 = STRING: XXXXXXXXXXX

It should be immediately clear to you that you can find the chassis's serial number by creating an item with:

ENTITY-MIB::entPhysicalSerialNum["index", "ENTITY-MIB::entPhysicalName", "3745 chassis"]

Then you can specify, in the same item, that it should populate the Serial Number field of the host's inventory. This is how you can have a more automatic, dynamic population of inventory fields. The possibilities are endless, as we've only just scratched the surface of what any given device can expose as SNMP metrics. Before you go and find your favorite OIDs to monitor though, let's have a closer look at the preceding examples and discuss data types.

Getting data types right

We have already seen how an OID's value has a specific data type that is usually clearly stated with the default snmpwalk command. In the preceding examples, you can clearly see the data type just after the = sign, before the actual value. There are a number of SNMP data types—some still current and some deprecated.
You can find the official list and documentation in RFC2578 (http://tools.ietf.org/html/rfc2578), but let's have a look at the most important ones from the perspective of a Zabbix user, together with the suggested Zabbix item type and options for each:

- INTEGER: can have negative values and is usually used for enumerations. Suggested: numeric unsigned, decimal; store the value as is; show it with value mappings.
- STRING: a regular character string that can contain new lines. Suggested: text; store the value as is.
- OID: an SNMP object identifier. Suggested: character; store the value as is.
- IpAddress: IPv4 only. Suggested: character; store the value as is.
- Counter32: only non-negative and nondecreasing values. Suggested: numeric unsigned, decimal; store the value as delta (speed per second).
- Gauge32: only non-negative values, which can decrease. Suggested: numeric unsigned, decimal; store the value as is.
- Counter64: non-negative and nondecreasing 64-bit values. Suggested: numeric unsigned, decimal; store the value as delta (speed per second).
- TimeTicks: non-negative, nondecreasing values. Suggested: numeric unsigned, decimal; store the value as is.

First of all, remember that the above suggestions are just that—suggestions. You should always evaluate how to store your data on a case-by-case basis, but you'll probably find that in many cases those are indeed the most useful settings. Moving on to the actual data types, remember that the command-line SNMP tools by default parse the values and show some already interpreted information. This is especially true for Timeticks values and for INTEGER values when these are used as enumerations. In other words, you see the following from the command line:

VRRP-MIB::vrrpNotificationCntl.0 = INTEGER: disabled(2)

However, what is actually passed in the request is the bare OID:

1.3.6.1.2.1.68.1.2.0

The SNMP agent will respond with just the value, which, in this case, is the value 2. This means that in the case of enumerations, Zabbix will just receive and store a number, and not the string disabled(2) as seen from the command line. If you want to display monitoring values that are a bit clearer, you can apply value mappings to your numeric items. Value maps contain the mapping between numeric values and arbitrary string labels for a human-friendly representation. You can specify which one you need in the item configuration form. Zabbix comes with a few predefined value mappings. You can create your own mappings by following the show value mappings link and, provided you have admin roles on Zabbix, you'll be taken to a page where you can configure all the value mappings that will be used by Zabbix. From there, click on Create value map in the upper-right corner of the page, and you'll be able to create a new mapping. Not all INTEGER values are enumerations, but those that are used as such will be clearly recognizable from your command-line tools, as they will be defined as INTEGER values but will show a string label along with the actual value, just as in the preceding example. On the other hand, when they are not used as enumerations, they can represent different things depending on the context. As seen in the previous paragraph, they can represent the number of indexes available for a given OID. They can also represent application- or protocol-specific values, such as the default MTU, the default TTL, route metrics, and so on. The main difference between gauges, counters, and integers is that integers can assume negative values, while gauges and counters cannot.
In addition to that, counters can only increase, or wrap around and start again from the bottom of their value range once they reach its upper limit. From the perspective of Zabbix, this marks the difference in how you'll want to store their values. Gauges are usually employed when a value can vary within a given range, such as the speed of an interface, the amount of free memory, or any limits and timeouts you might find for notifications, the number of instances, and so on. In all of these cases, the value can increase or decrease in time, so you'll want to store them as they are because, once put on a graph, they'll draw a meaningful curve. Counters, on the other hand, can only increase by definition. They are typically used to show how many packets were processed by an interface, how many were dropped, how many errors were encountered, and so on. If you store counter values as they are, you'll find in your graphs some ever-ascending curves that won't tell you very much for your monitoring or capacity planning purposes. This is why you'll usually want to track a counter's amount of change in time, more than its actual value. To do that, Zabbix offers two different ways to store deltas, or differences between successive values. The delta (simple change) storage method does exactly what it says: it simply computes the difference between the currently received value and the previously received one, and stores the result. It doesn't take into consideration the elapsed time between the two measurements, nor the fact that the result can even have a negative value if the counter overflows. The fact is that most of the time, you'll be very interested in evaluating how much time has passed between two different measurements and in treating correctly any negative values that can appear as a result. The delta (speed per second) will divide the difference between the currently received value and the previously received one by the difference between the current timestamp and the previous one, as follows:

(value - prev_value) / (time - prev_time)

This will ensure that the scale of the change will always be constant, as opposed to the scale of the simple change delta, which will vary every time you modify the update interval of the item, giving you inconsistent results. Moreover, the speed-per-second delta will ignore any negative values and just wait for the next measurement, so you won't find any false dips in your graph due to overflowing. Finally, while SNMP uses specific data types for IP addresses and SNMP OIDs, there are no such types in Zabbix, so you'll need to map them to some kind of string item. The suggested type here is character, as both values won't be bigger than 255 characters and won't contain any newlines. String values, on the other hand, can be quite long, as the SNMP specification allows for 65,535-character-long texts; however, text that long would be of little practical value. Even if they are usually much shorter, string values can often contain newlines and be longer than 255 characters. Consider, for example, the following SysDescr OID for this device:

SNMPv2-MIB::sysDescr.0 = STRING: Cisco IOS Software, 3700 Software(C3745-ADVENTERPRISEK9_SNA-M), Version 12.4(15)T14, RELEASE SOFTWARE(fc2)^M
Technical Support: http://www.cisco.com/techsupport^M
Copyright (c) 1986-2010 by Cisco Systems, Inc.^M
Compiled Tue 17-Aug-10 12:56 by prod_rel_tea

As you can see, the string spans multiple lines, and it's definitely longer than 255 characters.
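Before moving on, if you want to see what the speed-per-second delta works out to in practice, the following short Python sketch reproduces the calculation, including the decision to discard negative differences caused by a counter wrapping around; the sample numbers are made up.

def speed_per_second(value, prev_value, time, prev_time):
    # Speed-per-second delta: change in value divided by elapsed seconds.
    delta = (value - prev_value) / (time - prev_time)
    # A negative delta means the counter wrapped around: skip the sample
    # and wait for the next measurement instead of recording a false dip.
    return delta if delta >= 0 else None

# Two successive ifInOctets readings taken 300 seconds apart (made-up values):
print(speed_per_second(410255, 305255, 600, 300))     # 350.0 octets per second
print(speed_per_second(1000, 4294967000, 900, 600))   # None: the 32-bit counter wrapped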
Coming back to string values such as this SysDescr example: this is why the suggested type for them is text, as it allows text of arbitrary length and structure. On the other hand, if you're sure that a specific OID value will always be much shorter and simpler, you can certainly use the character data type for your corresponding Zabbix item. Now, you are truly ready to get the most out of your devices' SNMP agents, as you are able to find the OIDs you want to monitor and map them perfectly to Zabbix items, down to how to store the values, their data types, the polling frequency, and any value mapping that might be necessary.

Summary

In this article, you have learned about the different possibilities offered by Zabbix to the enterprising network administrator. You should now be able to choose, design, and implement all the monitoring items you need, based on the methods illustrated in the preceding paragraphs.

Resources for Article:

Further resources on this subject: Monitoring additional servers [Article] Bar Reports in Zabbix 1.8 [Article] Using Proxies to Monitor Remote Locations with Zabbix 1.8 [Article]
Speeding Vagrant Development With Docker

Packt
03 Mar 2015
13 min read
In this article by Chad Thompson, author of Vagrant Virtual Development Environment Cookbook, we will learn that many software developers are familiar with using Vagrant (http://vagrantup.com) to distribute and maintain development environments. In most cases, Vagrant is used to manage virtual machines running in desktop hypervisor software such as VirtualBox or the VMware Desktop product suites. (VMware Fusion for OS X and VMware Desktop for Linux and Windows environments.) More recently, Docker (http://docker.io) has become increasingly popular for deploying containers—Linux processes that can run in a single operating system environment yet be isolated from one another. In practice, this means that a container includes the runtime environment for an application, down to the operating system level. While containers have been popular for deploying applications, we can also use them for desktop development. Vagrant can use Docker in a couple of ways: As a target for running a process defined by Vagrant with the Vagrant provider. As a complete development environment for building and testing containers within the context of a virtual machine. This allows you to build a complete production-like container deployment environment with the Vagrant provisioner. In this example, we'll take a look at how we can use the Vagrant provider to build and run a web server. Running our web server with Docker will allow us to build and test our web application without the added overhead of booting and provisioning a virtual machine. (For more resources related to this topic, see here.) Introducing the Vagrant Provider The Vagrant Docker provider will build and deploy containers to a Docker runtime. There are a couple of cases to consider when using Vagrant with Docker: On a Linux host machine, Vagrant will use a native (locally installed) Docker environment to deploy containers. Make sure that Docker is installed before using Vagrant. Docker itself is a technology built on top of Linux Containers (LXC) technology—so Docker itself requires an operating system with a recent version (newer than Linux 3.8 which was released in February, 2013) of the Linux kernel. Most recent Linux distributions should support the ability to run Docker. On nonLinux environments (namely OS X and Windows), the provider will require a local Linux runtime to be present for deploying containers. When running the Docker provisioner in these environments, Vagrant will download and boot a version of the boot2docker (http://boot2docker.io) environment—in this case, a repackaging of boot2docker in Vagrant box format. Let's take a look at two scenarios for using the Docker provider. In each of these examples, we'll start these environments from an OS X environment so we will see some tasks that are required for using the boot2docker environment. Installing a Docker image from a repository We'll start with a simple case: installing a Docker container from a repository (a MySQL container) and connecting it to an external tool for development (the MySQL Workbench or a client tool of your choice). We'll need to initialize the boot2docker environment and use some Vagrant tools to interact with the environment and the deployed containers. Before we can start, we'll need to find a suitable Docker image to launch. One of the unique advantages to use Docker as a development environment is its ability to select a base Docker image, then add successive build steps on top of the base image. 
In this simple example, we can find a base MySQL image on the Docker Hub registry (https://registry.hub.docker.com). The MySQL project provides an official Docker image that we can build from. We'll note from the repository the command for using the image (docker pull mysql) and note that the image name is mysql. Start with a Vagrantfile that defines the machine with the Docker provider:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_fusion'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "database" do |db|
    db.vm.provider "docker" do |d|
      d.image = "mysql"
    end
  end
end

An important thing to note immediately is that when we define the database machine with the Docker provider, we do not specify a box file. The Docker provider will start and launch containers into a boot2docker environment, negating the need for a Vagrant box or virtual machine definition. This will introduce a bit of a complication in interacting with the Vagrant environment in later steps. Also note the mysql image taken from the Docker Hub Registry. We'll need to launch the image with a few basic parameters. Add the following to the Docker provider block:

    db.vm.provider "docker" do |d|
      d.image = "mysql"
      d.env = {
        :MYSQL_ROOT_PASSWORD => "root",
        :MYSQL_DATABASE      => "dockertest",
        :MYSQL_USER          => "dockertest",
        :MYSQL_PASSWORD      => "d0cker"
      }
      d.ports = ["3306:3306"]
      d.remains_running = true
    end

The environment variables (d.env) are taken from the documentation on the MySQL Docker image page (https://registry.hub.docker.com/_/mysql/). This is how the image expects to receive certain parameters. In this case, our parameters will set the database root password (for the root user) and create a database with a new user that has full permissions on that database. The d.ports parameter is an array of port listings that will be forwarded from the container (the default MySQL port of 3306) to the host operating system, in this case also 3306. The contained application will, thus, behave like a natively installed MySQL installation. The port forwarding here is from the container to the operating system that hosts the container (in this case, the container host is our boot2docker image). If we are developing and hosting containers natively with Vagrant on a Linux distribution, the port forwarding will be to localhost, but boot2docker introduces something of a wrinkle in doing Docker development on Windows or OS X. We'll either need to refer to our software installation by the IP of the boot2docker container, or configure a second port forwarding configuration that allows a Docker-contained application to be available to the host operating system as localhost. The final parameter (d.remains_running = true) is a flag for Vagrant to note that the Vagrant run should be marked as failed if the Docker container exits on start. In the case of software that runs as a daemon process (such as the MySQL database), a Docker container that exits immediately is an error condition. Start the container using the vagrant up --provider=docker command. A few things will happen here: If this is the first time you have started the project, you'll see some messages about booting a box named mitchellh/boot2docker. This is a Vagrant-packaged version of the boot2docker project. Once the machine boots, it becomes a host for all Docker containers managed with Vagrant.
Keep in mind that boot2docker is necessary only for non-Linux operating systems that are running Docker through a virtual machine. On a Linux system running Docker natively, you will not see information about boot2docker. After the container is booted (or if it is already running), Vagrant will display notifications about rsyncing a folder (if we are using boot2docker) and launching the image: Docker generates unique identifiers for containers and notes any port mapping information. Let's take a look at some details on the containers that are running in the Docker host. We'll need to find a way to gain access to the Vagrant boot2docker image (and only if we are using boot2docker and not a native Linux environment), which is not quite as straightforward as a vagrant ssh; we'll need to identify the Vagrant instance to access. First, identify the Docker Vagrant machine from the global Vagrant status. Vagrant keeps track of running instances that can be accessed from Vagrant itself. In this case, we are only interested in the Vagrant instance named docker-host. The instance we're interested in can be found with the vagrant global-status command: In this case, Vagrant identifies the instance as d381331 (a unique value for every Vagrant machine launched). We can access this instance with a vagrant ssh command:

vagrant ssh d381331

This will display an ASCII-art boot2docker logo and a command prompt for the boot2docker instance. Let's take a look at the Docker containers running on the system with the docker ps command: The docker ps command will provide information about the running Docker containers on the system; in this case, the unique ID of the container (output during the Vagrant startup) and other information about the container. Find the IP address of the boot2docker instance (only if we're using boot2docker) to connect to the MySQL instance. In this case, execute the ifconfig command:

docker@boot2docker:~$ ifconfig

This will output information about the network interfaces on the machine; we are interested in the eth0 entry. In particular, we can note the IP address of the machine on the eth0 interface: Make a note of the IP address noted as the inet addr; in this case 192.168.30.129. Connect a MySQL client to the running Docker container. In this case, we'll need to note some information for the connection: the IP address of the boot2docker virtual machine (if using boot2docker), in this case 192.168.30.129; the port that the MySQL instance will respond to on the Docker host, in this case port 3306 in the container forwarded to port 3306 on the host; and the username and password for the MySQL instance, as noted in the Vagrantfile. With this information in hand, we can configure a MySQL client. The MySQL project provides a supported GUI client named MySQL Workbench (http://www.mysql.com/products/workbench/). With the client installed on our host operating system, we can create a new connection in the Workbench client (consult the documentation for your version of Workbench, or use a MySQL client of your choice). In this case, we're connecting to the boot2docker instance. If you are running Docker natively on a Linux instance, the connection should simply forward to localhost.
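If you would rather verify the connection from a script than from a GUI client, a few lines of Python will exercise the same connection details. This uses the third-party PyMySQL package (installed separately, for example with pip), which is not part of the toolchain described in this article; the IP address is the boot2docker address found above (or localhost on a native Linux Docker host), and the credentials are the ones set in the Vagrantfile.

import pymysql

# Connection details: boot2docker IP plus the credentials from the d.env block.
connection = pymysql.connect(host="192.168.30.129", port=3306,
                             user="dockertest", password="d0cker",
                             database="dockertest")
with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())   # a one-element tuple with the MySQL server version
connection.close()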
If the connection is successful, the Workbench client once connected will display an empty database: Once we've connected, we can use the MySQL database as we would for any other MySQL instance that is hosted this time in a Docker container without having to install and configure the MySQL package itself. Building a Docker image with Vagrant While launching packaged Docker, applications can be useful (particularly in the case where launching a Docker container is simpler than native installation steps), Vagrant becomes even more useful when used to launch containers that are being developed. On OS X and Windows machines, the use of Vagrant can make managing the container deployment somewhat simpler through the boot2docker containers, while on Linux, using the native Docker tools could be somewhat simpler. In this example, we'll use a simple Dockerfile to modify a base image. First, start with a simple Vagrantfile. In this case, we'll specify a build directory rather than a image file: # -*- mode: ruby -*- # vi: set ft=ruby :   # Vagrantfile API/syntax version. Don't touch unless you know what you're doing! VAGRANTFILE_API_VERSION = "2" ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_fusion'   Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| config.vm.define "nginx" do |nginx|    nginx.vm.provider "docker" do |d|      d.build_dir = "build"      d.ports = ["49153:80"]    end end end This Vagrantfile specifies a build directory as well as the ports forwarded to the host from the container. In this case, the standard HTTP port (80) forwards to port 49153 on the host machine, which in this case is the boot2docker instance. Create our build directory in the same directory as the Vagrantfile. In the build directory, create a Dockerfile. A Dockerfile is a set of instructions on how to build a Docker container. See https://docs.docker.com/reference/builder/ or James Turnbull's The Docker Book for more information on how to construct a Dockerfile. In this example, we'll use a simple Dockerfile to copy a working HTML directory to a base NGINX image: FROM nginx COPY content /usr/share/nginx/html Create a directory in our build directory named content. In the directory, place a simple index.html file that will be served from the new container: <html> <body>    <div style="text-align:center;padding-top:40px;border:dashed 2px;">      This is an NGINX build.    </div> </body> </html> Once all the pieces are in place, our working directory will have the following structure: . ├── Vagrantfile └── build ├── Dockerfile    └── content        └── index.html Start the container in the working directory with the command: vagrant up nginx --provider=docker This will start the container build and deploy process. Once the container is launched, the web server can be accessed using the IP address of the boot2docker instance (see the previous section for more information on obtaining this address) and the forwarded port. One other item to note, especially, if you have completed both steps in this section without halting or destroying the Vagrant project is that when using the Docker provider, containers are deployed to a single shared virtual machine. If the boot2docker instance is accessed and the docker ps command is executed, it can be noted that two separate Vagrant projects deploy containers to a single host. 
When using the Docker provider, the single instance has a few effects: The single virtual machine can use fewer resources on your development workstation Deploying and rebuilding containers is a process that is much faster than booting and shutting down entire operating systems Docker development with the Docker provider can be a useful technique to create and test Docker containers, although Vagrant might not be of particular help in packaging and distributing Docker containers. If you wish to publish containers, consult the documentation or The Docker Book on getting started with packaging and distributing Docker containers. See also Docker: http://docker.io boot2docker: http://boot2docker.io The Docker Book: http://www.dockerbook.com The Docker repository: https://registry.hub.docker.com Summary In this article, we learned how to use Docker provisioner with Vagrant by covering the topics mentioned in the preceding paragraphs. Resources for Article: Further resources on this subject: Going Beyond the Basics [article] Module, Facts, Types and Reporting tools in Puppet [article] Setting Up a Development Environment [article]
Basics of Programming in Julia

Packt
03 Mar 2015
17 min read
 In this article by Ivo Balbaert, author of the book Getting Started with Julia Programming, we will explore how Julia interacts with the outside world, reading from standard input and writing to standard output, files, networks, and databases. Julia provides asynchronous networking I/O using the libuv library. We will see how to handle data in Julia. We will also discover the parallel processing model of Julia. In this article, the following topics are covered: Working with files (including the CSV files) Using DataFrames (For more resources related to this topic, see here.) Working with files To work with files, we need the IOStream type. IOStream is a type with the supertype IO and has the following characteristics: The fields are given by names(IOStream) 4-element Array{Symbol,1}:  :handle   :ios    :name   :mark The types are given by IOStream.types (Ptr{None}, Array{Uint8,1}, String, Int64) The file handle is a pointer of the type Ptr, which is a reference to the file object. Opening and reading a line-oriented file with the name example.dat is very easy: // code in Chapter 8io.jl fname = "example.dat"                                 f1 = open(fname) fname is a string that contains the path to the file, using escaping of special characters with when necessary; for example, in Windows, when the file is in the test folder on the D: drive, this would become d:\test\example.dat. The f1 variable is now an IOStream(<file example.dat>) object. To read all lines one after the other in an array, use data = readlines(f1), which returns 3-element Array{Union(ASCIIString,UTF8String),1}: "this is line 1.rn" "this is line 2.rn" "this is line 3." For processing line by line, now only a simple loop is needed: for line in data   println(line) # or process line end close(f1) Always close the IOStream object to clean and save resources. If you want to read the file into one string, use readall. Use this only for relatively small files because of the memory consumption; this can also be a potential problem when using readlines. There is a convenient shorthand with the do syntax for opening a file, applying a function process, and closing it automatically. This goes as follows (file is the IOStream object in this code): open(fname) do file     process(file) end The do command creates an anonymous function, and passes it to open. Thus, the previous code example would have been equivalent to open(process, fname). Use the same syntax for processing a file fname line by line without the memory overhead of the previous methods, for example: open(fname) do file     for line in eachline(file)         print(line) # or process line     end end Writing a file requires first opening it with a "w" flag, then writing strings to it with write, print, or println, and then closing the file handle that flushes the IOStream object to the disk: fname =   "example2.dat" f2 = open(fname, "w") write(f2, "I write myself to a filen") # returns 24 (bytes written) println(f2, "even with println!") close(f2) Opening a file with the "w" option will clear the file if it exists. To append to an existing file, use "a". To process all the files in the current folder (or a given folder as an argument to readdir()), use this for loop: for file in readdir()   # process file end Reading and writing CSV files A CSV file is a comma-separated file. The data fields in each line are separated by commas "," or another delimiter such as semicolons ";". These files are the de-facto standard for exchanging small and medium amounts of tabular data. 
Such files are structured so that one line contains data about one data object, so we need a way to read and process the file line by line. As an example, we will use the data file Chapter 8winequality.csv that contains 1,599 sample measurements, 12 data columns, such as pH and alcohol per sample, separated by a semicolon. In the following screenshot, you can see the top 20 rows:   In general, the readdlm function is used to read in the data from the CSV files: # code in Chapter 8csv_files.jl: fname = "winequality.csv" data = readdlm(fname, ';') The second argument is the delimiter character (here, it is ;). The resulting data is a 1600x12 Array{Any,2} array of the type Any because no common type could be found:     "fixed acidity"   "volatile acidity"      "alcohol"   "quality"      7.4                        0.7                                9.4              5.0      7.8                        0.88                              9.8              5.0      7.8                        0.76                              9.8              5.0   … If the data file is comma separated, reading it is even simpler with the following command: data2 = readcsv(fname) The problem with what we have done until now is that the headers (the column titles) were read as part of the data. Fortunately, we can pass the argument header=true to let Julia put the first line in a separate array. It then naturally gets the correct datatype, Float64, for the data array. We can also specify the type explicitly, such as this: data3 = readdlm(fname, ';', Float64, 'n', header=true) The third argument here is the type of data, which is a numeric type, String or Any. The next argument is the line separator character, and the fifth indicates whether or not there is a header line with the field (column) names. If so, then data3 is a tuple with the data as the first element and the header as the second, in our case, (1599x12 Array{Float64,2}, 1x12 Array{String,2}) (There are other optional arguments to define readdlm, see the help option). In this case, the actual data is given by data3[1] and the header by data3[2]. Let's continue working with the variable data. The data forms a matrix, and we can get the rows and columns of data using the normal array-matrix syntax). For example, the third row is given by row3 = data[3, :] with data:  7.8  0.88  0.0  2.6  0.098  25.0  67.0  0.9968  3.2  0.68  9.8  5.0, representing the measurements for all the characteristics of a certain wine. The measurements of a certain characteristic for all wines are given by a data column, for example, col3 = data[ :, 3] represents the measurements of citric acid and returns a column vector 1600-element Array{Any,1}:   "citric acid" 0.0  0.0  0.04  0.56  0.0  0.0 …  0.08  0.08  0.1  0.13  0.12  0.47. If we need columns 2-4 (volatile acidity to residual sugar) for all wines, extract the data with x = data[:, 2:4]. If we need these measurements only for the wines on rows 70-75, get these with y = data[70:75, 2:4], returning a 6 x 3 Array{Any,2} outputas follows: 0.32   0.57  2.0 0.705  0.05  1.9 … 0.675  0.26  2.1 To get a matrix with the data from columns 3, 6, and 11, execute the following command: z = [data[:,3] data[:,6] data[:,11]] It would be useful to create a type Wine in the code. 
For example, if the data is to be passed around functions, it will improve the code quality to encapsulate all the data in a single data type, like this: type Wine     fixed_acidity::Array{Float64}     volatile_acidity::Array{Float64}     citric_acid::Array{Float64}     # other fields     quality::Array{Float64} end Then, we can create objects of this type to work with them, like in any other object-oriented language, for example, wine1 = Wine(data[1, :]...), where the elements of the row are splatted with the ... operator into the Wine constructor. To write to a CSV file, the simplest way is to use the writecsv function for a comma separator, or the writedlm function if you want to specify another separator. For example, to write an array data to a file partial.dat, you need to execute the following command: writedlm("partial.dat", data, ';') If more control is necessary, you can easily combine the more basic functions from the previous section. For example, the following code snippet writes 10 tuples of three numbers each to a file: // code in Chapter 8tuple_csv.jl fname = "savetuple.csv" csvfile = open(fname,"w") # writing headers: write(csvfile, "ColName A, ColName B, ColName Cn") for i = 1:10   tup(i) = tuple(rand(Float64,3)...)   write(csvfile, join(tup(i),","), "n") end close(csvfile) Using DataFrames If you measure n variables (each of a different type) of a single object of observation, then you get a table with n columns for each object row. If there are m observations, then we have m rows of data. For example, given the student grades as data, you might want to know "compute the average grade for each socioeconomic group", where grade and socioeconomic group are both columns in the table, and there is one row per student. The DataFrame is the most natural representation to work with such a (m x n) table of data. They are similar to pandas DataFrames in Python or data.frame in R. A DataFrame is a more specialized tool than a normal array for working with tabular and statistical data, and it is defined in the DataFrames package, a popular Julia library for statistical work. Install it in your environment by typing in Pkg.add("DataFrames") in the REPL. Then, import it into your current workspace with using DataFrames. Do the same for the packages DataArrays and RDatasets (which contains a collection of example datasets mostly used in the R literature). A common case in statistical data is that data values can be missing (the information is not known). The DataArrays package provides us with the unique value NA, which represents a missing value, and has the type NAtype. The result of the computations that contain the NA values mostly cannot be determined, for example, 42 + NA returns NA. (Julia v0.4 also has a new Nullable{T} type, which allows you to specify the type of a missing value). A DataArray{T} array is a data structure that can be n-dimensional, behaves like a standard Julia array, and can contain values of the type T, but it can also contain the missing (Not Available) values NA and can work efficiently with them. To construct them, use the @data macro: // code in Chapter 8dataarrays.jl using DataArrays using DataFrames dv = @data([7, 3, NA, 5, 42]) This returns 5-element DataArray{Int64,1}: 7  3   NA  5 42. The sum of these numbers is given by sum(dv) and returns NA. One can also assign the NA values to the array with dv[5] = NA; then, dv becomes [7, 3, NA, 5, NA]). Converting this data structure to a normal array fails: convert(Array, dv) returns ERROR: NAException. 
How to get rid of these NA values, supposing we can do so safely? We can use the dropna function, for example, sum(dropna(dv)) returns 15. If you know that you can replace them with a value v, use the array function: repl = -1 sum(array(dv, repl)) # returns 13 A DataFrame is a kind of an in-memory database, versatile in the ways you can work with the data. It consists of columns with names such as Col1, Col2, Col3, and so on. Each of these columns are DataArrays that have their own type, and the data they contain can be referred to by the column names as well, so we have substantially more forms of indexing. Unlike two-dimensional arrays, columns in a DataFrame can be of different types. One column might, for instance, contain the names of students and should therefore be a string. Another column could contain their age and should be an integer. We construct a DataFrame from the program data as follows: // code in Chapter 8dataframes.jl using DataFrames # constructing a DataFrame: df = DataFrame() df[:Col1] = 1:4 df[:Col2] = [e, pi, sqrt(2), 42] df[:Col3] = [true, false, true, false] show(df) Notice that the column headers are used as symbols. This returns the following 4 x 3 DataFrame object: We could also have used the full constructor as follows: df = DataFrame(Col1 = 1:4, Col2 = [e, pi, sqrt(2), 42],    Col3 = [true, false, true, false]) You can refer to the columns either by an index (the column number) or by a name, both of the following expressions return the same output: show(df[2]) show(df[:Col2]) This gives the following output: [2.718281828459045, 3.141592653589793, 1.4142135623730951,42.0] To show the rows or subsets of rows and columns, use the familiar splice (:) syntax, for example: To get the first row, execute df[1, :]. This returns 1x3 DataFrame.  | Row | Col1 | Col2    | Col3 |  |-----|------|---------|------|  | 1   | 1    | 2.71828 | true | To get the second and third row, execute df [2:3, :] To get only the second column from the previous result, execute df[2:3, :Col2]. This returns [3.141592653589793, 1.4142135623730951]. To get the second and third column from the second and third row, execute df[2:3, [:Col2, :Col3]], which returns the following output: 2x2 DataFrame  | Row | Col2    | Col3  |  |---- |-----   -|-------|  | 1   | 3.14159 | false |  | 2   | 1.41421 | true  | The following functions are very useful when working with DataFrames: The head(df) and tail(df) functions show you the first six and the last six lines of data respectively. The names function gives the names of the columns names(df). It returns 3-element Array{Symbol,1}:  :Col1  :Col2  :Col3. The eltypes function gives the data types of the columns eltypes(df). It gives the output as 3-element Array{Type{T<:Top},1}:  Int64  Float64  Bool. The describe function tries to give some useful summary information about the data in the columns, depending on the type, for example, describe(df) gives for column 2 (which is numeric) the min, max, median, mean, number, and percentage of NAs: Col2 Min      1.4142135623730951 1st Qu.  2.392264761937558  Median   2.929937241024419 Mean     12.318522011105483  3rd Qu.  12.856194490192344  Max      42.0  NAs      0  NA%      0.0% To load in data from a local CSV file, use the method readtable. 
The returned object is of type DataFrame: // code in Chapter 8dataframes.jl using DataFrames fname = "winequality.csv" data = readtable(fname, separator = ';') typeof(data) # DataFrame size(data) # (1599,12) Here is a fraction of the output: The readtable method also supports reading in gzipped CSV files. Writing a DataFrame to a file can be done with the writetable function, which takes the filename and the DataFrame as arguments, for example, writetable("dataframe1.csv", df). By default, writetable will use the delimiter specified by the filename extension and write the column names as headers. Both readtable and writetable support numerous options for special cases. Refer to the docs for more information (refer to http://dataframesjl.readthedocs.org/en/latest/). To demonstrate some of the power of DataFrames, here are some queries you can do: Make a vector with only the quality information data[:quality] Give the wines with alcohol percentage equal to 9.5, for example, data[ data[:alcohol] .== 9.5, :] Here, we use the .== operator, which does element-wise comparison. data[:alcohol] .== 9.5 returns an array of Boolean values (true for datapoints, where :alcohol is 9.5, and false otherwise). data[boolean_array, : ] selects those rows where boolean_array is true. Count the number of wines grouped by quality with by(data, :quality, data -> size(data, 1)), which returns the following: 6x2 DataFrame | Row | quality | x1  | |-----|---------|-----| | 1    | 3      | 10  | | 2    | 4      | 53  | | 3    | 5      | 681 | | 4    | 6      | 638 | | 5    | 7      | 199 | | 6    | 8      | 18  | The DataFrames package contains the by function, which takes in three arguments: A DataFrame, here it takes data A column to split the DataFrame on, here it takes quality A function or an expression to apply to each subset of the DataFrame, here data -> size(data, 1), which gives us the number of wines for each quality value Another easy way to get the distribution among quality is to execute the histogram hist function hist(data[:quality]) that gives the counts over the range of quality (2.0:1.0:8.0,[10,53,681,638,199,18]). More precisely, this is a tuple with the first element corresponding to the edges of the histogram bins, and the second denoting the number of items in each bin. So there are, for example, 10 wines with quality between 2 and 3, and so on. To extract the counts as a variable count of type Vector, we can execute _, count = hist(data[:quality]); the _ means that we neglect the first element of the tuple. To obtain the quality classes as a DataArray class, we will execute the following: class = sort(unique(data[:quality])) We can now construct a df_quality DataFrame with the class and count columns as df_quality = DataFrame(qual=class, no=count). This gives the following output: 6x2 DataFrame | Row | qual | no  | |-----|------|-----| | 1   | 3    | 10  | | 2   | 4    | 53  | | 3   | 5    | 681 | | 4   | 6    | 638 | | 5   | 7    | 199 | | 6   | 8    | 18  | To deepen your understanding and learn about the other features of Julia DataFrames (such as joining, reshaping, and sorting), refer to the documentation available at http://dataframesjl.readthedocs.org/en/latest/. Other file formats Julia can work with other human-readable file formats through specialized packages: For JSON, use the JSON package. The parse method converts the JSON strings into Dictionaries, and the json method turns any Julia object into a JSON string. 
For XML, use the LightXML package. For YAML, use the YAML package. For HDF5 (a common format for scientific data), use the HDF5 package. For working with Windows INI files, use the IniFile package.

Summary

In this article, we discussed how to work with files and CSV data in Julia, and how to handle tabular data with DataFrames.

Resources for Article:

Further resources on this subject: Getting Started with Electronic Projects? [article] Getting Started with Selenium Webdriver and Python [article] Handling The Dom In Dart [article]
Basic SQL Server Administration

Packt
03 Mar 2015
11 min read
In this article by Donabel Santos, the author of PowerShell for SQL Server Essentials, we will look at how to accomplish typical SQL Server administration tasks by using PowerShell. Many of the tasks that we will see can be accomplished by using SQL Server Management Objects (SMO). As we encounter new SMO classes, it is best to verify the properties and methods of each class using Get-Help, or by directly visiting the TechNet or MSDN website. (For more resources related to this topic, see here.)

Listing databases and tables

Let's start out by listing the current databases. The SMO Server class has access to all the databases in that instance, so a server variable will have to be created first. To create one using Windows Authentication, you can use the following snippet:

Import-Module SQLPS -DisableNameChecking

#current server name
$servername = "ROGUE"

#below should be a single line of code
$server = New-Object "Microsoft.SqlServer.Management.Smo.Server" $servername

If you need to use SQL Server Authentication, you can set the LoginSecure property to false, and prompt the user for the database credentials:

#with SQL authentication, we need
#to supply the SQL Login and password
$server.ConnectionContext.LoginSecure=$false;
$credential = Get-Credential
$server.ConnectionContext.set_Login($credential.UserName)
$server.ConnectionContext.set_SecurePassword($credential.Password)

Another way is to create a Microsoft.SqlServer.Management.Common.ServerConnection object and pass the database connection string:

#code below is a single line
$connectionString = "Server=$dataSource;uid=$username;pwd=$password;Database=$database;Integrated Security=False"

$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString

To find out how many databases there are, you can use the Count property of the Databases property:

$server.Databases.Count

In addition to simply displaying the number of databases in an instance, we can also find out additional information such as the creation date, recovery model, and number of tables, stored procedures, and user-defined functions. The following is a sample script that pulls this information:

#create empty array
$result = @()
$server.Databases |
Where-Object IsSystemObject -eq $false |
ForEach-Object {
    $db = $_
    $object = [PSCustomObject] @{
       Name          = $db.Name
       CreateDate    = $db.CreateDate
       RecoveryModel = $db.RecoveryModel
       NumTables     = $db.Tables.Count
       NumUsers      = $db.Users.Count
       NumSP         = $db.StoredProcedures.Count
       NumUDF        = $db.UserDefinedFunctions.Count
    }
    $result += $object
}
$result | Format-Table -AutoSize

A sample result looks like the following screenshot: In this script, we have manipulated the output a little. Since we want information in a format different from the default, we created a custom object using the PSCustomObject class to store all this information. The PSCustomObject class was introduced in PowerShell V3. You can also use PSCustomObject to draw data points from different objects and pull them together in a single result set. Each line in the sample result shown in the preceding screenshot is a single PSCustomObject. All of these, in turn, are stored in the $result array, which can be piped to the Format-Table cmdlet for a little easier display. After learning these basics about PSCustomObject, you can adapt this script to increase the list of properties you are querying and change the formatting of the display.
You can also export these to a file if you need to. To find out additional properties, you can pipe $server.Databases to the Get-Member cmdlet:

$server.Databases |
Get-Member |
Where-Object MemberType -eq "Property"

Once you execute this, your resulting screen should look similar to the following screenshot: To find out which methods are available for SMO database objects, we can use a very similar snippet, but this time, we will filter based on methods:

$server.Databases |
Get-Member |
Where-Object MemberType -eq "Method"

Once you execute this, your resulting screen should look similar to the following screenshot:

Listing database files and filegroups

Managing databases also involves monitoring and managing the files and filegroups associated with these databases. Still using SMO, we can pull this information via PowerShell. You can start by pulling all non-system databases:

$server.Databases | Where-Object IsSystemObject -eq $false

The preceding snippet iterates over all the databases in the system. You can use the Foreach-Object cmdlet to do the iteration, and for each iteration, you can get a handle to the current database object. The SMO database object will have access to a FileGroups property, which you can query to find out more about the filegroups associated with each database:

ForEach-Object {
  $db = $_
  $db.FileGroups
}

Each filegroup object, in turn, can access all the files in that specific filegroup. Here is the complete script that lists all files and filegroups for all databases. Note that we use Foreach-Object several times: once to loop through all databases, then to loop through all filegroups for each database, and again to loop through all files in each filegroup:

Import-Module SQLPS -DisableNameChecking

#current server name
$servername = "ROGUE"

$server = New-Object "Microsoft.SqlServer.Management.Smo.Server" $servername

$result = @()

$server.Databases |
Where-Object IsSystemObject -eq $false |
ForEach-Object {
   $db = $_
   $db.FileGroups |
   ForEach-Object {
      $fg = $_
      $fg.Files |
      ForEach-Object {
         $file = $_
         $object = [PSCustomObject] @{
             Database        = $db.Name
             FileGroup       = $fg.Name
             FileName        = $file.FileName | Split-Path -Leaf
             "Size(MB)"      = "{0:N2}" -f ($file.Size/1024)
             "UsedSpace(MB)" = "{0:N2}" -f ($file.UsedSpace/1024)
         }
         $result += $object
      }
   }
}
$result | Format-Table -AutoSize

A sample result looks like the following screenshot: We have adjusted the result to make the display a bit more readable. For the FileName property, we extracted just the actual filename and did not report the path, by piping the FileName property to the Split-Path cmdlet. The -Leaf option provides the filename part of the full path:

$file.FileName | Split-Path -Leaf

With Size and UsedSpace, we report the value in megabytes (MB). Since the default sizes are reported in kilobytes (KB), we have to divide the value by 1024. We also display the values with two decimal places:

"Size(MB)"      = "{0:N2}" -f ($file.Size/1024)
"UsedSpace(MB)" = "{0:N2}" -f ($file.UsedSpace/1024)

If you simply want to get the directory where the primary datafile is stored, you can use the following command:

$db.PrimaryFilePath

If you want to export the results to Excel or CSV, you simply need to take $result and, instead of piping it to Format-Table, use one of the Export or Convert cmdlets.
Adding files and filegroups

Filegroups in SQL Server allow a group of files to be managed together. It is almost akin to having folders on your desktop that allow you to manage, move, and save files together. To add a filegroup, you have to use the Microsoft.SqlServer.Management.Smo.Filegroup class. Assuming you already have variables that point to your server instance, you can create a variable that references the database you wish to work with, as shown in the following snippet:

$dbname = "Registration"
$db = $server.Databases[$dbname]

Instantiating a Filegroup variable requires the handle to the SMO database object and a filegroup name, as shown in the following snippet:

#code below is a single line
$fg = New-Object "Microsoft.SqlServer.Management.Smo.Filegroup" $db, "FG1"

When you're ready to create it, invoke the Create() method:

$fg.Create()

Adding a datafile uses a similar approach. You need to identify which filegroup this new datafile belongs to. You will also need to identify the logical filename and actual file path of the new file. The following snippet will help you do that:

#code below is a single line
$datafile = New-Object "Microsoft.SqlServer.Management.Smo.DataFile" $fg, "data4"

$datafile.FileName = "C:\DATA\data4.ndf"
$datafile.Create()

You can verify the changes visually in SQL Server Management Studio when you go to the database's properties. Under Files, you will see that the new secondary file, data4.ndf, has been added. If, at a later time, you need to increase any of the files' sizes, you can use SMO to create a handle to the file and change the Size property. The Size property is specified in KB, so you will need to calculate accordingly. After the Size property is changed, invoke the Alter() method to persist the changes. The following is an example snippet to do this:

$db = $server.Databases[$dbname]
$fg = $db.FileGroups["FG1"]
$file = $fg.Files["data4"]
$file.Size = 2 * 1024 #2MB
$file.Alter()

Listing the processes

SQL Server has a number of processes in the background that are needed for normal operation. The SMO server class can access the list of processes by using the EnumProcesses() method. The following is an example script to pull the current non-system processes, the programs that are using them, the databases that are using them, and the account that's configured to run them:

Import-Module SQLPS -DisableNameChecking

#current server name
$servername = "ROGUE"

$server = New-Object "Microsoft.SqlServer.Management.Smo.Server" $servername

$server.EnumProcesses() |
Where-Object IsSystem -eq $false |
Select-Object Spid, Database, IsSystem, Login, Status, Cpu, MemUsage, Program |
Format-Table -AutoSize

The result that you will get looks like the following screenshot: You can adjust this script based on your needs. For example, if you only need running queries, you can pipe it to the Where-Object cmdlet and filter by status. You can also sort the result based on the highest CPU or memory usage by piping this to the Sort-Object cmdlet. Should you need to kill any process, for example when some processes are blocked, you can use the KillProcess() method of the SMO server object. You will need to pass the SQL Server session ID (or SPID) to this method:

$server.KillProcess($blockingSpid)

If you want to kill all processes in a specific database, you can use the KillAllProcesses() method and pass the database name:

$server.KillAllProcesses($dbname)

Be careful though. Killing processes should not be done lightly.
Before you kill a process, investigate what the process does, why you need to kill it, and what potential effects killing it will have on your database. Otherwise, killing processes could result in varying levels of system instability. Checking enabled features SQL has many features. We can find out if certain features are enabled by using SMO and PowerShell. To determine this, you need to access the object that owns that feature. For example, some features are available to be queried once you create an SMO server object: Import-Module SQLPS -DisableNameChecking   #current server name $servername = "ROGUE"   $server = New-Object "Microsoft.SqlServer.Management.Smo.Server" $servername   $server | Select-Object IsClustered, ClusterName, FilestreamLevel, IsFullTextInstalled, LinkedServers, IsHadrEnabled, AvailabilityGroups In the preceding script, we can easily find out the following parameters: Is the server clustered (IsClustered)? Does it support FileStream and to what level (FilestreamLevel)? Is FullText installed (IsFullTextInstalled)? Are there any configured linked servers in the system (LinkedServers)? Is AlwaysOn enabled (IsHadrEnabled) and are any availability groups configured (AvailabilityGroups)? There are also a number of cmdlets available with the SQLPS module that allow you to manage the AlwaysOn parameter: Replication can also be managed programmatically using the Replication Management Objects assembly. More information can be found at http://msdn.microsoft.com/en-us/library/ms146869.aspx. Summary In this article, we looked at some of the commands that can used to perform basic SQL Server administration tasks in PowerShell. Resources for Article: Further resources on this subject: Sql Server Analysis Services Administering and Monitoring Analysis Services? [article] Unleashing your Development Skills Powershell [article] The Arduino Mobile Robot [article]
SciPy for Signal Processing

Packt
03 Mar 2015
14 min read
In this article by Sergio J. Rojas G. and Erik A Christensen, authors of the book Learning SciPy for Numerical and Scientific Computing - Second Edition, we will focus on the usage of some of the most commonly used routines included in the SciPy modules scipy.signal, scipy.ndimage, and scipy.fftpack, which are used for signal processing, multidimensional image processing, and computing Fourier transforms, respectively.

We define a signal as data that measures either a time-varying or spatially varying phenomenon. Sound or electrocardiograms are excellent examples of time-varying quantities, while images embody the quintessential spatially varying case. Moving images, of course, are treated with the techniques of both types of signals.

The field of signal processing treats four aspects of this kind of data: its acquisition, quality improvement, compression, and feature extraction. SciPy has many routines to handle tasks in any of these four fields effectively. All of these are included in two low-level modules (scipy.signal being the main module, with an emphasis on time-varying data, and scipy.ndimage, for images). Many of the routines in these two modules are based on the Discrete Fourier Transform of the data. SciPy has an extensive package of applications and definitions of these background algorithms, scipy.fftpack, which we will cover first.

(For more resources related to this topic, see here.)

Discrete Fourier Transforms

The Discrete Fourier Transform (DFT from now on) transforms any signal from its time/space domain into a related signal in the frequency domain. This allows us not only to analyze the different frequencies present in the data, but also to perform faster filtering operations, when used properly. It is possible to turn a signal in the frequency domain back into its time/spatial domain thanks to the Inverse Fourier Transform. We will not go into the details of the mathematics behind these operators, since we assume some level of familiarity with the theory. We will focus on syntax and applications instead.

The basic routines in the scipy.fftpack module compute the DFT and its inverse for discrete signals in any dimension: fft and ifft (one dimension), fft2 and ifft2 (two dimensions), and fftn and ifftn (any number of dimensions). All of these routines assume that the data is complex valued. If we know beforehand that a particular dataset is actually real valued, and should offer real-valued frequencies, we use rfft and irfft instead, for a faster algorithm. All these routines are designed so that composition with their inverses always yields the identity. The syntax is the same in all cases, as follows:

fft(x[, n, axis, overwrite_x])

The first parameter, x, is always the signal in any array-like form. Note that fft performs one-dimensional transforms. This means, in particular, that if x happens to be two-dimensional, fft will output another two-dimensional array where each row is the transform of the corresponding row of the original. We can make it operate on columns instead with the optional parameter, axis. The rest of the parameters are also optional: n indicates the length of the transform, and overwrite_x gets rid of the original data to save memory and resources. We usually play with the integer n when we need to pad the signal with zeros or truncate it. For higher dimensions, n is substituted by shape (a tuple), and axis by axes (another tuple).
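To check these claims quickly, here is a minimal sketch (the eight-sample test signal is an arbitrary choice, not taken from the book); it verifies that fft/ifft and rfft/irfft compose to the identity and shows the effect of the n parameter:

import numpy
from scipy.fftpack import fft, ifft, rfft, irfft

x = numpy.array([1.0, 2.0, 1.0, -1.0, 1.5, 1.0, 0.5, 0.0])   # small real-valued test signal

X = fft(x)                                   # complex-valued DFT
print(numpy.allclose(ifft(X), x))            # True: ifft(fft(x)) recovers x

X16 = fft(x, n=16)                           # zero-padded transform of length 16
print(X16.shape)                             # (16,)

Xr = rfft(x)                                 # faster transform for real-valued data
print(numpy.allclose(irfft(Xr), x))          # True: irfft(rfft(x)) recovers x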
To better understand the output, it is often useful to shift the zero frequencies to the center of the output arrays with fftshift. The inverse of this operation, ifftshift, is also included in the module. The following code shows some of these routines in action, applied to a checkerboard image:

>>> import numpy
>>> from scipy.fftpack import fft, fft2, fftshift
>>> import matplotlib.pyplot as plt
>>> B = numpy.ones((4,4)); W = numpy.zeros((4,4))
>>> signal = numpy.bmat("B,W;W,B")
>>> onedimfft = fft(signal, n=16)
>>> twodimfft = fft2(signal, shape=(16,16))
>>> plt.figure()
>>> plt.gray()
>>> plt.subplot(121, aspect='equal')
>>> plt.pcolormesh(onedimfft.real)
>>> plt.colorbar(orientation='horizontal')
>>> plt.subplot(122, aspect='equal')
>>> plt.pcolormesh(fftshift(twodimfft.real))
>>> plt.colorbar(orientation='horizontal')
>>> plt.show()

Note how the first four rows of the one-dimensional transform are equal (and so are the last four), while the two-dimensional transform (once shifted) presents a peak at the origin and nice symmetries in the frequency domain. In the following screenshot (obtained from the preceding code), the left-hand side image is fft and the right-hand side image is fft2 of a 2 x 2 checkerboard signal:

The scipy.fftpack module also offers the Discrete Cosine Transform with its inverse (dct, idct), as well as many differential and pseudo-differential operators defined in terms of all these transforms: diff (for derivatives/integrals), hilbert and ihilbert (for the Hilbert transform), tilbert and itilbert (for the h-Tilbert transform of periodic sequences), and so on.

Signal construction

To aid in the construction of signals with predetermined properties, the scipy.signal module has a nice collection of the most frequently used one-dimensional waveforms in the literature: chirp and sweep_poly (for the frequency-swept cosine generator), gausspulse (a Gaussian-modulated sinusoid), and sawtooth and square (for the waveforms with those names). They all take as their main parameter a one-dimensional ndarray representing the times at which the signal is to be evaluated. Other parameters control the design of the signal, according to frequency or time constraints. Let's take a look at the following code snippet, which illustrates the use of the one-dimensional waveforms we just discussed:

>>> import numpy
>>> from scipy.signal import chirp, sawtooth, square, gausspulse
>>> import matplotlib.pyplot as plt
>>> t = numpy.linspace(-1,1,1000)
>>> plt.subplot(221); plt.ylim([-2,2])
>>> plt.plot(t, chirp(t, f0=100, t1=0.5, f1=200))   # plot a chirp
>>> plt.subplot(222); plt.ylim([-2,2])
>>> plt.plot(t, gausspulse(t, fc=10, bw=0.5))       # Gauss pulse
>>> plt.subplot(223); plt.ylim([-2,2])
>>> t *= 3*numpy.pi
>>> plt.plot(t, sawtooth(t))                        # sawtooth
>>> plt.subplot(224); plt.ylim([-2,2])
>>> plt.plot(t, square(t))                          # square wave
>>> plt.show()

Generated by this code, the following diagram shows waveforms for chirp (upper left), gausspulse (upper right), sawtooth (lower left), and square (lower right):

The usual way of creating signals, however, is to import them from a file. This is possible using purely NumPy routines, for example, fromfile:

fromfile(file, dtype=float, count=-1, sep='')

The file argument may point to either a file or a string, the count argument is used to determine the number of items to read, and sep indicates what constitutes a separator in the original file/string.
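As a minimal sketch of this route (the file name signal.bin and the waveform are arbitrary choices for illustration, not taken from the book), the following script writes a sawtooth signal to a raw binary file with ndarray.tofile and reads it back with fromfile; note that the dtype passed to fromfile has to match the dtype that was written:

import numpy
from scipy.signal import sawtooth

t = numpy.linspace(0, 4*numpy.pi, 256)
data = sawtooth(t)                            # a short sawtooth waveform (float64)
data.tofile("signal.bin")                     # raw binary dump, no header

recovered = numpy.fromfile("signal.bin", dtype=float)   # dtype must match what was written
print(numpy.allclose(recovered, data))        # True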
For images, we have the versatile routine imread, in either the scipy.ndimage or the scipy.misc module:

imread(fname, flatten=False)

The fname argument is a string containing the location of an image. The routine infers the type of file and reads the data into an array accordingly. If the flatten argument is set to True, the image is converted to gray scale. Note that, in order for this to work, the Python Imaging Library (PIL) needs to be installed.

It is also possible to load .wav files for analysis, with the read and write routines from the wavfile submodule of the scipy.io module. For instance, given any audio file in this format, say audio.wav, the command rate, data = scipy.io.wavfile.read("audio.wav") assigns an integer value to the rate variable, indicating the sample rate of the file (in samples per second), and a NumPy ndarray to the data variable, containing the numerical values assigned to the different notes. If we wish to write some one-dimensional ndarray data into an audio file of this kind, with the sample rate given by the rate variable, we may do so by issuing the following command:

>>> scipy.io.wavfile.write("filename.wav", rate, data)

Filters

A filter is an operation on signals that either removes features or extracts some component. SciPy has a very complete set of known filters, as well as the tools to construct new ones. The complete list of filters in SciPy is long, and we encourage the reader to explore the help documents of the scipy.signal and scipy.ndimage modules for the complete picture. As an exposition, we will introduce some of the filters most commonly used in audio or image processing.

We start by creating a signal worth filtering:

>>> from numpy import sin, cos, pi, linspace
>>> f = lambda t: cos(pi*t) + 0.2*sin(5*pi*t+0.1) + 0.2*sin(30*pi*t) + 0.1*sin(32*pi*t+0.1) + 0.1*sin(47*pi*t+0.8)
>>> t = linspace(0,4,400); signal = f(t)

We first test the classical smoothing filter of Wiener and Kolmogorov, wiener. We present in a plot the original signal (in black) and the corresponding filtered data, with a Wiener window of size 55 samples (in red). Next, we compare the result of applying the median filter, medfilt, with a kernel of the same size as before (in blue):

>>> from scipy.signal import wiener, medfilt
>>> import matplotlib.pylab as plt
>>> plt.plot(t, signal, 'k')
>>> plt.plot(t, wiener(signal, mysize=55), 'r', linewidth=3)
>>> plt.plot(t, medfilt(signal, kernel_size=55), 'b', linewidth=3)
>>> plt.show()

This gives us the following graph showing the comparison of smoothing filters (wiener is the one whose starting point is just below 0.5, and medfilt starts just above 0.5):

Most of the filters in the scipy.signal module can be adapted to work on arrays of any dimension. In the particular case of images, however, we prefer to use the implementations in the scipy.ndimage module, since they are coded with these objects in mind. For instance, to perform a median filter on an image for smoothing, we use scipy.ndimage.median_filter. Let's see an example.
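Before the image example, a tiny one-dimensional warm-up may help to see what a median filter does (a minimal sketch; the seven-element array with a single spike is an arbitrary choice, not taken from the book):

import numpy
from scipy.ndimage import median_filter

a = numpy.array([1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0])   # a lone spike at index 3

# size=3 replaces each value with the median of itself and its two neighbors
print(median_filter(a, size=3))                         # [1. 1. 1. 1. 1. 1. 1.]: the spike is gone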
We will start by loading Lena into an array and corrupting the image with Gaussian noise (zero mean and a standard deviation of 16):

>>> from scipy.stats import norm       # Gaussian distribution
>>> import matplotlib.pyplot as plt
>>> import scipy.misc
>>> import scipy.ndimage
>>> plt.gray()
>>> lena = scipy.misc.lena().astype(float)
>>> plt.subplot(221)
>>> plt.imshow(lena)
>>> lena += norm(loc=0, scale=16).rvs(lena.shape)
>>> plt.subplot(222)
>>> plt.imshow(lena)
>>> denoised_lena = scipy.ndimage.median_filter(lena, 3)
>>> plt.subplot(224)
>>> plt.imshow(denoised_lena)

The set of filters for images comes in two flavors: statistical and morphological. For example, among the filters of a statistical nature, we have the Sobel algorithm, oriented to the detection of edges (singularities along curves). Its syntax is as follows:

sobel(image, axis=-1, output=None, mode='reflect', cval=0.0)

The optional parameter, axis, indicates the dimension in which the computations are performed. By default, this is always the last axis (-1). The mode parameter, which is one of the strings 'reflect', 'constant', 'nearest', 'mirror', or 'wrap', indicates how to handle the border of the image in case there is insufficient data to perform the computations there. If the mode is 'constant', we may indicate the value to use in the border with the cval parameter. Let's look at the following code snippet, which illustrates the use of the sobel filter:

>>> from scipy.ndimage.filters import sobel
>>> import numpy
>>> lena = scipy.misc.lena()
>>> sblX = sobel(lena, axis=0); sblY = sobel(lena, axis=1)
>>> sbl = numpy.hypot(sblX, sblY)
>>> plt.subplot(223)
>>> plt.imshow(sbl)
>>> plt.show()

The following screenshot illustrates Lena (upper left) and noisy Lena (upper right) with the preceding two filters in action: the edge map with sobel (lower left) and the median filter (lower right):

Morphology

We also have the possibility of creating and applying filters to images based on mathematical morphology, both for binary and gray-scale images. The four basic morphological operations are opening (binary_opening), closing (binary_closing), dilation (binary_dilation), and erosion (binary_erosion). Note that the syntax for each of these filters is very simple, since we only need two ingredients: the signal to filter and the structuring element used to perform the morphological operation. Let's take a look at the general syntax for these morphological operations:

binary_operation(signal, structuring_element)

We may use combinations of these four basic morphological operations to create more complex filters for the removal of holes, hit-or-miss transforms (to find the location of specific patterns in binary images), denoising, edge detection, and much more. The SciPy module also allows us to create some common filters using the preceding syntax.
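As a small, self-contained sketch of the binary_operation(signal, structuring_element) pattern just described (the 12 x 12 synthetic image and the 3 x 3 structuring element are illustrative choices, not taken from the book), the following script uses binary_opening to strip isolated noise pixels while preserving a solid square:

import numpy
from scipy.ndimage import binary_opening

image = numpy.zeros((12, 12), dtype=bool)
image[3:9, 3:9] = True                # a solid square we want to keep
image[0, 0] = image[11, 5] = True     # two isolated "salt" noise pixels

structure = numpy.ones((3, 3), dtype=bool)   # flat 3 x 3 structuring element

opened = binary_opening(image, structure)
print(opened[0, 0], opened[11, 5])    # False False: the isolated pixels are removed
print(opened[5, 5])                   # True: the square survives the opening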
For instance, to locate the letter e in a piece of text with a hit-or-miss transform, we could use the following command:

>>> binary_hit_or_miss(text, letterE)

For comparative purposes, let's use this command in the following code snippet:

>>> import numpy
>>> import scipy.ndimage
>>> import matplotlib.pylab as plt
>>> from scipy.ndimage.morphology import binary_hit_or_miss
>>> text = scipy.ndimage.imread('CHAP_05_input_textImage.png')
>>> letterE = text[37:53, 275:291]
>>> HitorMiss = binary_hit_or_miss(text, structure1=letterE, origin1=1)
>>> eLocation = numpy.where(HitorMiss == True)
>>> x = eLocation[1]; y = eLocation[0]
>>> plt.imshow(text, cmap=plt.cm.gray, interpolation='nearest')
>>> plt.autoscale(False)
>>> plt.plot(x, y, 'wo', markersize=10)
>>> plt.axis('off')
>>> plt.show()

The output of the preceding lines of code is as follows:

For gray-scale images, we may use a structuring element (structuring_element) or a footprint. The syntax is, therefore, a little different:

grey_operation(signal, [structuring_element, footprint, size, ...])

If we desire to use a completely flat and rectangular structuring element (all ones), it is enough to indicate its size as a tuple. For instance, to perform a gray-scale dilation with a flat element of size (15,15) on our classical image of Lena, we issue the following command:

>>> grey_dilation(lena, size=(15,15))

The last kind of morphological operations coded in the scipy.ndimage module perform distance and feature transforms. Distance transforms create a map that assigns to each pixel the distance to the nearest object. Feature transforms provide the index of the closest background element instead. These operations are used to decompose images into different labels. We may even choose different metrics, such as Euclidean distance, chessboard distance, and taxicab distance. The syntax for the distance transform (distance_transform) using a brute-force algorithm is as follows:

distance_transform_bf(signal, metric='euclidean', sampling=None, return_distances=True, return_indices=False, distances=None, indices=None)

We indicate the metric with one of the strings 'euclidean', 'taxicab', or 'chessboard'. If we desire the feature transform instead, we switch return_distances to False and return_indices to True.

Similar routines are available with more sophisticated algorithms: distance_transform_cdt (which uses chamfering for taxicab and chessboard distances). For Euclidean distance, we also have distance_transform_edt. All of these use the same syntax.

Summary

In this article, we explored signal processing in any number of dimensions, including the treatment of signals in frequency space by means of their Discrete Fourier Transforms. These correspond to the fftpack, signal, and ndimage modules.

Resources for Article:

Further resources on this subject:
Signal Processing Techniques [article]
SciPy for Computational Geometry [article]
Move Further with NumPy Modules [article]