How-To Tutorials


EAV model

Packt · 10 Aug 2015 · 11 min read

In this article by Allan MacGregor, author of the book Magento PHP Developer's Guide - Second Edition, we cover EAV models, their usefulness in retrieving data, and the advantages they provide to merchants and developers. EAV stands for entity, attribute, and value, and it is probably the most difficult concept for new Magento developers to grasp. While the EAV concept is not unique to Magento, it is rarely implemented in modern systems, and Magento's implementation is not a simple one.

What is EAV?

In order to understand what EAV is and what its role within Magento is, we need to break down the parts of the EAV model:

- Entity: This represents the data items (objects) inside Magento: products, customers, categories, and orders. Each entity is stored in the database with a unique ID.
- Attribute: These are our object properties. Instead of having one column per attribute on the product table, attributes are stored in separate sets of tables.
- Value: As the name implies, this is simply the value linked to a particular attribute.

This data model is the secret behind Magento's flexibility and power, allowing entities to add and remove new properties without any changes to the code, templates, or database schema. The model can be seen as a vertical way of growing our database (new attributes add more rows), while the traditional model involves a horizontal growth pattern (new attributes add more columns), which would result in a schema redesign every time new attributes are added. The EAV model not only allows for the fast evolution of our database, but is also more efficient, because it only works with non-empty attributes, avoiding the need to reserve additional space in the database for null values.

If you are interested in exploring and learning more about the Magento database structure, I highly recommend visiting www.magereverse.com.

Adding a new product attribute is as simple as going to the Magento backend and specifying the new attribute type, be it color, size, brand, or anything else. The opposite is true as well: we can get rid of unused attributes on our product or customer models. For more information on managing attributes, visit http://www.magentocommerce.com/knowledge-base/entry/how-do-attributes-work-in-magento.

The Magento Community Edition currently has eight different types of EAV objects:

- Customer
- Customer Address
- Products
- Product Categories
- Orders
- Invoices
- Credit Memos
- Shipments

The Magento Enterprise Edition has one additional type called RMA item, which is part of the Return Merchandise Authorization (RMA) system.

All this flexibility and power is not free; there is a price to pay. Implementing the EAV model results in having our entity data distributed across a large number of tables. For example, the product model alone is distributed across around 40 different tables. Other major downsides of EAV are the loss of performance while retrieving large collections of EAV objects and an increase in database query complexity. Because the data is more fragmented (stored in more tables), selecting a single record involves several joins. One way Magento works around these downsides of EAV is by making use of indexes and flat tables. For example, Magento can save all the product information into the flat_catalog table for easier and faster access.
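
To make the vertical-versus-horizontal contrast concrete, here is a minimal, hypothetical sketch. The table and column names are simplified for illustration only; they are not Magento's actual schema, which we inspect next:

    -- Horizontal growth: every new attribute is a new column (a schema change).
    CREATE TABLE product_flat (
        entity_id INT PRIMARY KEY,
        sku       VARCHAR(64),
        name      VARCHAR(255),
        color     VARCHAR(32)    -- adding "color" meant altering the table
    );

    -- Vertical growth (EAV): every new attribute is just new rows.
    CREATE TABLE product_entity    (entity_id INT PRIMARY KEY, sku VARCHAR(64));
    CREATE TABLE product_attribute (attribute_id INT PRIMARY KEY, attribute_code VARCHAR(64));
    CREATE TABLE product_value (
        entity_id    INT,            -- which entity
        attribute_id INT,            -- which attribute
        value        VARCHAR(255)    -- the value; empty values are simply not stored
    );
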
Let's continue using Magento products as our example and manually build the query to retrieve a single product. If you have phpMyAdmin or MySQL Workbench installed on your development environment, you can experiment with the following queries. phpMyAdmin can be downloaded from http://www.phpmyadmin.net/ and MySQL Workbench from http://www.mysql.com/products/workbench/.

The first table that we need to use is the catalog_product_entity table. We can consider this our main product EAV table, since it contains the main entity records for our products. Let's query the table by running the following SQL:

    SELECT * FROM `catalog_product_entity`;

The table contains the following fields:

- entity_id: This is our product's unique identifier, used internally by Magento.
- entity_type_id: Magento has several different types of EAV models; products, customers, and orders are just some of them. Identifying each of these by type allows Magento to retrieve the attributes and values from the appropriate tables.
- attribute_set_id: Product attributes can be grouped locally into attribute sets. Attribute sets allow even further flexibility in the product structure, as products are not forced to use all available attributes.
- type_id: There are several different types of products in Magento: simple, configurable, bundled, downloadable, and grouped products, each with unique settings and functionality.
- sku: This stands for Stock Keeping Unit and is a number or code used to identify each unique product or item for sale in a store. This is a user-defined value.
- has_options: This is used to identify whether a product has custom options.
- required_options: This is used to identify whether any of the custom options are required.
- created_at: This is the row creation date.
- updated_at: This is the last time the row was modified.

Now we have a basic understanding of the product entity table. Each record represents a single product in our Magento store, but we don't have much information about that product beyond the SKU and the product type. So, where are the attributes stored? And how does Magento know the difference between a product attribute and a customer attribute?

For this, we need to take a look at the eav_attribute table by running the following SQL query:

    SELECT * FROM `eav_attribute`;

As a result, we will see not only the product attributes, but also the attributes corresponding to the customer model, order model, and so on. Fortunately, we already have a key to filter the attributes from this table. Let's run the following query:

    SELECT * FROM `eav_attribute` WHERE entity_type_id = 4;

This query tells the database to retrieve only the attributes where the entity_type_id column is equal to the product entity_type_id (4). Before moving on, let's analyze the most important fields inside the eav_attribute table:

- attribute_id: This is the unique identifier for each attribute and the primary key of the table.
- entity_type_id: This relates each attribute to a specific EAV model type.
- attribute_code: This is the name or key of our attribute and is used to generate the getters and setters for our magic methods.
- backend_model: This manages loading and storing data into the database.
- backend_type: This specifies the type of value stored in the backend (database).
- backend_table: This is used to specify whether the attribute should be stored in a special table instead of the default EAV table.
- frontend_model: This handles the rendering of the attribute element in a web browser.
- frontend_input: Similar to the frontend model, the frontend input specifies the type of input field the web browser should render.
- frontend_label: This is the label/name of the attribute as it should be rendered by the browser.
- source_model: This is used to populate an attribute with possible values. Magento comes with several predefined source models for countries, yes or no values, regions, and so on.

Retrieving the data

At this point, we have successfully retrieved a product entity and the specific attributes that apply to that entity. Now it's time to start retrieving the actual values. In order to simplify the example (and the query) a little, we will only try to retrieve the name attribute of our products.

How do we know which table our attribute values are stored in? Well, thankfully, Magento follows a naming convention for its tables. If we inspect our database structure, we will notice that there are several tables using the catalog_product_entity prefix:

- catalog_product_entity
- catalog_product_entity_datetime
- catalog_product_entity_decimal
- catalog_product_entity_int
- catalog_product_entity_text
- catalog_product_entity_varchar
- catalog_product_entity_gallery
- catalog_product_entity_media_gallery
- catalog_product_entity_tier_price

Wait! How do we know which is the right table to query for our name attribute values? If you were paying attention, I already gave you the answer. Remember that the eav_attribute table had a column called backend_type? Magento EAV stores each attribute in a different table based on the backend type of that attribute. If we want to confirm the backend type of our name attribute, we can do so by running the following query:

    SELECT * FROM `eav_attribute` WHERE `entity_type_id` = 4 AND `attribute_code` = 'name';

As a result, we should see that the backend type is varchar and that the values for this attribute are stored in the catalog_product_entity_varchar table. This table is formed by only six columns:

- value_id: This is the attribute value's unique identifier and primary key
- entity_type_id: This is the entity type ID to which this value belongs
- attribute_id: This is the foreign key that relates the value to our eav_attribute table
- store_id: This is the foreign key matching an attribute value with a store view
- entity_id: This is the foreign key relating to the corresponding entity table, in this case, catalog_product_entity
- value: This is the actual value that we want to retrieve

Depending on the attribute configuration, its value can be global, meaning it applies across all store views, or a separate value can be stored per store view.

Now that we finally have all the tables that we need to retrieve the product information, we can build our query:

    SELECT
        p.entity_id AS product_id,
        var.value   AS product_name,
        p.sku       AS product_sku
    FROM
        catalog_product_entity p,
        eav_attribute eav,
        catalog_product_entity_varchar var
    WHERE
        p.entity_type_id = eav.entity_type_id
        AND var.entity_id = p.entity_id
        AND eav.attribute_code = 'name'
        AND eav.attribute_id = var.attribute_id;

From our query, we should see a result set with three columns: product_id, product_name, and product_sku. Let's step back for a second: in order to get product names and SKUs with raw SQL, we had to write a five-line query, and we only retrieved two values from our products, reading from a single EAV value table. If we wanted to retrieve a numeric field such as price, or a text value such as the product description, we would need additional joins, one per backend type.
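
To make that cost concrete, here is a hedged sketch extending the query with the decimal-backed price attribute. Filtering by attribute_code keeps it portable, since attribute IDs vary between installations, and each further backend type costs another pair of joins like this:

    -- Sketch: name (varchar) plus price (decimal); every extra backend type
    -- repeats the eav_attribute + value-table join pattern once more.
    SELECT
        p.entity_id   AS product_id,
        p.sku         AS product_sku,
        var.value     AS product_name,
        dec_val.value AS product_price
    FROM catalog_product_entity p
    JOIN eav_attribute eav_name
        ON eav_name.entity_type_id = p.entity_type_id
        AND eav_name.attribute_code = 'name'
    JOIN catalog_product_entity_varchar var
        ON var.entity_id = p.entity_id
        AND var.attribute_id = eav_name.attribute_id
    JOIN eav_attribute eav_price
        ON eav_price.entity_type_id = p.entity_type_id
        AND eav_price.attribute_code = 'price'
    JOIN catalog_product_entity_decimal dec_val
        ON dec_val.entity_id = p.entity_id
        AND dec_val.attribute_id = eav_price.attribute_id;
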
If we didn't have an ORM in place, maintaining Magento would be almost impossible. Fortunately, we do have an ORM, and most likely you will never need to deal with raw SQL to work with Magento. That said, let's see how we can retrieve the same product information using the Magento ORM:

1. Our first step is to instantiate a product collection:

    $collection = Mage::getModel('catalog/product')->getCollection();

2. Then we specifically tell Magento to select the name attribute:

    $collection->addAttributeToSelect('name');

3. Then we ask it to sort the collection by name:

    $collection->setOrder('name', 'asc');

4. Finally, we tell Magento to load the collection:

    $collection->load();

The end result is a collection of all products in the store, sorted by name. We can inspect the actual SQL query by running the following code:

    echo $collection->getSelect()->__toString();

In just three lines of code, we are telling Magento to grab all the products in the store, to specifically select the name, and finally to order the products by name. The last line, $collection->getSelect()->__toString(), lets us see the actual query that Magento is executing on our behalf. The query generated by Magento is as follows:

    SELECT `e`.*,
        IF(at_name.value_id > 0, at_name.value, at_name_default.value) AS `name`
    FROM `catalog_product_entity` AS `e`
    LEFT JOIN `catalog_product_entity_varchar` AS `at_name_default`
        ON (`at_name_default`.`entity_id` = `e`.`entity_id`)
        AND (`at_name_default`.`attribute_id` = '65')
        AND `at_name_default`.`store_id` = 0
    LEFT JOIN `catalog_product_entity_varchar` AS `at_name`
        ON (`at_name`.`entity_id` = `e`.`entity_id`)
        AND (`at_name`.`attribute_id` = '65')
        AND (`at_name`.`store_id` = 1)
    ORDER BY `name` ASC

As we can see, the ORM and the EAV models are wonderful tools that not only put a lot of power and flexibility in the hands of developers, but also do so in a way that is comprehensive and easy to use.

Summary

In this article, we learned about EAV models and how they are structured to provide Magento with data flexibility and extensibility that both merchants and developers can take advantage of.

Resources for Article:

Further resources on this subject:
- Creating a Shipping Module [article]
- Preparing and Configuring Your Magento Website [article]
- Optimizing Magento Performance — Using HHVM [article]
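
To round off the article, here is a consolidated, hedged sketch of the ORM snippets above as one runnable Magento 1.x script. The bootstrap via app/Mage.php and the SKU filter are assumptions added for the example, not part of the article's code:

    <?php
    // Hedged sketch: the collection calls above combined into a standalone script.
    // Assumptions (not from the article): Magento 1.x bootstrapped from app/Mage.php,
    // plus an illustrative SKU filter via addAttributeToFilter.
    require_once 'app/Mage.php';
    Mage::app();

    $collection = Mage::getModel('catalog/product')->getCollection()
        ->addAttributeToSelect('name')                          // join only what we need
        ->addAttributeToFilter('sku', array('like' => 'ABC%'))  // WHERE e.sku LIKE 'ABC%'
        ->setOrder('name', 'asc');

    foreach ($collection as $product) {
        // Magic getters resolve the EAV attributes selected above
        echo $product->getId(), "\t", $product->getSku(), "\t", $product->getName(), PHP_EOL;
    }
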

Sending and Syncing Data

Packt · 10 Aug 2015 · 4 min read

This article, by Steven F. Daniel, author of the book Android Wearable Programming, will provide you with the background and understanding of how you can effectively build applications that communicate between the Android handheld device and the Android wearable.

Android Wear comes with a number of APIs that will help make communicating between the handheld and the wearable a breeze. We will be learning the differences between using the MessageApi, which is sometimes referred to as a "fire and forget" type of message; the DataApi (the Data Layer API), which supports syncing of data between a handheld and a wearable; and the NodeApi, which handles events related to the local and connected device nodes.

Creating a wearable send and receive application

In this section, we will take a look at how to create an Android wearable application that will send an image and a message and display them on our wearable device. In the following sections, we will take a look at the steps required to send data to the Android wearable using the DataApi, NodeApi, and MessageApi. Firstly, create a new project in Android Studio by following these simple steps:

1. Launch Android Studio, and then click on the File | New Project menu option.
2. Next, enter SendReceiveData for the Application name field.
3. Then, provide the name for the Company Domain field.
4. Now, choose Project location and select where you would like to save your application code. Click on the Next button to proceed to the next step.
5. Next, we will need to specify the form factors for our phone/tablet and Android Wear devices on which our application will run. On this screen, we will need to choose the minimum SDK version for our phone/tablet and Android Wear. Click on the Phone and Tablet option and choose API 19: Android 4.4 (KitKat) for Minimum SDK. Click on the Wear option and choose API 21: Android 5.0 (Lollipop) for Minimum SDK. Click on the Next button to proceed to the next step.
6. In the next step, we will need to add a Blank Activity to our application project for the mobile section of our app. From the Add an activity to Mobile screen, choose the Blank Activity option from the list of activities shown and click on the Next button to proceed to the next step.
7. Next, we need to customize the properties for the Blank Activity so that it can be used by our application. Here we will need to specify the name of our activity, layout information, title, and menu resource file. From the Customize the Activity screen, enter MobileActivity for Activity Name and click on the Next button to proceed to the next step in the wizard.
8. In the next step, we will need to add a Blank Activity to our application project for the Android wearable section of our app. From the Add an activity to Wear screen, choose the Blank Wear Activity option from the list of activities shown and click on the Next button to proceed to the next step.
9. Next, we need to customize the properties for the Blank Wear Activity so that our Android wearable can use it. Here we will need to specify the name of our activity and the layout information. From the Customize the Activity screen, enter WearActivity for Activity Name and click on the Next button to proceed to the next step in the wizard.
10. Finally, click on the Finish button and the wizard will generate your project; after a few moments, the Android Studio window will appear with your project displayed.
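
The wizard only scaffolds the project; the messaging itself goes through the Google Play Services Wearable APIs introduced above. What follows is a minimal, hedged sketch (not code from the book text) of sending a "fire and forget" message from the handheld to every connected node; the MESSAGE_PATH string is an arbitrary choice:

    // Minimal sketch: assumes a GoogleApiClient built with the Wearable.API
    // add-on and already connected. Run off the main thread because of await().
    import android.util.Log;
    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.wearable.MessageApi;
    import com.google.android.gms.wearable.Node;
    import com.google.android.gms.wearable.NodeApi;
    import com.google.android.gms.wearable.Wearable;

    public class MessageSender {
        private static final String MESSAGE_PATH = "/send-receive-data"; // arbitrary path

        public static void sendToAllNodes(GoogleApiClient client, byte[] payload) {
            // NodeApi lists the nodes currently reachable from this device.
            NodeApi.GetConnectedNodesResult nodesResult =
                    Wearable.NodeApi.getConnectedNodes(client).await();
            for (Node node : nodesResult.getNodes()) {
                // MessageApi delivers the payload once; there is no retry or sync.
                MessageApi.SendMessageResult result = Wearable.MessageApi
                        .sendMessage(client, node.getId(), MESSAGE_PATH, payload)
                        .await();
                if (!result.getStatus().isSuccess()) {
                    // A failed status usually means the node disconnected in between.
                    Log.w("MessageSender", "Failed to message " + node.getDisplayName());
                }
            }
        }
    }
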
Summary

In this article, we learned about three APIs, DataApi, NodeApi, and MessageApi, and how we can use them and their associated methods to transmit information between the handheld mobile and the wearable. If, for whatever reason, the connected wearable node gets disconnected from the paired handheld device, the DataApi class is smart enough to try sending again automatically once the connection is reestablished.

Resources for Article:

Further resources on this subject:
- Speeding up Gradle builds for Android [article]
- Saying Hello to Unity and Android [article]
- Testing with the Android SDK [article]

Cross-platform Building

Packt · 10 Aug 2015 · 11 min read

In this article by Karan Sequeira, author of the book Cocos2d-x Game Development Blueprints, we'll leverage the cross-platform nature of Cocos2d-x to build one of our games on Android and Windows Phone 8!

Setting up the environment for Android

At this point in the timeline of technological evolution, Android needs no introduction. This mobile operating system was acquired by Google, and it has reached far and wide across the globe. It is now one of the top choices for application and game developers. With octa-core CPUs and ever more powerful GPUs, the sheer power offered by Android devices is a motivating factor!

While setting up the environment for Android, you have more choices than on any other mobile development platform. Your workstation could be running any of the three major operating systems (Windows, Mac OS, or Linux) and you would be able to build for Android just fine. Since Android is not fussy about its build environment, developers mostly choose their work environment based on which other platforms they will be developing for. As such, you might choose to build for Android on a machine running Mac OS, since you would be able to build for iOS and Android on the same machine. The same applies to a machine running Windows: you would be able to build for both Android and Windows Phone, although building for Windows Phone 8 requires you to have at least Windows 8 installed. We will discuss more on that later.

Let's begin listing the various software required to set up the environment for Android.

Java Development Kit 7+

Since you already know that Java is the programming language used with the Android SDK, you must ensure that you have the environment set up to compile and run Java files. So go ahead and download the Java Development Kit (JDK) version 7 or later. You can download and install a Standard Edition (SE) version from the page available at the following link:

http://www.oracle.com/technetwork/java/javase/downloads/index.html

Mac OS comes with the JDK installed, so you won't have to follow this step if you're setting up your development environment on a Mac.

The Android SDK

Once you've downloaded the JDK, it's time to download the Android SDK from the following URL:

http://developer.android.com/sdk/index.html

If you're installing the Android SDK on Windows, a custom installer is provided that will take care of downloading and setting up the required parts of the Android SDK for you. For other operating systems, you can choose to download the respective archive files and extract them at the location of your choice.

Eclipse or the ADT bundle

Eclipse is the most commonly used IDE when it comes to Android application development. You can choose to download the standard Eclipse IDE for Java developers and then install the ADT plugin into Eclipse, or you can download the ADT bundle, which is a specialized version of Eclipse with the ADT plugin preinstalled. At the time of writing this article, the Android developer site had already deprecated ADT in favor of Android Studio. As such, we will choose the former approach for setting up our environment in Eclipse. You can download and install the standard Eclipse IDE for Java Developers for your specific machine from the following URL:

http://www.eclipse.org/downloads/

ADT plugin for Eclipse

Once you've downloaded Eclipse, you must install a custom plugin for Eclipse: Android Development Tools (ADT).

Visit the following URL and follow the detailed instructions that will help you install the ADT plugin into Eclipse:

http://developer.android.com/sdk/installing/installing-adt.html

Once you've followed the instructions on the preceding page, you will need to inform Eclipse about the location of the Android SDK that you downloaded earlier. So, open up the Preferences page for Eclipse and, in the Android section, point it at the location where you've placed the Android SDK. With that done, we can now fire up the SDK Manager to install a few more necessary pieces of software. To launch it, select Android SDK Manager from the Window menu in Eclipse.

By default, you will see a whole lot of packages selected, of which Android SDK Platform-tools and Android SDK Build-tools are necessary. From the rest, you must select at least one of the target Android platforms. An additional package, Google USB Driver, will be required if your target environment is Windows; it is located under the Extras list. I would suggest skipping the documentation and samples. If you already have an Android device, I would go one step further and suggest you skip downloading the system images as well. However, if you don't have an Android device, you will need at least one system image so that you can at least test on an emulator.

Once you've chosen the platforms you need, proceed to install the packages: select Accept License and click on the Install button. Once these packages have been installed, you have to add their locations to the path variable on your machine. For Windows, modify your path variable (go to Properties | Advanced Settings | Environment Variables) to include the following:

    ;E:\Android\android-sdk\platform-tools

For Mac OS, you can add the following line to the .bash_profile file found under the home directory:

    export PATH=$PATH:/Android/android-sdk/platform-tools/

The preceding line can also be added to the .bashrc file found under the home directory on your Linux machine. At this point, you can use Eclipse for Android development.

Installing Cygwin for Windows

Developers working on Linux can skip this step, as most Linux distributions come with the make utility. Also, developers working on Mac OS may download Xcode from the Mac App Store, which will install the make utility on their Macs. We need to install Cygwin on Windows specifically for the GNU make utility. So, go to the following URL and download the installer for Cygwin:

http://www.cygwin.com/install.html

Run the setup.exe file that you downloaded and click on the Next button on the welcome screen. The next window will ask how you would like to install the required packages; select the Install from Internet option and click on Next. The next window will ask where you would like to install Cygwin; I'd recommend leaving it at the default value unless you have a reason to change it. Proceed by clicking on Next. In the next window, you will be asked to specify a path where the installation can download the files it requires; fill in a suitable path of your choice and click on Next. In the next window, you will be asked to specify your Internet connection; leave it at the Direct Connection option and click on Next. In the next window, you will be asked to select a mirror location from where to download the installation files.
Here, select the site that is geographically closest to you and click on Next. In the window that follows, expand the Devel section and search for make: the GNU version of the 'make' utility. Click on the Skip option to select this package; the version of the make utility that will be installed is then displayed in place of Skip. You can now go ahead and click on the Next button to begin the download and installation of the required packages. Once all the packages have been downloaded, click on Finish to close the installation.

Now that we have the make utility installed, we can go ahead and download the Android NDK, which will actually build our entire C++ code base.

The Android NDK

To download the Android NDK for your respective development machine, navigate to the following URL:

https://developer.android.com/tools/sdk/ndk/index.html

Unzip the downloaded archive and place it in the same location as the Android SDK. We must now add an environment variable named NDK_ROOT that points to the root of the Android NDK. For Windows, add a new user variable NDK_ROOT with the location of the Android NDK on your filesystem as its value. You can do this by going to Properties | Advanced Settings | Environment Variables. Note that the value of the NDK_ROOT variable is given in Unix style and depends on the Cygwin environment, since it will be accessed within a Cygwin bash shell while executing the build script for each Android project. Mac OS and Linux users can add the following line to their .bash_profile and .bashrc files, respectively:

    export NDK_ROOT=/Android/android-ndk-r10

We have now successfully completed setting up the environment to build our Cocos2d-x games on Android. To test this, open up a Cygwin bash terminal (for Windows) or a standard terminal (for Mac OS or Linux) and navigate to the Cocos2d-x test bed located inside the samples folder of your Cocos2d-x source. Now, navigate to the proj.android folder and run the build_native.sh file. If you've followed the aforementioned instructions correctly, the build_native.sh script will compile the C++ source files required by the TestCpp project and produce a single shared object (.so) file in the libs folder within the proj.android folder.
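
Pulled together, the command-line side of this setup looks roughly like the following sketch (hedged: the /Android install locations mirror the paths used in this article; adjust them to wherever you actually placed the SDK and NDK):

    # Sketch: assumes the SDK and NDK were unpacked under /Android as in the text.
    export PATH=$PATH:/Android/android-sdk/platform-tools
    export NDK_ROOT=/Android/android-ndk-r10

    # Build the Cocos2d-x test bed; the result is a .so under proj.android/libs.
    cd cocos2d-x-2.2.5/samples/Cpp/TestCpp/proj.android
    ./build_native.sh
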
Creating an Android Virtual Device

We're close to running the game, but we need to create an Android Virtual Device (AVD) before we proceed. Open up the Android Virtual Device Manager from the Window menu, click on Create, fill in the required details as per your requirements and configuration, and click OK. From the Android Virtual Device Manager window, select the newly created AVD and click on Start to boot it.

Building the tests on Android

With an Android device (or AVD) ready to run our project, let's begin by importing the project into Eclipse. Within Eclipse, select File | Import.... In the window that opens, select Existing Projects into Workspace under the General setting and click on Next. In the next window, browse to the proj.android folder under the cocos2d-x-2.2.5/samples/Cpp/TestCpp path and click on Finish. Once imported, you can find the TestCpp project under Package Explorer.

You will notice that there are a few errors with the project; you can see them in the Problems view (Window | Show View | Problems) located in the bottom half of Eclipse. All these errors are due to the fact that the Android project for our game depends on Cocos2d-x's Android project for Android-specific functionality: things such as the actual OpenGL surface where everything is rendered, the music player, accelerometer functionality, and many more. So let's import the Android project for Cocos2d-x, located inside the following path in your Cocos2d-x source bundle:

    cocos2d-x-2.2.5/cocos2dx/platform/android

You can import it the same way you imported TestCpp. Once the project has been imported, it will be titled libcocos2dx in Package Explorer. Now, select Clean... from the Project menu. You will notice that when the clean operation has finished, the dependency on libcocos2dx is taken care of and the TestCpp project builds error-free.

Running the tests on Android

Running the tests is as simple as right-clicking on the TestCpp project in Package Explorer and selecting Run As | Android Application. It might take a bit more time on an emulator as compared to an actual device, but ultimately the test bed will appear on your screen.

Summary

In this article, you learned which software components are needed to set up your workstation to build and run an Android native application. You also set up an Android Virtual Device and ran the Cocos2d-x test bed application on it.

Resources for Article:

Further resources on this subject:
- Run Xcode Run [article]
- Creating Games with Cocos2d-x is Easy and 100 percent Free [article]
- Creating Cool Content [article]

Securing OpenStack Networking

Packt · 10 Aug 2015 · 10 min read

In this article by Fabio Alessandro Locati, author of the book OpenStack Cloud Security, you will learn about the importance of firewalls, IDS, and IPS. You will also learn about Generic Routing Encapsulation and VXLAN.

The importance of firewall, IDS, and IPS

The security of a network can and should be achieved in multiple ways. Three components that are critical to the security of a network are:

- Firewall
- Intrusion detection system (IDS)
- Intrusion prevention system (IPS)

Firewall

Firewalls are systems that control the traffic passing through them based on rules. This may seem similar to a router, but they are very different: the router allows communication between different networks, while the firewall limits communication between networks and hosts. The root of this confusion may be that very often the router has firewall functionality, and vice versa. Firewalls need to be connected in series to your infrastructure.

The first paper on firewall technology appeared in 1988 and described the packet filter firewall, often known as the first generation firewall. This kind of firewall analyzes the packets passing through, and if a packet matches a rule, the firewall acts according to that rule. It analyzes each packet by itself and does not consider other aspects, such as other packets. It works on the first three layers of the OSI model, with very few features using layer 4, specifically to check port numbers and protocols (UDP/TCP). First generation firewalls are still in use because, in a lot of situations, they do the job properly and are cheap and secure. Typical filtering rules prohibit (or allow) IPs of certain classes (or specific IPs) access to certain IPs, or allow traffic to a specific IP only on specific ports. There are no known attacks against these kinds of firewalls as such, but specific models can have specific bugs that can be exploited.

In 1990, a new generation of firewalls appeared. The initial name was circuit-level gateway, but today they are far more commonly known as stateful firewalls or second generation firewalls. These firewalls are able to understand when connections are being initialized and closed, so the firewall knows the current state of a connection when a packet arrives. To do so, this kind of firewall uses the first four layers of the networking stack. This allows the firewall to drop all packets that are neither establishing a new connection nor part of an already established connection. These firewalls are very powerful with the TCP protocol, because it has states, while they provide very small advantages over first generation firewalls when handling UDP or ICMP packets, since those travel with no connection. In those cases, the firewall marks the connection as established as soon as the first valid packet passes through, and closes it after the connection times out. Performance-wise, a stateful firewall can be faster than a packet filter firewall, because if a packet is part of an active connection, no further tests are performed against it. These kinds of firewalls are more susceptible to bugs in their code, since they read more of each packet, giving an attacker more to exploit. Also, on many devices, it is possible to open connections (with SYN packets) until the firewall's state table is saturated; in such cases, the firewall usually downgrades itself to a simple router, allowing all traffic to pass through it.
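
To make the distinction concrete, here is a minimal, hedged sketch of a stateful rule set using Linux iptables (iptables itself is not covered in this article, and the SSH port choice is arbitrary):

    # Sketch: a second generation (stateful) rule set.
    # 1. Packets belonging to established connections pass without further checks.
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # 2. NEW connections are allowed only to SSH (TCP port 22).
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
    # 3. Everything else is dropped.
    iptables -A INPUT -j DROP
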
In 1991, improvements were made to the stateful firewall, allowing it to understand more about the protocol of the packets it was evaluating. Firewalls of this kind before 1994 had major problems, such as working as a proxy that the user had to interact with. In 1994, the first application firewall as we know it was born, doing all its job completely transparently. To be able to understand the protocol, this kind of firewall requires an understanding of all seven layers of the OSI model. As for security, the same considerations that apply to the stateful firewall apply to the application firewall as well.

Intrusion detection system (IDS)

IDSs are systems that monitor network traffic, looking for policy violations and malicious traffic. The goal of the IDS is not to block malicious activity, but to log and report it. These systems act in a passive mode, so you'll not see any traffic coming from them. This is very important because it makes them invisible to attackers, so you can gain information about the attack without the attacker knowing. IDSs need to be connected in parallel to your infrastructure.

Intrusion prevention system (IPS)

IPSs are sometimes referred to as Intrusion Detection and Prevention Systems (IDPS), since they are IDSs that are also able to fight back against malicious activities. IPSs have greater ability to act than IDSs: other than reporting, like an IDS, they can also drop malicious packets, reset the connection, and block traffic from the offending IP address. IPSs need to be connected in series to your infrastructure.

Generic Routing Encapsulation (GRE)

GRE is a Cisco tunneling protocol that is difficult to position in the OSI model; the best place for it is between layers 2 and 3. Being above layer 2 (where VLANs are), we can use GRE inside a VLAN. We will not go deep into the technicalities of this protocol; I'd like to focus more on the advantages and disadvantages it has over VLAN.

The first advantage of (extended) GRE over VLAN is scalability. You cannot have more than 4,096 VLANs in an environment, while GRE tunnels do not have this limitation. If you are running a private cloud in a small corporation, 4,096 networks could be enough, but they will definitely not be enough if you work for a big corporation or if you are running a public cloud. Also, unless you use VTP for your VLANs, you'll have to add VLANs to each network device, while GRE doesn't need this.

The second advantage is security. Since you can deploy multiple GRE tunnels in a single VLAN, you can connect a machine to a single VLAN and multiple GRE networks, without the risks that come with putting a port in trunking mode, which would be needed to bring more VLANs to the same physical port.

For these reasons, GRE was a very common choice in a lot of OpenStack clusters deployed up to OpenStack Havana. The current preferred networking choice (since Icehouse) is Virtual Extensible LAN (VXLAN).

VXLAN

VXLAN is a network virtualization technology whose specification was originally created by Arista Networks, Cisco, and VMware, with many other companies backing the project. Its goal is to offer a standardized overlay encapsulation protocol; it was created because standard VLANs were too limited for current cloud needs, and the GRE protocol was a Cisco protocol. It works by carrying layer 2 Ethernet frames within layer 4 UDP packets on port 4789. As for the maximum number of networks, the limit is 16 million logical networks.
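As a quick illustration (a hedged sketch using the Neutron CLI of the Havana/Icehouse era; the names and segmentation ID are arbitrary, and provider attributes normally require admin credentials), creating a VXLAN tenant network looks roughly like this:

    # Sketch: create a VXLAN network and a subnet with the neutron client.
    neutron net-create demo-net --provider:network_type vxlan \
        --provider:segmentation_id 1001
    neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
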
Since the Icehouse release, the suggested standard for networking is VXLAN.

Flat network versus VLAN versus GRE in OpenStack Quantum

In OpenStack Quantum, you can decide to use multiple technologies for your networks: flat networks, VLAN, GRE, and the most recent, VXLAN. Let's discuss them in detail:

- Flat network: It is often used in private clouds, since it is very easy to set up. The downside is that any virtual machine will see every other virtual machine in our cloud. I strongly discourage people from using this network design because it's unsafe and, in the long run, it will have problems, as we have seen earlier.
- VLAN: It is sometimes used in bigger private clouds and sometimes even in small public clouds. The advantage is that many times you already have a VLAN-based installation in your company. The major disadvantages are the need to trunk ports for each physical host and possible propagation problems. I discourage this approach, since in my opinion the advantages are very limited while the disadvantages are pretty strong.
- VXLAN: It should be used in any kind of cloud due to its technical advantages. It allows a huge number of networks, it's way more secure, and it often eases debugging.
- GRE: Until the Havana release, it was the suggested protocol, but since the Icehouse release the suggestion has been to move toward VXLAN, where the majority of development is focused.

Designing a secure network for your OpenStack deployment

As with the physical infrastructure, we have to design the network securely. We have seen that network security is critical and that there are a lot of possible attacks in this realm. Is it possible to design a secure environment to run OpenStack? Yes it is, if you remember a few rules:

- Create different networks, at the very least for management and external data (this network usually already exists in your organization and is the one where all your clients are)
- Never put ports in trunking mode if you use VLANs in your infrastructure; otherwise, physically separated networks will be needed

Here, the management, tenant, and external networks could be either VLANs or real, physically separate networks. Remember that to avoid VLAN trunking, you need at least as many physical ports as there are VLANs the machine has to be subscribed to; port trunking can be a huge security hole.

A management network is needed for the administrator to administer the machines and for the OpenStack services to speak to each other. This network is critical, since it may contain sensitive data, and for this reason it has to be disconnected from other networks or, if that's not possible, have very limited connectivity.

The external network is used by virtual machines to access the Internet (and vice versa). In this network, all machines will need an IP address reachable from the Web.

The tenant network, sometimes called the internal or guest network, is the network where virtual machines can communicate with other virtual machines in the same cloud. This network, in some deployments, can be merged with the external network, but this choice has some security drawbacks.

The API network is used to expose OpenStack APIs to the users. This network requires IP addresses reachable from the Web, and for this reason it is often merged into the external network.

There are cases where provider networks are needed to connect tenant networks to existing networks outside the OpenStack cluster.
Those networks are created by the OpenStack administrator and map directly to an existing physical network in the data center.

Summary

In this article, we have seen how networking works, which attacks we can expect, and how we can counter them. We have also seen how to implement a secure deployment of OpenStack Networking.

Resources for Article:

Further resources on this subject:
- Cloud distribution points [article]
- Photo Stream with iCloud [article]
- Integrating Accumulo into Various Cloud Platforms [article]

Exploring Jenkins

Packt · 10 Aug 2015 · 7 min read

This article by Mitesh Soni, the author of the book Jenkins Essentials, introduces us to Jenkins.

Jenkins is an open source application written in Java. It is one of the most popular continuous integration (CI) tools used to build and test different kinds of projects. In this article, we will have a quick overview of Jenkins, its essential features, and its impact on DevOps culture. Before we can start using Jenkins, we need to install it, so this article provides a step-by-step installation guide. Installing Jenkins is a very easy task, though the steps differ across OS flavors. This article will also cover the DevOps pipeline. To be precise, we will discuss the following topics:

- Introduction to Jenkins and its features
- Installation of Jenkins on Windows and the CentOS operating system
- How to change configuration settings in Jenkins
- What the deployment pipeline is

On your mark, get set, go!

Introduction to Jenkins and its features

Let's first understand what continuous integration is. CI is one of the most popular application development practices of recent times. Developers integrate bug fixes, new feature development, or innovative functionality into a code repository. The CI tool verifies the integration process with an automated build and automated test execution to detect issues with the current source of an application, and provides quick feedback.

Jenkins is a simple, extensible, and user-friendly open source tool that provides CI services for application development. Jenkins supports SCM tools such as StarTeam, Subversion, CVS, Git, AccuRev, and so on. It can build Freestyle, Apache Ant, and Apache Maven-based projects.

The concept of plugins makes Jenkins more attractive, easy to learn, and easy to use. There are various categories of plugins available, such as source code management, slave launchers and controllers, build triggers, build tools, build notifiers, build reports, other post-build actions, external site/tool integrations, UI plugins, authentication and user management, Android development, iOS development, .NET development, Ruby development, library plugins, and so on. Jenkins defines interfaces or abstract classes that model a facet of a build system. Interfaces or abstract classes define an agreement on what needs to be implemented, and Jenkins uses plugins to extend those implementations.

To learn more about all plugins, visit https://wiki.jenkins-ci.org/x/GIAL. To learn how to create a new plugin, visit https://wiki.jenkins-ci.org/x/TYAL. To download different versions of plugins, visit https://updates.jenkins-ci.org/download/plugins/.

Features

Jenkins is one of the most popular CI servers on the market. The reasons for its popularity are as follows:

- Easy installation on different operating systems
- Easy upgrades; Jenkins has very speedy release cycles
- A simple and easy-to-use user interface
- Easily extensible with the use of third-party plugins: over 400 plugins
- Easy environment configuration in the user interface; it is also possible to customize the user interface to your liking
- A master-slave architecture that supports distributed builds to reduce load on the CI server
- A test harness built around JUnit; test results are available in graphical and tabular forms
- Build scheduling based on cron expressions (to learn more about cron, visit http://en.wikipedia.org/wiki/Cron)
- Shell and Windows command execution in prebuild steps
- Notification support related to the build status

Installation of Jenkins on Windows and CentOS

Go to https://jenkins-ci.org/ and find the Download Jenkins section on the home page. Download the WAR file or a native package based on your operating system. A Java installation is needed to run Jenkins, so install Java based on your operating system and set the JAVA_HOME environment variable accordingly.

Installing Jenkins on Windows

Select the native package available for Windows. It will download jenkins-1.xxx.zip; in our case, jenkins-1.606.zip. Extract it and you will get the setup.exe and jenkins-1.606.msi files. Run setup.exe and perform the following steps in sequence:

1. On the welcome screen, click Next.
2. Select the destination folder and click on Next.
3. Click on Install to begin the installation, and wait while the Setup Wizard installs Jenkins.
4. Once the Jenkins installation is completed, click on the Finish button.

Verify the Jenkins installation on the Windows machine by opening the URL http://<ip_address>:8080 on the system where you have installed Jenkins.

Installation of Jenkins on CentOS

To install Jenkins on CentOS, download the Jenkins repository definition to your local system at /etc/yum.repos.d/ and import the key. Use the following command to download the repo definition:

    wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

Now, run yum install jenkins; it will resolve dependencies and prompt for installation. Reply with y and it will download the required packages to install Jenkins on CentOS. Verify the Jenkins status by issuing the service jenkins status command. Initially, it will be stopped; start Jenkins by executing service jenkins start in the terminal. Verify the Jenkins installation on the CentOS machine by opening the URL http://<ip_address>:8080 on the system where you have installed Jenkins.

How to change configuration settings in Jenkins

Click on the Manage Jenkins link on the dashboard to configure the system and security, and to manage plugins, slave nodes, credentials, and so on. Click on the Configure System link to configure Java, Ant, Maven, and other third-party products' related information. Jenkins uses Groovy as its scripting language. To execute an arbitrary script for administration, troubleshooting, or diagnostics on the Jenkins dashboard, go to the Manage Jenkins link, click on Script Console, and run, for example:

    println(Jenkins.instance.pluginManager.plugins)

To verify the system log, go to the Manage Jenkins link on the dashboard and click on the System Log link, or visit http://localhost:8080/log/all. To get more information on third-party libraries (version and license information) in Jenkins, go to the Manage Jenkins link on the dashboard and click on the About Jenkins link.

What is the deployment pipeline?

The application development life cycle is traditionally a lengthy and manual process. In addition, it requires effective collaboration between development and operations teams. The deployment pipeline is a demonstration of the automation involved in the application development life cycle, containing automated build execution and test execution, notification to the stakeholders, and deployment in different runtime environments. Effectively, the deployment pipeline is a combination of CI and continuous delivery, and hence is a part of DevOps practices. The deployment pipeline process works as follows:

1. Members of the development team check code into a source code repository.
2. CI products such as Jenkins are configured to poll changes from the code repository.
3. Changes in the repository are downloaded to the local workspace, and Jenkins triggers an automated build process, assisted by Ant or Maven.
4. Automated test execution or unit testing, static code analysis, reporting, and notification of a successful or failed build are also part of the CI process.
5. Once the build is successful, it can be deployed to different runtime environments such as testing, preproduction, production, and so on. Deploying a WAR file is normally the final stage in the deployment pipeline for a JEE application.

One of the biggest benefits of the deployment pipeline is the faster feedback cycle. Identification of issues in the application at early stages, and no dependency on manual effort, make this entire end-to-end process more effective. To read more, visit http://martinfowler.com/bliki/DeploymentPipeline.html and http://www.informit.com/articles/article.aspx?p=1621865&seqNum=2.

Summary

Congratulations! We have reached the end of this article, and we now have Jenkins installed on our physical or virtual machine. We covered the basics of CI and an introduction to Jenkins and its features. We completed the installation of Jenkins on Windows and CentOS platforms. In addition, we discussed the deployment pipeline and its importance in CI.

Resources for Article:

Further resources on this subject:
- Jenkins Continuous Integration [article]
- Running Cucumber [article]
- Introduction to TeamCity [article]
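
To recap the CentOS installation as one hedged command sequence (the rpm key-import URL is an assumption based on Jenkins's repository layout at the time; verify it against the repo definition you downloaded):

    # Sketch: install and start Jenkins on CentOS, as described above.
    wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
    rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key   # assumed key URL
    yum install jenkins
    service jenkins start
    service jenkins status
    # then browse to http://<ip_address>:8080
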

Using Handlebars with Express

Packt · 10 Aug 2015 · 17 min read

In this article written by Paul Wellens, author of the book Practical Web Development, we give a brief description of the following topics:

- Templates
- Node.js
- Express 4

Templates

Templates come in many shapes or forms. Traditionally, they are non-executable files with some pre-formatted text, used as the basis of a bazillion documents that can be generated with a computer program. I worked on a project where I had to write a program that would take a Microsoft Word template, containing parameters like $first, $name, $phone, and so on, and generate a specific Word document for every student in a school.

Web templating does something very similar. It uses a templating processor that takes data from a source of information, typically a database, and a template, a generic HTML file with some kind of parameters inside. The processor then merges the data and template to generate a bunch of static web pages, or dynamically generates HTML on the fly.

If you have been using PHP to create dynamic web pages, you have been busy with web templating. Why? Because you have been inserting PHP code inside HTML files in between the <?php and ?> strings. Your templating processor was the Apache web server, which has many additional roles. By the time your browser gets to see the result of your code, it is pure HTML. This makes it an example of server-side templating. You could also use Ajax and PHP to transfer data in the JSON format and then have the browser process that data using JavaScript to create the HTML you need. Combine this with using templates and you have client-side templating.

Node.js

What Le Sacre du Printemps by Stravinsky did to the world of classical music, Node.js may have done to the world of web development. At its introduction, it shocked the world; by now, Node.js is considered by many as the coolest thing. Just like Le Sacre is a totally different kind of music (but by now every child who has seen Fantasia has heard it), Node.js is a different way of doing web development. Rather than writing an application and using a web server to serve up your code to a browser, the application and the web server are one and the same. This may sound scary, but you should not worry, as there is an entire community that has developed modules you can obtain using the npm tool. Before showing you an example, I need to point out an extremely important feature of Node.js: the language in which you will write your web server and application is JavaScript. So Node.js gives you server-side JavaScript.

Installing Node.js

How to install Node.js differs depending on your OS, but the result is the same everywhere: it gives you two programs, node and npm.

npm

The node package manager (npm) is the tool that you use to look for and install modules. Each time you write code that needs a module, you will have to add a line like this:

    var module = require('module');

The module will have to be installed first, or the code will fail. This is how it is done:

    npm install module

or:

    npm -g install module

The latter will attempt to install the module globally, the former in the directory where the command is issued. It will typically install the module in a folder called node_modules.

node

The node program is the command used to start your Node.js program, for example:

    node myprogram.js

node will start and interpret your code. Type Ctrl-C to stop node.

Now create a file myprogram.js containing the following text:

    var http = require('http');
    http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('Hello World\n');
    }).listen(8080, 'localhost');
    console.log('Server running at http://localhost:8080');

So, if you installed Node.js and the required http module, typing node myprogram.js in a terminal window will start up a web server, and when you type http://localhost:8080 in a browser, you will see the world-famous two-word program example on your screen. This is the equivalent of getting the It works! message after testing your Apache web server installation. As a matter of fact, if you go to http://localhost:8080/it/does/not/matterwhat, the same will appear. Not very useful maybe, but it is a web server.

Serving up static content

This does not work in the way we are used to. URLs typically point to a file (or a folder, in which case the server looks for an index.html file), foo.html, or bar.php, and when present, it is served up to the client. So what if we want to do this with Node.js? We will need a module. Several exist to do the job; we will use node-static in our example. But first we need to install it:

    npm install node-static

In our app, we will now create not only a web server, but a file server as well. It will serve all the files in the local directory public. It is good to have all the so-called static content together in a separate folder; these are basically all the files that will be served up to and interpreted by the client. As we will now end up with a mix of client code and server code, it is good practice to separate them. When you use the Express framework, you have the option to have Express create these things for you. So, here is a second, more complete Node.js example, including all its static content.

hello.js, our Node.js app:

    var http = require('http');
    var static = require('node-static');
    var fileServer = new static.Server('./public');
    http.createServer(function (req, res) {
        fileServer.serve(req, res);
    }).listen(8080, 'localhost');
    console.log('Server running at http://localhost:8080');

hello.html, stored in ./public:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="UTF-8" />
        <title>Hello world document</title>
        <link href="./styles/hello.css" rel="stylesheet">
    </head>
    <body>
        <h1>Hello, World</h1>
    </body>
    </html>

hello.css, stored in public/styles:

    body {
        background-color: #FFDEAD;
    }
    h1 {
        color: teal;
        margin-left: 30px;
    }
    .bigbutton {
        height: 40px;
        color: white;
        background-color: teal;
        margin-left: 150px;
        margin-top: 50px;
        padding: 15px 15px 25px 15px;
        font-size: 18px;
    }

So, if we now visit http://localhost:8080/hello, we will see our, by now too familiar, Hello World message with some basic styling, proving that our file server also delivered the CSS file. You can easily take it one step further and add JavaScript and the jQuery library, putting them in, for example, public/js/hello.js and public/js/jquery.js respectively.

Too many notes

With Node.js, you only install the modules that you need, so it does not include the kitchen sink by default! You will benefit from that as far as performance goes. Back in California, I was a proud product manager of a PC UNIX product, and one of our coolest value-adds was a tool, called kconfig, that would allow people to customize what would be inside the UNIX kernel, so that it would only contain what was needed. This is what Node.js reminds me of. And it is written in C, as was UNIX. Deja vu.

However, if we wanted our Node.js web server to do everything the Apache web server does, we would need a lot of modules. Our application code needs to be added to that as well. That means a lot of modules. Like the critics in the movie Amadeus said: Too many notes.

Express 4

A good way to get the job done with fewer notes is by using the Express framework. On the expressjs.com website, it is called a minimal and flexible Node.js web application framework, providing a robust set of features for building web applications. This is a good way to describe what Express can do for you. It is minimal, so there is little overhead for the framework itself. It is flexible, so you can add just what you need. It gives a robust set of features, which means you do not have to create them yourself, and they have been tested by an ever growing community. But we need to get back to templating, so all we are going to do here is explain how to get Express, and give one example.

Installing Express

As Express is also a node module, we install it as such. In your project directory for your application, type:

npm install express

You will notice that a folder called express has been created inside node_modules, and inside that one, there is another collection of node modules. These are examples of what is called middleware. In the code example that follows, we assume app.js as the name of our JavaScript file, and app for the variable that you will use in that file for your instance of Express. This is for the sake of brevity. It would be better to use a string that matches your project name. We will now use Express to rewrite the hello.js example. All static resources in the public directory can remain untouched. The only change is in the node app itself:

var express = require('express');
var path = require('path');
var app = express();
app.set('port', process.env.PORT || 3000);
var options = {
  dotfiles: 'ignore',
  extensions: ['htm', 'html'],
  index: false
};
app.use(express.static(path.join(__dirname, 'public'), options));
app.listen(app.get('port'), function () {
  console.log('Hello express started on http://localhost:' + app.get('port') + '; press Ctrl-C to terminate.');
});

This code uses so-called middleware (static) that is included with Express. There is a lot more available from third parties. Well, compared to our Node.js example, it is about the same number of lines. But it looks a lot cleaner and it does more for us. You no longer need to explicitly include the HTTP module and other such things.

Templating and Express

We need to get back to templating now. Imagine all the JavaScript ecosystem we just described. Yes, we could still put our client JavaScript code in between the <script> tags, but what about the server JavaScript code? There is no such thing as <?javascript ?>! Node.js and Express support several templating languages that allow you to separate layout and content, and that have the template system do the work of fetching the content and injecting it into the HTML. The default templating processor for Express appears to be Jade, which uses its own, albeit more compact than HTML, language. Unfortunately, that would mean that you have to learn yet another syntax to produce something. We propose to use handlebars.js. There are two reasons why we have chosen handlebars.js:

It uses HTML as the language
It is available on both the client and server side

Getting the handlebars module for Express

Several Express modules exist for handlebars.
We happen to like the one with the surprising name express-handlebars. So, we install it, as follows:

npm install express-handlebars

Layouts

I almost called this section templating without templates, as our first example will not use a parameter inside the templates. Most websites will consist of several pages, either static or dynamically generated ones. All these pages usually have common parts; a header and footer part, a navigation part or menu, and so on. This is the layout of our site. What distinguishes one page from another, usually, is some part in the body of the page, where the home page has different information than the other pages. With express-handlebars, you can separate layout and content. We will start with a very simple example. Inside your project folder that contains public, create a folder, views, with a subdirectory layouts. Inside the layouts subfolder, create a file called main.handlebars. This is your default layout. Building on top of the previous example, have it say:

<!doctype html>
<html>
<head>
<title>Handlebars demo</title>
<link href="./styles/hello.css" rel="stylesheet">
</head>
<body>
{{{body}}}
</body>
</html>

Notice the {{{body}}} part. This token will be replaced by HTML. Handlebars escapes HTML. If we want our HTML to stay intact, we use {{{ }}} instead of {{ }}. body is a reserved word for handlebars. Create, in the folder views, a file called hello.handlebars with the following content. This will be one (of many) examples of the HTML that {{{body}}} will be replaced by:

<h1>Hello, World</h1>

Let's create a few more. june.handlebars with:

<h1>Hello, June Lake</h1>

And bodie.handlebars containing:

<h1>Hello, Bodie</h1>

Our first handlebars example

Now, create a file, handlehello.js, in the project folder. For convenience, we will keep the relevant code of the previous Express example:

var express = require('express');
var path = require('path');
var app = express();
var exphbs = require('express-handlebars');
app.engine('handlebars', exphbs({defaultLayout: 'main'}));
app.set('view engine', 'handlebars');
app.set('port', process.env.PORT || 3000);
var options = {
  dotfiles: 'ignore',
  etag: false,
  extensions: ['htm', 'html'],
  index: false
};
app.use(express.static(path.join(__dirname, 'public'), options));
app.get('/', function(req, res) {
  res.render('hello');   // this is the important part
});
app.get('/bodie', function(req, res) {
  res.render('bodie');
});
app.get('/june', function(req, res) {
  res.render('june');
});
app.listen(app.get('port'), function () {
  console.log('Hello express started on http://localhost:' + app.get('port') + '; press Ctrl-C to terminate.');
});

Everything that worked before still works, but if you type http://localhost:3000/, you will see a page with the layout from main.handlebars and {{{body}}} replaced by, you guessed it, the same Hello World with basic markup that looks the same as our hello.html example. Let's look at the new code. First, of course, we need to add a require statement for our express-handlebars module, giving us an instance of express-handlebars. The next two lines specify what the view engine is for this app and what the extension is that is used for the templates and layouts. We pass one option to express-handlebars, defaultLayout, setting the default layout to be main. This way, we could have different versions of our app with different layouts, for example, one using Bootstrap and another using Foundation.
The res.render calls determine which views need to be rendered, so if you type http://localhost:3000/june, you will get Hello, June Lake, rather than Hello World. But this is not at all useful, as in this implementation, you still have a separate file for each Hello flavor. Let's create a true template instead.

Templates

In the views folder, create a file, town.handlebars, with the following content:

{{!-- Our first template with tokens --}}
<h1>Hello, {{town}} </h1>

Please note the comment line. This is the syntax for a handlebars comment. You could use HTML comments as well, of course, but the advantage of using handlebars comments is that they will not show up in the final output. Next, add this to your JavaScript file:

app.get('/lee', function(req, res) {
  res.render('town', { town: "Lee Vining"});
});

Now, we have a template that we can use over and over again with a different context, in this example, a different town name. All you have to do is pass a different second argument to the res.render call, and {{town}} in the template will be replaced by the value of town in the object. In general, what is passed as the second argument is referred to as the context.

Helpers

The token can also be replaced by the output of a function. After all, this is JavaScript. In the context of handlebars, we call those helpers. You can write your own, or use some of the cool built-in ones, such as #if and #each.

#if/else

Let us update town.handlebars as follows:

{{#if town}}
<h1>Hello, {{town}} </h1>
{{else}}
<h1>Hello, World </h1>
{{/if}}

This should be self-explanatory. If the variable town has a value, use it; if not, then show the world. Note that what comes after #if can only be something that is either true or false, zero or not. The helper does not support a construct such as #if x < y.

#each

A very useful built-in helper is #each, which allows you to walk through an array of things and generate HTML accordingly. This is an example of the code that could be inside your app and the template you could use in your view folder:

app.js code snippet:

var californiapeople = {
  people: [
    {"name":"Adams","first":"Ansel","profession":"photographer","born":"San Francisco"},
    {"name":"Muir","first":"John","profession":"naturalist","born":"Scotland"},
    {"name":"Schwarzenegger","first":"Arnold","profession":"governator","born":"Germany"},
    {"name":"Wellens","first":"Paul","profession":"author","born":"Belgium"}
  ]
};
app.get('/californiapeople', function(req, res) {
  res.render('californiapeople', californiapeople);
});

template (californiapeople.handlebars):

<table class="cooltable">
{{#each people}}
<tr><td>{{first}}</td><td>{{name}}</td><td>{{profession}}</td></tr>
{{/each}}
</table>

Now we are well on our way to do some true templating. You can also write your own helpers, which is beyond the scope of an introductory article. However, before we leave you, there is one cool feature of handlebars you need to know about: partials.

Partials

In web development, where you dynamically generate HTML to be part of a web page, it is often the case that you repetitively need to do the same thing, albeit on a different page. There is a cool feature in express-handlebars that allows you to do that very same thing: partials. Partials are templates you can refer to inside a template, using a special syntax, drastically shortening your code that way. The partials are stored in a separate folder. By default, that would be views/partials, but you can even use subfolders.
Let's redo the previous example but with a partial. So, our template is going to be extremely petite:

{{!-- people.handlebars inside views --}}
{{> peoplepartial }}

Notice the > sign; this is what indicates a partial. Now, here is the familiar looking partial template:

{{!-- peoplepartial.handlebars inside views/partials --}}
<h1>Famous California people </h1>
<table>
{{#each people}}
<tr><td>{{first}}</td><td>{{name}}</td><td>{{profession}}</td></tr>
{{/each}}
</table>

And, following is the JavaScript code that triggers it:

app.get('/people', function(req, res) {
  res.render('people', californiapeople);
});

So, we give it the same context, but the view that is rendered is ridiculously simplistic, as there is a partial underneath that will take care of everything. Of course, these were all examples to demonstrate how handlebars and Express can work together really well, nothing more than that.

Summary

In this article, we talked about using templates in web development. Then, we zoomed in on using Node.js and Express, and introduced Handlebars.js. Handlebars.js is cool, as it lets you separate logic from layout, and you can use it server-side (which is what we focused on), as well as client-side. Moreover, you will still be able to use HTML for your views and layouts, unlike with other templating processors. For those of you new to Node.js, I compared it to what Le Sacre du Printemps was to music. To all of you, I recommend the recording by the Los Angeles Philharmonic and Esa-Pekka Salonen. I had season tickets for this guy and went to his inaugural concert with Mahler's third symphony. PHP had not been written yet, but this particular performance I had heard on the radio while on the road in California, and it was magnificent. Check it out. And, also check out Express and handlebars.

Resources for Article: Let's Build with AngularJS and Bootstrap The Bootstrap grid system MODx Web Development: Creating Lists
Hands-on with Prezi Mechanics

Packt
10 Aug 2015
8 min read
In this article by J.J. Sylvia IV, author of the book Mastering Prezi for Business Presentations - Second Edition, we will see how to edit a figure and how to style symbols. We will also look at the Grouping feature and get a brief introduction to the Prezi text editor. (For more resources related to this topic, see here.)

Editing lines

When editing lines or arrows, you can change them from being straight to curved by dragging the center point in any direction: This is extremely useful when creating the line drawings we saw earlier. It's also useful to get arrows pointing at various objects on your canvas:

Styled symbols

If you're on a tight deadline, or trying to create drawings with shapes simply isn't for you, then the styles available in Prezi may be of more interest to you. These are common symbols that Prezi has created in a few different styles that can be easily inserted into any of your presentations. You can select these from the same Symbols & shapes… option from the Insert menu where we found the symbols. You'll see several different styles to choose from on the right-hand side of your screen. Each of these categories has similar symbols, but styled differently. There is a wide variety of symbols available, ranging from people to social media logos. You can pick a style that best matches your theme or the atmosphere you've created for your presentation. Instead of creating your own person from shapes, you can select from a variety of people symbols available: Although these symbols can be very handy, you should be aware that you can't edit them as part of your presentation. If you decide to use one, note that it will work as it is—there are no new hairstyles for these symbols.

Highlighter

The highlighter tool is extremely useful for pointing out key pieces of information, such as an interesting fact. To use it, navigate to the Insert menu and select the Highlighter option. Then, just drag the cursor across the text you'd like to highlight. Once you've done this, the highlighter marks become objects in their own right, so you can click on them to change their size or position just as you would do for a shape. To change the color of your highlighter, you will need to go into the Theme Wizard and edit the RGB values. We'll cover how to do this later when we discuss branding.

Grouping

Grouping is a great feature that allows you to move or edit several different elements of your presentation at once. This can be especially useful if you're trying to reorganize the layout of your Prezi after it's been created, or to add animations to several elements at once. Let's go back to the drawing we created earlier to see how this might work: The first way to group items is to hold down the Ctrl key (Command on Mac OS) and to left-click on each element you want to group individually. In this case, I need to click on each individual line that makes up the flat top hair in the preceding image. This might be necessary if I only want to group the hair, for example: Another method for grouping is to hold down the Shift key while dragging your mouse to select multiple items at once. In the preceding screenshot, I've selected my entire person at once. Now, I can easily rotate, resize, or move the entire person at once, without having to move each individual line or shape. If you select a group of objects, move them, and then realize that a piece is missing because it didn't get selected, just press the Ctrl+Z (Command+Z on Mac OS) keys on your keyboard to undo the move.
Then, broaden your selection and try again. Alternatively, you can hold down the Shift key and simply click on the piece you missed to add it to the group. If we want to keep these elements grouped together instead of having to reselect them each time we decide to make a change, we can click on the Group button that appears with this change. Now these items will stay grouped unless we click on the new Ungroup button, now located in the same place as the Group button previously was: You can also use frames to group material together. If you already created frames as part of your layout, this might make the grouping process even easier. Prezi text editor Over the years, the Prezi text editor has evolved to be quite robust, and it's now possible to easily do all of your text editing directly within Prezi. Spell checker When you spell something incorrectly, Prezi will underline the word it doesn't recognize with a red line. This is just as you would see it in Microsoft Word or any other text editor. To correct the word, simply right-click on it (or Command + Click on Mac OS) and select the word you meant to type from the suggestions, as shown in the following screenshot: The text drag-apart feature So a colleague of yours has just e-mailed you the text that they want to appear in the Prezi you're designing for them? That's great news as it'll help you understand the flow of the presentation. What's frustrating, though, is that you'll have to copy and paste every single line or paragraph across to put it in the right place on your canvas. At least, that used to be the case before Prezi introduced the drag-apart feature in the text editor. This means you can now easily drag a selection of text anywhere on your canvas without having to rely on the copy and paste options. Let's see how we can easily change the text we spellchecked previously, as shown in the following screenshot: In order to drag your text apart, simply highlight the area you require, hold the mouse button down, and then drag the text anywhere on your canvas. Once you have separated your text, you can then edit the separate parts as you would edit any other individual object on your canvas. In this example, we can change the size of the company name and leave the other text as it is, which we couldn't do within a single textbox: Building Prezis for colleagues If you've kindly offered to build a Prezi for one of your colleagues, ask them to supply the text for it in Word format. You'll be able to run a spellcheck on it from there before you copy and paste it into Prezi. Any bad spellings you miss will also get highlighted on your Prezi canvas but it's good to use both options as a safety net. Font colors Other than dragging text apart to make it stand out more on its own, you might want to highlight certain words so that they jump out at your audience even more. The great news is that you can now highlight individual lines of text or single words and change their color. To do so, just highlight a word by clicking and dragging your mouse across it. Then, click on the color picker at the top of the textbox to see the color menu, as shown in the following screenshot: Select any of the colors available in the palette to change the color of that piece of text. Nothing else in the textbox will be affected apart from the text you have selected. This gives you much greater freedom to use colored text in your Prezi design, and doesn't leave you restricted as in older versions of the software. 
Choose the right color

To make good use of this feature, we recommend that you use a color that contrasts completely with the rest of your design. For example, if your design and corporate colors are blue, we suggest you use red or purple to highlight key words. Also, once you pick a color, stick to it throughout the presentation so that your audience knows when they see a key piece of information.

Bullet points and indents

Bullets and indents make it much easier to put together your business presentations and help to give the audience some short, simple information as text in the same format they're used to seeing in other presentations. This can be done by simply selecting the main body of text and clicking on the bullet point icon at the top of the textbox. This is a really simple feature, but a useful one nonetheless. We'd obviously like to point out that too much text on any presentation is a bad thing. Keep it short and to the point. Also, remember that too many bullets can kill a presentation.

Summary

In this article, we discussed the basic mechanics of Prezi. Learning to combine these tools in creative ways will help you move from a Prezi novice to a master. Shapes can be used creatively to create content and drawings, and can be grouped together for easy movement and editing. Prezi also features basic text editing, which is explained in this article.

Resources for Article: Further resources on this subject: Turning your PowerPoint into a Prezi [Article] The Fastest Way to Go from an Idea to a Prezi [Article] Using Prezi - The Online Presentation Software Tool [Article]

Integrating with Muzzley

Packt
10 Aug 2015
12 min read
In this article by Miguel de Sousa, author of the book Internet of Things with Intel Galileo, we will cover the following topics:

Wiring the circuit
The Muzzley IoT ecosystem
Creating a Muzzley app
Lighting up the entrance door

(For more resources related to this topic, see here.)

One identified issue regarding IoT is that there will be lots of connected devices and each one speaks its own language, not sharing the same protocols with other devices. This leads to an increasing number of apps to control each of those devices. Every time you purchase connected products, you'll be required to have the exclusive product app, and, in the near future, where it is predicted that more devices will be connected to the Internet than people, this is indeed a problem, known as the basket of remotes. Many solutions have been appearing for this problem. Some of them focus on creating common communication standards between the devices, or even creating their own protocol, such as the Intel Common Connectivity Framework (CCF). A different approach consists in predicting the devices' interactions, where collected data is used to predict and trigger actions on the specific devices. An example using this approach is Muzzley. It not only supports a common way to speak with the devices, but also learns from the users' interaction, allowing them to control all their devices from a common app, and, by collecting usage data, it can predict users' actions and even make different devices work together. In this article, we will start by understanding what Muzzley is and how we can integrate with it. We will then do some development to allow you to control your own building's entrance door. For this purpose, we will use Galileo as a bridge to communicate with a relay and the Muzzley cloud, allowing you to control the door from a common mobile app and from anywhere, as long as there is Internet access.

Wiring the circuit

In this article, we'll be using a real home AC inter-communicator with a building entrance door unlock button, and this will require you to do some homework. This integration will require you to open your inter-communicator and adjust the inner circuit, so be aware that there are always risks of damaging it. If you don't want to use a real inter-communicator, you can replace it with an LED or even with the buzzer module. If you want to use a real device, you can use a DC inter-communicator, but in this guide, we'll only be explaining how to do the wiring using an AC inter-communicator. The first thing you have to do is to take a look at the device manual and check whether it works with AC current, and the voltage it requires. If you can't locate your product manual, search for it online. In this article, we'll be using the solid state relay. This relay accepts a voltage range from 24 V up to 380 V AC, and your inter-communicator should also work in this voltage range. You'll also need some electrical wires and electrical wire junctions: Wire junctions and the solid state relay. This equipment will be used to adapt the door unlocking circuit to allow it to be controlled from the Galileo board using a relay. The main idea is to use a relay to close the door opener circuit, resulting in the door being unlocked. This can be accomplished by joining the inter-communicator switch wires with the relay wires.
Use some wire and wire junctions to do it, as displayed in the following image: Wiring the circuit. The building/house AC circuit is represented in yellow, and S1 and S2 represent the inter-communicator switch (button). On pressing the button, we will also be closing this circuit, and the door will be unlocked. This way, the lock can be controlled both ways, using the original button and the relay. Before starting to wire the circuit, make sure that the inter-communicator circuit is powered off. If you can't switch it off, you can always turn off your house electrical board for a couple of minutes. Make sure that it is powered off by pressing the unlock button and trying to open the door. If you are not sure of what you must do or don't feel comfortable doing it, ask for help from someone more experienced. Open your inter-communicator, locate the switch, and perform the changes displayed in the preceding image (you may have to do some soldering). The Intel Galileo board will then activate the relay using pin 13, which you should wire to the relay's connector number 3; the Galileo's ground (GND) should be connected to the relay's connector number 4. Beware that not all inter-communicator circuits work the same way, and although we try to provide a general way to do it, there is always a risk of damaging your device or being electrocuted. Do it at your own risk. Power on your inter-communicator circuit and check whether you can open the door by pressing the unlock door button. If you prefer not to use the inter-communicator with the relay, you can always replace it with a buzzer or an LED to simulate the door opening. Also, since the relay is connected to Galileo's pin 13, with the same relay code, you'll have visual feedback from the Galileo's onboard LED.

The Muzzley IoT ecosystem

Muzzley is an Internet of Things ecosystem that is composed of connected devices, mobile apps, and cloud-based services. Devices can be integrated with Muzzley through the device cloud or the device itself: It offers device control, a rules system, and a machine learning system that predicts and suggests actions, based on the device usage. The mobile app is available for Android, iOS, and Windows Phone. It can pack all your Internet-connected devices into a single common app, allowing them to be controlled together, and to work with other devices that are available in real-world stores or even other homemade connected devices, like the one we will create in this article. Muzzley is known for being one of the first generation platforms with the ability to predict a user's actions by learning from the user's interaction with their own devices. Human behavior is mostly unpredictable, but for convenience, people end up creating routines in their daily lives. The interaction with home devices is an example where human behavior can be observed and learned by an automated system. Muzzley tries to take advantage of these behaviors by identifying the user's recurrent routines and making suggestions that could accelerate and simplify the interaction with the mobile app and devices. When the user starts using the Muzzley app, the interaction is observed by a profiler agent that tries to acquire a behavioral network of the linked cause-effect events.
When the frequency of these network associations becomes important enough, the profiler agent emits a suggestion for the user to act upon. For instance, if every time a user arrives home, he switches on the house lights, checks the thermostat, and adjusts the air conditioner accordingly, the profiler agent will emit a set of suggestions based on this. The cause of the suggestion is identified and shortcuts are offered for the effect-associated action. For instance, the user could receive in the Muzzley app the following suggestions: "You are arriving at a known location. Every time you arrive here, you switch on the «Entrance bulb». Would you like to do it now?"; or "You are arriving at a known location. The thermostat «Living room» says that the temperature is at 15 degrees Celsius. Would you like to set your «Living room» air conditioner to 21 degrees Celsius?" When it comes to security and privacy, Muzzley takes them seriously, and all the collected data is used exclusively to analyze behaviors to help make your life easier. This is the system where we will be integrating our door unlocker.

Creating a Muzzley app

The first step is to own a Muzzley developer account. If you don't have one yet, you can obtain one by visiting https://muzzley.com/developers, clicking on the Sign up button, and submitting the displayed form. To create an app, click on the top menu option Apps and then Create app. Name your app Galileo Lock and, if you want to, add a description to your project. As soon as you click on Submit, you'll see two buttons displayed, allowing you to select the integration type: Muzzley allows you to integrate through the product manufacturer cloud or directly with a device. In this example, we will be integrating directly with the device. To do so, click on Device to Cloud integration. Fill in the provider name as you wish and pick two image URLs to be used as the profile (for example, http://hub.packtpub.com/wp-content/uploads/2015/08/Commercial1.jpg) and channel (for example, http://hub.packtpub.com/wp-content/uploads/2015/08/lock.png) images. We can select one of two available ways to add our device: it can be done using UPnP discovery or by inserting a custom serial number. Pick the device discovery option Serial number and ignore the fields Interface UUID and Email Access List; we will come back for them later. Save your changes by pressing the Save changes button.

Lighting up the entrance door

Now that we can unlock our door from anywhere using a mobile phone with an Internet connection, a nice thing to have is the entrance lights turning on when you open the building door using your Muzzley app. To do this, you can use the Muzzley workers to define rules that perform an action when the door is unlocked using the mobile app. For this, you'll need to own one of the Muzzley-enabled smart bulbs, such as Philips Hue, WeMo LED Lighting, Milight, Easybulb, or LIFX. You can find all the enabled devices in the app profiles selection list: If you don't have those specific lighting devices but have another type of connected device, search the available list to see whether it is supported. If it is, you can use that instead. Add your bulb channel to your account. You should now find it listed in your channels under the category Lighting. If you click on it, you'll be able to control the lights. To activate the trigger option in the lock profile we created previously, go to the Muzzley website and head back to the Profile Spec app, located inside App Details.
Expand the property lock status by clicking on the arrow sign in the property #1 - Lock Status section and then expand the controlInterfaces section. Create a new control interface by clicking on the +controlInterface button. In the new controlInterface #1 section, we'll need to define the possible choices of label-values for this property when setting a rule. Feel free to insert an id, and in the control interface option, select the text-picker option. In the config field, we'll need to specify each of the available options, setting the display label and the real value that will be published. Insert the following JSON object: {"options":[{"value":"true","label":"Lock"}, {"value":"false","label":"Unlock"}]}. Now we need to create a trigger. In the profile spec, expand the trigger section. Create a new trigger by clicking on the +trigger button. Inside the newly created section, select the equals condition. Create an input by clicking on +input, insert an ID value, and insert the ID of the control interface you have just created in the controlInterfaceId text field. Finally, add the path [{"source":"selection.value","target":"data.value"}] to map the data. Open your mobile app and click on the workers icon. Clicking on Create Worker will display the worker creation menu, where you'll be able to select a channel component property as a trigger for some other channel component property: Select the lock and select the Lock Status is equal to Unlock trigger. Save it and select the action button. In here, select the smart bulb you own and select the Status On option: After saving this rule, give it a try and use your mobile phone to unlock the door. The smart bulb should then turn on. With this, you can configure many things in your home even before you arrive there. In this specific scenario, we used our door locker as a trigger to accomplish an action on a lightbulb. If you want, you can do the opposite and open the door when a lightbulb lights up in a specific color, for instance. To do it, similar to how you configured your device trigger, you just have to set up the action options in your device profile page.

Summary

Everyday objects that surround us are being transformed into information ecosystems, and the way we interact with them is slowly changing. Although IoT is growing fast, it is still at an early stage, and many issues must be solved in order to make it successfully scalable. By 2020, it is estimated that there will be more than 25 billion devices connected to the Internet. This fast growth, without security regulations and deep security studies, is leading to major concerns regarding the two biggest IoT challenges—security and privacy. Devices in our home that are remotely controllable, or personal data getting into the wrong hands, could be the recipe for a disaster. In this article you have learned the basic steps of wiring the circuit of your Galileo board, creating a Muzzley app, and lighting up the entrance door of your building through your Muzzley app, using the Intel Galileo board as a bridge to communicate with the Muzzley cloud.

Resources for Article: Further resources on this subject: Getting Started with Intel Galileo [article] Getting the current weather forecast [article] Controlling DC motors using a shield [article]
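As a closing note on the hardware side, it may help to see what the relay-toggling code on the Galileo could look like. The sketch below is only an illustration, not the implementation used in the book: it assumes you are running Python on the Galileo's Linux image with Intel's mraa GPIO library installed, and the unlock_door helper and two-second pulse length are made-up choices for the example.

import time
import mraa  # Intel's GPIO library, available on the Galileo Linux image

relay = mraa.Gpio(13)    # pin 13 is wired to the relay's connector number 3
relay.dir(mraa.DIR_OUT)

def unlock_door(pulse_seconds=2):
    # Energize the relay to close the door opener circuit, then release it,
    # just as if the inter-communicator's unlock button had been pressed
    relay.write(1)
    time.sleep(pulse_seconds)
    relay.write(0)

unlock_door()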

Share and Share Alike

Packt
10 Aug 2015
13 min read
In this article by Kevin Harvey, author of the book Test-Driven Development with Django, we'll expose the data in our application via a REST API. As we do, we'll learn:

The importance of documentation in the API development process
How to write functional tests for API endpoints
API patterns and best practices

(For more resources related to this topic, see here.)

It's an API world, we're just coding in it

It's very common nowadays to include a public REST API in your web project. Exposing your services or data to the world is generally done for one of two reasons:

You've got interesting data, and other developers might want to integrate that information into a project they're working on
You're building a secondary system that you expect your users to interact with, and that system needs to interact with your data (that is, a mobile or desktop app, or an AJAX-driven front end)

We've got both reasons in our application. We're housing novel, interesting data in our database that someone might want to access programmatically. Also, it would make sense to build a desktop application that could interact with a user's own digital music collection so they could actually hear the solos we're storing in our system.

Deceptive simplicity

The good news is that there are some great options for third-party plugins for Django that allow you to build a REST API into an existing application. The bad news is that the simplicity of adding one of these packages can let you go off half-cocked, throwing an API on top of your project without a real plan for it. If you're lucky, you'll just wind up with a bird's nest of an API: inconsistent URLs, wildly varying payloads, and difficult authentication. In the worst-case scenario, your bolt-on API exposes data you didn't intend to make public, and you wind up with a self-inflicted security issue. Never forget that an API is sort of invisible. Unlike traditional web pages, where bugs are very public and easy to describe, API bugs are only visible to other developers. Take special care to make sure your API behaves exactly as intended by writing thorough documentation and tests to make sure you've implemented it correctly.

Writing documentation first

"Documentation is king." - Kenneth Reitz

If you've spent any time at all working with Python or Django, you know what good documentation looks like. The Django folks in particular seem to understand this well: the key to getting developers to use your code is great documentation. In documenting an API, be explicit. Most of your API methods' docs should take the form of "if you send this, you will get back this", with real-world examples of input and output. A great side effect of prewriting documentation is that it makes the intention of your API crystal clear. You're allowing yourself to conjure up the API from thin air without getting bogged down in any of the details, so you can get a bird's-eye view of what you're trying to accomplish. Your documentation will keep you oriented throughout the development process.

Documentation-Driven testing

Once you've got your documentation done, testing is simply a matter of writing test cases that match up with what you've promised. The actions of the test methods exercise HTTP methods, and your assertions check the responses. Test-Driven Development really shines when it comes to API development. There are great tools for sending JSON over the wire, but properly formatting JSON can be a pain, and reading it can be worse.
Enshrining test JSON in test methods and asserting they match the real responses will save you a ton of headache.

More developers, more problems

Good documentation and test coverage are exponentially more important when two groups are developing in tandem—one on the client application and one on the API. Changes to an API are hard for teams like this to deal with, and should come with a lot of warning (and apologies). If you have to make a change to an endpoint, it should break a lot of tests, and you should methodically go and fix them all. What's more, no one feels the pain of regression bugs like the developer of an API-consuming client. You really, really, really need to know that all the endpoints you've put out there are still going to work when you add features or refactor.

Building an API with Django REST framework

Now that you're properly terrified of developing an API, let's get started. What sort of capabilities should we add? Here are a couple of possibilities:

Exposing the Album, Track, and Solo information we have
Creating new Solos or updating existing ones

Initial documentation

In the Python world, it's very common for documentation to live in docstrings, as it keeps the description of how to use an object close to the implementation. We'll eventually do the same with our docs, but it's kind of hard to write a docstring for a method that doesn't exist yet. Let's open up a new Markdown file API.md, right in the root of the project, just to get us started. If you've never used Markdown before, you can read an introduction to GitHub's version of Markdown at https://help.github.com/articles/markdown-basics/. Here's a sample of what should go in API.md. Have a look at https://github.com/kevinharvey/jmad/blob/master/API.md for the full, rendered version.

...
# Get a Track with Solos
* URL: /api/tracks/<pk>/
* HTTP Method: GET
## Example Response
{
  "name": "All Blues",
  "slug": "all-blues",
  "album": {
    "name": "Kind of Blue",
    "url": "http://jmad.us/api/albums/2/"
  },
  "solos": [
    {
      "artist": "Cannonball Adderley",
      "instrument": "saxophone",
      "start_time": "4:05",
      "end_time": "6:04",
      "slug": "cannonball-adderley",
      "url": "http://jmad.us/api/solos/281/"
    },
    ...
  ]
}
# Add a Solo to a Track
* URL: /api/solos/
* HTTP Method: POST
## Example Request
{
  "track": "/api/tracks/83/",
  "artist": "Don Cherry",
  "instrument": "cornet",
  "start_time": "2:13",
  "end_time": "3:54"
}
## Example Response
{
  "url": "http://jmad.us/api/solos/64/",
  "artist": "Don Cherry",
  "slug": "don-cherry",
  "instrument": "cornet",
  "start_time": "2:13",
  "end_time": "3:54",
  "track": "http://jmad.us/api/tracks/83/"
}

There's not a lot of prose, and there needn't be. All we're trying to do is lay out the ins and outs of our API. It's important at this point to step back and have a look at the endpoints in their totality. Is there enough of a pattern that you can sort of guess what the next one is going to look like? Does it look like a fairly straightforward API to interact with? Does anything about it feel clunky? Would you want to work with this API by yourself? Take time to think through any weirdness now, before anything gets out in the wild.

$ git commit -am 'Initial API Documentation'
$ git tag -a ch7-1-init-api-docs

Introducing Django REST framework

Now that we've got some idea what we're building, let's actually get it going. We'll be using Django REST Framework (http://www.django-rest-framework.org/).
Start by installing it in your environment:

$ pip install djangorestframework

Add rest_framework to your INSTALLED_APPS in jmad/settings.py:

INSTALLED_APPS = (
    ...
    'rest_framework',
)

Now we're ready to start testing.

Writing tests for API endpoints

While there's no such thing as browser-based testing for an external API, it is important to write tests that cover its end-to-end processing. We need to be able to send in requests like the ones we've documented and confirm that we receive the responses our documentation promises. Django REST Framework (DRF from here on out) provides tools to help write tests for the application functionality it provides. We'll use rest_framework.test.APITestCase to write functional tests. Let's kick off with the list of albums. Convert albums/tests.py to a package, and add a test_api.py file. Then add the following:

from rest_framework.test import APITestCase
from albums.models import Album

class AlbumAPITestCase(APITestCase):
    def setUp(self):
        self.kind_of_blue = Album.objects.create(name='Kind of Blue')
        self.a_love_supreme = Album.objects.create(name='A Love Supreme')

    def test_list_albums(self):
        """Test that we can get a list of albums"""
        response = self.client.get('/api/albums/')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.data[0]['name'], 'A Love Supreme')
        self.assertEqual(response.data[1]['url'], 'http://testserver/api/albums/1/')

Since much of this is very similar to other tests that we've seen before, let's talk about the important differences:

We import and subclass APITestCase, which makes self.client an instance of rest_framework.test.APIClient. Both of these subclass their respective django.test counterparts and add a few niceties that help in testing APIs (none of which are showcased yet).
We test response.data, which we expect to be a list of Albums. response.data will be a Python dict or list that corresponds to the JSON payload of the response.
During the course of the test, APIClient (a subclass of Client) will use http://testserver as the protocol and hostname for the server, and our API should return a host-specific URI.

Run this test, and we get the following:

$ python manage.py test albums.tests.test_api
Creating test database for alias 'default'...
F
======================================================================
FAIL: test_list_albums (albums.tests.test_api.AlbumAPITestCase)
Test that we can get a list of albums
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/kevin/dev/jmad-project/jmad/albums/tests/test_api.py", line 17, in test_list_albums
    self.assertEqual(response.status_code, 200)
AssertionError: 404 != 200
----------------------------------------------------------------------
Ran 1 test in 0.019s

FAILED (failures=1)

We're failing because we're getting a 404 Not Found instead of a 200 OK status code. Proper HTTP communication is important in any web application, but it really comes into play when you're using AJAX. Most frontend libraries will properly classify responses as successful or erroneous based on the status code: making sure the codes are on point will save your frontend developer friends a lot of headache. We're getting a 404 because we don't have a URL defined yet. Before we set up the route, let's add a quick unit test for routing.
Update the test case with one new import and method:

from django.core.urlresolvers import resolve
...
    def test_album_list_route(self):
        """Test that we've got routing set up for Albums"""
        route = resolve('/api/albums/')
        self.assertEqual(route.func.__name__, 'AlbumViewSet')

Here, we're just confirming that the URL routes to the correct view. Run it:

$ python manage.py test albums.tests.test_api.AlbumAPITestCase.test_album_list_route
...
django.core.urlresolvers.Resolver404: {'path': 'api/albums/', 'tried': [[<RegexURLResolver <RegexURLPattern list> (admin:admin) ^admin/>], [<RegexURLPattern solo_detail_view ^recordings/(?P<album>[\w-]+)/(?P<track>[\w-]+)/(?P<artist>[\w-]+)/$>], [<RegexURLPattern None ^$>]]}
----------------------------------------------------------------------
Ran 1 test in 0.003s

FAILED (errors=1)

We get a Resolver404 error, which is expected, since Django shouldn't return anything at that path. Now we're ready to set up our URLs.

API routing with DRF's SimpleRouter

Take a look at the documentation for routers at http://www.django-rest-framework.org/api-guide/routers/. They're a very clean way of setting up URLs for DRF-powered views. Update jmad/urls.py like so:

...
from rest_framework import routers
from albums.views import AlbumViewSet

router = routers.SimpleRouter()
router.register(r'albums', AlbumViewSet)

urlpatterns = [
    # Admin
    url(r'^admin/', include(admin.site.urls)),
    # API
    url(r'^api/', include(router.urls)),
    # Apps
    url(r'^recordings/(?P<album>[\w-]+)/(?P<track>[\w-]+)/(?P<artist>[\w-]+)/$',
        'solos.views.solo_detail',
        name='solo_detail_view'),
    url(r'^$', 'solos.views.index'),
]

Here's what we changed:

We created an instance of SimpleRouter and used the register method to set up a route. The register method has two required arguments: a prefix to build the route methods from, and something called a viewset. Here we've supplied a non-existent class AlbumViewSet, which we'll come back to later.
We've added a few comments to break up our urls.py, which was starting to look a little like a rat's nest. The actual API URLs are registered under the '^api/' path using Django's include function.

Run the URL test again, and we'll get ImportError for AlbumViewSet. Let's add a stub to albums/views.py:

class AlbumViewSet():
    pass

Run the test now, and we'll start to see some specific DRF error messages to help us build out our view:

$ python manage.py test albums.tests.test_api.AlbumAPITestCase.test_album_list_route
Creating test database for alias 'default'...
F
...
  File "/Users/kevin/.virtualenvs/jmad/lib/python3.4/site-packages/rest_framework/routers.py", line 60, in register
    base_name = self.get_default_base_name(viewset)
  File "/Users/kevin/.virtualenvs/jmad/lib/python3.4/site-packages/rest_framework/routers.py", line 135, in get_default_base_name
    assert queryset is not None, '`base_name` argument not specified, and could '
AssertionError: `base_name` argument not specified, and could not automatically determine the name from the viewset, as it does not have a '.queryset' attribute.

After a fairly lengthy output, the test runner tells us that it was unable to get base_name for the URL, as we did not specify base_name in the register method, and it couldn't guess the name because the viewset (AlbumViewSet) did not have a queryset attribute. In the router documentation, we came across the optional base_name argument for register (as well as the exact wording of this error). You can use that argument to control the name your URL gets. However, let's keep letting DRF do its default behavior.
We haven't read the documentation for viewsets yet, but we know that a regular Django class-based view expects a queryset parameter. Let's stick one on AlbumViewSet and see what happens:

from .models import Album

class AlbumViewSet():
    queryset = Album.objects.all()

Run the test again, and we get:

django.core.urlresolvers.Resolver404: {'path': 'api/albums/', 'tried': [[<RegexURLResolver <RegexURLPattern list> (admin:admin) ^admin/>], [<RegexURLPattern solo_detail_view ^recordings/(?P<album>[\w-]+)/(?P<track>[\w-]+)/(?P<artist>[\w-]+)/$>], [<RegexURLPattern None ^$>]]}
----------------------------------------------------------------------
Ran 1 test in 0.011s

FAILED (errors=1)

Huh? Another 404 is a step backwards. What did we do wrong? Maybe it's time to figure out what a viewset really is.

Summary

In this article, we covered basic API design and testing patterns, including the importance of documentation when developing an API. In doing so, we took a deep dive into Django REST Framework and the utilities and testing tools available in it.

Resources for Article: Further resources on this subject: Test-driven API Development with Django REST Framework [Article] Adding a developer with Django forms [Article] Code Style in Django [Article]
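For readers who want a preview of where the viewset investigation leads, here is a minimal sketch of one way the view and serializer could end up looking. DRF's viewsets.ReadOnlyModelViewSet and serializers.HyperlinkedModelSerializer are real classes, but AlbumSerializer, its field list, and the overall wiring below are assumptions for illustration rather than the book's own solution:

from rest_framework import serializers, viewsets
from albums.models import Album

# Hypothetical serializer; 'name' and 'url' are the two keys the test asserts on
class AlbumSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Album
        fields = ('name', 'url')

# Subclassing a real DRF viewset gives SimpleRouter the queryset it was asking
# for, plus list and detail endpoints for free
class AlbumViewSet(viewsets.ReadOnlyModelViewSet):
    queryset = Album.objects.all()
    serializer_class = AlbumSerializer

With a class like this in albums/views.py, the router no longer needs an explicit base_name, because it can derive one from the queryset attribute.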

NLTK for hackers

Packt
07 Aug 2015
9 min read
In this article written by Nitin Hardeniya, author of the book NLTK Essentials, we start from the mantra I follow and truly believe in: "Life is short, we need Python." As fresh graduates, we learned and worked mostly with C/C++/Java. While these languages have amazing features, Python has a charm of its own. The day I started using Python, I loved it. I really did. The big coincidence here is that I finally ended up working with Python during my initial projects on the job. I started to love the kind of data structures, libraries, and ecosystem Python has for beginners as well as for expert programmers. (For more resources related to this topic, see here.)

Python as a language has advanced very fast. If you are a machine learning/natural language processing enthusiast, then Python is 'the' go-to language these days. Python has some amazing ways of dealing with strings. It has a very easy and elegant coding style, and most importantly, a long list of open libraries. I can go on and on about Python and my love for it. But here I want to talk very specifically about NLTK (Natural Language Toolkit), one of the most popular Python libraries for natural language processing. NLTK is simply awesome, and in my opinion, it's the best way to learn and implement some of the most complex NLP concepts. NLTK has a variety of generic text preprocessing tools, such as tokenization, stop word removal, and stemming, and at the same time, has some very NLP-specific tools, such as part of speech tagging, chunking, named entity recognition, and dependency parsing. NLTK provides some of the easiest solutions to all the above stages of NLP, and that's why it is the most preferred library for any text processing/text mining application. NLTK not only provides some pretrained models that can be applied directly to your dataset, it also provides ways to customize and build your own taggers, tokenizers, and so on. NLTK is a big library that has many tools available for an NLP developer. I have provided a cheat-sheet of some of the most common steps and their solutions using NLTK. In our book, NLTK Essentials, I have tried to give you enough information to deal with all these processing steps using NLTK. To show you the power of NLTK, let's try to develop a very easy application: finding the topics in unstructured text and displaying them in a word cloud.

Word cloud with NLTK

Instead of going further into the theoretical aspects of natural language processing, let's start with a quick dive into NLTK. I am going to start with some basic example use cases of NLTK. There is a good chance that you have already done something similar. First, I will give a typical Python programmer approach and then move on to NLTK for a much more efficient, robust, and clean solution. We will start analyzing with some example text content:

>>>import urllib2
>>># urllib2 is used to download the html content of the web link
>>>response = urllib2.urlopen('http://python.org/')
>>># You can read the entire content of a file using read() method
>>>html = response.read()
>>>print len(html)
47020

For the current example, I have taken the content from Python's home page: https://www.python.org/. We don't have any clue about the kind of topics that are discussed in this URL, so let's say that we want to start an exploratory data analysis (EDA). Typically in a text domain, EDA can have many meanings, but we will go with a simple case of what kinds of terms dominate the document. What are the topics? How frequent are they?
The process will involve some level of preprocessing. We will try to do this in a pure Python way first, and then we will do it using NLTK. Let's start with cleaning the html tags. One way to do this is to select just the tokens, including numbers and characters. Anybody who has worked with regular expressions should be able to convert the html string into a list of tokens:

>>># regular expression based split of the string
>>>tokens = [tok for tok in html.split()]
>>>print "Total no of tokens :"+ str(len(tokens))
>>># first 100 tokens
>>>print tokens[0:100]
Total no of tokens :2860
['<!doctype', 'html>', '<!--[if', 'lt', 'IE', '7]>', '<html', 'class="no-js', 'ie6', 'lt-ie7', 'lt-ie8', 'lt-ie9">', '<![endif]-->', '<!--[if', 'IE', '7]>', '<html', 'class="no-js', 'ie7', 'lt-ie8', 'lt-ie9">', '<![endif]-->', ''type="text/css"', 'media="not', 'print,', 'braille,' ...]

As you can see, there is an excess of html tags and other unwanted characters when we use the preceding method. A cleaner version of the same task will look something like this:

>>>import re
>>># using the split function https://docs.python.org/2/library/re.html
>>>tokens = re.split('\W+', html)
>>>print len(tokens)
>>>print tokens[0:100]
5787
['', 'doctype', 'html', 'if', 'lt', 'IE', '7', 'html', 'class', 'no', 'js', 'ie6', 'lt', 'ie7', 'lt', 'ie8', 'lt', 'ie9', 'endif', 'if', 'IE', '7', 'html', 'class', 'no', 'js', 'ie7', 'lt', 'ie8', 'lt', 'ie9', 'endif', 'if', 'IE', '8', 'msapplication', 'tooltip', 'content', 'The', 'official', 'home', 'of', 'the', 'Python', 'Programming', 'Language', 'meta', 'name', 'apple' ...]

This looks much cleaner now. But still, you can do more; I leave it to you to try to remove as much noise as you can. You can still look for word length as a criterion and remove words that have a length of one—it will remove elements such as 7, 8, and so on, which are just noise in this case. Now let's go to NLTK for the same task. There is a function called clean_html() that can do all the work we were looking for:

>>>import nltk
>>># http://www.nltk.org/api/nltk.html#nltk.util.clean_html
>>>clean = nltk.clean_html(html)
>>># clean will have the entire string with all the html noise removed
>>>tokens = [tok for tok in clean.split()]
>>>print tokens[:100]
['Welcome', 'to', 'Python.org', 'Skip', 'to', 'content', '&#9660;', 'Close', 'Python', 'PSF', 'Docs', 'PyPI', 'Jobs', 'Community', '&#9650;', 'The', 'Python', 'Network', '&equiv;', 'Menu', 'Arts', 'Business' ...]

Cool, right? This definitely is much cleaner and easier to do. No analysis in any EDA can start without a look at the frequency distribution, so let's try to get it. First, let's do it the Python way, then I will tell you the NLTK recipe.

>>>import operator
>>>freq_dis={}
>>>for tok in tokens:
>>>    if tok in freq_dis:
>>>        freq_dis[tok]+=1
>>>    else:
>>>        freq_dis[tok]=1
>>># We want to sort this dictionary on values ( freq in this case )
>>>sorted_freq_dist= sorted(freq_dis.items(), key=operator.itemgetter(1), reverse=True)
>>>print sorted_freq_dist[:25]
[('Python', 55), ('>>>', 23), ('and', 21), ('to', 18), (',', 18), ('the', 14), ('of', 13), ('for', 12), ('a', 11), ('Events', 11), ('News', 11), ('is', 10), ('2014-', 10), ('More', 9), ('#', 9), ('3', 9), ('=', 8), ('in', 8), ('with', 8), ('Community', 7), ('The', 7), ('Docs', 6), ('Software', 6), (':', 6), ('3:', 5), ('that', 5), ('sum', 5)]

Naturally, as this is Python's home page, Python and the >>> interpreter prompt are the most common terms, which also gives a sense of the website. A better and more efficient approach is to use NLTK's FreqDist() function.
Summary

To summarize, this article was intended to give you a brief introduction to natural language processing. The book does assume some background in NLP and programming in Python, but we have tried to give a very quick head start to Python and NLP.

Resources for Article: Further resources on this subject: Hadoop Monitoring and its aspects [article] Big Data Analysis (R and Hadoop) [article] SciPy for Signal Processing [article]
Bootstrap in a Box

Packt
07 Aug 2015
6 min read
In this article written by Snig Bhaumik, author of the book Bootstrap Essentials, we explain the concept of Bootstrap, responsive design patterns, navigation patterns, and the different components that are included in Bootstrap. (For more resources related to this topic, see here.)

Responsive design patterns

Here are a few established and well-adopted patterns in responsive web design:

Fluid design: This is the most popular and easiest option for responsive design. In this pattern, a multi-column layout on a larger screen renders as a single column on a smaller screen, in exactly the same sequence.
Column drop: In this pattern, the page also gets rendered in a single column; however, the order of the blocks is altered. That means a content block that is visible first on a larger screen might be rendered second or third on a smaller screen.
Layout shifter: This is a complex but powerful pattern in which the whole layout of the screen contents is altered for smaller screens. This means that you need to develop different page layouts for large, medium, and small screens.

Navigation patterns

You should take care of the following things while designing a responsive web page. These are essentially the major navigational elements that you would concentrate on while developing a mobile-friendly and responsive website:

Menu bar
Navigation/app bar
Footer
Main container shell
Images
Tabs
HTML forms and elements
Alerts and popups
Embedded audios and videos, and so on

You can see that there are lots of elements and aspects you need to take care of to create a fully responsive design. While all of these are achieved by using various features and technologies in CSS3, it is of course not an easy problem to solve without a framework that could help you do so. Precisely, you need a frontend framework that takes care of all the pains of technical responsive design implementation and frees you up to focus on your brand and application design. Now, we introduce Bootstrap, which will help you design and develop a responsive web design in a much more optimized and efficient way.

Introducing Bootstrap

Simply put, Bootstrap is a frontend framework for faster and easier web development in the new standard of the mobile-first philosophy. It uses HTML, CSS, and JavaScript. In August 2010, Twitter released Bootstrap as open source. There are quite a few similar frontend frameworks available in the industry, but Bootstrap is arguably the most popular framework of the lot. This is evident from the fact that Bootstrap has been the most starred project on GitHub since 2012. By now, you should be in a position to fathom why and where we need to use Bootstrap for web development; however, just to recap, here are the points in short:

The mobile-first approach
A responsive design
Automatic browser support and handling
Easy to adapt and get going

What Bootstrap includes

The following diagram demonstrates the overall structure of Bootstrap:

CSS

Bootstrap comes with fundamental HTML elements styled, global CSS classes, classes for advanced grid patterns, and lots of enhanced and extended CSS classes.
For example, this is how the HTML global element is configured in Bootstrap CSS:

html {
  font-family: sans-serif;
  -webkit-text-size-adjust: 100%;
  -ms-text-size-adjust: 100%;
}

This is how a standard HR HTML element is styled:

hr {
  height: 0;
  -webkit-box-sizing: content-box;
  -moz-box-sizing: content-box;
  box-sizing: content-box;
}

Here is an example of the new classes introduced in Bootstrap:

.glyphicon {
  position: relative;
  top: 1px;
  display: inline-block;
  font-family: 'Glyphicons Halflings';
  font-style: normal;
  font-weight: normal;
  line-height: 1;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

Components

Bootstrap offers a rich set of reusable and built-in components, such as breadcrumbs, progress bars, alerts, and navigation bars. The components are technically custom CSS classes specially crafted for a specific purpose. For example, if you want to create a breadcrumb in your page, you simply add a DIV tag in your HTML using Bootstrap's breadcrumb class:

<ol class="breadcrumb">
  <li><a href="#">Home</a></li>
  <li><a href="#">The Store</a></li>
  <li class="active">Offer Zone</li>
</ol>

In the background (stylesheet), this Bootstrap class is used to create your breadcrumb:

.breadcrumb {
  padding: 8px 15px;
  margin-bottom: 20px;
  list-style: none;
  background-color: #f5f5f5;
  border-radius: 4px;
}
.breadcrumb > li {
  display: inline-block;
}
.breadcrumb > li + li:before {
  padding: 0 5px;
  color: #ccc;
  content: "/\00a0";
}
.breadcrumb > .active {
  color: #777;
}

Please note that these code blocks are simply snippets.

JavaScript

The Bootstrap framework comes with a number of ready-to-use JavaScript plugins. Thus, when you need to create popup windows, tabs, carousels, tooltips, and so on, you just use one of the prepackaged JavaScript plugins. For example, if you need to create a tab control in your page, you use this:

<div role="tabpanel">
  <ul class="nav nav-tabs" role="tablist">
    <li role="presentation" class="active"><a href="#recent" aria-controls="recent" role="tab" data-toggle="tab">Recent Orders</a></li>
    <li role="presentation"><a href="#all" aria-controls="all" role="tab" data-toggle="tab">All Orders</a></li>
    <li role="presentation"><a href="#redeem" aria-controls="redeem" role="tab" data-toggle="tab">Redemptions</a></li>
  </ul>
  <div class="tab-content">
    <div role="tabpanel" class="tab-pane active" id="recent">Recent Orders</div>
    <div role="tabpanel" class="tab-pane" id="all">All Orders</div>
    <div role="tabpanel" class="tab-pane" id="redeem">Redemption History</div>
  </div>
</div>

To activate (open) a tab, you write this JavaScript code:

$('#profileTab li:eq(1) a').tab('show');

As you can guess from the syntax of this JavaScript line, the Bootstrap JS plugins are built on top of jQuery. Thus, the JS code you write for Bootstrap is also all based on jQuery.

Customization

Even though Bootstrap offers most (if not all) standard features and functionalities for responsive web design, there might be several cases where you want to customize and extend the framework. One of the most basic requirements for customization would be to deploy your own branding and color combinations (themes) instead of the Bootstrap defaults. There can be several such use cases where you would want to change the default behavior of the framework. Bootstrap offers very easy and stable ways to customize the platform.
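As a minimal illustration of this idea, one common approach is to keep Bootstrap's files untouched and override its defaults in your own stylesheet, loaded after bootstrap.css. The class names below are standard Bootstrap 3 classes, while the color values are purely illustrative:

/* custom.css -- loaded after bootstrap.css so these rules take precedence */
.btn-primary {
  background-color: #8e44ad; /* your brand color instead of the default blue */
  border-color: #71368a;
}
.navbar-default {
  background-color: #f4ecf7; /* a light brand tint for the navigation bar */
}

This keeps your changes isolated, so upgrading Bootstrap later does not overwrite your theme.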
When you use the Bootstrap CSS, all the global and fundamental HTML elements automatically become responsive and behave properly on whichever client device the web page is browsed on. The built-in components are also designed to be responsive, and as the developer, you shouldn't have to worry about how these advanced components behave on different devices and client agents.

Summary

In this article, we discussed the basics of Bootstrap along with a brief explanation of the design patterns and the navigation patterns.

Resources for Article: Further resources on this subject: Deep Customization of Bootstrap [article] The Bootstrap grid system [article] Creating a Responsive Magento Theme with Bootstrap 3 [article]
Storage Ergonomics

Packt
07 Aug 2015
19 min read
In this article by Saurabh Grover, author of the book Designing Hyper-V Solutions, we will be discussing the last of the basics needed to get you equipped to create and manage a simple Hyper-V structure. No server environment, physical or virtual, is complete without a clear consideration of, and consensus over, the underlying storage. In this article, you will learn about the details of virtual storage, how to differentiate one type from another, and how to convert one to the other and vice versa. We will also see how Windows Server 2012 R2 removes dependencies on raw device mappings by way of pass-through or iSCSI LUNs, which were previously required for guest clustering. VHDX can now be shared and delivers better results than pass-through disks. There are more merits to VHDX than the former, as it allows you to extend the size even while the virtual machine is running.

Previously, Windows Server 2012 added a very interesting facet for storage virtualization in Hyper-V when it introduced the virtual SAN, which adds a virtual host bus adapter (HBA) capability to a virtual machine. This allows a VM to directly view the fibre channel SAN, which in turn allows FC LUN accessibility to VMs and provides you with one more alternative for shared storage for guest clustering.

Windows Server 2012 also introduced the ability to utilize the SMI-S capability, which was initially tested on System Center VMM 2012. Windows 2012 R2 carries the torch forward with the addition of new capabilities. We will discuss this feature briefly in this article.

In this article, you will cover the following:

Two types of virtual disks, namely VHD and VHDX
Merits of using VHDX from Windows 2012 R2 onwards
Virtual SAN storage
Implementing guest clustering using shared VHDX
Getting an insight into SMI-S

(For more resources related to this topic, see here.)

Virtual storage

A virtual machine is a replica of a physical machine in all rights and with respect to its building components; regardless of the fact that it is emulated, it resembles and delivers the same performance as a physical machine. Every computer needs storage for loading the OS and applications, and this requirement applies to virtual machines as well. If VMs are serving as independent servers for roles such as domain controller or file server, where the server needs to maintain additional storage apart from the OS, the storage can be extended for domain user access without any performance degradation.

Virtual machines can benefit from multiple forms of storage, namely VHD/VHDX, which are file-based storage; iSCSI LUNs; pass-through LUNs, which are raw device mappings; and, of late, virtual-fibre-channel-assigned LUNs. There have been enhancements to each of these, and all of these options have a straightforward implementation procedure. However, before you make a selection, you should identify the use case according to your design strategy and planned expenditure. In the following section, we will look at the storage choices more closely.

VHD and VHDX

VHD is the old flag bearer for Microsoft virtualization, ever since the days of Virtual PC and Virtual Server. The same was enhanced and employed in early Hyper-V releases. However, as a file-based storage that gets mounted as normal storage for a virtual machine, VHD had its limitations. VHDX, a new feature addition to Windows Server 2012, was built upon the limitations of its predecessor and provides greater storage capacity, support for large sector disks, and better protection against corruption.
In the current release of Windows Server 2012 R2, VHDX has been bundled with more ammo. VHDX packed a volley of feature enhancements when it was initially launched, and with Windows Server 2012 R2, Microsoft only made it better. If we compare the older, friendlier VHD with VHDX, we can draw the following inferences:

Size factor: VHD had an upper size limit of 2 TB, while VHDX gives you a humongous maximum capacity of 64 TB.

Large disk support: With the storage industry progressing towards 4 KB sector disks from the 512-byte sector, there are two offerings from the disk alignment perspective for applications that may still depend on the older sector format: native 4 KB disks and 512e (512-byte emulation) disks. The operating system, depending on whether it supports native 4 KB disks or not, will either write 4 KB chunks of data or inject 512 bytes of data into a 4 KB sector. The process of injecting 512 bytes into a 4 KB sector is called RMW, or read-modify-write. VHDs are generally supported on 512e disks. Windows Server 2012 and R2 both support native 4 KB disks; however, the VHD driver has a limitation: it cannot open VHD files on physical 4 KB disks. This limitation is worked around by making the VHD 4 KB aligned and RMW ready, but if you are migrating from an older Hyper-V platform, you will need to convert it accordingly. VHDX, on the other hand, is the "superkid". It can be used on all disk forms, namely 512, 512e, and native 4 KB disks, without any RMW dependency.

Data corruption safety: In the event of power outages or failures, the possibility of data corruption is reduced with VHDX. Metadata inside the VHDX is updated via a logging process that ensures that the allocations inside the VHDX are committed successfully.

Offloaded data transfers (ODX): With Windows Server 2012 Hyper-V supporting this feature, data transfer and the moving and sizing of virtual disks can be achieved at the drop of a hat, without host server intervention. The basic prerequisite for utilizing this feature is to host the virtual machines on ODX-capable hardware; thereafter, Windows Server 2012 detects it and enables the feature automatically. Another important clause is that the virtual disks (VHDX) should be attached to the SCSI controller, not the IDE.

TRIM/UNMAP: Termed by Microsoft in its documentation as efficiency in representing data, this feature works in tandem with thin provisioning. It adds the ability to allow the underlying storage to reclaim space and keep it optimally small.

Shared VHDX: This is the most interesting feature in the collection released with Windows Server 2012 R2. It made guest clustering (failover clustering in virtual machines) in Hyper-V a lot simpler. With Windows Server 2012, you could set up a guest cluster using a virtual fibre channel or iSCSI LUN; the downside, however, was that the LUN was exposed to the user of the virtual machine. Shared VHDX proves to be the ideal shared storage. It gives you the benefit of storage abstraction, flexibility, and faster deployment of guest clusters, and it can be stored on an SMB share or a cluster-shared volume (CSV).

Now that we know the merits of using VHDX over VHD, it is important to realize that either of the formats can be converted into the other, and that both can be used under the various types of virtual disks, allowing users to decide on a trade-off between performance and space utilization.
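As a quick sketch of such a conversion (the Edit Disk workflow behind it is covered later in this article), the Convert-VHD cmdlet infers the target format from the destination file extension; the paths here are purely illustrative:

# Convert a legacy VHD to the VHDX format; the .vhdx extension
# of the destination path determines the output format
Convert-VHD -Path 'D:\VHDs\legacy.vhd' -DestinationPath 'D:\VHDs\legacy.vhdx'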
Virtual disk types

Beyond the two formats of virtual hard disks, let's talk about the different types of virtual hard disks and their utility as per the virtualization design. There are three types of virtual hard disks, namely dynamically expanding, fixed-size, and differencing virtual hard disks:

Dynamically expanding: Also called a dynamic virtual hard disk, this is the default type. It gets created when you create a new VM or a new VHD/VHDX. This is Hyper-V's take on thin provisioning. The VHD/VHDX file starts off at a small size and gradually grows up to the maximum size defined for the file, as and when chunks of data get appended or created inside the OSE (short for operating system environment) hosted by the virtual disk. This disk type is quite beneficial, as it prevents storage overhead and utilizes only as much as required rather than committing the entire block. However, due to the nature of the virtual storage, as the file grows in size, it gets written in fragments across the Hyper-V CSV or LUN (physical storage). Hence, it affects the performance of the disk I/O operations of the VM.

Fixed size: As the name indicates, this virtual disk type commits the same block size on the physical storage as its defined size. In other words, if you have specified a fixed size of 1 TB, it will create a 1 TB VHDX file in the storage. The creation of a fixed-size disk takes a considerable amount of time, commits space on the underlying storage, and does not allow SAN thin provisioning to reclaim it, somewhat like whitespace in a database. The advantage of using this type is that it delivers amazing read performance, and heavy workloads from SQL and Exchange can benefit from it.

Differencing: This is the last of the lot, but quite handy as an option when it comes to quick deployment of virtual machines. It is, by far, an unsuitable option unless employed for VMs with a short lifespan, namely pooled VDI (short for virtual desktop infrastructure) or lab testing. The idea behind the design is to have a generic virtual operating system environment (VOSE) in a shut-down state at a shared location. The VHDX of the VOSE is used as a parent or root, and thereafter multiple VMs can be spawned with differencing or child virtual disks that use the generalized OS from the parent and append changes or modifications to the child disk. So the parent stays unaltered and serves as a generic image; it does not grow in size. On the contrary, the child disk keeps on growing as and when data is added to the particular VM. Unless used for short-lived VMs, long-running VMs could enter an outage state or soon become performance-stricken due to the unpredictable growth pattern of a differencing disk. Hence, these should be avoided for server virtual machines without even a second thought.

Virtual disk operations

Now we will apply all of the knowledge gained about virtual hard disks and check out what actions and customizations we can perform on them.

Creating virtual hard disks

This goal can be achieved in different ways:

You can create a new VHD when you are creating a new VM, using the New Virtual Machine Wizard. It picks up VHDX as the default option.
You can also launch the New Virtual Hard Disk Wizard from a virtual machine's settings.
This can be achieved with the New-VHD PowerShell cmdlet as well (a fuller example follows this list).
You may employ the Disk Management snap-in to create a new VHD as well.
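Here is a minimal sketch of the New-VHD cmdlet for the two most common disk types; the paths and sizes are illustrative and should be adjusted to your environment:

# Dynamically expanding VHDX with a 100 GB maximum size
New-VHD -Path 'E:\Hyper-V\Virtual Hard Disks\data01.vhdx' -SizeBytes 100GB -Dynamic

# Fixed-size variant; this commits the full 100 GB on the underlying storage
New-VHD -Path 'E:\Hyper-V\Virtual Hard Disks\data02.vhdx' -SizeBytes 100GB -Fixed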
The steps to create a VHD via the Disk Management snap-in are pretty simple:

In the Disk Management snap-in, select the Action menu and select Create VHD, like this:

Figure 5-1: Disk Management – Create VHD

This opens the Create and Attach Virtual Hard Disk applet. Specify the location to save the VHD at, and fill in Virtual hard disk format and Virtual hard disk type as depicted here in figure 5-2:

Figure 5-2: Disk Management – Create and Attach Virtual Hard Disk

The most obvious way to create a new VHD/VHDX for a VM is by launching the New Virtual Hard Disk Wizard from the Actions pane in the Hyper-V Manager console. Click on New and then select the Hard Disk option. It will take you through the following set of screens:

On the Before You Begin screen, click on Next, as shown in this screenshot:

Figure 5-3: New Virtual Hard Disk Wizard – Create VHD

The next screen is Choose Disk Format, as shown in figure 5-4. Select the relevant virtual hard disk format, namely VHD or VHDX, and click on Next.

Figure 5-4: New Virtual Hard Disk Wizard – Virtual Hard Disk Format

On the Choose Disk Type screen, select the relevant virtual hard disk type and click on Next, as shown in the following screenshot:

Figure 5-5: New Virtual Hard Disk Wizard – Virtual Hard Disk Type

The next screen, as shown in figure 5-6, is Specify Name and Location. Update the Name and Location fields to store the virtual hard disk and click on Next.

Figure 5-6: New Virtual Hard Disk Wizard – File Location

The Configure Disk screen, shown in figure 5-7, is an interesting one. If need be, you can convert or copy the content of a physical storage (local, LUN, or something else) to the new virtual hard disk. Similarly, you can copy the content from an older VHD file to the Windows Server 2012 or R2 VHDX format. Then click on Next.

Figure 5-7: New Virtual Hard Disk Wizard – Configure Disk

On the Summary screen, as shown in the following screenshot, click on Finish to create the virtual hard disk:

Figure 5-8: New Virtual Hard Disk Wizard – Summary

Editing virtual hard disks

There may be one or more reasons for you to feel the need to modify a previously created virtual hard disk to suit a purpose, and there are many available options that you may put to use, given a particular virtual disk type. Before you edit a VHDX or VHD, it's good practice to inspect it. The Inspect Disk option can be invoked from two locations: from the VM settings under the IDE or SCSI controller, or from the Actions pane of the Hyper-V Manager console. Also, don't forget how to do this via PowerShell:

Get-VHD -Path "E:\Hyper-V\Virtual hard disks\1.vhdx"

You may now proceed with editing a virtual disk. Again, the Edit Disk option can be invoked in exactly the same fashion as Inspect Disk. When you edit a VHDX, you are presented with four options, as shown in figure 5-9. It may sound obvious, but not all the options apply to all the disk types:

Compact: This operation is used to reduce or compact the size of a virtual hard disk, though the preset capacity remains the same. A dynamic disk, or differencing disk, grows as data elements are added, but deletion of content does not automatically reclaim the storage capacity. Hence, a manual compact operation becomes imperative to reduce the file size. A PowerShell cmdlet can also do this trick, as follows:

Optimize-VHD

Convert: This is an interesting one, and it almost makes you change your faith. As the name indicates, this operation allows you to convert one virtual disk type to another and vice versa.
You can also create a new virtual disk of the desired format and type at your preferred location. The PowerShell construct used to help you achieve the same goal is as follows:

Convert-VHD

Expand: This operation comes in handy, similar to extending a LUN. You end up increasing the size of a virtual hard disk, which happens visibly fast for a dynamic disk and a bit slower for its fixed-size cousins. After this action, you have to perform a follow-up action inside the virtual machine to increase the volume size from disk management. Now, for the PowerShell code:

Resize-VHD

Merge: This operation is specific to one disk type: differencing virtual disks. It allows two different actions. You can either merge the differencing disk back into the original parent, or create a new merged VHD out of all the contributing VHDs, namely the parent and the child or differencing disks. The latter is the preferred way of doing it since, in all probability, there will be more than one differencing disk attached to a parent. In PowerShell, the alternative cmdlet is this:

Merge-VHD

Figure 5-9: Edit Virtual Hard Disk Wizard – Choose Action

Pass-through disks

As the name indicates, these are physical LUNs or hard drives passed on from the Hyper-V hosts, and they can be assigned to a virtual machine as a standard disk. A once-popular method on older Hyper-V platforms, this allowed the VM to harness the full potential of the raw device, bypassing the Hyper-V host filesystem and also not being restricted by the 2 TB limit of VHDs. A lot has changed over the years, as Hyper-V has matured into a superior virtualization platform and introduced VHDX, which went past the size limitation and, with Windows Server 2012 R2, can be used as shared storage for Hyper-V guest clusters.

There are, however, demerits to this form of virtual storage. When you employ a pass-through disk, the virtual machine configuration file is stored separately; hence, snapshotting becomes unavailable for this setup. You would not be able to utilize the dynamic disk's or differencing disk's abilities here either. Another challenge of using this form of virtual storage is that when using a VSS-based backup, the VSS writer ignores pass-through and iSCSI LUNs. Hence, a complex backup plan has to be implemented, involving backups running within the VM and on the virtualization host separately.

The following steps, along with a few snapshots, show you how to set up a pass-through disk:

Present a LUN to the Hyper-V host.
Confirm the LUN in Disk Management and ensure that it stays in the Offline state and Not Initialized.

Figure 5-10: Hyper-V Host Disk Management

In Hyper-V Manager, right-click on the VM you wish to assign the pass-through disk to and select Settings.

Figure 5-11: VM Settings – Pass-through disk placement

Select SCSI Controller (or IDE in the case of a Gen-1 VM) and then select the Physical hard disk option, as shown in the preceding screenshot. In the drop-down menu, you will see the raw device or LUN you wish to assign. Select the appropriate option and click on OK.

Check Disk Management within the virtual machine to confirm that the disk is visible.

Figure 5-12: VM Disk Management – Pass-through Assignment

Bring it online and initialize it.
Figure 5-13: VM Disk Management – Pass-through Initialization

As always, the preceding path can be chalked out with the help of a PowerShell cmdlet:

Add-VMHardDiskDrive -VMName VM5 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 -DiskNumber 3

Virtual fibre channel

Let's move on to the next big offering in the Windows Server 2012 and R2 Hyper-V Server. There was pretty much a clamor for direct FC connectivity to virtual machines, as pass-through disks were supported only via iSCSI LUNs (with some major drawbacks), not FC. Also, needless to say, FC is faster. Enterprises with high-performance workloads relying on the FC SAN refrained from virtualizing or migrating to the cloud. Windows Server 2012 introduced the virtual fibre channel SAN ability in Hyper-V, which extends HBA (short for host bus adapter) abilities to a virtual machine, granting it a WWN (short for World Wide Name) and allowing access to a fibre channel SAN over a virtual SAN.

The fundamental principle behind the virtual SAN is the same as that of the Hyper-V virtual switch, wherein you create a virtual SAN that hooks up to the SAN fabric over the physical HBA of the Hyper-V host. The virtual machine gets a new piece of synthetic hardware for the last leg, called a virtual host bus adapter or vHBA, which gets its own set of WWNs, namely a WWNN (node name) and a WWPN (port name). The WWN is to the FC protocol what the MAC address is to Ethernet. Once the WWNs are identified at the fabric and the virtual SAN, the storage admins can set up zoning and present the LUN to the specific virtual machine.

The concept is straightforward, but there are prerequisites that you will need to ensure are in place before you can get down to the nitty-gritty of the setup:

One or more Windows Server 2012 or R2 Hyper-V hosts.
Hosts should have one or more FC HBAs with the latest drivers, and the HBAs should support the virtual fibre channel and NPIV. NPIV may be disabled at the HBA level (refer to the vendor documentation prior to deployment); it can be enabled using command-line utilities or GUI-based ones such as OneCommand Manager, SANsurfer, and so on.
NPIV should be enabled on the SAN fabric or actual ports.
Storage arrays are transparent to NPIV, but they should support devices that present LUNs.
Supported guest operating systems for the virtual SAN are Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2.
The virtual fibre channel does not allow boot from SAN, unlike pass-through disks.

We are now done with the prerequisites! Now, let's look at two important aspects of SAN infrastructure, namely NPIV and MPIO.

N_Port ID virtualization (NPIV)

An ANSI T11 standard extension, this feature allows virtualization of the N_Port (WWPN) of an HBA, allowing multiple FC initiators to share a single HBA port. The concept is popular, widely accepted, and promoted by different vendors. Windows Server 2012 and R2 Hyper-V utilizes this feature to the fullest, wherein each virtual machine partaking in the virtual SAN gets assigned a unique WWPN and access to the SAN over a physical HBA spawning its own N_Port. Zoning follows next, wherein the fabric can have the zone directed to the VM WWPN. This attribute leads to a very small footprint and, thereby, easier management and lower operational and capital expenditure.

Summary

It is going to be quite a realization that we have covered almost all the basic attributes and aspects required for a simple Windows Server 2012 R2 Hyper-V infrastructure setup.
If we revise the contents, we will notice this: we started off in this article by understanding and defining the purpose of virtual storage and what the available storage options are for use with a virtual machine. We reviewed the various virtual hard disk types and formats, and the associated operations that may be required to customize or modify a particular type. We recounted how the VHDX format is superior to its predecessor, VHD, and which features were added with the latest Windows Server releases, namely 2012 and 2012 R2. We discussed shared VHDX and how it can be used as an alternative to the old-school iSCSI or FC LUN as shared storage for Windows guest clustering. Pass-through disks are on their way out, and we all know the reason why. The advent of the virtual fibre channel with Windows Server 2012 has opened the doors for the virtualization of high-performance workloads that rely heavily on FC connectivity, which until now was a single reason, and reason enough, to decline consolidation of those workloads.

Resources for Article: Further resources on this subject: Hyper-V Basics [article] Getting Started with Hyper-V Architecture and Components [article] Hyper-V building blocks for creating your Microsoft virtualization platform [article]
The Camera API

Packt
07 Aug 2015
4 min read
In this article by Purusothaman Ramanujam, the author of PhoneGap Beginner's Guide Third Edition, we will look at the Camera API. The Camera API provides access to the device's camera application using the Camera plugin, identified by the cordova-plugin-camera key. With this plugin installed, an app can take a picture or gain access to a media file stored in the photo library or in the albums that the user created on the device.

The Camera API exposes the following two methods, defined in the navigator.camera object:

getPicture: This opens the default camera application or allows the user to browse the media library, depending on the options specified in the configuration object that the method accepts as an argument
cleanup: This cleans up any intermediate photo files available in the temporary storage location (supported only on iOS)

(For more resources related to this topic, see here.)

As arguments, the getPicture method accepts a success handler, a failure handler, and, optionally, an object used to specify several camera options through its properties, as follows:

quality: This is a number between 0 and 100 used to specify the quality of the saved image.
destinationType: This is a number used to define the format of the value returned in the success handler. The possible values are stored in the following Camera.DestinationType pseudo constants:
DATA_URL (0): This indicates that the getPicture method will return the image as a Base64-encoded string
FILE_URI (1): This indicates that the method will return the file URI
NATIVE_URI (2): This indicates that the method will return a platform-dependent file URI (for example, assets-library:// on iOS or content:// on Android)
sourceType: This is a number used to specify where the getPicture method can access an image. The possible values, stored in the Camera.PictureSourceType pseudo constants, are PHOTOLIBRARY (0), CAMERA (1), and SAVEDPHOTOALBUM (2):
PHOTOLIBRARY: This indicates that the method will get an image from the device's library
CAMERA: This indicates that the method will grab a picture from the camera
SAVEDPHOTOALBUM: This indicates that the user will be prompted to select an album before picking an image
allowEdit: This is a Boolean value (the value is true by default) used to indicate that the user can make small edits to the image before confirming the selection; it works only on iOS.
encodingType: This is a number used to specify the encoding of the returned file. The possible values are stored in the Camera.EncodingType pseudo constants: JPEG (0) and PNG (1).
targetWidth and targetHeight: These are the width and height, in pixels, to which you want the captured image to be scaled; it's possible to specify only one of the two options. When both are specified, the image will be scaled to the value that results in the smallest aspect ratio (the aspect ratio of an image describes the proportional relationship between its width and height).
mediaType: This is a number used to specify what kind of media files have to be returned when the getPicture method is called using the Camera.PictureSourceType.PHOTOLIBRARY or Camera.PictureSourceType.SAVEDPHOTOALBUM pseudo constant as the sourceType; the possible values, stored in the Camera.MediaType object as pseudo constants, are PICTURE (0), VIDEO (1), and ALLMEDIA (2).
correctOrientation: This is a Boolean value that forces the device camera to correct the device orientation during the capture.
cameraDirection: This is a number used to specify which device camera has to be used during the capture. The values, stored in the Camera.Direction object as pseudo constants, are BACK (0) and FRONT (1).
popoverOptions: This is an object supported on iOS to specify the anchor element location and arrow direction of the popover used on the iPad when selecting images from the library or an album.
saveToPhotoAlbum: This is a Boolean value (the value is false by default) used in order to save the captured image in the device's default photo album.

The success handler receives an argument that contains either the URI to the file or the image data as a Base64-encoded string, depending on the value stored in the destinationType property of the options object. The failure handler receives a string containing the device's native code error message as an argument.

Similarly, the cleanup method accepts a success handler and a failure handler. The only difference between the two is that the success handler doesn't receive any argument. The cleanup method is supported only on iOS and can be used when the sourceType property value is Camera.PictureSourceType.CAMERA and the destinationType property value is Camera.DestinationType.FILE_URI.
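To make this concrete, here is a minimal usage sketch built only from the options described above; the element ID and option values are illustrative assumptions:

// Take a photo and show it in an <img> element with id "preview" (assumed to exist)
function takePhoto() {
    navigator.camera.getPicture(onSuccess, onFail, {
        quality: 50,
        destinationType: Camera.DestinationType.FILE_URI,
        sourceType: Camera.PictureSourceType.CAMERA,
        correctOrientation: true
    });
}

function onSuccess(imageURI) {
    // With FILE_URI, the success handler receives the URI of the captured image
    document.getElementById('preview').src = imageURI;
}

function onFail(message) {
    console.log('Camera failed: ' + message);
}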
Summary

In this article, we looked at the various properties available with the Camera API.

Resources for Article: Further resources on this subject: Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [article] Using Location Data with PhoneGap [article] iPhone JavaScript: Installing Frameworks [article]
Rundown Example

Packt
06 Aug 2015
10 min read
In this article by Miguel Oliveira, author of the book Microsoft System Center Orchestrator 2012 R2 Essentials, we will get started on the creation process. We will see how to address and connect all the pieces together in order to successfully create a Runbook. (For more resources related to this topic, see here.)

Runbook for Active Directory User Account Provisioning

For this Runbook, we've been challenged by our HR department to come up with a solution for them to be able to create new user accounts for recently joined employees. The request was specifically drawn up with the target for them (HR) to be able to:

Provide the first and last name
Provide the department name
Get that user added to the proper department group and get all the information of the user
Send the newly created account details to the IT department to provide a machine, a phone, and an e-mail address

With these requirements at the back of our heads, let's see which activities we need to get into our Runbook. I'll lay these out in steps for this example, so it's easy to follow:

Data input: We definitely need an activity to allow HR to feed the information into the Runbook. For this, we can use the Initialize Data activity (Runbook Control category), or we could work with a monitored file and read the data from a line, or even from a SharePoint list. But to keep it simple for now, let's use Initialize Data.

Data processing: Here, the idea is to take the Department given by HR and process it to retrieve the group (the Get Group activity from the Active Directory category) and include our user (the Add User To Group activity from the Active Directory category) in the group we've retrieved; but in between, we'll need to create the user account (the Create User activity from the Active Directory category) and generate a password (the Generate Random Text activity from the Utilities category).

Data output: At the very end of all this, we send an e-mail (the Send Email activity from the Email category) back to HR with the account information and the status of its creation, and we also inform our IT department (for security reasons) about the account that has been created.

We're also going to closely watch for errors with a few activities that will show us whether an error occurs. Let's see the look of this Runbook from a structural point of view (and actually almost how it's going to look in the end), and we'll detail the activities and the options within them step by step from there.

Here's the Runbook structured with the activities properly linked between them, allowing the data bus to flow and transport the published data from the beginning to the end:

As described in the steps, we start with an Initialize Data activity in which we're going to request some inputs from the person executing the Runbook. To create a user, we'll need the First Name, Last Name, and Department. For that, we'll fill in the information in the Fetch User Details activity seen in the previous screenshot. For the sake of avoiding errors, the HR department should have a proper list of departments that we know will translate into a proper group in the upcoming activities.

After the information is filled in, the processing begins, and with it our automation process, which will find the group for that department, create the user account, set a password, force a password change on first login, add the user to the group, and enable the account.
For that, we'll start with the Get Group activity, in which we'll fill in the following: set up the proper configuration in the Get Group Properties window for the Active Directory domain in which you want this to execute, and in the Filters options, set a filter so that the Sam Account Name of the group equals the Department filled in by the HR department.

Now we'll set up another prerequisite for creating the account: the password! For this, we'll use the Generate Random Text activity and set it with the following parameters. These values should be set accordingly to accommodate your existing security policy and minimum password requirements for your domain.

These previous activities are all we need to have the necessary values to proceed with the account creation using the Create User activity. These are the parameters to be filled in; all of them are actually retrieved from the Published Data of the previous activities. As the list is long, we'll detail the parameters here for your better understanding. Everything between {} is Published Data:

Common Name: {First Name from "Fetch User Details"} {Last Name from "Fetch User Details"}
Department: {Display Name from "Get Group"}
Display Name: {First Name from "Fetch User Details"} {Last Name from "Fetch User Details"}
First Name: {First Name from "Fetch User Details"}
Last Name: {Last Name from "Fetch User Details"}
Password: {Random text from "Generate Random Text"}
User Must Change Password: True
SAM Account Name: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}
User Principal Name: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}@test.local
Email: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}@test.com
Manager: {Managed By from "Get Group"}

As said previously, most of the data comes from the Published Data, and we've created subscriptions in all these fields to retrieve it. The only fields that hold data different from the Published Data are User Must Change Password, User Principal Name (UPN), and Email. User Must Change Password is a Boolean field that will display only Yes or No, and in the UPN and Email fields, we've appended the domain information (@test.local and @test.com) to the Published Data.

Depending on the Create User activity's output, it will trigger a different activity. For now, let's assume that the activity returns a success on execution; this will make the Runbook follow the smart link that goes on to the Get User activity. The Get User activity will retrieve all the information concerning the newly created user account, which will be useful for the next activities down the line. In order to retrieve the proper information, we'll need to configure the following in the Filters area within the activity: add a filter, selecting Sam Account Name, with Relation set to Equals and Value set to the subscribed data from the Sam Account Name that comes out of the Create User activity.

From here, we'll link to the Add User To Group activity (here renamed Add User to Department), and within that activity we're going to specify the group and the user so that the activity can add the user to the group. It should look exactly like the screenshot that follows:

We'll once again assume that everything runs as expected and prepare our next activity, which is to enable the user account; for this one, we'll use the Enable User activity.
The configuration of the activity can be seen in the next screenshot. Once again, we'll get the information out of the Published Data and feed it into the activity. After this activity is completed, we're going to log the execution and information output into the platform with the Send Platform Event activity, so we can see any necessary information from the execution. Here is a sample of the configuration for the message output:

To get the Details text box expanded this way, right-click on it and select Expand… from the menu; you can then format and include the data that you feel is most important to see.

Then we'll send an e-mail to the HR team with the account creation details so that they can communicate them to the newly arrived employee, and another e-mail to the IT department with only the account name and the department (plus the group name), for security reasons. Here are samples of these two activities, starting with the HR e-mail.

Let's go point by point through this configuration sample. In the Details section, we've set the following:

Subject: Account {Sam Account Name from "Get User"} Created
Recipients: to: hr.dept@test.com
Message: The message description is given in the following screenshot:

The Message option consists of choosing the Priority of the message (high, normal, or low) and setting the necessary SMTP authentication parameters (account, password, and domain) so that you can send the message through your e-mail service. If you have an application e-mail service relay, you can leave the SMTP authentication without any configuration. In the Connect option, you'll find the place to configure the e-mail address that you want the user to see, and the SMTP connection (server, port, and SSL) through which you'll send your messages.

Now, our Send Email IT activity will be more or less the same, with the exception of the destination and the message itself. It should look something like the following screenshot:

By now you've got the idea and you're pumped to create new Runbooks, but we still have to do some error control on some of these tasks; although they're chained, if one fails, everything fails. So, for this Runbook, we'll create error control on the two tasks that, if we observe well, are more or less the only two that can fail! One is the Create User activity, which can fail due to the user account already existing or due to some issue with privileges during its creation. The other is Add User To Department, which might fail to add the user to the group for some reason.

For this, we'll create two notification activities, called Send Event and Log Message, which we'll rename User Account Error and Group Error respectively. If we look into the User Account Error activity, we'll set something more or less like the following screenshot:

A quick explanation of the settings is as follows:

Computer: This is the computer to whose Windows event log we're going to write the event. In this case, we'll concentrate on our Management Server, but you might have a logging server for this.
Message: This is the message that gets logged into the Windows Event Viewer. Here, we can subscribe to the error data coming out of the last activity executed.
Severity: This is usually Error. You can set Information or Warning if you are deploying these activities to keep track of each given step.

For our Group Error properties, the philosophy will be the same.
Now that we are all set, we'll need to work on our smart links so that they can direct the Runbook execution flow into the following activity, depending on the previous activity's output (success or error). In the end, your Runbook should look a little more like this:

That's it for the Runbook for Active Directory User Account Provisioning. We'll now speed up a little on the other Runbooks, as you'll have a much clearer understanding after this first sample.

Summary

We've seen one Runbook sample. These Runbooks should serve as the base for real-case scenarios in the environment, help us in the creative process, and also give us a better understanding of the configurations necessary on each activity in order to proceed successfully.

Resources for Article: Further resources on this subject: Unpacking System Center 2012 Orchestrator [article] Working with VMware Infrastructure [article] Unboxing Docker [article]
Simplify Deployment with an Infrastructure Manifest, Part 2

Cody A.
06 Aug 2015
8 min read
This is the second part of a post on using a Manifest of your infrastructure for automation. The first part described how to use your Cloud API to transform Application Definitions into an Infrastructure Manifest. This post will show examples of automation tools built using an Infrastructure Manifest. In particular, we'll explore application deployment and load balancer configuration management.

Recall our example Infrastructure Manifest from Part 1:

{
  "prod": {
    "us-east-1": {
      "appserve01ea1": {
        "applications": [
          "appserve"
        ],
        "zone": "us-east-1a",
        "fqdn": "ec2-1-2-3-4.compute-1.amazonaws.com",
        "private_ip": "10.9.8.7",
        "public_ip": "1.2.3.4",
        "id": "i-a1234bc5"
      },
      ...
    },
    ...
}

As I mentioned previously, this Manifest can form the basis for numerous automations. Some tools my team at Signal has built on top of this concept are automated deployments, load balancing, security group management, and DNS.

Application Deployment

Let's see how an Infrastructure Manifest can simplify application deployment. Although we'll use Fabric as the basis for our deployment system, the concept should work with Chef and many other push-based deployment systems as well.

from json import load as json_decode
from urllib2 import urlopen

from fabric.api import env

MANIFEST = json_decode(urlopen(env.manifest))

# map each host's applications onto Fabric roles
# (env.roledefs is made a defaultdict(list) below)
for hostname, meta in MANIFEST.iteritems():
    for role in meta['applications']:
        env.roledefs[role].append(hostname)

Note: For this to work, you must set the manifest URL in Fabric's environment as env.manifest. For example, you can set this in the ~/.fabricrc file or pass it on the command line.

manifest=http://manifest:5000/api/prod/us-east-1/manifest

That's all Fabric really requires to know where to deploy each application! Given the manifest above, this would add these hosts to the "appserve" role so that you can run tasks on them simultaneously. For example, to deploy the "appserve" application to all the hosts with this role:

from fabric.api import task, roles

@task
@roles('appserve')
def deploy_appserve():
    # standard Fabric deploy logic here
    pass

Now calling fab deploy_appserve will run the commands to deploy the "appserve" application on each host with the "appserve" role. Easy, right?

You might want to deploy some applications to every host in your infrastructure. Instead of adding these special roles to every Application Definition, you can include them here. For example, if you have a custom monitoring application ("mymon"), then you can read the list of all hosts from the Manifest and add them to the "mymon" role.

from collections import defaultdict

# set up special cases for roledefs:
env.roledefs = defaultdict(list, {
    'mymon': list(MANIFEST.keys()),
})

Now, after adding a deploy_mymon task, you'll be able to easily deploy "mymon" to all hosts in your infrastructure. Even if you auto-deploy using a specialized git receiver, Jenkins hooks, or similar, this approach will enable you to make your deployments cloud-aware, deploying each application to the appropriate hosts in your cloud.

That's it! Deployments can't be much simpler than this.
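As a usage note: if you'd rather not touch ~/.fabricrc for a one-off run, Fabric 1.x also accepts env settings on the command line via its --set flag, so something like this should work with the task above (the URL is illustrative):

fab --set manifest=http://manifest:5000/api/prod/us-east-1/manifest deploy_appserve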
Load Balancer Configuration Management

A common challenge in cloud environments is maintaining the list of all hosts for load balancer configurations. If you don't want to lock in to a vendor or cloud-specific solution such as Amazon ELB, you may choose an open source software load balancer such as HAProxy. However, this leaves you with the challenge of maintaining the configurations as hosts appear and disappear in your cloud-based infrastructure. This problem is amplified when you use software-based load balancers between each set of services (or each tier) in your application.

Using the Infrastructure Manifest, a first-pass solution can be quite simple. You can revision-control the configuration templates and interpolate the application ports and host information from the Manifest, then periodically update the generated configuration files and distribute them using your existing configuration management software (such as Puppet or Chef).

Let's say you want to generate an HAProxy configuration for your load balancer. The complete configuration file might look like this:

global
    user haproxy
    group haproxy
    daemon

frontend main_vip
    bind *:80
    # ACLs for basic name-based virtual-hosts
    acl appserve_host hdr_beg(host) -i app.example.com
    acl uiserve_host hdr_beg(host) -i portal.example.com
    use_backend appserve if appserve_host
    use_backend uiserve if uiserve_host
    default_backend uiserve

backend appserve
    balance roundrobin
    option httpclose
    option httpchk GET /hc
    http-check disable-on-404
    server appserve01ea1 10.42.1.91:8080 check
    server appserve02ea1 10.42.1.92:8080 check
    server appserve03ea1 10.42.1.93:8080 check

backend uiserve
    balance roundrobin
    option httpclose
    option httpchk GET /hc
    server uiserve01ea1 10.42.1.111:8082 check
    server uiserve02ea1 10.42.1.112:8082 check

The simplest way to produce this configuration file is to generate it from a template. There are many templating solutions from which to choose. I'm fond of Jinja2, so we'll use that for exploring this solution in Python. We want to load the template from a file located in a "templates" directory, so we start by creating a Jinja2 loader and environment:

from jinja2 import Environment, FileSystemLoader
import os

loader = FileSystemLoader(os.path.join(os.path.dirname(__file__), 'templates'))
environment = Environment(loader=loader, lstrip_blocks=True)

The template corresponding to this output could look like this. We'll call it 'lb.txt' since it's for the lb server group.

global
    user haproxy
    group haproxy
    daemon

frontend main_vip
    bind *:80
    # ACLs for basic name-based virtual-hosts
    acl appserve_host hdr_beg(host) -i app.example.com
    acl uiserve_host hdr_beg(host) -i portal.example.com
    use_backend appserve if appserve_host
    use_backend uiserve if uiserve_host
    default_backend uiserve

backend appserve
    balance roundrobin
    option httpclose
    option httpchk GET {{ vips.appserve.healthcheck_resource }}
    http-check disable-on-404
    {%- for server in vips.appserve.servers %}
    server {{ server['name'] }} {{ server.details['private_ip'] }}:{{ vips.appserve.backend_port }} check
    {%- endfor %}

backend uiserve
    balance roundrobin
    option httpclose
    option httpchk GET {{ vips.uiserve.healthcheck_resource }}
    {%- for server in vips.uiserve.servers %}
    server {{ server['name'] }} {{ server.details['private_ip'] }}:{{ vips.uiserve.backend_port }} check
    {%- endfor %}

You can see by examining the template that it expects only a single variable: vips. This is a map of application names to their load balancer configurations. Specifically, each vip contains a backend port, a healthcheck resource (that is, an HTTP path), and a list of servers (with a server name and private IP address for each). Coincidentally, all of this information is available in the Infrastructure Manifest and Application Definitions we developed in Part 1. We can easily fetch this information from the webapp.
from requests import get

def main(manifest_host, env, region, server_group):
    manifest = get('http://%s/api/%s/%s/manifest' % (manifest_host, env, region)).json()
    applications = get('http://%s/api/applications' % manifest_host).json()
    print generate_haproxy(manifest, applications, server_group)

Note: we didn't actually add the /api/applications endpoint last week, so it's left as an exercise for the reader; hint: jsonify(config()['APPLICATIONS']).

Now we can dive into the meat of this tool, the generate_haproxy function. As you might guess, this uses the Jinja2 environment to render the template. But first it must merge the Application Definitions and Manifest into the vips variable that the template expects.

def generate_haproxy(manifest, applications, server_group):
    apps = {}
    for application, meta in applications.iteritems():
        app_object = {
            'servers': [],
            'frontend_port': meta['frontend'],
            'backend_port': meta['backend'],
            'healthcheck_resource': meta['healthcheck']['resource']
        }
        for server in manifest:
            if application in manifest[server]['applications']:
                app_object['servers'].append({'name': server, 'details': manifest[server]})
        app_object['servers'].sort(key=lambda e: e['name'])
        apps[application] = app_object
    return environment.get_template("%s.txt" % server_group).render(vips=apps)

There's not much going on here. We iterate through all the applications and create a vip (app_object) with all the needed variables for each one. Then we render the server_group's template with Jinja2.

Finally, we can call the main function we created above to see this in action:

main('localhost:5000', 'prod', 'us-east-1', 'lb')

This will print the HAProxy configuration for the lb load balancer group for your production us-east-1 region. (It assumes that the Manifest webapp is running on the same host.) Depending on what hosts you have in your cloud infrastructure, this should print something like the complete HAProxy configuration file shown at the top.

To easily keep your load balancer configurations up to date, you could run this regularly for each environment and region, and then distribute the generated files using your existing configuration management system. Alternatively, if your load balancers support programmatic rule updates, that would be even cleaner than this simple first-pass approach, which relies on configuration file updates.

I hope this spurs your imagination and shows the benefit of using an Infrastructure Manifest to automate all the things.

About the author

Cody A. Ray is an inquisitive, tech-savvy, entrepreneurially-spirited dude. Currently, he is a software engineer at Signal, an amazing startup in downtown Chicago, where he gets to work with a dream team that's changing the service model underlying the Internet.