
How-To Tutorials

Diving into Data – Search and Report

Packt
17 Oct 2016
11 min read
In this article by Josh Diakun, Paul R Johnson, and Derek Mock, authors of the book Splunk Operational Intelligence Cookbook - Second Edition, we will cover the basic ways to search data in Splunk, including how to make raw event data readable.

The ability to search machine data is one of Splunk's core functions, and it should come as no surprise that many other features and functions of Splunk are heavily driven by searches. Everything from basic reports and dashboards to data models and fully featured Splunk applications is powered by Splunk searches behind the scenes.

Splunk has its own search language known as the Search Processing Language (SPL). SPL contains hundreds of search commands, most of which also have several functions, arguments, and clauses. While a basic understanding of SPL is required in order to search your data effectively, you are not expected to know all the commands! Even the most seasoned Splunk ninjas do not know all the commands and regularly refer to the Splunk manuals, website, or Splunk Answers (http://answers.splunk.com). To get you on your way with SPL, be sure to check out the search command cheat sheet and download the handy quick reference guide available at http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SplunkEnterpriseQuickReferenceGuide.

Searching

Searches in Splunk usually start with a base search, followed by a number of commands that are delimited by one or more pipe (|) characters. The result of a command or search to the left of the pipe is used as the input for the next command to the right of the pipe. Multiple pipes are often found in a Splunk search to continually refine data results as needed. As we go through this article, this concept will become very familiar to you.

Splunk allows you to search for anything that might be found in your log data. For example, the most basic search in Splunk might be a search for a keyword such as error or an IP address such as 10.10.12.150. However, searching for a single word or IP address over the terabytes of data that might potentially be in Splunk is not very efficient. Therefore, we can use SPL and a number of Splunk commands to really refine our searches. The more refined and granular the search, the faster it runs and the quicker you get to the data you are looking for!

When searching in Splunk, try to filter as much as possible before the first pipe (|) character, as this will save CPU and disk I/O. Also, pick your time range wisely. Often, it helps to run the search over a small time range when testing it, and then extend the range once the search provides what you need.

Boolean operators

There are three Boolean operators available in Splunk: AND, OR, and NOT. Case sensitivity is important here, and these operators must be in uppercase to be recognized by Splunk. The AND operator is implied by default and is not needed, but does no harm if used. For example, searching for error OR success would return all the events that contain either the word error or the word success. Searching for error success would return all the events that contain both the words error and success; another way to write this is error AND success. Searching web access logs for error OR success NOT mozilla would return all the events that contain either the word error or success, but not those events that also contain the word mozilla.
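To make the pipe-by-pipe refinement concrete, here is an illustrative sequence of searches against hypothetical web access logs. The access_combined sourcetype and the field names (status, uri_path) are assumptions for the sake of the example and will differ in your own environment; the top and head commands used at the end are covered in the next section.

```
error
sourcetype=access_combined error
sourcetype=access_combined status=500
sourcetype=access_combined status=500 | top limit=10 uri_path
sourcetype=access_combined status=500 | top limit=10 uri_path | head 3
```

Each step narrows the set of events before the first pipe and then refines the result set after it, which is generally cheaper than filtering later in the pipeline.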
Common commands

There are many commands in Splunk that you will likely use on a daily basis when searching data. The most common ones are outlined in the following list:

- chart/timechart: Outputs results in a tabular and/or time-based format for use by Splunk charts.
- dedup: De-duplicates results based upon specified fields, keeping the most recent match.
- eval: Evaluates new or existing fields and values. There are many different functions available for eval.
- fields: Specifies the fields to keep or remove in search results.
- head: Keeps the first X (as specified) rows of results.
- lookup: Looks up fields against an external source or list to return additional field values.
- rare: Identifies the least common values of a field.
- rename: Renames fields.
- replace: Replaces the values of fields with another value.
- search: Permits subsequent searching and filtering of results.
- sort: Sorts results in either ascending or descending order.
- stats: Performs statistical operations on the results. There are many different functions available for stats.
- table: Formats the results into a tabular output.
- tail: Keeps only the last X (as specified) rows of results.
- top: Identifies the most common values of a field.
- transaction: Merges events into a single event based upon a common transaction identifier.

Time modifiers

The drop-down time range picker in the Graphical User Interface (GUI) to the right of the Splunk search bar allows users to select from a number of different preset and custom time ranges. However, in addition to using the GUI, you can also specify time ranges directly in your search string using the earliest and latest time modifiers. When a time modifier is used in this way, it automatically overrides any time range that might be set in the GUI time range picker.

The earliest and latest time modifiers can accept a number of different time units: seconds (s), minutes (m), hours (h), days (d), weeks (w), months (mon), quarters (q), and years (y). Time modifiers can also make use of the @ symbol to round down and snap to a specified time. For example, searching for sourcetype=access_combined earliest=-1d@d latest=-1h will search all the access_combined events from midnight a day ago until an hour ago from now. Note that the snap (@) rounds down, such that if it were 12 p.m. now, we would be searching from midnight a day and a half ago until 11 a.m. today.

Working with fields

Fields in Splunk can be thought of as keywords that have one or more values, and they are fully searchable. At a minimum, every data source that comes into Splunk will have the source, host, index, and sourcetype fields, but some sources might have hundreds of additional fields. If the raw log data contains key-value pairs or is in a structured format such as JSON or XML, Splunk will automatically extract the fields and make them searchable. Splunk can also be told how to extract fields from the raw log data in the backend props.conf and transforms.conf configuration files.

Searching for specific field values is simple. For example, sourcetype=access_combined status!=200 will search for events with a sourcetype field value of access_combined that have a status field with a value other than 200.
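Pulling the time modifiers and field searches together, a search along the following lines would count non-200 responses by status and path over the previous seven full days. As before, the access_combined sourcetype and the field names are assumptions for illustration only.

```
sourcetype=access_combined earliest=-7d@d latest=@d status!=200
| stats count by status, uri_path
| sort -count
| head 10
```

Here, earliest=-7d@d snaps back to midnight seven days ago, latest=@d stops at midnight today, and the piped commands (all listed above) aggregate, order, and trim the results.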
Splunk has a number of built-in, pre-trained sourcetypes that ship with Splunk Enterprise and work out of the box with common data sources. These are listed at http://docs.splunk.com/Documentation/Splunk/latest/Data/Listofpretrainedsourcetypes. In addition, Technical Add-Ons (TAs), which contain event types and field extractions for many other common data sources such as Windows events, are available from the Splunk app store at https://splunkbase.splunk.com.

Saving searches

Once you have written a useful search in Splunk, you may wish to save it so that you can use it again at a later date or use it for a dashboard. Saved searches in Splunk are known as Reports. To save a search in Splunk, simply click on the Save As button on the top right-hand side of the main search bar and select Report.

Making raw event data readable

When a basic search is executed in Splunk from the search bar, the search results are displayed in a raw event format by default. To many users, this raw event information is not particularly readable, and valuable information is often clouded by other, less valuable data within the event. Additionally, if the events span several lines, only a few events can be seen on the screen at any one time. In this recipe, we will write a Splunk search to demonstrate how we can leverage Splunk commands to make raw event data readable, tabulating events and displaying only the fields we are interested in.

Getting ready

You should be familiar with the Splunk search bar and search results area.

How to do it…

Follow these steps to search and tabulate the selected event data:

1. Log in to your Splunk server.
2. Select the Search & Reporting application from the drop-down menu located in the top left-hand side of the screen.
3. Set the time range picker to Last 24 hours and type the following search into the Splunk search bar: index=main sourcetype=access_combined
4. Click on Search or hit Enter. Splunk will return the results of the search and display the raw search events under the search bar.
5. Rerun the search, but this time add the table command as follows: index=main sourcetype=access_combined | table _time, referer_domain, method, uri_path, status, JSESSIONID, useragent
6. Splunk will now return the same number of events, but instead of presenting the raw events, the data will be in a nicely formatted table displaying only the fields we specified. This is much easier to read!
7. Save this search by clicking on Save As and then on Report. Give the report the name cp02_tabulated_webaccess_logs and click on Save. On the next screen, click on Continue Editing to return to the search.

How it works…

Let's break down the search piece by piece:

- index=main: All the data in Splunk is held in one or more indexes. While not strictly necessary, it is good practice to specify the index(es) to search, as this ensures a more precise search.
- sourcetype=access_combined: This tells Splunk to search only the data associated with the access_combined sourcetype, which, in our case, is the web access logs.
- | table _time, referer_domain, method, uri_path, status, JSESSIONID, useragent: Using the table command, we take the result of our search to the left of the pipe and tell Splunk to return the data in a tabular format. Splunk will only display the fields specified after the table command in the table of results.

In this recipe, you used the table command. The table command can have a noticeable performance impact on large searches.
It should be used towards the end of a search, once all the other processing of the data by the other Splunk commands has been performed. The stats command is more efficient than the table command and should be used in place of table where possible. However, be aware that stats and table are two very different commands.

There's more…

The table command is very useful in situations where we wish to present data in a readable format. Additionally, tabulated data in Splunk can be downloaded as a CSV file, which many users find useful for offline processing in spreadsheet software or for sending to others. There are some other ways we can leverage the table command to make our raw event data readable.

Tabulating every field

Often, there are situations where we want to present every event within the data in a tabular format, without having to specify each field one by one. To do this, we simply use the wildcard (*) character as follows:

index=main sourcetype=access_combined | table *

Removing fields, then tabulating everything else

While tabulating every field using the wildcard (*) character is useful, you will notice that a number of Splunk internal fields, such as _raw, appear in the table. We can use the fields command before the table command to remove these fields as follows:

index=main sourcetype=access_combined | fields - sourcetype, index, _raw, source, date*, linecount, punct, host, time*, eventtype | table *

If we do not include the minus (-) character after the fields command, Splunk will keep the specified fields and remove all the other fields.

Summary

In this article, along with an introduction to searching in Splunk, we covered how to make raw event data readable.

Resources for Article:

Further resources on this subject:
- Splunk's Input Methods and Data Feeds [Article]
- The Splunk Interface [Article]
- The Splunk Web Framework [Article]

How to build a desktop app using Electron

Amit Kothari
17 Oct 2016
9 min read
Desktop apps are making a comeback. Even companies with cloud-based applications and great web apps are investing in desktop apps to offer a better user experience. One example is the team collaboration tool Slack, which built a really good desktop app with web technologies using Electron.

Electron is an open source framework used to build cross-platform desktop apps using web technologies. It uses Node.js and Chromium and allows us to develop desktop GUI apps using HTML, CSS, and JavaScript. Electron is developed by GitHub, initially for the Atom editor, but it is now used by many companies, including Slack, WordPress, Microsoft, and Docker, to name a few. Electron apps are web apps running in an embedded Chromium web browser, with access to the full suite of Node.js modules and the underlying operating system. In this post we will build a simple desktop app using Electron.

Hello Electron

Let's start by creating a simple app. Before we start, we need Node.js and npm installed. Follow the instructions on the Node.js website if you do not have these installed already.

Create a new directory for your application and, inside the app directory, create a package.json file by using the npm init command. Follow the prompts and remember to set main.js as the entry point. Once the file is generated, install electron-prebuilt, the precompiled version of Electron, and add it as a dev dependency in package.json using the command npm install --save-dev electron-prebuilt. Also add "start": "electron ." under scripts, which we will use later to start our app. The package.json file will look something like this:

```json
{
  "name": "electron-tutorial",
  "version": "1.0.0",
  "description": "Electron Tutorial",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "devDependencies": {
    "electron-prebuilt": "^1.3.3"
  }
}
```

Create a file main.js with the following content:

```js
const {app, BrowserWindow} = require('electron');

// Global reference of the window object.
let mainWindow;

// When Electron finishes initialization, create a window and load the app's index.html
app.on('ready', () => {
  mainWindow = new BrowserWindow({ width: 800, height: 600 });
  mainWindow.loadURL(`file://${__dirname}/index.html`);
});
```

We defined main.js as the entry point to our app in package.json. In main.js, Electron's app module controls the application lifecycle and BrowserWindow is used to create a native browser window. When Electron finishes initializing and our app is ready, we create a browser window to load our web page, index.html. As mentioned in the Electron documentation, remember to keep a global reference to the window object to stop it from closing automatically when the JavaScript garbage collector kicks in.

Finally, create the index.html file:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Hello Electron</title>
  </head>
  <body>
    <h1>Hello Electron</h1>
  </body>
</html>
```

We can now start our app by running the npm start command.

Testing the Electron app

Let's write some integration tests for our app using Spectron. Spectron allows us to test Electron apps using ChromeDriver and WebdriverIO. It is test-framework agnostic, but for this example we will use mocha to write the tests. Let's start by adding spectron and mocha as dev dependencies using the npm install --save-dev spectron and npm install --save-dev mocha commands. Then add "test": "./node_modules/mocha/bin/mocha" under scripts in the package.json file. This will be used to run our tests later.
The package.json should look something like this:

```json
{
  "name": "electron-tutorial",
  "version": "1.0.0",
  "description": "Electron Tutorial",
  "main": "main.js",
  "scripts": {
    "start": "electron .",
    "test": "./node_modules/mocha/bin/mocha"
  },
  "devDependencies": {
    "electron-prebuilt": "^1.3.3",
    "mocha": "^3.0.2",
    "spectron": "^3.3.0"
  }
}
```

Now that we have all the dependencies installed, let's write some tests. Create a directory called test and a file called test.js inside it. Copy the following content to test.js:

```js
var Application = require('spectron').Application;
var electron = require('electron-prebuilt');
var assert = require('assert');

describe('Sample app', function () {
  var app;

  beforeEach(function () {
    app = new Application({ path: electron, args: ['.'] });
    return app.start();
  });

  afterEach(function () {
    if (app && app.isRunning()) {
      return app.stop();
    }
  });

  it('should show initial window', function () {
    return app.browserWindow.isVisible()
      .then(function (isVisible) {
        assert.equal(isVisible, true);
      });
  });

  it('should have correct app title', function () {
    return app.client.getTitle()
      .then(function (title) {
        assert.equal(title, 'Hello Electron');
      });
  });
});
```

Here we have a couple of simple tests. We start the app before each test and stop it after each test. The first test verifies that the app's browserWindow is visible, and the second verifies the app's title. We can run these tests using the npm run test command. Spectron not only allows us to easily set up and tear down our app, but also gives access to various APIs, allowing us to write sophisticated tests covering various business requirements. Please have a look at their documentation for more details. (An additional illustrative test is sketched at the end of this section.)

Packaging our app

Now that we have a basic app, we are ready to package and build it for distribution. We will use electron-builder for this, which offers a complete solution to distribute apps on different platforms with the option to auto-update. It is recommended to use two separate package.json files when using electron-builder: one for the development environment and build scripts, and another with the app dependencies. But for our simple app, we can just use one package.json file.

Let's start by adding electron-builder as a dev dependency using the command npm install --save-dev electron-builder. Make sure you have the name, description, version, and author defined in package.json. You also need to add electron-builder-specific options as a build property in package.json:

```json
"build": {
  "appId": "com.amitkothari.electronsample",
  "category": "public.app-category.productivity"
}
```

For Mac OS, we need to specify appId and category. Look at the documentation for options for other platforms. Finally, add a script in package.json to package and build the app:

```json
"dist": "build"
```

The updated package.json will look like this:

```json
{
  "name": "electron-tutorial",
  "version": "1.0.0",
  "description": "Electron Tutorial",
  "author": "Amit Kothari",
  "main": "main.js",
  "scripts": {
    "start": "electron .",
    "test": "./node_modules/mocha/bin/mocha",
    "dist": "build"
  },
  "devDependencies": {
    "electron-prebuilt": "^1.3.3",
    "mocha": "^3.0.2",
    "spectron": "^3.3.0",
    "electron-builder": "^5.25.1"
  },
  "build": {
    "appId": "com.amitkothari.electronsample",
    "category": "public.app-category.productivity"
  }
}
```

Next we need to create a build directory under our project root directory. In it, put a file background.png for the Mac OS DMG background and icon.icns for the app icon. We can now package our app by running the npm run dist command.
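As an illustration of the Spectron client API mentioned above, here is one more test that could be added to test/test.js. It is a sketch, not part of the original example, and assumes Spectron's getWindowCount helper is available in the version you have installed.

```js
// Additional illustrative test for test/test.js (assumes Spectron's
// getWindowCount helper; adjust to your Spectron version if needed).
it('should open exactly one window', function () {
  return app.client.getWindowCount()
    .then(function (count) {
      assert.equal(count, 1);
    });
});
```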
Todo App We’ve built a very simple app, but Electron apps can do more than just show static text. Lets add some dynamic behavior to our app and convert it into a Todo list manager. We can use any JavaScript framework of choice, from AngularJS to React, with Electron, but for this example, we will use plain JavaScript. To start with, let’s update our index.html to display a todo list: <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Hello Electron</title> <link rel="stylesheet" type="text/css" href="./style.css"> </head> <body> <div class="container"> <ul id="todoList"></ul> <textarea id="todoInput" placeholder="What needs to be done ?"></textarea> <button id="addTodoButton">Add to list</button> </div> </body> <script>require('./app.js')</script> </html> We also included style.css and app.js in index.html. All our CSS will be in style.css and our app logic will be in app.js. Create the style.css file with the following content: body { margin: 0; } ul { list-style-type: none; margin: 0; padding: 0; } li { padding: 10px; border-bottom: 1px solid #ddd; } button { background-color: black; color: #fff; margin: 5px; padding: 5px; cursor: pointer; border: none; font-size: 12px; } .container { width: 100%; } #todoInput { float: left; display: block; overflow: auto; margin: 15px; padding: 10px; font-size: 12px; width: 250px; } #addTodoButton { float: left; margin: 25px 10px; } And finally create the app.js file: (function () { const addTodoButton = document.getElementById('addTodoButton'); const todoList = document.getElementById('todoList'); // Create delete button for todo item const createTodoDeleteButton = () => { const deleteButton = document.createElement("button"); deleteButton.innerHTML = "X"; deleteButton.onclick = function () { this.parentNode.outerHTML = ""; }; return deleteButton; } // Create element to show todo text const createTodoText = (todo) => { const todoText = document.createElement("span"); todoText.innerHTML = todo; return todoText; } // Create a todo item with delete button and text const createTodoItem = (todo) => { const todoItem = document.createElement("li"); todoItem.appendChild(createTodoDeleteButton()); todoItem.appendChild(createTodoText(todo)); return todoItem; } // Clear input field const clearTodoInputField = () => { document.getElementById("todoInput").value = ""; } // Add new todo item and clear input field const addTodoItem = () => { const todo = document.getElementById('todoInput').value; if (todo) { todoList.appendChild(createTodoItem(todo)); clearTodoInputField(); } } addTodoButton.addEventListener("click", addTodoItem, false); } ()); Our app.js has a self invoking function which registers a listener (addTodoItem) on addTodoButton click event. On add button click event, the addTodoItem function will add a new todo item and clear the text area. Run the app again using the npm start command. Conclusion We built a very simple app, but it shows the potential of Electron. As stated on the Electron website, if you can build a website, you can build a desktop app. I hope you find this post interesting. If you have built an application with Electron, please share it with us. About the author Amit Kothari is a full-stack software developer based in Melbourne, Australia. He has 10+ years experience in designing and implementing software, mainly in Java/JEE. His recent experience is in building web applications using JavaScript frameworks such as React and AngularJS and backend microservices/REST API in Java. 
He is passionate about lean software development and continuous delivery.

Learning How to Manage Records in Visualforce

Packt
14 Oct 2016
7 min read
In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover styling fields and table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface: redrawing the form to change available picklist options, or capturing different information based on the user's selections.

Styling fields as required

Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied.

In the scenario where one or more inputs are required and there are additional validation rules (for example, when one of either the Email or Phone fields must be defined for a contact), this can lead to a drip feed of error messages to the user. This is because the inputs cause repeated unsuccessful attempts to submit the form, each time getting slightly further in the process.

Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all errors are returned to the user.

Getting ready

This topic makes use of a controller extension, so this must be created before the Visualforce page.

How to do it…

1. Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
2. Click on the New button.
3. Paste the contents of the RequiredStylingExt.cls Apex class from the downloaded code into the Apex Class area.
4. Click on the Save button.
5. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
6. Click on the New button.
7. Enter RequiredStyling in the Label field, and accept the default RequiredStyling that is automatically generated for the Name field.
8. Paste the contents of the RequiredStyling.page file from the downloaded code into the Visualforce Markup area and click on the Save button.
9. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
10. Locate the entry for the RequiredStyling page and click on the Security link.
11. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com.
Clicking on the Save button without populating any of the fields results in the save failing with a number of errors: The Last Name field is constructed from a label and text input component rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form: <apex:pageBlockSectionItem > <apex:outputLabel value="Last Name"/> <apex:outputPanel id="detailrequiredpanel" layout="block" styleClass="requiredInput"> <apex:outputPanel layout="block" styleClass="requiredBlock" /> <apex:inputText value="{!Contact.LastName}"/> </apex:outputPanel> </apex:pageBlockSectionItem> The required styles are defined in the Visualforce page rather than relying on any existing Salesforce style classes to ensure that if Salesforce changes the names of its style classes, this does not break the page. The controller extension save action method carries out validation of all fields and attaches error messages to the page for all validation failures: if (String.IsBlank(cont.name)) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please enter the contact name')); error=true; } if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please supply the email address or phone number')); error=true; } Styling table columns as required When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page to allow a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs. Getting ready This topic makes use of a custom controller, so this will need to be created before the Visualforce page. How to do it… First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the RequiredColumnController.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field. Paste the contents of the RequiredColumn.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredColumn page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field. 
Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for the particular row: The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component: <apex:column > <apex:facet name="header"> <apex:outputText styleclass="requiredHeader" value="{!$ObjectType.Contact.fields.LastName.label}" /> </apex:facet> <apex:inputField value="{!contact.LastName}" required="false"/> </apex:column> The Visualforce page custom controller Save method checks if any of the fields in the row are populated, and if this is the case, it checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete: if ( (!String.IsBlank(cont.FirstName)) || (!String.IsBlank(cont.LastName)) ) { // a field is defined - check for last name if (String.IsBlank(cont.LastName)) { error=true; cont.LastName.addError('Please enter a value'); } String.IsBlank() is used as this carries out three checks at once: to check that the supplied string is not null, it is not empty, and it does not only contain whitespace. Summary Thus in this article we successfully mastered the techniques to style fields and table columns as per the custom needs. Resources for Article: Further resources on this subject: Custom Components in Visualforce [Article] Visualforce Development with Apex [Article] Learning How to Manage Records in Visualforce [Article]

Loops, Conditions, and Recursion

Packt
14 Oct 2016
14 min read
In this article from Paul Johnson, author of the book Learning Rust, we will take a look at loops and conditions, which are a fundamental aspect of any programming language. You may be looping around a list attempting to find when something matches, and when a match occurs, branching out to perform some other task; or, you may just want to check a value to see if it meets a condition. In any case, Rust allows you to do this.

In this article, we will cover the following topics:

- Types of loop available
- Different types of branching within loops
- Recursive methods
- When the semicolon (;) can be omitted and what it means

Loops

Rust has essentially three types of loop: for, loop, and while.

The for loop

This type of loop is very simple to understand, yet rather powerful in operation. It is simple in that we have a start value, an end condition, and some form of value change, although the power comes from those last two points. Let's take a simple example to start with: a loop that goes from 0 to 10 and outputs the value:

```rust
for x in 0..10 {
    println!("{},", x);
}
```

We create a variable x that takes the expression (0..10) and does something with it. In Rust terminology, x is not only a variable but also an iterator, as it gives back a value from a series of elements.

This is obviously a very simple example. We can also go down as well, but the syntax is slightly different. In C, you would expect something akin to for (i = 10; i > 0; --i). This is not available in Rust, at least not in the stable branches. Instead, we will use the rev() method, as follows:

```rust
for x in (0..10).rev() {
    println!("{},", x);
}
```

It is worth noting that, as with the C family, the last number is excluded. So, for the first example, the values output are 0 to 9; the rev() version essentially generates the same values and then outputs them in reverse, from 9 down to 0. Notice also that the condition (0..10) is wrapped in parentheses so that rev() is applied to the whole range.

In C#, this is the equivalent of a foreach. In Rust, it looks as follows:

```rust
for var in condition {
    // do something
}
```

The C# equivalent of the preceding code is:

```csharp
foreach (var t in condition)
    // do something
```

Using enumerate

A loop condition can also be more complex, using multiple conditions and variables. For example, the for loop can be tracked using enumerate. This keeps track of how many times the loop has executed, as shown here:

```rust
for (i, j) in (10..20).enumerate() {
    println!("loop has executed {} times. j = {}", i, j);
}
```

The enumeration is given in the first variable, with the condition's value in the second. This example is not of much use on its own; where it comes into its own is when looping over an iterator. Say we have an array that we need to iterate over to obtain the values. Here, enumerate can be used to obtain the value of the array members.
However, the value returned in the condition will be a pointer, so a code such as the one shown in the following example will fail to execute (line is a & reference whereas an i32 is expected) fn main() { let my_array: [i32; 7] = [1i32,3,5,7,9,11,13]; let mut value = 0i32; for(_, line) in my_array.iter().enumerate() { value += line; } println!("{}", value); } This can be simply converted back from the reference value, as follows: for(_, line) in my_array.iter().enumerate() { value += *line; } The iter().enumerate() method can equally be used with the Vec type, as shown in the following code: fn main() { let my_array = vec![1i32,3,5,7,9,11,13]; let mut value = 0i32; for(_,line) in my_array.iter().enumerate() { value += *line; } println!("{}", value); } In both cases, the value given at the end will be 49, as shown in the following screenshot: The _ parameter You may be wondering what the _ parameter is. It's Rust, which means that there is an argument, but we'll never do anything with it, so it's a parameter that is only there to ensure that the code compiles. It's a throw-away. The _ parameter cannot be referred to either; whereas, we can do something with linenumber in for(linenumber, line), but we can't do anything with _ in for(_, line). The simple loop A simple form of the loop is called loop: loop { println!("Hello"); } The preceding code will either output Hello until the application is terminated or the loop reaches a terminating statement. While… The while condition is of slightly more use, as you will see in the following code snippet: while (condition) { // do something } Let's take a look at the following example: fn main() { let mut done = 0u32; while done != 32 { println!("done = {}", done); done+=1; } } The preceding code will output done = 0 to done = 31. The loop terminates when done equals 32. Prematurely terminating a loop Depending on the size of the data being iterated over within a loop, the loop can be costly on processor time. For example, say the server is receiving data from a data-logging application, such as measuring values from a gas chromatograph, over the entire scan, it may record roughly half a million data points with an associated time position. For our purposes, we want to add all of the recorded values until the value is over 1.5 and once that is reached, we can stop the loop. Sound easy? There is one thing not mentioned, there is no guarantee that the recorded value will ever reach over 1.5, so how can we terminate the loop if the value is reached? We can do this one of two ways. First is to use a while loop and introduce a Boolean to act as the test condition. In the following example, my_array represents a very small subsection of the data sent to the server. fn main() { let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9]; let mut counter: usize = 0; let mut result = 0f32; let mut test = false; while test != true { if my_array[counter] > 1.5 { test = true; } else { result += my_array[counter]; counter += 1; } } println!("{}", result); } The result here is 4.4. This code is perfectly acceptable, if slightly long winded. Rust also allows the use of break and continue keywords (if you're familiar with C, they work in the same way). 
Our code using break will be as follows: fn main() { let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9]; let mut result = 0f32; for(_, value) in my_array.iter().enumerate() { if *value > 1.5 { break; } else { result += *value; } } println!("{}", result); } Again, this will give an answer of 4.4, indicating that the two methods used are the equivalent of each other. If we replace break with continue in the preceding code example, we will get the same result (4.4). The difference between break and continue is that continue jumps to the next value in the iteration rather than jumping out, so if we had the final value of my_array as 1.3, the output at the end should be 5.7. When using break and continue, always keep in mind this difference. While it may not crash the code, mistaking break and continue may lead to results that you may not expect or want. Using loop labels Rust allows us to label our loops. This can be very useful (for example with nested loops). These labels act as symbolic names to the loop and as we have a name to the loop, we can instruct the application to perform a task on that name. Consider the following simple example: fn main() { 'outer_loop: for x in 0..10 { 'inner_loop: for y in 0..10 { if x % 2 == 0 { continue 'outer_loop; } if y % 2 == 0 { continue 'inner_loop; } println!("x: {}, y: {}", x, y); } } } What will this code do? Here x % 2 == 0 (or y % 2 == 0) means that if variable divided by two returns no remainder, then the condition is met and it executes the code in the braces. When x % 2 == 0, or when the value of the loop is an even number, we will tell the application to skip to the next iteration of outer_loop, which is an odd number. However, we will also have an inner loop. Again, when y % 2 is an even value, we will tell the application to skip to the next iteration of inner_loop. In this case, the application will output the following results: While this example may seem very simple, it does allow for a great deal of speed when checking data. Let's go back to our previous example of data being sent to the web service. Recall that we have two values—the recorded data and some other value, for ease, it will be a data point. Each data point is recorded 0.2 seconds apart; therefore, every 5th data point is 1 second. This time, we want all of the values where the data is greater than 1.5 and the associated time of that data point but only on a time when it's dead on a second. As we want the code to be understandable and human readable, we can use a loop label on each loop. The following code is not quite correct. Can you spot why? The code compiles as follows: fn main() { let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9, 1.3, 0.1, 1.6, 0.6, 0.9, 1.1, 1.31, 1.49, 1.5, 0.7]; let my_time = vec![0.2f32, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8]; 'time_loop: for(_, time_value) in my_time.iter().enumerate() { 'data_loop: for(_, value) in my_array.iter().enumerate() { if *value < 1.5 { continue 'data_loop; } if *time_value % 5f32 == 0f32 { continue 'time_loop; } println!("Data point = {} at time {}s", *value, *time_value); } } } This example is a very good one to demonstrate the correct operator in use. The issue is the if *time_value % 5f32 == 0f32 line. We are taking a float value and using the modulus of another float to see if we end up with 0 as a float. 
Comparing any value that is not a string, int, long, or bool type to another is never a good plan; especially, if the value is returned by some form of calculation. We can also not simply use continue on the time loop, so, how can we solve this problem? If you recall, we're using _ instead of a named parameter for the enumeration of the loop. These values are always an integer, therefore if we replace _ for a variable name, then we can use % 5 to perform the calculation and the code becomes: 'time_loop: for(time_enum, time_value) in my_time.iter().enumerate() { 'data_loop: for(_, value) in my_array.iter().enumerate() { if *value < 1.5 { continue 'data_loop; } if time_enum % 5 == 0 { continue 'time_loop; } println!("Data point = {} at time {}s", *value, *time_value); } } The next problem is that the output isn't correct. The code gives the following: Data point = 1.7 at time 0.4s Data point = 1.9 at time 0.4s Data point = 1.6 at time 0.4s Data point = 1.5 at time 0.4s Data point = 1.7 at time 0.6s Data point = 1.9 at time 0.6s Data point = 1.6 at time 0.6s Data point = 1.5 at time 0.6s The data point is correct, but the time is way out and continually repeats. We still need the continue statement for the data point step, but the time step is incorrect. There are a couple of solutions, but possibly the simplest will be to store the data and the time into a new vector and then display that data at the end. The following code gets closer to what is required: fn main() { let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9, 1.3, 0.1, 1.6, 0.6, 0.9, 1.1, 1.31, 1.49, 1.5, 0.7]; let my_time = vec![0.2f32, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8]; let mut my_new_array = vec![]; let mut my_new_time = vec![]; 'time_loop: for(t, _) in my_time.iter().enumerate() { 'data_loop: for(v, value) in my_array.iter().enumerate() { if *value < 1.5 { continue 'data_loop; } else { if t % 5 != 0 { my_new_array.push(*value); my_new_time.push(my_time[v]); } } if v == my_array.len() { break; } } } for(m, my_data) in my_new_array.iter().enumerate() { println!("Data = {} at time {}", *my_data, my_new_time[m]); } } We will now get the following output: Data = 1.7 at time 1.4 Data = 1.9 at time 1.6 Data = 1.6 at time 2.2 Data = 1.5 at time 3.4 Data = 1.7 at time 1.4 Yes, we now have the correct data, but the time starts again. We're close, but it's not right yet. We aren't continuing the time_loop loop and we will also need to introduce a break statement. To trigger the break, we will create a new variable called done. When v, the enumerator for my_array, reaches the length of the vector (this is the number of elements in the vector), we will change this from false to true. This is then tested outside of the data_loop. If done == true, break out of the loop. 
The final version of the code is as follows: fn main() { let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9, 1.3, 0.1, 1.6, 0.6, 0.9, 1.1, 1.31, 1.49, 1.5, 0.7]; let my_time = vec![0.2f32, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6]; let mut my_new_array = vec![]; let mut my_new_time = vec![]; let mut done = false; 'time_loop: for(t, _) in my_time.iter().enumerate() { 'data_loop: for(v, value) in my_array.iter().enumerate() { if v == my_array.len() - 1 { done = true; } if *value < 1.5 { continue 'data_loop; } else { if t % 5 != 0 { my_new_array.push(*value); my_new_time.push(my_time[v]); } else { continue 'time_loop; } } } if done {break;} } for(m, my_data) in my_new_array.iter().enumerate() { println!("Data = {} at time {}", *my_data, my_new_time[m]); } } Our final output from the code is this: Recursive functions The final form of loop to consider is known as a recursive function. This is a function that calls itself until a condition is met. In pseudocode, the function looks like this: float my_function(i32:a) { // do something with a if (a != 32) { my_function(a); } else { return a; } } An actual implementation of a recursive function would look like this: fn recurse(n:i32) { let v = match n % 2 { 0 => n / 2, _ => 3 * n + 1 }; println!("{}", v); if v != 1 { recurse(v) } } fn main() { recurse(25) } The idea of a recursive function is very simple, but we need to consider two parts of this code. The first is the let line in the recurse function and what it means: let v = match n % 2 { 0 => n / 2, _ => 3 * n + 1 }; Another way of writing this is as follows: let mut v = 0i32; if n % 2 == 0 { v = n / 2; } else { v = 3 * n + 1; } In C#, this will equate to the following: var v = n % 2 == 0 ? n / 2 : 3 * n + 1; The second part is that the semicolon is not being used everywhere. Consider the following example: fn main() { recurse(25) } What is the difference between having and not having a semicolon? Rust operates on a system of blocks called closures. The semicolon closes a block. Let's see what that means. Consider the following code as an example: fn main() { let x = 5u32; let y = { let x_squared = x * x; let x_cube = x_squared * x; x_cube + x_squared + x }; let z = { 2 * x; }; println!("x is {:?}", x); println!("y is {:?}", y); println!("z is {:?}", z); } We have two different uses of the semicolon. If we look at the let y line first: let y = { let x_squared = x * x; let x_cube = x_squared * x; x_cube + x_squared + x // no semi-colon }; This code does the following: The code within the braces is processed. The final line, without the semicolon, is assigned to y. Essentially, this is considered as an inline function that returns the line without the semicolon into the variable. The second line to consider is for z: let z = { 2 * x; }; Again, the code within the braces is evaluated. In this case, the line ends with a semicolon, so the result is suppressed and () to z. When it is executed, we will get the following results: In the code example, the line within fn main calling recurse gives the same result with or without the semicolon. Summary In this, we've covered the different types of loops that are available within Rust, as well as gained an understanding of when to use a semicolon and what it means to omit it. We have also considered enumeration and iteration over a vector and array and how to handle the data held within them. 
Resources for Article: Further resources on this subject: Extra, Extra Collection, and Closure Changes that Rock! [article] Create a User Profile System and use the Null Coalesce Operator [article] Fine Tune Your Web Application by Profiling and Automation [article]

Features of Dynamics GP

Packt
14 Oct 2016
10 min read
In this article by Ian Grieve and Mark Polino, authors of the book Microsoft Dynamics GP 2016 Cookbook, we will see a few features of Dynamics GP. Dynamics GP provides a number of features to better organize the overall system and improve its usefulness for all users; these recipes are designed for administrators rather than typical users. This article demonstrates how to implement and fine-tune these features to provide the most benefit. In this article, we will look at the following topics:

- Speeding account entry with account aliases
- Cleaning account lookups by removing accounts from lookups
- Streamlining payables processing by prioritizing vendors
- Getting clarity with user-defined fields

Speeding account entry with account aliases

As organizations grow, the chart of accounts tends to grow larger and more complex as well. Companies want to segment their business by departments, locations, or divisions; all of this means that more and more accounts get added to the chart and, as the chart of accounts grows, it gets more difficult to select the right account. Dynamics GP provides the Account Alias feature as a way to quickly select the right account. Account aliases provide a way to create shortcuts to specific accounts, which can dramatically speed up the process of selecting the correct account. We'll look at how that works in this recipe.

Getting ready

Setting up account aliases requires a user with access to the Account Maintenance window. To get to this window, perform the following steps:

1. Select Financial from the Navigation pane on the left.
2. Click Accounts on the Financial Area page under Cards. This will open the Account Maintenance window.
3. Click the lookup button (magnifying glass) next to the account number, or use the keyboard shortcut Ctrl + Q.
4. Find and select account 000-2100-00. In the middle of the Account Maintenance window is the Account Alias field.
5. Enter AP in the Alias field. This associates the letters AP with the selected accounts payable account, which means that the user now only has to enter AP instead of the full account number to use the accounts payable account.

How to do it…

Once aliases have been set up, let's see how the user can quickly select an account using the alias:

1. To demonstrate how this works, click Financial on the Navigation pane on the left.
2. Select General from the Financial area page under Transactions.
3. On the Transaction Entry window, select the top line in the grid area on the lower half of the window.
4. Click the expansion button (represented by a blue arrow) next to the Account heading to open the Account Entry window.
5. In the Alias field, type AP and press Enter. The Account Alias window will close and the account represented by the alias will appear in the Transaction Entry window.

How it works…

Account aliases provide quick shortcuts for account entry. Keeping them short and obvious makes them easy to use; aliases are less useful if users have to think about them. Limiting them to the most commonly used accounts makes them more useful. Most users don't mind occasionally looking up the odd account, but they shouldn't have to memorize long account strings for regularly used account numbers. It's counter-productive to put an alias on every account, since that would make finding the right alias as difficult as finding the right account number. The setup process should be performed on the most commonly used accounts to provide easy access.
Cleaning account lookups by removing accounts from lookups A consequence of company growth is that the chart of accounts grows and the account lookups can get clogged up by the number of accounts on the system. While the general ledger will stop showing an account in a lookup when the account is made inactive, other modules will continue to show these inactive codes. However, Dynamics GP does contain a feature which can be used to remove inactive account from lookups; this same feature can also be used to remove accounts from lookups in series where the account should not be used, such as a sales account in the purchasing or inventory series. How to do it… Here we will see how to remove inactive accounts from lookups. Open Financial from the Navigation pane on the left. In the main Area page, under Cards, select Account. Enter, or do a lookup for, the account to be made inactive and removed from the lookups: Mark the Inactive checkbox. Press and hold the Ctrl key and click on each of the lines in the Include in Lookup list. Click Save to commit the changes. Next time a lookup is done in any of the now deselected modules, the account will not be included in the list. If the account is to be included in lookups in some modules but not others, simply leave selected the modules in which the account should be included: How it works… Accounts will only show in lookups when the series is selected in the Include in Lookup list. For series other than General Ledger, simply marking an account as Inactive is not enough to remove it from the lookup although the code can't be used when the account is inactive. Streamlining payables processing by prioritizing vendors Management of vendor payments is a critical activity for any firm; it's even more critical in difficult economic times. Companies need to understand and control payments and a key component of this is prioritizing vendors. Every firm has both critical and expendable vendors. Paying critical vendors on time is a key business driver. For example, a newspaper that doesn't pay their newsprint supplier won't be in business long. However, they can safely delay payments to their janitorial vendor without worrying about going under. Dynamics GP provides a mechanism to prioritize vendors and apply those priorities when selecting which checks to print. That is the focus of this recipe. Getting ready Setting this up first requires that the company figure out who the priority vendors are. That part is beyond the scope of this book. The Vendor Priority field in Dynamics GP is a three-character field, but users shouldn't be seduced by the possibilities of three characters. A best practice is to keep the priorities simple by using 1, 2, 3 or A, B, C. Anything more complicated than that tends to confuse users and actually makes it harder to prioritize vendors. Once the vendor priorities have been determined, the priority needs to be set in Dynamics GP. Attaching a priority to a vendor is the first step. To do that follow these steps: Select Purchasing from the Navigation pane. In the Purchasing area page under Cards, click Vendor Maintenance. Once the Vendor Maintenance window opens, select the lookup button (magnifying glass) next to Vendor ID. Select a vendor and click OK. Once the vendor information is populated, click the Options button. This opens the Vendor Maintenance Options screen. In the center left is the Payment Priority field. 
Enter 1 in Payment Priority and click Save: How to do it… Now that a vendor has been set up with a priority, let's see how to apply that information when selecting checks to print: To use vendor priorities to select invoices for payment, click Select Checks from the Transactions on the Purchasing area page. In the Select Payables Checks window enter CHECKS to name the check batch. Press Tab to move off of the Batch ID field and click Add to add the batch. Pick a checkbook ID and click Save to save the batch. In the Select By field, click the drop down box and select Payment Priority. Enter 1 in both the From and To boxes. Click the Insert >> button to lock in Payment Priority as an option: Click Build Batch at the top. If there are any transactions where the vendor is set to a priority of 1 this will populate a batch of checks based on the vendor priority: How it works… Since priority is one of the built-in options for selecting checks, it's easy to ensure that high priority vendors get selected to be paid first. All of this is easily accomplished with basic Dynamics GP functionality that most people miss. Getting clarity with user-defined fields Throughout Dynamics GP, maintenance cards typically include at least two user defined fields. User-defined fields can be renamed in the setup screen for the related module. This provides a great mechanism to add in special information. We'll take a look at a typical use of a user defined field in this recipe. How to do it… For our example, we'll look at using a user-defined field to rename the User-Defined1 field to Region in Customer Master:. To do so use the following steps: From the Navigation pane select Sales. In the Sales area page click Setup, then Receivables, and finally Options. In the User-Defined 1 field type Region and click OK to close each window: Back on the Sales area page click Customer under the Cards area. On the bottom left above User-Defined 2 is the newly named Region field ready to be filled in: How it works… Changing the field name only changes the display field; it doesn't change the underlying field name in the database. SmartLists are smart enough to show the new name. In our example, the description Region would appear in a SmartList, not User-Defined 1. User-defined fields like this are present for customers, vendors, accounts, sales orders, fixed assets, inventory items, and purchase receipts among others. They can each be renamed in their respective setup screens. There's more... All user defined fields are not the same; some have special features. Special User-Defined 1 features User-Defined 1 has special features inside of Dynamics GP. Most of the built-in reports inside of Dynamics GP allow sorting and selection with the User-Defined 1 field. These options aren't provided for User-Defined 2. Consequently, administrators should carefully consider what information belongs in User-Defined 1 before changing its name since the effects of this selection will be felt throughout the system. Company setup user-defined fields On the Company Setup window there are two user defined fields at the top right and there is no option in Dynamics GP to rename these fields. The Company Setup window is accessed by clicking Administration on the Navigation pane, then clicking on Company under the Setup and Company headers. Expanded user-defined fields Certain areas such as Fixed Assets, Inventory Items, and Purchase Receipts have more complex types of user-defined fields that can include dates, list selections, and currency. 
Summary
So in this article, we covered a few features of Dynamics GP, such as speeding up account entries, cleaning account lookups, and so on. For more information on Dynamics GP, you can check out the following books by Packt:
Microsoft Dynamics GP 2010 Cookbook: https://www.packtpub.com/application-development/microsoft-dynamics-gp-2010-cookbook
Microsoft Dynamics GP 2013 Cookbook: https://www.packtpub.com/application-development/microsoft-dynamics-gp-2013-cookbook
Microsoft Dynamics GP 2010 Implementation: https://www.packtpub.com/application-development/microsoft-dynamics-gp-2010-implementation
Resources for Article: Further resources on this subject: Code Analysis and Debugging Tools in Microsoft Dynamics NAV 2009 [article] Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [article] MySQL Data Transfer using Sql Server Integration Services (SSIS) [article]

article-image-deployment-and-devops
Packt
14 Oct 2016
16 min read

Deployment and DevOps

In this article by Makoto Hashimoto and Nicolas Modrzyk, the authors of the book Clojure Programming Cookbook, we will cover the recipe Clojure on Amazon Web Services. (For more resources related to this topic, see here.)
Clojure on Amazon Web Services
This recipe is a standalone dish where you can learn how to combine the elegance of Clojure with Amazon Web Services (AWS). AWS launched in 2006 and is used by many businesses because its web services are easy to consume. This style of on-demand service is becoming more and more popular: you can use compute resources and software services on demand, without having to prepare hardware or install software yourself.
You will mostly make use of the amazonica library, which is a comprehensive Clojure client for the entire Amazon AWS set of APIs. This library wraps the Amazon AWS APIs and supports most AWS services, including EC2, S3, Lambda, Kinesis, Elastic Beanstalk, Elastic MapReduce, and RedShift. This recipe has received a lot of its content and love from Robin Birtle, a leading member of the Clojure Community in Japan.
Getting ready
You need an AWS account and credentials to use AWS, so this recipe starts by showing you how to do the setup and acquire the necessary keys to get started.
Signing up on AWS
You need to sign up for AWS if you don't have an account yet. In that case, go to https://aws.amazon.com, click on Sign In to the Console, and follow the instructions for creating your account. To complete the sign up, enter the number of a valid credit card and a phone number.
Getting the access key and secret access key
To call the API, you now need your AWS access key and secret access key. Go to the AWS console and click on your name, which is located in the top right corner of the screen, and select Security Credential, as shown in the following screenshot:
Select Access Keys (Access Key ID and Secret Access Key), as shown in the following screenshot:
Then, the following screen appears; click on New Access Key:
You can see your access key and secret access key, as shown in the following screenshot:
Copy and save these strings for later use.
Setting up dependencies in your project.clj
Let's add the amazonica library to your project.clj and restart your REPL:
:dependencies [[org.clojure/clojure "1.8.0"]
               [amazonica "0.3.67"]]
How to do it…
From here on, we will go through some sample usage of the core Amazon services, accessed with Clojure and the amazonica library. The three main services we will review are as follows:
EC2, Amazon's Elastic Compute Cloud, which allows you to run virtual machines on Amazon's cloud
S3, Simple Storage Service, which gives you cloud-based storage
SQS, Simple Queue Service, which gives you cloud-based data streaming and processing
Let's go through each of these one by one.
Using EC2
Let's assume you have an EC2 micro instance in the Tokyo region. First of all, we will declare the core and ec2 namespaces of amazonica:
(ns aws-examples.ec2-example
  (:require [amazonica.aws.ec2 :as ec2]
            [amazonica.core :as core]))
We will set the access key and secret access key to enable the AWS client API to access AWS. core/defcredential does this as follows:
(core/defcredential "Your Access Key" "Your Secret Access Key" "your region")
;;=> {:access-key "Your Access Key", :secret-key "Your Secret Access Key", :endpoint "your region"}
The region you need to specify is, for example, ap-northeast-1, ap-south-1, or us-west-2.
To get full regions list, use ec2/describe-regions: (ec2/describe-regions) ;;=> {:regions [{:region-name "ap-south-1", :endpoint "ec2.ap-south-1.amazonaws.com"} ;;=> ..... ;;=> {:region-name "ap-northeast-2", :endpoint "ec2.ap-northeast-2.amazonaws.com"} ;;=> {:region-name "ap-northeast-1", :endpoint "ec2.ap-northeast-1.amazonaws.com"} ;;=> ..... ;;=> {:region-name "us-west-2", :endpoint "ec2.us-west-2.amazonaws.com"}]} ec2/describe-instances returns very long information as the following: (ec2/describe-instances) ;;=> {:reservations [{:reservation-id "r-8efe3c2b", :requester-id "226008221399", ;;=> :owner-id "182672843130", :group-names [], :groups [], .... To get only necessary information of instance, we define the following __get-instances-info: (defn get-instances-info[] (let [inst (ec2/describe-instances)] (->> (mapcat :instances (inst :reservations)) (map #(vector [:node-name (->> (filter (fn [x] (= (:key x)) "Name" ) (:tags %)) first :value)] [:status (get-in % [:state :name])] [:instance-id (:instance-id %)] [:private-dns-name (:private-dns-name %)] [:global-ip (-> % :network-interfaces first :private-ip-addresses first :association :public-ip)] [:private-ip (-> % :network-interfaces first :private-ip-addresses first :private-ip-address)])) (map #(into {} %)) (sort-by :node-name)))) ;;=> #'aws-examples.ec2-example/get-instances-info Let's try to use the following function: get-instances-info) ;;=> ({:node-name "ECS Instance - amazon-ecs-cli-setup-my-cluster", ;;=> :status "running", ;;=> :instance-id "i-a1257a3e", ;;=> :private-dns-name "ip-10-0-0-212.ap-northeast-1.compute.internal", ;;=> :global-ip "54.199.234.18", ;;=> :private-ip "10.0.0.212"} ;;=> {:node-name "EcsInstanceAsg", ;;=> :status "terminated", ;;=> :instance-id "i-c5bbef5a", ;;=> :private-dns-name "", ;;=> :global-ip nil, ;;=> :private-ip nil}) As in the preceding example function, we can obtain instance-id list. So, we can start/stop instances using ec2/start-instances and ec2/stop-instances_ accordingly: (ec2/start-instances :instance-ids '("i-c5bbef5a")) ;;=> {:starting-instances ;;=> [{:previous-state {:code 80, :name "stopped"}, ;;=> :current-state {:code 0, :name "pending"}, ;;=> :instance-id "i-c5bbef5a"}]} (ec2/stop-instances :instance-ids '("i-c5bbef5a")) ;;=> {:stopping-instances ;;=> [{:previous-state {:code 16, :name "running"}, ;;=> :current-state {:code 64, :name "stopping"}, ;;=> :instance-id "i-c5bbef5a"}]} Using S3 Amazon S3 is secure, durable, and scalable storage in AWS cloud. It's easy to use for developers and other users. S3 also provide high durability, availability, and low cost. The durability is 99.999999999 % and the availability is 99.99 %. 
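Before running the S3 examples that follow, the s3 alias needs to be required. The following is a minimal sketch; the aws-examples.s3-example namespace name is an assumption for illustration, and the credentials are the same ones declared earlier:
(ns aws-examples.s3-example
  (:require [amazonica.aws.s3 :as s3]
            [amazonica.core :as core]))
;; reuse the access key, secret key, and region declared earlier with core/defcredential
(core/defcredential "Your Access Key" "Your Secret Access Key" "ap-northeast-1")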
Let's create s3 buckets names makoto-bucket-1, makoto-bucket-2, and makoto-bucket-3 as follows: (s3/create-bucket "makoto-bucket-1") ;;=> {:name "makoto-bucket-1"} (s3/create-bucket "makoto-bucket-2") ;;=> {:name "makoto-bucket-2"} (s3/create-bucket "makoto-bucket-3") ;;=> {:name "makoto-bucket-3"} s3/list-buckets returns buckets information: (s3/list-buckets) ;;=> [{:creation-date #object[org.joda.time.DateTime 0x6a09e119 "2016-08-01T07:01:05.000+09:00"], ;;=> :owner ;;=> {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27", ;;=> :display-name "tokoma1"}, ;;=> :name "makoto-bucket-1"} ;;=> {:creation-date #object[org.joda.time.DateTime 0x7392252c "2016-08-01T17:35:30.000+09:00"], ;;=> :owner ;;=> {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27", ;;=> :display-name "tokoma1"}, ;;=> :name "makoto-bucket-2"} ;;=> {:creation-date #object[org.joda.time.DateTime 0x4d59b4cb "2016-08-01T17:38:59.000+09:00"], ;;=> :owner ;;=> {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27", ;;=> :display-name "tokoma1"}, ;;=> :name "makoto-bucket-3"}] We can see that there are three buckets in your AWS console, as shown in the following screenshot: Let's delete two of the three buckets as follows: (s3/list-buckets) ;;=> [{:creation-date #object[org.joda.time.DateTime 0x56387509 "2016-08-01T07:01:05.000+09:00"], ;;=> :owner {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27", :display-name "tokoma1"}, :name "makoto-bucket-1"}] We can see only one bucket now, as shown in the following screenshot: Now we will demonstrate how to send your local data to s3. s3/put-object uploads a file content to the specified bucket and key. The following code uploads /etc/hosts and makoto-bucket-1: (s3/put-object :bucket-name "makoto-bucket-1" :key "test/hosts" :file (java.io.File. "/etc/hosts")) ;;=> {:requester-charged? false, :content-md5 "HkBljfktNTl06yScnMRsjA==", ;;=> :etag "1e40658df92d353974eb249c9cc46c8c", :metadata {:content-disposition nil, ;;=> :expiration-time-rule-id nil, :user-metadata nil, :instance-length 0, :version-id nil, ;;=> :server-side-encryption nil, :etag "1e40658df92d353974eb249c9cc46c8c", :last-modified nil, ;;=> :cache-control nil, :http-expires-date nil, :content-length 0, :content-type nil, ;;=> :restore-expiration-time nil, :content-encoding nil, :expiration-time nil, :content-md5 nil, ;;=> :ongoing-restore nil}} s3/list-objects lists objects in a bucket as follows: (s3/list-objects :bucket-name "makoto-bucket-1") ;;=> {:truncated? false, :bucket-name "makoto-bucket-1", :max-keys 1000, :common-prefixes [], ;;=> :object-summaries [{:storage-class "STANDARD", :bucket-name "makoto-bucket-1", ;;=> :etag "1e40658df92d353974eb249c9cc46c8c", ;;=> :last-modified #object[org.joda.time.DateTime 0x1b76029c "2016-08-01T07:01:16.000+09:00"], ;;=> :owner {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27", ;;=> :display-name "tokoma1"}, :key "test/hosts", :size 380}]} To obtain the contents of objects in buckets, use s3/get-object: (s3/get-object :bucket-name "makoto-bucket-1" :key "test/hosts") ;;=> {:bucket-name "makoto-bucket-1", :key "test/hosts", ;;=> :input-stream #object[com.amazonaws.services.s3.model.S3ObjectInputStream 0x24f810e9 ;;=> ...... 
;;=> :last-modified #object[org.joda.time.DateTime 0x79ad1ca9 "2016-08-01T07:01:16.000+09:00"], ;;=> :cache-control nil, :http-expires-date nil, :content-length 380, :content-type "application/octet-stream", ;;=> :restore-expiration-time nil, :content-encoding nil, :expiration-time nil, :content-md5 nil, ;;=> :ongoing-restore nil}} The result is a map, the content is a stream data, and the value of :object-content. To get the result as a string, we will use slurp_ as follows: (slurp (:object-content (s3/get-object :bucket-name "makoto-bucket-1" :key "test/hosts"))) ;;=> "127.0.0.1tlocalhostn127.0.1.1tphenixnn# The following lines are desirable for IPv6 capable hostsn::1 ip6-localhost ip6-loopbacknfe00::0 ip6-localnetnff00::0 ip6-mcastprefixnff02::1 ip6-allnodesnff02::2 ip6-allroutersnn52.8.30.189 my-cluster01-proxy1 n52.8.169.10 my-cluster01-master1 n52.8.198.115 my-cluster01-slave01 n52.9.12.12 my-cluster01-slave02nn52.8.197.100 my-node01n" Using Amazon SQS Amazon SQS is a high-performance, high-availability, and scalable Queue Service. We will demonstrate how easy it is to handle messages on queues in SQS using Clojure: (ns aws-examples.sqs-example (:require [amazonica.core :as core] [amazonica.aws.sqs :as sqs])) To create a queue, you can use sqs/create-queue as follows: (sqs/create-queue :queue-name "makoto-queue" :attributes {:VisibilityTimeout 3000 :MaximumMessageSize 65536 :MessageRetentionPeriod 1209600 :ReceiveMessageWaitTimeSeconds 15}) ;;=> {:queue-url "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue"} To get information of queue, use sqs/get-queue-attributes as follows: (sqs/get-queue-attributes "makoto-queue") ;;=> {:QueueArn "arn:aws:sqs:ap-northeast-1:864062283993:makoto-queue", ... You can configure a dead letter queue using sqs/assign-dead-letter-queue as follows: (sqs/create-queue "DLQ") ;;=> {:queue-url "https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ"} (sqs/assign-dead-letter-queue (sqs/find-queue "makoto-queue") (sqs/find-queue "DLQ") 10) ;;=> nil Let's list queues defined: (sqs/list-queues) ;;=> {:queue-urls ;;=> ["https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ" ;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue"]} The following image is of the console of SQS: Let's examine URLs of queues: (sqs/find-queue "makoto-queue") ;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue" (sqs/find-queue "DLQ") ;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ" To send messages, we use sqs/send-message: (sqs/send-message (sqs/find-queue "makoto-queue") "hello sqs from Clojure") ;;=> {:md5of-message-body "00129c8cc3c7081893765352a2f71f97", :message-id "690ddd68-a2f6-45de-b6f1-164eb3c9370d"} To receive messages, we use sqs/receive-message: (sqs/receive-message "makoto-queue") ;;=> {:messages [ ;;=> {:md5of-body "00129c8cc3c7081893765352a2f71f97", ;;=> :receipt-handle "AQEB.....", :message-id "bd56fea8-4c9f-4946-9521-1d97057f1a06", ;;=> :body "hello sqs from Clojure"}]} To remove all messages in your queues, we use sqs/purge-queue: (sqs/purge-queue :queue-url (sqs/find-queue "makoto-queue")) ;;=> nil To delete queues, we use sqs/delete-queue: (sqs/delete-queue "makoto-queue") ;;=> nil (sqs/delete-queue "DLQ") ;;=> nil Serverless Clojure with AWS Lambda Lambda is an AWS product that allows you to run Clojure code without the hassle and expense of setting up and maintaining a server environment. 
Behind the scenes, there are still servers involved, but as far as you are concerned, it is a serverless environment: upload a JAR and you are good to go. Code running on Lambda is invoked in response to an event, such as a file being uploaded to S3, or according to a specified schedule. In production environments, Lambda is normally used in a wider AWS deployment that includes standard server environments, with Lambda handling discrete computational tasks, particularly those that benefit from the horizontal scaling Lambda provides without any configuration. For Clojurians working on a personal project, Lambda is a wonderful combination of power and limitation. Just how far can you hack Lambda given the constraints imposed by AWS?
Clojure namespace helloworld
Start off with a clean, empty project generated using lein new. From there, in your IDE of choice, create a package and a new Clojure source file. In the following example, the package is com.sakkam and the source file uses the Clojure namespace helloworld. The entry point to your Lambda code is a Clojure function that is exposed as a method of a Java class using Clojure's gen-class. Similar to use and require, the gen-class function can be included in the Clojure ns definition, as follows, or specified separately. You can use any name you want for the handler function, but the prefix must be a hyphen unless an alternate prefix is specified as part of the :methods definition:
(ns com.sakkam.lambda.helloworld
  (:gen-class
    :methods [^:static [handler [String] String]]))

(defn -handler [s]
  (println (str "Hello, " s)))
From the command line, use lein uberjar to create a JAR that can be uploaded to AWS Lambda.
Hello World – the AWS part
Getting your Hello World to work is now a matter of creating a new Lambda within AWS, uploading your JAR, and configuring your handler.
Hello Stream
The handler method we used in our Hello World Lambda function was coded directly and could be extended to accept custom Java classes as part of the method signature. However, for more complex Java integrations, implementing one of AWS's standard interfaces for Lambda is both straightforward and feels more like idiomatic Clojure. The following example replaces our own definition of a handler method with an implementation of a standard interface that is provided as part of the aws-lambda-java-core library.
First of all, add the dependency [com.amazonaws/aws-lambda-java-core "1.0.0"] to your project.clj. While you are modifying your project.clj, also add the dependency [org.clojure/data.json "0.2.6"], since we will be manipulating JSON-formatted objects as part of this exercise. Then, either create a new Clojure namespace or modify your existing one so that it looks like the following (the handler function must be named -handleRequest, since handleRequest is specified as part of the interface):
(ns aws-examples.lambda-example
  (:gen-class
    :implements [com.amazonaws.services.lambda.runtime.RequestStreamHandler])
  (:require [clojure.java.io :as io]
            [clojure.data.json :as json]
            [clojure.string :as str]))

(defn -handleRequest [this is os context]
  (let [w (io/writer os)
        parameters (json/read (io/reader is) :key-fn keyword)]
    (println "Lambda Hello Stream Output ")
    (println "this class: " (class this))
    (println "is class:" (class is))
    (println "os class:" (class os))
    (println "context class:" (class context))
    (println "Parameters are " parameters)
    (.flush w)))
Use lein uberjar again to create a JAR file.
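Note that lein uberjar will only include the generated class files if the gen-class namespaces are compiled ahead of time. The following is a minimal project.clj sketch showing this; the project name, version, and :aot :all setting are assumptions for illustration, not taken from the original project:
(defproject lambda-example "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.8.0"]]
  ;; AOT-compile all namespaces so the gen-class output ends up as .class files in the uberjar
  :aot :all)
In the AWS console, the handler for the Hello World example would then be specified as something like com.sakkam.lambda.helloworld::handler, that is, the generated class name followed by the exposed static method.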
Since we have an existing Lambda function in AWS, we can overwrite the JAR used in the Hello World example. Since the handler function name has changed, we must modify our Lambda configuration to match. This time, the default test that provides parameters in JSON format should work as is, and the result will look something like the following:
We can very easily get a more interesting test of Hello Stream by configuring this Lambda to run whenever a file is uploaded to S3. At the Lambda management page, choose the Event Sources tab, click on Add Event, and choose an S3 bucket to which you can easily add a file. Now, upload a file to the specified S3 bucket and then navigate to the logs of the Hello World Lambda function. You will find that Hello World has been automatically invoked, and a fairly complicated object that represents the uploaded file is supplied as a parameter to our Lambda function.
Real-world Lambdas
To graduate from a Hello World Lambda to real-world Lambdas, the chances are you are going to need richer integration with other AWS facilities. As a minimum, you will probably want to write a file to an S3 bucket or insert a notification into an SNS queue. Amazon provides an SDK that makes this integration straightforward for developers using standard Java. For Clojurians, using the Amazon Clojure wrapper Amazonica is a very fast and easy way to achieve the same.
How it works…
Here, we will explain how AWS works.
What Is Amazon EC2?
With EC2, we don't need to buy hardware or install an operating system. Amazon provides various types of instances for customers' use cases. Each instance type has a different combination of CPU, memory, storage, and networking capacity. Some instance types are given in the following table. You can select the appropriate instance according to the characteristics of your application.
Instance type  Description
M4             M4 instances are designed for general-purpose computing. This family provides a balance of CPU, memory, and network bandwidth.
C4             C4 instances are designed for applications that consume CPU resources. C4 offers the highest CPU performance at the lowest cost.
R3             R3 instances are for memory-intensive applications.
G2             G2 instances have NVIDIA GPUs and are used for graphics applications and GPU computing applications such as deep learning.
The following table shows the model variations of the M4 instance type. You can choose the best model among them.
Model        vCPU  RAM (GiB)  EBS bandwidth (Mbps)
m4.large     2     8          450
m4.xlarge    4     16         750
m4.2xlarge   8     32         1,000
m4.4xlarge   16    64         2,000
m4.10xlarge  40    160        4,000
Amazon S3
Amazon S3 is storage for the cloud. It provides a simple web interface that allows you to store and retrieve data. The S3 API is easy to use while still ensuring security. S3 provides cloud storage services that are scalable, reliable, fast, and inexpensive.
Buckets and Keys
Buckets are containers for objects stored in Amazon S3. Objects are stored in buckets. A bucket name is unique across all regions in the world, so bucket names are the top-level identities of S3 and the units of charging and access control. Keys are the unique identifiers for an object within a bucket. Every object in a bucket has exactly one key. Keys are the second-level identifiers and must be unique within a bucket. To identify an object, you use the combination of bucket name and key name.
Objects
Objects are accessed by bucket name and key. Objects consist of data and metadata. Metadata is a set of name-value pairs that describe the characteristics of the object.
Examples of metadata are the date last modified and content type. Objects can have multiple versions of data. There's more… It is clearly impossible to review all the different APIs for all the different services proposed via the Amazonica library, but you would probably get the feeling of having tremendous powers in your hands right now. (Don't forget to give that credit card back to your boss now …) Some other examples of Amazon services are as follows: Amazon IoT: This proposes a way to get connected devices easily and securely interact with cloud applications and other devices. Amazon Kinesis: This gives you ways of easily loading massive volumes of streaming data into AWS and easily analyzing them through streaming techniques. Summary We hope you enjoyed this appetizer to the book Clojure Programming Cookbook, which will present you a set of progressive readings to improve your Clojure skills, and make it so that Clojure becomes your de facto everyday language for professional and efficient work. This book presents different topics of generic programming, which are always to the point, with some fun so that each recipe feels not like a classroom, but more like a fun read, with challenging exercises left to the reader to gradually build up skills. See you in the book! Resources for Article: Further resources on this subject: Customizing Xtext Components [article] Reactive Programming and the Flux Architecture [article] Setup Routine for an Enterprise Spring Application [article]
article-image-fast-data-manipulation-r
Packt
14 Oct 2016
28 min read

Fast Data Manipulation with R

Data analysis is a combination of art and science. The art part consists of data exploration and visualization, which is usually done best with better intuition and understanding of the data. The science part consists of statistical analysis, which relies on concrete knowledge of statistics and analytic skills. However, both parts of a serious research require proper tools and good skills to work with them. R is exactly the proper tool to do data analysis with. In this article by Kun Ren, author of the book Learning R Programming, we will discuss how R and data.table package make it easy to transform data and, thus, greatly unleash our productivity. (For more resources related to this topic, see here.) Loading data as data frames The most basic data structures in R are atomic vectors, such as. numeric, logical, character, and complex vector, and list. An atomic vector stores elements of the same type while list is allowed to store different types of elements. The most commonly used data structure in R to store real-world data is data frame. A data frame stores data in tabular form. In essence, a data frame is a list of vectors with equal length but maybe different types. Most of the code in this article is based on a group of fictitious data about some products (you can download the data at https://gist.github.com/renkun-ken/ba2d33f21efded23db66a68240c20c92). We will use the readr package to load the data for better handling of column types. If you don't have this package installed, please run install.packages("readr"). library(readr) product_info <- read_csv("data/product-info.csv") product_info ##    id      name  type   class released ## 1 T01    SupCar   toy vehicle      yes ## 2 T02  SupPlane   toy vehicle       no ## 3 M01     JeepX model vehicle      yes ## 4 M02 AircraftX model vehicle      yes ## 5 M03    Runner model  people      yes ## 6 M04    Dancer model  people       no Once the data is loaded into memory as a data frame, we can take a look at its column types, shown as follows: sapply(product_info, class) ##          id        name        type       class    released ## "character" "character" "character" "character" "character" Using built-in functions to manipulate data frames Although a data frame is essentially a list of vectors, we can access it like a matrix due to all column vectors being the same length. To select rows that meet certain conditions, we will supply a logical vector as the first argument of [] while the second is left empty. For example, we can take out all rows of toy type, shown as follows: product_info[product_info$type == "toy", ] ##    id     name type   class released ## 1 T01   SupCar  toy vehicle      yes ## 2 T02 SupPlane  toy vehicle       no Or, we can take out all rows that are not released. product_info[product_info$released == "no", ] ##    id     name  type   class released ## 2 T02 SupPlane   toy vehicle       no ## 6 M04   Dancer model  people       no To filter columns, we can supply a character vector as the second argument while the first is left empty, which is exactly the same with how we subset a matrix. product_info[1:3, c("id", "name", "type")] ##    id     name  type ## 1 T01   SupCar   toy ## 2 T02 SupPlane   toy ## 3 M01    JeepX model Alternatively, we can filter the data frame by regarding it as a list. We can supply only one character vector of column names in []. 
product_info[c("id", "name", "class")] ##    id      name   class ## 1 T01    SupCar vehicle ## 2 T02  SupPlane vehicle ## 3 M01     JeepX vehicle ## 4 M02 AircraftX vehicle ## 5 M03    Runner  people ## 6 M04    Dancer  people To filter a data frame by both row and column, we can supply a vector as the first argument to select rows and a vector as the second to select columns. product_info[product_info$type == "toy", c("name", "class", "released")] ##       name   class released ## 1   SupCar vehicle      yes ## 2 SupPlane vehicle       no If the row filtering condition is based on values of certain columns, the preceding code can be very redundant, especially when the condition gets more complicated. Another built-in function to simplify code is subset, as introduced previously. subset(product_info,   subset = type == "model" & released == "yes",   select = name:class) ##        name  type   class ## 3     JeepX model vehicle ## 4 AircraftX model vehicle ## 5    Runner model  people The subset function uses nonstandard evaluation so that we can directly use the columns of the data frame without typing product_info many times because the expressions are meant to be evaluated in the context of the data frame. Similarly, we can use with to evaluate an expression in the context of the data frame, that is, the columns of the data frame can be used as symbols in the expression without repeatedly specifying the data frame. with(product_info, name[released == "no"]) ## [1] "SupPlane" "Dancer" The expression can be more than a simple subsetting. We can summarize the data by counting the occurrences of each possible value of a vector. For example, we can create a table of occurrences of types of records that are released. with(product_info, table(type[released == "yes"])) ## ## model   toy ##     3     1 In addition to the table of product information, we also have a table of product statistics that describe some properties of each product. product_stats <- read_csv("data/product-stats.csv") product_stats ##    id material size weight ## 1 T01    Metal  120   10.0 ## 2 T02    Metal  350   45.0 ## 3 M01 Plastics   50     NA ## 4 M02 Plastics   85    3.0 ## 5 M03     Wood   15     NA ## 6 M04     Wood   16    0.6 Now, think of how we can get the names of products with the top three largest sizes? One way is to sort the records in product_stats by size in descending order, select id values of the top three records, and use these values to filter rows of product_info by id. top_3_id <- product_stats[order(product_stats$size, decreasing = TRUE), "id"][1:3] product_info[product_info$id %in% top_3_id, ] ##    id      name  type   class released ## 1 T01    SupCar   toy vehicle      yes ## 2 T02  SupPlane   toy vehicle       no ## 4 M02 AircraftX model vehicle      yes This approach looks quite redundant. Note that product_info and product_stats actually describe the same set of products in different perspectives. The connection between these two tables is the id column. Each id is unique and means the same product. To access both sets of information, we can put the two tables together into one data frame. 
The simplest way to do this is use merge: product_table <- merge(product_info, product_stats, by = "id") product_table ##    id      name  type   class released material size weight ## 1 M01     JeepX model vehicle      yes Plastics   50     NA ## 2 M02 AircraftX model vehicle      yes Plastics   85    3.0 ## 3 M03    Runner model  people      yes     Wood   15     NA ## 4 M04    Dancer model  people       no     Wood   16    0.6 ## 5 T01    SupCar   toy vehicle      yes    Metal  120   10.0 ## 6 T02  SupPlane   toy vehicle       no    Metal  350   45.0 Now, we can create a new data frame that is a combined version of product_table and product_info with a shared id column. In fact, if you reorder the records in the second table, the two tables still can be correctly merged. With the combined version, we can do things more easily. For example, with the merged version, we can sort the data frame with any column in one table we loaded without having to manually work with the other. product_table[order(product_table$size), ] ##    id      name  type   class released material size weight ## 3 M03    Runner model  people      yes     Wood   15     NA ## 4 M04    Dancer model  people       no     Wood   16    0.6 ## 1 M01     JeepX model vehicle      yes Plastics   50     NA ## 2 M02 AircraftX model vehicle      yes Plastics   85    3.0 ## 5 T01    SupCar   toy vehicle      yes    Metal  120   10.0 ## 6 T02  SupPlane   toy vehicle       no    Metal  350   45.0 To solve the problem, we can directly use the merged table and get the same answer. product_table[order(product_table$size, decreasing = TRUE), "name"][1:3] ## [1] "SupPlane"  "SupCar"    "AircraftX" The merged data frame allows us to sort the records by a column in one data frame and filter the records by a column in the other. For example, we can first sort the product records by weight in descending order and select all records of model type. product_table[order(product_table$weight, decreasing = TRUE), ][   product_table$type == "model",] ##    id      name  type   class released material size weight ## 6 T02  SupPlane   toy vehicle       no    Metal  350   45.0 ## 5 T01    SupCar   toy vehicle      yes    Metal  120   10.0 ## 2 M02 AircraftX model vehicle      yes Plastics   85    3.0 ## 4 M04    Dancer model  people       no     Wood   16    0.6 Sometimes, the column values are literal but can be converted to standard R data structures to better represent the data. For example, released column in product_info only takes yes and no, which can be better represented with a logical vector. We can use <- to modify the column values, as we learned previously. However, it is usually better to create a new data frame with the existing columns properly adjusted and new columns added without polluting the original data. 
To do this, we can use transform: transform(product_table,   released = ifelse(released == "yes", TRUE, FALSE),   density = weight / size) ##    id      name  type   class released material size weight ## 1 M01     JeepX model vehicle     TRUE Plastics   50     NA ## 2 M02 AircraftX model vehicle     TRUE Plastics   85    3.0 ## 3 M03    Runner model  people     TRUE     Wood   15     NA ## 4 M04    Dancer model  people    FALSE     Wood   16    0.6 ## 5 T01    SupCar   toy vehicle     TRUE    Metal  120   10.0 ## 6 T02  SupPlane   toy vehicle    FALSE    Metal  350   45.0 ##      density ## 1         NA ## 2 0.03529412 ## 3         NA ## 4 0.03750000 ## 5 0.08333333 ## 6 0.12857143 The result is a new data frame with released converted to a logical vector and a new density column added. You can easily verify that product_table is not modified at all. Additionally, note that transform is like subset, as both functions use nonstandard evaluation to allow direct use of data frame columns as symbols in the arguments so that we don't have to type product_table$ all the time. Now, we will load another table into R. It is the test results of the quality, and durability of each product. We store the data in product_tests. product_tests <- read_csv("data/product-tests.csv") product_tests ##    id quality durability waterproof ## 1 T01      NA         10         no ## 2 T02      10          9         no ## 3 M01       6          4        yes ## 4 M02       6          5        yes ## 5 M03       5         NA        yes ## 6 M04       6          6        yes Note that the values in both quality and durability contain missing values (NA). To exclude all rows with missing values, we can use na.omit(): na.omit(product_tests) ##    id quality durability waterproof ## 2 T02      10          9         no ## 3 M01       6          4        yes ## 4 M02       6          5        yes ## 6 M04       6          6        yes Another way is to use complete.cases() to get a logical vector indicating all complete rows, without any missing value,: complete.cases(product_tests) ## [1] FALSE  TRUE  TRUE  TRUE FALSE  TRUE Then, we can use this logical vector to filter the data frame. For example, we can get the id  column of all complete rows as follows: product_tests[complete.cases(product_tests), "id"] ## [1] "T02" "M01" "M02" "M04" Or, we can get the id column of all incomplete rows: product_tests[!complete.cases(product_tests), "id"] ## [1] "T01" "M03" Note that product_info, product_stats and product_tests all share an id column, and we can merge them altogether. Unfortunately, there's no built-in function to merge an arbitrary number of data frames. We can only merge two existing data frames at a time, or we'll have to merge them recursively. 
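As a hedged sketch of that recursive approach (assuming all three data frames share the id column), the base function Reduce can fold merge over a list of data frames:
# merge an arbitrary number of data frames that all share the "id" column
merged_all <- Reduce(function(x, y) merge(x, y, by = "id"),
  list(product_info, product_stats, product_tests))
For just two tables, a single call to merge is enough, as shown next.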
merge(product_table, product_tests, by = "id") ##    id      name  type   class released material size weight ## 1 M01     JeepX model vehicle      yes Plastics   50     NA ## 2 M02 AircraftX model vehicle      yes Plastics   85    3.0 ## 3 M03    Runner model  people      yes     Wood   15     NA ## 4 M04    Dancer model  people       no     Wood   16    0.6 ## 5 T01    SupCar   toy vehicle      yes    Metal  120   10.0 ## 6 T02  SupPlane   toy vehicle       no    Metal  350   45.0 ##   quality durability waterproof ## 1       6          4        yes ## 2       6          5        yes ## 3       5         NA        yes ## 4       6          6        yes ## 5      NA         10         no ## 6      10          9         no Data wrangling with data.table In the previous section, we had an overview on how we can use built-in functions to work with data frames. Built-in functions work, but are usually verbose. In this section, let's use data.table, an enhanced version of data.frame, and see how it makes data manipulation much easier. Run install.packages("data.table") to install the package. As long as the package is ready, we can load the package and use fread() to read the data files as data.table objects. library(data.table) product_info <- fread("data/product-info.csv") product_stats <- fread("data/product-stats.csv") product_tests <- fread("data/product-tests.csv") toy_tests <- fread("data/product-toy-tests.csv") It is extremely easy to filter data in data.table. To select the first two rows, just use [1:2], which instead selects the first two columns for data.frame. product_info[1:2] ##     id     name type   class released ## 1: T01   SupCar  toy vehicle      yes ## 2: T02 SupPlane  toy vehicle       no To filter by logical conditions, just directly type columns names as variables without quotation as the expression is evaluated within the context of product_info: product_info[type == "model" & class == "people"] ##     id   name  type  class released ## 1: M03 Runner model people      yes ## 2: M04 Dancer model people       no It is easy to select or transform columns. product_stats[, .(id, material, density = size / weight)] ##     id material   density ## 1: T01    Metal 12.000000 ## 2: T02    Metal  7.777778 ## 3: M01 Plastics        NA ## 4: M02 Plastics 28.333333 ## 5: M03     Wood        NA ## 6: M04     Wood 26.666667 The data.table object also supports using key for subsetting, which can be much faster than using ==. We can set a column as key for each data.table: setkey(product_info, id) setkey(product_stats, id) setkey(product_tests, id) Then, we can use a value to directly select rows. product_info["M02"] ##     id      name  type   class released ## 1: M02 AircraftX model vehicle      yes We can also set multiple columns as key so as to use multiple values to subset it. 
setkey(toy_tests, id, date) toy_tests[.("T02", 20160303)] ##     id     date sample quality durability ## 1: T02 20160303     75       8          8 If two data.table objects share the same key, we can join them easily: product_info[product_tests] ##     id      name  type   class released quality durability ## 1: M01     JeepX model vehicle      yes       6          4 ## 2: M02 AircraftX model vehicle      yes       6          5 ## 3: M03    Runner model  people      yes       5         NA ## 4: M04    Dancer model  people       no       6          6 ## 5: T01    SupCar   toy vehicle      yes      NA         10 ## 6: T02  SupPlane   toy vehicle       no      10          9 ##    waterproof ## 1:        yes ## 2:        yes ## 3:        yes ## 4:        yes ## 5:         no ## 6:         no Instead of creating new data.table, in-place modification is also supported. The := sets the values of a column in place without the overhead of making copies and, thus, is much faster than using <-. product_info[, released := (released == "yes")] ##     id      name  type   class released ## 1: M01     JeepX model vehicle     TRUE ## 2: M02 AircraftX model vehicle     TRUE ## 3: M03    Runner model  people     TRUE ## 4: M04    Dancer model  people    FALSE ## 5: T01    SupCar   toy vehicle     TRUE ## 6: T02  SupPlane   toy vehicle    FALSE product_info ##     id      name  type   class released ## 1: M01     JeepX model vehicle     TRUE ## 2: M02 AircraftX model vehicle     TRUE ## 3: M03    Runner model  people     TRUE ## 4: M04    Dancer model  people    FALSE ## 5: T01    SupCar   toy vehicle     TRUE ## 6: T02  SupPlane   toy vehicle    FALSE Another important argument of subsetting a data.table is by, which is used to split the data into multiple parts and for each part the second argument (j) is evaluated. For example, the simplest usage of by is counting the records in each group. In the following code, we can count the number of both released and unreleased products: product_info[, .N, by = released] ##    released N ## 1:     TRUE 4 ## 2:    FALSE 2 The group can be defined by more than one variable. For example, a tuple of type and class can be a group, and for each group, we can count the number of records, as follows: product_info[, .N, by = .(type, class)] ##     type   class N ## 1: model vehicle 2 ## 2: model  people 2 ## 3:   toy vehicle 2 We can also perform the following statistical calculations for each group: product_tests[, .(mean_quality = mean(quality, na.rm = TRUE)),   by = .(waterproof)] ##    waterproof mean_quality ## 1:        yes         5.75 ## 2:         no        10.00 We can chain multiple [] in turn. In the following example, we will first join product_info and product_tests by a shared key id and then calculate the mean value of quality and durability for each group of type and class of released products. product_info[product_tests][released == TRUE,   .(mean_quality = mean(quality, na.rm = TRUE),     mean_durability = mean(durability, na.rm = TRUE)),   by = .(type, class)] ##     type   class mean_quality mean_durability ## 1: model vehicle            6             4.5 ## 2: model  people            5             NaN ## 3:   toy vehicle          NaN            10.0 Note that the values of the by columns will be unique in the resulted data.table; we can use keyby instead of by to ensure that it is automatically used as key by the resulted data.table. 
product_info[product_tests][released == TRUE,   .(mean_quality = mean(quality, na.rm = TRUE),     mean_durability = mean(durability, na.rm = TRUE)),   keyby = .(type, class)] ##     type   class mean_quality mean_durability ## 1: model  people            5             NaN ## 2: model vehicle            6             4.5 ## 3:   toy vehicle          NaN            10.0 The data.table package also provides functions to perform superfast reshaping of data. For example, we can use dcast() to spread id values along the x-axis as columns and align quality values to all possible date values along the y-axis. toy_quality <- dcast(toy_tests, date ~ id, value.var = "quality") toy_quality ##        date T01 T02 ## 1: 20160201   9   7 ## 2: 20160302  10  NA ## 3: 20160303  NA   8 ## 4: 20160403  NA   9 ## 5: 20160405   9  NA ## 6: 20160502   9  10 Although each month a test is conducted for each product, the dates may not exactly match with each other. This results in missing values if one product has a value on a day but the other has no corresponding value on exactly the same day. One way to fix this is to use year-month data instead of exact date. In the following code, we will create a new ym column that is the first 6 characters of toy_tests. For example, substr(20160101, 1, 6) will result in 201601. toy_tests[, ym := substr(toy_tests$date, 1, 6)] ##     id     date sample quality durability     ym ## 1: T01 20160201    100       9          9 201602 ## 2: T01 20160302    150      10          9 201603 ## 3: T01 20160405    180       9         10 201604 ## 4: T01 20160502    140       9          9 201605 ## 5: T02 20160201     70       7          9 201602 ## 6: T02 20160303     75       8          8 201603 ## 7: T02 20160403     90       9          8 201604 ## 8: T02 20160502     85      10          9 201605 toy_tests$ym ## [1] "201602" "201603" "201604" "201605" "201602" "201603" ## [7] "201604" "201605" This time, we will use ym for alignment instead of date: toy_quality <- dcast(toy_tests, ym ~ id, value.var = "quality") toy_quality ##        ym T01 T02 ## 1: 201602   9   7 ## 2: 201603  10   8 ## 3: 201604   9   9 ## 4: 201605   9  10 Now the missing values are gone, the quality scores of both products in each month are naturally presented. Sometimes, we will need to combine a number of columns into one that indicates the measure and another that stores the value. For example, the following code uses melt() to combine the two measures (quality and durability) of the original data into a column named measure and a column of the measured value. toy_tests2 <- melt(toy_tests, id.vars = c("id", "ym"),   measure.vars = c("quality", "durability"),   variable.name = "measure") toy_tests2 ##      id     ym    measure value ##  1: T01 201602    quality     9 ##  2: T01 201603    quality    10 ##  3: T01 201604    quality     9 ##  4: T01 201605    quality     9 ##  5: T02 201602    quality     7 ##  6: T02 201603    quality     8 ##  7: T02 201604    quality     9 ##  8: T02 201605    quality    10 ##  9: T01 201602 durability     9 ## 10: T01 201603 durability     9 ## 11: T01 201604 durability    10 ## 12: T01 201605 durability     9 ## 13: T02 201602 durability     9 ## 14: T02 201603 durability     8 ## 15: T02 201604 durability     8 ## 16: T02 201605 durability     9 The variable names are now contained in the data, which can be directly used by some packages. For example, we can use ggplot2 to plot data in such format. 
The following code is an example of a scatter plot with a facet grid of different combination of factors. library(ggplot2) ggplot(toy_tests2, aes(x = ym, y = value)) +   geom_point() +   facet_grid(id ~ measure) The graph generated is shown as follows: The plot can be easily manipulated because the grouping factor (measure) is contained as data rather than columns, which is easier to represent from the perspective of the ggplot2 package. ggplot(toy_tests2, aes(x = ym, y = value, color = id)) +   geom_point() +   facet_grid(. ~ measure) The graph generated is shown as follows: Summary In this article, we used both built-in functions and the data.table package to perform simple data manipulation tasks. Using built-in functions can be verbose while using data.table can be much easier and faster. However, the tasks in real-world data analysis can be much more complex than the examples we demonstrated, which also requires better R programming skills. It is helpful to have a good understanding on how nonstandard evaluation makes data.table so easy to work with, how environment works and scoping rules apply to make your code predictable, and so on. A universal and consistent understanding of how R basically works will certainly give you great confidence to write R code to work with data and enable you to learn packages very quickly. Resources for Article: Further resources on this subject: Supervised Machine Learning [article] Getting Started with Bootstrap [article] Basics of Classes and Objects [article]

article-image-applying-themes-sails-applications-part-2
Luis Lobo
14 Oct 2016
4 min read

Applying Themes to Sails Applications, Part 2

In Part 1 of this series covering themes in the Sails Framework, we bootstrapped our sample Sails app (step 1). Here in Part 2, we will complete steps 2 and 3, compiling our theme's CSS and the necessary Less files and setting up the theme Sails hook to complete our application.
Step 2 – Adding a task for compiling our theme's CSS and the necessary Less files
Let's pick things back up where we left off in Part 1. We now want to customize our page to have our burrito style, so we need to add a task that compiles our themes. Edit your /tasks/config/less.js so that it looks like this one:
module.exports = function (grunt) {
  grunt.config.set('less', {
    dev: {
      files: [{
        expand: true,
        cwd: 'assets/styles/',
        src: ['importer.less'],
        dest: '.tmp/public/styles/',
        ext: '.css'
      }, {
        expand: true,
        cwd: 'assets/themes/export',
        src: ['*.less'],
        dest: '.tmp/public/themes/',
        ext: '.css'
      }]
    }
  });
  grunt.loadNpmTasks('grunt-contrib-less');
};
Basically, we added a second object to the files section, which tells the Less compiler task to look for any Less file in assets/themes/export, compile it, and put the resulting CSS in the .tmp/public/themes folder. In case you were not aware of it, the .tmp/public folder is the one Sails uses to publish its assets.
We now create two themes: one is default.less and the other is burrito.less, which is based on default.less. We also have two other Less files, each one holding the variables for each theme. This technique allows you to have one base theme and many other themes based on the default.
/assets/themes/variables.less
@app-navbar-background-color: red;
@app-navbar-brand-color: white;
/assets/themes/variablesBurrito.less
@app-navbar-background-color: green;
@app-navbar-brand-color: yellow;
/assets/themes/export/default.less
@import "../variables.less";
.navbar-inverse {
  background-color: @app-navbar-background-color;
  .navbar-brand {
    color: @app-navbar-brand-color;
  }
}
/assets/themes/export/burrito.less
@import "default.less";
@import "../variablesBurrito.less";
So, burrito.less just inherits from default.less but overrides the variables with its own, creating a new theme based on the default. If you lift Sails now, you will notice that the navigation bar has a red background with white branding.
Step 3 – Setting up the theme Sails hook
The last step involves creating a hook, a Node module that adds functionality to the Sails core, which catches the hostname and, if it has burrito in it, sets the new theme. First, let's create the folder for the hook:
mkdir -p ./api/hooks/theme
Now create a file named index.js in that folder with this content:
/**
 * theme hook - Sets the correct CSS to be displayed
 */
module.exports = function (sails) {
  return {
    routes: {
      before: {
        'all /*': function (req, res, next) {
          if (!req.isSocket) {
            // makes theme variable available in views
            res.locals.theme = sails.hooks.theme.getTheme(req);
          }
          return next();
        }
      }
    },
    /**
     * getTheme defines which css needs to be used for this request
     * In this case, we select the theme by pattern matching certain words from the hostname
     */
    getTheme: function (req) {
      var hostname = 'default';
      var theme = 'default';
      try {
        hostname = req.get('host').toLowerCase();
      } catch (e) {
        // host may not be available always (ie, socket calls. If you need that, add a Host header
        // in your sails socket configuration)
      }
      // if burrito is found on the hostname, change the theme
      if (hostname.indexOf('burrito') > -1) {
        theme = 'burrito';
      }
      return theme;
    }
  };
};
Finally, to test our configuration, we need to add a host entry in our OS hosts file. In Linux/Unix-based operating systems, you have to edit /etc/hosts (with sudo or root). Add the following line:
127.0.0.1 burrito.smartdelivery.local www.smartdelivery.local
Now navigate using those host names, first to www.smartdelivery.local:
And lastly, navigate to burrito.smartdelivery.local:
You now have your Burrito Smart Delivery! And you have a Themed Sails Application! I hope you have enjoyed this series. You can get the source code from here. Enjoy!
About the author
Luis Lobo Borobia is the CTO at FictionCity.NET, a mentor and advisor, an independent software engineering consultant, and a conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products, solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

article-image-bringing-devops-network-operations
Packt
14 Oct 2016
37 min read

Bringing DevOps to Network Operations

In this article by Steven Armstrong, author of the book DevOps for Networking, we will focus on people and process with regards to DevOps. The DevOps initiative was initially about breaking down silos between development and operations teams and changing company's operational models. It will highlight methods to unblock IT staff and allow them to work in a more productive fashion, but these mindsets have since been extended to quality assurance testing, security, and now network operations. This article will primarily focus on the evolving role of the network engineer, which is changing like the operations engineer before them, and the need for network engineers to learn new skills that will allow network engineers to remain as valuable as they are today as the industry moves towards a completely programmatically controlled operational model. (For more resources related to this topic, see here.) This article will look at two differing roles, that of the CTO / senior manager and engineer, discussing at length some of the initiatives that can be utilized to facilitate the desired cultural changes that are required to create a successful DevOps transformation for a whole organization or even just allow a single department improve their internal processes by automating everything they do. In this article, the following topics will be covered: Initiating a change in behavior Top-down DevOps initiatives for networking teams Bottom-up DevOps initiatives for networking teams Initiating a change in behavior The networking OSI model contains seven layers, but it is widely suggested that the OSI model has an additional 8th layer named the user layer, which governs how end users integrate and interact with the network. People are undoubtedly a harder beast to master and manage than technology, so there is no one size fits all solution to the vast amount of people issues that exist. The seven layers of OS are shown in the following image: Initiating cultural change and changes in behavior is the most difficult task an organization will face, and it won't occur overnight. To change behavior there must first be obvious business benefits. It is important to first outline the benefits that these cultural changes will bring to an organization, which will enable managers or change agents to make business justifications to implement the required changes. Cultural change and dealing with people and processes is notoriously hard, so divorcing the tools and dealing with people and processes is paramount to the success of any DevOps initiative or project. Cultural change is not something that needs to be planned and a company initiative. In a recent study by Gartner, it was shown that selecting the wrong tooling was not the main reason that cloud projects were a failure, instead the top reason was failure to change the operational model. Reasons to implement DevOps When implementing DevOps, some myths are often perpetuated, such as DevOps only works for start-ups, it won't bring any value to a particular team, or that it is simply a buzz word and a fad. The quantifiable benefits of DevOps initiatives are undeniable when done correctly. 
Some of these benefits include improvements to the following: The velocity of change Mean time to resolve Improved uptime Increased number of deployments Cross-skilling between teams The removal of the bus factor of one Any team in the IT industry would benefit from these improvements, so really teams can't afford to not adopt DevOps, as it will undoubtedly improve their business functions. By implementing a DevOps initiative, it promotes repeatability, measurement, and automation. Implementing automation naturally improves the velocity of change and increased number of deployments a team can do in any given day and time to market. Automation of the deployment process allows teams to push fixes through to production quickly as well as allowing an organization to push new products and features to market. A byproduct of automation is that the mean time to resolve will also become quicker for infrastructure issues. If infrastructure or network changes are automated, they can be applied much more efficiently than if they were carried out manually. Manual changes depend on the velocity of the engineer implementing the change rather than an automated script that can be measured more accurately. Implementing DevOps also means measuring and monitoring efficiently too, so having effective monitoring is crucial on all parts of infrastructure and networking, as it means the pace in which root cause analysis can carried out improves. Having effective monitoring helps to facilitate the process of mean time to resolve, so when a production issue occurs, the source of the issue can be found quicker than numerous engineers logging onto consoles and servers trying to debug issues. Instead a well-implemented monitoring system can provide a quick notification to localize the source of the issue, silencing any resultant alarms that result from the initial root cause, allowing the issue to be highlighted and fixed efficiently. The monitoring then hands over to the repeatable automation, which can then push out the localized fix to production. This process provides a highly accurate feedback loop, where processes will improve daily. If alerts are missed, they will ideally be built into the monitoring system over time as part of the incident post-mortem. Effective monitoring and automation results in quicker mean time to resolve, which leads to happier customers, and results in improved uptime of products. Utilizing automation and effective monitoring also means that all members of a team have access to see how processes work and how fixes and new features are pushed out. This will mean less of a reliance on key individuals removing the bus factor of one where a key engineer needs to do the majority of tasks in the team as he is the most highly skilled individual and has all of the system knowledge stored in his head. Using a DevOps model means that the very highly skilled engineer can instead use their talents to help cross skill other team members and create effective monitoring that can help any team member carry out the root cause analysis they normally do manually. This builds the talented engineers deep knowledge into the monitoring system, so the monitoring system as opposed to the talented engineer becomes the go to point of reference when an issue first occurs, or ideally the monitoring system becomes the source of truth that alerts on events to prevent customer facing issues. 
To improve cross-skilling, the talented engineer should ideally help write the automation too, so they are not the only member of the team who can carry out specific tasks.

Reasons to implement DevOps for networking

So how do some of these DevOps benefits apply to traditional networking teams? Some of the common complaints about siloed networking teams today are the following:

- Reactive ways of working
- Slow turnaround, often using ticketing systems to collaborate
- Manual processes carried out using admin terminals
- Lack of preproduction testing
- Manual mistakes leading to network outages
- Constantly being in firefighting mode
- Lack of automation in daily processes

Network teams, like infrastructure teams before them, are used to working in silos, interacting with other teams in large organizations via ticketing systems or other suboptimal processes. This is not a streamlined or optimized way of working, which is what led to the DevOps initiative that sought to break down barriers between development and operations staff; its remit has since widened. Networking does not seem to have been included in the DevOps movement initially, but software delivery can only operate as fast as its slowest component, and the slowest component eventually becomes the bottleneck or blocker of the entire delivery process. That slowest component is often the star engineer in a siloed team who can't manually process enough tickets in a day to keep up with demand, thus becoming the bus factor of one. If that engineer goes off sick, work is blocked; the company has become too reliant on them and cannot function efficiently without them. If a team is not operating in the same way as the rest of the business, then all other departments will be slowed down because the siloed department is not agile enough.

Put simply, the reason networking teams exist in most companies is to provide a service to development teams. Development teams require networking to be deployed so they can deliver applications to production and the business can make money from those products. So networking changes to ACL policies, load-balancing rules, and the provisioning of new subnets for new applications can no longer be deemed acceptable if they take days, weeks, or even months. Networking has a direct impact on the velocity of change, mean time to resolve, uptime, and the number of deployments, which are four of the key performance indicators of a successful DevOps initiative. Networking therefore needs to be included in a company's DevOps model, otherwise all of these quantifiable benefits will be constrained.

Given how rapidly AWS, Microsoft Azure, OpenStack, and Software-Defined Networking (SDN) can be used to provision network functions in the private and public cloud, it is no longer acceptable for network teams not to adapt their operational processes and learn new skills. The caveat is that the evolution of networking has been quick, and network teams need support and time to make this change. If a cloud solution is implemented and the operational model does not change, then no real quantifiable benefits will be felt by the organization. Cloud projects traditionally do not fail because of technology; they fail because of incumbent operational models that hinder them from being a success.
There is zero value in building a brand new OpenStack private cloud, with its open set of extensible APIs to manage compute, networking, and storage, if the company doesn't change its operational model and allow end users to use those APIs to self-service their requests. If network engineers are still using the GUI to point and click and cut and paste, then the cloud doesn't bring any real business value, as the network engineer who cuts and pastes the slowest becomes the bottleneck. The company may as well stick with its current processes, as implementing a private cloud solution with manual processes will not speed up time to market or mean time to recover from failure. However, cloud should not be used as an excuse to deride internal network staff; incumbent operational models are typically not designed or set up by current staff, they are normally inherited. Moving to public cloud doesn't solve the problem of the operational agility of a company's network team; it is a quick fix and a bandage that disguises the deeper-rooted cultural challenges that exist. Smarter ways of working, allied with the use of automation, measurement, and monitoring, can help network teams refine their internal processes and better serve the developers and operations staff they work with daily.

Cultural change can be initiated in two different ways: grass-roots bottom-up initiatives coming from engineers, or top-down management initiatives.

Top-down DevOps initiatives for networking teams

Top-down DevOps initiatives are when a CTO, director, or senior manager has buy-in from the company to make changes to the operational model. These changes are required because the incumbent operational model is deemed suboptimal and not set up to deliver software at the speed of competitors, which inherently delays new products or crucial fixes from reaching the market. When driving a DevOps transformation from a top-down management level, it is imperative that some groundwork is done with the teams involved; if large changes are going to be made to the operational model, this can often cause unrest or stress for staff on the ground. When implementing operational changes, upper management needs the buy-in of the people on the ground, as they will operate within that model daily. Having teams buy in is very important; otherwise, the company will end up with an unhappy workforce, and the best staff will ultimately leave. It is very important that upper management engage staff when implementing new operational processes and deal with any concerns transparently from the outset, as opposed to going to an offsite management meeting and coming back with an enforced plan, which is all too common a theme. Management should survey the teams to understand how they operate on a daily basis, what they like about the current processes, and where their frustrations lie. The biggest impediment to changing an operational model is misunderstanding the current operational model. All initiatives should ideally be led, not enforced. So let's focus on some specific top-down initiatives that can help.

Analyzing successful teams

One approach is for management to look at other teams within the organization whose processes are working well and who are delivering in an incremental, agile fashion; if no other team in the organization is working in this way, then reach out to other companies.
Ask if it would be possible to go and observe the way another company operates for a day. Most companies happily use successful projects as reference cases at conferences or meet-ups, as they enjoy showing off their achievements, so it shouldn't be difficult to find companies that have overcome similar cultural challenges. It is worth attending some DevOps conferences and looking at who is speaking; approach the speakers and they will undoubtedly be happy to help.

Management should initially book a meeting with the high-performing team and hold a question and answer session focusing on the following points; if it is an external company, an introductory phone call can suffice. Some important questions to ask in the initial meeting are:

- Which processes normally work well?
- What tools do they actually use on a daily basis?
- How is work assigned?
- How do they track work?
- What is the team structure?
- How do other teams make requests to the team?
- How is work prioritized?
- How do they deal with interruptions?
- How are meetings structured?

It is important not to reinvent the wheel: if a team in the organization already has a proven template that works well, then that team could also be invaluable in helping facilitate cultural change within the network team. It will be slightly more challenging if the focus is put on an external team as the evangelist, as it opens up excuses such as it being easier for them because of x, y, and z in their company. A good strategy, when utilizing a local team in the organization as the evangelist, is to embed a network engineer in that team for a few weeks and have them observe how the other team operates, give feedback, and document their findings. This is imperative so that the network engineers on the ground understand the processes. Flexibility is also important, as only some of the successful team's processes may be applicable to a network team, so don't expect two teams to work identically. The sum of the parts and the individuals in a team really do mean that every team is different, so focus on goals rather than the implementation of a strict process. If teams achieve the same outcomes in slightly different ways, then as long as work can be tracked, is visible to management, and can be easily reported on, it shouldn't be an issue.

Make sure pace is prioritized and select specific change agents to make sure teams are comfortable with new processes. Empower change agents in the network team to choose how they want to work by engaging with the team to create the new processes, and put them in charge of eventual tool selection. However, before selecting any tooling, it is important to start with process and agree on the new operational model, to prevent tooling driving processes; this is a common mistake in IT.

Mapping out activity diagrams

A good piece of advice is to use an activity diagram as a visual aid to understand how a team's interactions work and where they can be improved. A typical development activity diagram, with a manual hand-off to a quality assurance team, is shown here:

Utilizing activity diagrams as a visual aid is important as it highlights suboptimal business process flows. In the example, we see a development team's activity diagram. This process is suboptimal as it doesn't include the quality assurance team in the Test locally and Peer review phases.
Instead, it has a formalized QA hand-off phase very late in the development cycle, which is a suboptimal way of working as it promotes a development and QA silo, a DevOps anti-pattern. A better approach would be to have QA engineers work on creating test tasks and automated tests while the development team works on coding tasks. This would allow the development Peer review process to have QA engineers review and test developer code earlier in the development lifecycle and make sure that every piece of code written has appropriate test coverage before the code is checked in. Another shortcoming in the process is that it does not cater for software bugs found by the quality assurance team or in production by customers, so mapping these streams of work into the activity diagram would also be useful to show all potential feedback loops. If a feedback loop is missed in the overall activity diagram, it can cause a breakdown in the process flow, so it is important to capture all permutations that could occur in the overarching flow before mapping tooling to facilitate the process. Each team should look at ways of shortening interactions to improve mean time to resolve and the velocity at which work can flow through the overall process.

Management should dedicate some time in their schedule to sit with the development, infrastructure, networking, and test teams and map out what they believe their team processes to be. Keep it high level; this should be a simple activity swim-lane starting at the point where the team accepts work and covering the process the team goes through to deliver that work. Once each team has mapped out the initial approach, they should focus on optimizing it, removing the parts of the process they dislike, and discussing, as a team, ways the process could be improved. It may take many iterations before this is mapped out effectively, so don't rush this step; it should be used as a learning experience for each team. The finalized activity diagram will normally combine management and technical functions in an optimized way to show the overall process flow. Don't bother using Business Process Management (BPM) software at this stage; a simple whiteboard will suffice to keep it simple and informal.

It is good practice to utilize two layers of activity diagram: the first layer can be a box that simply says Peer review, which then references a nested activity diagram outlining what the team's peer review process is. Both need to be refined, but the nested tier of business processes should be dictated by the individual teams, as these are specific to their needs, so it's important to leave teams the flexibility they need at this level. It is important to split the two tiers out; otherwise, the overall top layer of the activity diagram will be too complex to extract any real value from, so try to minimize complexity at the top layer, as this will need to be integrated with other teams' processes. The activity diagram doesn't need to contain team-specific details, such as how an internal team's Peer review process operates, as this will always be subjective to that team; this should be included, but as a nested layer activity that won't be shared. Another team should be able to look at a team's top layer activity diagram and understand the process without explanation.
It can sometimes be useful to first map out a high-performing team's top layer activity diagram to show how an integrated, joined-up business process should look. This will help teams that struggle a bit more with these concepts and allow them to use that team's activity diagram as a guide. It can be used as a point of reference to show how these teams have solved their cross-team interaction issues and enabled one or more teams to interact without friction. The main aim of this exercise is to join up business processes so they are not siloed between teams, and so the planning and execution of work is as integrated as possible for joined-up initiatives.

Once each team has completed their individual activity diagram and optimized it to the way the team wants, the second phase of the process can begin. This involves layering each team's top layer activity diagram together to create a joined-up process. Teams should use this layering exercise as an excuse to talk about suboptimal processes and how the overall business process should look end to end. Utilize this session to remove perceived bottlenecks between teams, completely ignoring existing tools and their constraints; this whole exercise should focus on process, not tooling. A good example of a suboptimal process flow that is constrained by tooling is a stage on a top layer activity diagram that says "raise ticket with ticketing system". This should be broken down so work is people focused: what does the person requesting the change actually require? Developers' day job involves writing code and building great features and products, so if a new feature needs a network change, then networking should be treated as part of that feature change. The time taken for the network changes needs to be catered for as part of the planning and estimation for that feature, rather than as a ticketed request that hinders the velocity of change because it is done reactively as an afterthought. This is normally a very successful exercise when engagement is good; it helps to utilize a senior engineer and a manager from each team in the combined activity diagram layering exercise, with more junior engineers from each team included in the team-specific activity diagram exercise.

Changing the network team's operational model

At the end of the activity diagram exercise, the network team's operational model should ideally be fully integrated with the rest of the business. Once the new operational model has been agreed with all teams, it is time to implement it. Because the teams on the ground created the operational model and the joined-up activity diagram, it should be signed off by all parties as the new business process. This removes the issue of a model enforced by management, as those using it have been involved in creating it. The operational model can be iterated on and improved over time, but interactions shouldn't change greatly, although new interaction points may be added that were initially missed. A master copy of the business process can then be stored and updated, so anyone new joining the company knows exactly how to interact with other teams. In the short term it may seem that the new approach is slowing down development estimates, as automation is not yet in place for network functions, so estimates for developer features become higher when they require network changes.
This is often just a truer reflection of reality, as estimates previously didn't take network changes into account, and those changes then became blockers because they were tickets; once this is reported on, it can be optimized and improved over time. Once the overall activity diagram has been merged together and agreed with all the teams, it is important to remember that if the processes are properly optimized, there should not be pages and pages of high-level operations on the diagram. If the interactions are too verbose, it will take any change hours and hours to traverse each of the steps on the activity diagram.

The activity diagram below shows a joined-up business process, where work is defined from a single roadmap producing user stories for all teams. New user stories, which are units of work, are then estimated by cross-functional teams, including developers, infrastructure, quality assurance, and network engineers. Each team reviews the user story and works out which cross-functional tasks are involved in delivering the feature. The user story then becomes part of the sprint, with the cross-functional teams working on it together, making sure that it has everything it needs to work prior to check-in. After Peer review, the feature or change is handed off to the automated processes that deliver the code, infrastructure, and network changes to production. The checked-in feature then flows through unit testing, quality assurance, integration, and performance testing quality gates, which include any new tests that were written by the quality assurance team before check-in. Once every stage is passed, the automation is invoked by a button press to push the changes to production. Each environment has the same network changes applied, so network changes are made first on test environments before production. This relies on treating networking as code, meaning automated network processes need to be created so the network team can be as agile as the developers.

Only once the agreed operational model is mapped out should the DevOps transformation begin. This will involve selecting best of breed tools at every stage to deliver the desired outcome, with the focus on the following benefits:

- The velocity of change
- Mean time to resolve
- Improved uptime
- Increased number of deployments
- Cross-skilling between teams
- The removal of the bus factor of one

All business processes will be different for each company, so it is important to engage each department and have buy-in from all managers to make this activity a success.

Changing the network team's behavior

Once a new operational model has been established in the business, it is important to prevent the network team from becoming the bottleneck in a DevOps-focused continuous delivery model. Traditionally, network engineers are used to operating command lines and logging in to admin consoles on network devices to make changes. Infrastructure engineers adjusted to automation because they already had scripting experience in bash and PowerShell, coupled with a firm grounding in Linux or Windows operating systems, so transitioning to configuration management tooling was not a huge step. However, it may be more difficult to persuade network engineers to make that same transition initially.
Moving network engineers towards coding against APIs and adopting configuration management tools may initially appear daunting, as it is a higher barrier to entry, but having an experienced automation engineer on hand can help network engineers make this transition. It is important to be patient, so try to change this behavior gradually by setting some automation initiatives for the network team in their objectives. This will encourage the correct behavior, and try to incentivize it too. It may be useful to start automation initiatives by offering training or purchasing particular coding books for the team. It may also be useful to hold an initial automation hack day; this gives network engineers a day away from their day jobs and time to attempt to automate a small process that they repeat every day. If possible, make this a mandatory exercise so that it is adopted, and make other teams available to cover for the network team so they aren't distracted. This is a good way of seeing which members of the network team may be open to evangelizing DevOps and automation. If any particular individual stands out, then work with them to help push automation initiatives forward to the rest of the team by making them the champion for automation.

Establishing an internal DevOps meet-up where teams present back their automation achievements is also a good way of promoting automation in network teams and keeping the momentum going. Encourage each team across the business to present back interesting things they have achieved each quarter, and incentivize this too by allowing each team time off from their day job to attend if they participate. This leads to a sense of community and illustrates to teams that they are part of a bigger movement that is bringing real cost benefits to the business. It also helps focus teams on the common goal of making the company better and breaks down barriers between teams in the process.

One approach that should be avoided at all costs is having other teams write all the network automation for the networking team. Ideally, it should be the networking team that evolves and adopts automation, so giving the network team a sense of ownership over the network automation is very important. This requires full buy-in from the networking team and the discipline not to revert to manual tasks at any point, even if issues occur. To ease the transition, offer to put an automation engineer from infrastructure or development into the network team, but this should only be a temporary measure. It is important to select an automation engineer who is respected by the network team and knowledgeable in networking; no one should ever attempt to automate something that they cannot operate by hand, so having someone well versed in networking to help with network automation is crucial, as they will be training the network team and so have to be respected. If an automation engineer is assigned to the network team and isn't knowledgeable or respected, then the initiative will likely fail, so choose wisely.

It is important to accept at an early stage that this transition towards DevOps and automation may not be for everyone, so not every network engineer will be able to make the journey. It is all about the network team seizing the opportunity and showing initiative and a willingness to pick up and learn new skills. It is important to stamp out disruptive behavior early on, as it may be a bad influence on the team.
It is fine for people to have a cynical skepticism at first, but not attempting to change or build new skills shouldn't be tolerated, as it will disrupt the team dynamic. This should be monitored so it doesn't cause automation initiatives to fail or stall just because individuals are proving to be blockers or being disruptive. It is important to note that every organization has its own unique culture, and a company's rate of change will be subject to the cultural uptake of the new processes and ways of working. When initiating cultural change, change agents are necessary and can come from internal IT staff or external sources, depending on the aptitude and appetite of the staff for change. Every change project is different, but it is important that it has the correct individuals involved to make it a success, along with the correct management sponsorship and backing.

Bottom-up DevOps initiatives for networking teams

Bottom-up DevOps initiatives are when an engineer, team lead, or lower management doesn't necessarily have buy-in from the company to make changes to the operational model. However, they realize that although changes can't be made to the overall incumbent operational model, they can try to facilitate positive changes using DevOps philosophies within their own team that can help the team perform better and make its productivity more efficient. Implementing DevOps from a bottom-up initiative is much more difficult and challenging at times, as some individuals or teams may not be willing to change the way they work and operate because they don't have to. But it is important not to become disheartened and to do the best possible job for the business. It is still possible to eventually convince upper management to implement a DevOps initiative by using grass-roots initiatives to prove that the process brings real business benefits.

Evangelizing DevOps in the networking team

It is important to try to stay positive at all times; working on a bottom-up initiative can be tiring, but it is important to roll with the punches and not take things too personally. Always remain positive and focus on evangelizing the benefits associated with DevOps processes and positive behavior within your own team first. The first challenge is to convince your own team of the merits of adopting a DevOps approach before even attempting to convince other teams in the business. A good way of doing this is by showing the benefits that a DevOps approach has brought to other companies, such as Google, Facebook, and Etsy, focusing on what they have done in the networking space. A pushback from individuals may be that these companies are unicorns and DevOps has only worked for them for this reason, so be prepared to be challenged. Seek out initiatives that have been implemented by these companies that the networking team could adopt and that are actually applicable to your company. In order to facilitate an environment of change, work out what your colleagues' drivers are: what motivates them?
Try to tailor the sell to individuals' motivations; the sell to an engineer or a manager may be completely different. An engineer on the ground may be motivated by the following:

- Doing more interesting work
- Developing skills and experience
- Helping automate menial daily tasks
- Learning sought-after configuration management skills
- Understanding the development lifecycle
- Learning to code

A manager, on the other hand, will probably be more motivated by offering to measure KPIs that make their team look better, such as:

- Time taken to implement changes
- Mean time to resolve failures
- Improved uptime of the network

Another way to promote engagement is to invite your networking team to DevOps meet-ups arranged by forward-thinking networking vendors. They may be amazed that most networking and load-balancing vendors are now actively promoting automation and DevOps, and may not yet be aware of this. Some of the new innovations in this space may be enough to change their opinions and make them interested in picking up some of the new approaches, so they can keep pace with the industry.

Seeking sponsorship from a respected manager or engineer

After making the network team aware of the DevOps initiatives, it is important to take this to the next stage: seek out a respected manager or senior engineer in the networking team who may be open to trying out DevOps and automation. It is important to sell this person the dream; state how you are passionate about implementing some changes to help the team, and that you are keen to utilize some proven best practices that have worked well for other successful companies. It is important to be humble, and try not to rant or spew generalized DevOps jargon at your peers, which can be very off-putting. Always make reasonable arguments and justify them, while avoiding sweeping statements or generalizations. Try not to appear to be undermining the manager or senior engineer; instead, ask for their help to achieve the goal by seeking their approval to back the initiative or idea. A charm offensive may be necessary at this stage to convince the manager or engineer that it's a good idea, but gradually building up to the request can help; otherwise, it may appear insincere if the request comes out of the blue. Potentially analyze the situation over lunch or drinks and gauge whether it is something they would be interested in; there is little point trying to convince people who are stubborn, as they probably will not budge unless the initiative comes from above.

Once you have found the courage to broach the subject, it is time to put forward suggestions on how the team could work differently, with the help of a mediator, which could be a project manager. Ask for the opportunity to try this out on a small scale, offer to lead the initiative, and ask for their support and backing. It is likely that the manager or senior engineer will be impressed by your initiative and allow you to run with the idea, but they may choose which initiative you implement. So never suggest anything you can't achieve; you may only get one opportunity at this, so it is important to make a good impression. Try to focus on a small task to start with, typically a pain point, and attempt to automate it. Anyone can write an automation script, but try to make the automation process easy to use; find what the team likes in the current process, and try to incorporate aspects of it.
For example, if they often see the output from a command line displayed in a particular way, write the automation script so that it still displays the same output, so the process is not completely alien to them. Try not to hardcode values into scripts; extract them into configuration files to make the automation more flexible, so it can potentially be reused in different ways. Showing engineers the flexibility of automation will encourage them to use it more, so show others in the team how you wrote the automation and ways they could adapt it and apply it to other activities. If this is done wisely, automation will be adopted by enthusiastic members of the team, and you will gain enough momentum to impress the sponsor enough to take it forward onto more complex tasks.

Automate a complex problem with the networking team

The next stage of the process, after building confidence by automating small repeatable tasks, is to take on a more complex problem; this can be used to cement the use of automation within the networking team going forward. This part of the process is about empowering others to take charge and lead automation initiatives themselves in the future, so it will be more time-consuming. It is imperative that the more difficult-to-work-with engineers, who may have been deliberately avoided while building out the initial automation, are involved this time. These engineers more than likely have not been involved in automation at all at this stage. This probably means the most certified person in the team and the alpha of the team; nobody said it was going to be easy, but convincing the biggest skeptics of the merits of DevOps and automation will be worth it in the long run. At this stage, automation within the network team should have enough credibility and momentum to broach the subject, citing successful use cases.

It's easier to involve all the difficult individuals in the process rather than presenting ideas back to them at the end of it. Difficult senior engineers or managers are less likely to shoot down your ideas in front of your peers if they are involved in the creation of the process and have contributed in some way. Try to be respectful, even if you do not agree with their viewpoints, but don't back down or give up if you believe that you are correct. Make arguments fact based and non-emotive, write down pros and cons, and document any concerns without ignoring them; you have to be willing to compromise, but not to the point of devaluing the solution. There may actually be genuine risks involved that need to be addressed, so valid points should not be glossed over or ignored. Where possible, seek backup from your sponsor if you are not sure about some of the points or feel individuals are being unreasonable.

When implementing the complex automation task, work as a team, not as an individual; this is a learning experience for others as well as yourself. Try to teach the network team a configuration management tool; they may just be scared to try out new things, so go with a gentle approach, potentially stopping at times to try out some online tutorials to familiarize everyone with the tool and trying out various approaches to solve problems in the easiest way possible. Try to show the network engineers how easy it is to use configuration management tools and the benefits they bring. Don't use complicated configuration management tools, as this may put them off. The majority of network engineers can't currently code, something that will potentially change in the coming years.
As stated before, infrastructure engineers at least had a grounding in bash or PowerShell to help get started, so pick tooling that the network engineers like and give them options; try not to enforce tools they are not comfortable with. When utilizing automation, one of the key concerns for network engineers is peer review, as they have a natural distrust that the automation has worked. Try to build in gated processes to address these concerns; automation doesn't mean doing away with peer review, so create a lightweight process to help. Make the automation easy to review by utilizing source control to show diffs, and educate the network engineers on how to do this.

Coding can be a scary prospect initially, so propose doing some team exercises each week on a coding or configuration management task, and work on them as a team. This makes it less threatening, and it is important to listen to feedback. If the consensus is that something isn't working well or isn't of benefit, then look at alternative ways to achieve the same goal that work for the whole team. Before releasing any new automated process, test it in a preproduction environment alongside an experienced engineer, have them peer review it, and try to make it fail against numerous test cases. There is only one opportunity to make a first impression with a new process, so make sure it is a successful one. Try to set up knowledge-sharing sessions within the team to discuss the automation, and make sure everyone knows how to do the operations manually too, so they can easily debug any future issues or extend or amend the automation. Make sure that output and logging are clear to all users, as they will all need to support the automation when it is used in production.

Summary

In this article, we covered practical initiatives which, when combined, will allow IT staff to implement successful DevOps models in their organization. Rather than just focusing on departmental issues, it has promoted using a set of practical strategies to change the day-to-day operational models that constrain teams. It also focused on the need for network engineers to learn new skills and techniques in order to make the most of a new operational model and not become the bottleneck for delivery. This article has provided practical real-world examples that could help senior managers and engineers improve their own companies, emphasizing collaboration between teams and showing that networking departments are now required to automate all network operations to deliver at the pace expected by businesses.
Key takeaways from this article are:

- DevOps is not just about development and operations staff; it can be applied to network teams
- Before starting a DevOps initiative, analyze successful teams or companies and what made them successful
- Senior management sponsorship is key to creating a successful DevOps model
- Your own company's model will not identically mirror other companies', so try not to copy like for like; adapt it so that it works in your own organization
- Allow teams to create their own processes and don't dictate processes
- Allow change agents to initiate changes that teams are comfortable with
- Automate all operational work; start small, and build up to larger, more complex problems once the team is comfortable with new ways of working
- Successful change will not happen overnight; it will only work through a model of continuous improvement

Useful links on DevOps are:

https://www.youtube.com/watch?v=TdAmAj3eaFI
https://www.youtube.com/watch?v=gqmuVHw-hQw

Resources for Article:

Further resources on this subject:

Jenkins 2.0: The impetus for DevOps Movement [article]
Introduction to DevOps [article]
Command Line Tools for DevOps [article]

article-image-swifts-core-libraries
Packt
13 Oct 2016
26 min read
Save for later

Swift's Core Libraries

In this article, Jon Hoffman, the author of the book Mastering Swift 3, talks about how he was really excited when Apple announced that they were going to release a version of Swift for Linux, so he could use Swift for his Linux and embedded development as well as his Mac OS and iOS development. When Apple first released Swift 2.2 for Linux, he was very excited but was also a little disappointed because he could not read/write files, access network services, or use Grand Central Dispatch (GCD or libdispatch) on Linux like he could on Apple's platforms. With the release of Swift 3, Apple has corrected this with the release of Swift's core libraries. (For more resources related to this topic, see here.)

In this article, you will learn about the following topics:

- What are the Swift core libraries
- How to use Apple's URL loading system
- How to use the Formatter classes
- How to use the File Manager class

The Swift Core Libraries are written to provide a rich set of APIs that are consistent across the various platforms that Swift supports. By using these libraries, developers will be able to write code that will be portable to all platforms that Swift supports. These libraries provide a higher level of functionality as compared to the Swift standard library. The core libraries provide functionality in a number of areas, such as:

- Networking
- Unit testing
- Scheduling and execution of work (libdispatch)
- Property lists, JSON parsing, and XML parsing
- Support for dates, times, and calendar calculations
- Abstraction of OS-specific behavior
- Interaction with the file system
- User preferences

We are unable to cover all of the Core Libraries in this single article; however, we will look at some of the more useful ones. We will start off by looking at Apple's URL loading system, which is used for network development.

Apple's URL loading system

Apple's URL loading system is a framework of classes available to interact with URLs. We can use these classes to communicate with services that use standard Internet protocols. The classes that we will be using in this section to connect to and retrieve information from REST services are as follows:

- URLSession: This is the main session object.
- URLSessionConfiguration: This is used to configure the behavior of the URLSession object.
- URLSessionTask: This is a base class to handle the data being retrieved from the URL. Apple provides three concrete subclasses of the URLSessionTask class.
- URL: This is an object that represents the URL to connect to.
- URLRequest: This class contains information about the request that we are making and is used by the URLSessionTask service to make the request.
- HTTPURLResponse: This class contains the response to our request.

Now, let's look at each of these classes a little more in depth so that we have a basic understanding of what each does.

URLSession

A URLSession object provides an API for interacting with various protocols such as HTTP and HTTPS. The session object, which is an instance of URLSession, manages this interaction. These session objects are highly configurable, which allows us to control how our requests are made and how we handle the data that is returned. Like most networking APIs, URLSession is asynchronous. This means that we have to provide a way to return the response from the service back to the code that needs it. The most popular way to return the results from a session is to pass a completion handler block (closure) to the session.
This completion handler is then called when the service successfully responds or we receive an error. All of the examples in this article use completion handlers to process the data that is returned from the services.

URLSessionConfiguration

The URLSessionConfiguration class defines the behavior and policies to use when using the URLSession object to connect to a URL. When using the URLSession object, we usually create a URLSessionConfiguration instance first, because an instance of this class is required when we create an instance of the URLSession class. The URLSessionConfiguration class defines three session types:

- Default session configuration: Manages the upload and download tasks with default configurations
- Ephemeral session configuration: This configuration behaves similar to the default session configuration, except that it does not cache anything to disk
- Background session configuration: This session allows for uploads and downloads to be performed, even when the app is running in the background

It is important to note that we should make sure that we configure the URLSessionConfiguration object appropriately before we use it to create an instance of the URLSession class. When the session object is created, it creates a copy of the configuration object that we provide. Any changes made to the configuration object once the session object is created are ignored by the session. If we need to make changes to the configuration, we must create another instance of the URLSession class.

URLSessionTask

The URLSession service uses an instance of the URLSessionTask class to make the call to the service that we are connecting to. The URLSessionTask class is a base class, and Apple has provided three concrete subclasses that we can use:

- URLSessionDataTask: This returns the response, in memory, directly to the application as one or more Data objects. This is the task that we generally use most often.
- URLSessionDownloadTask: This writes the response directly to a temporary file.
- URLSessionUploadTask: This is used for making requests that require a request body, such as a POST or PUT request.

It is important to note that a task will not send the request to the service until we call the resume() method.

URL

The URL object represents the URL that we are going to connect to. The URL class is not limited to URLs that represent remote servers; it can also be used to represent a local file on disk. In this article, we will be using the URL class exclusively to represent the URL of the remote service that we are connecting to.

URLRequest

We use the URLRequest class to encapsulate our URL and the request properties. It is important to understand that the URLRequest class is used to encapsulate the necessary information to make our request, but it does not make the actual request. To make the request, we use instances of the URLSession and URLSessionTask classes.

HTTPURLResponse

The HTTPURLResponse class is a subclass of the URLResponse class that encapsulates the metadata associated with the response to a URL request. The HTTPURLResponse class provides methods for accessing specific information associated with an HTTP response. Specifically, this class allows us to access the HTTP header fields and the response status codes. We briefly covered a number of classes in this section and it may not be clear how they all actually fit together; however, once you see the examples a little further in this article, it will become much clearer.
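As a small preview of how the response metadata can be inspected, here is a minimal sketch that is not part of the original text. It assumes a value of type URLResponse?, such as the one handed to a data task's completion handler in the examples that follow, and casts it to HTTPURLResponse to read the status code and a header field; the function name printResponseInfo is just an illustrative choice.

import Foundation

// A minimal sketch (not from the original text): inspecting an HTTP response.
// `response` is assumed to be the URLResponse? value passed to a data task's
// completion handler, like the ones used later in this article.
func printResponseInfo(_ response: URLResponse?) {
    // Cast to HTTPURLResponse to get HTTP-specific information
    guard let httpResponse = response as? HTTPURLResponse else {
        print("Not an HTTP response")
        return
    }
    // The numeric HTTP status code, for example 200 for success
    print("Status code: \(httpResponse.statusCode)")
    // Header fields are exposed as a dictionary
    if let contentType = httpResponse.allHeaderFields["Content-Type"] {
        print("Content-Type: \(contentType)")
    }
}

A helper like this could be called from inside the completion handlers shown later, for example printResponseInfo(response) before handing the data off.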
Before we go into our examples, let's take a quick look at the type of service that we will be connecting to.

REST web services

REST has become one of the most important technologies for stateless communications between devices. Due to the lightweight and stateless nature of REST-based services, its importance is likely to continue to grow as more devices are connected to the Internet. REST is an architecture style for designing networked applications. The idea behind REST is that instead of using complex mechanisms, such as SOAP or CORBA, to communicate between devices, we use simple HTTP requests for the communication. While, in theory, REST is not dependent on the Internet protocols, it is almost always implemented using them. Therefore, when we are accessing REST services, we are almost always interacting with web servers in the same way that our web browsers interact with these servers.

REST web services use the HTTP POST, GET, PUT, or DELETE methods. If we think about a standard Create, Read, Update, Delete (CRUD) application, we would use a POST request to create or update data, a GET request to read data, and a DELETE request to delete data. When we type a URL into our browser's address bar and hit Enter, we are generally making a GET request to the server and asking it to send us the web page associated with that URL. When we fill out a web form and click the submit button, we are generally making a POST request to the server. We then include the parameters from the web form in the body of our POST request. Now, let's look at how to make an HTTP GET request using Apple's networking API.

Making an HTTP GET request

In this example, we will make a GET request to Apple's iTunes search API to get a list of items related to the search term "Jimmy Buffett". Since we are retrieving data from the service, by REST standards, we should use a GET request to retrieve the data. While the REST standard is to use GET requests to retrieve data from a service, there is nothing stopping a developer of a web service from using a GET request to create or update a data object. It is not recommended to use a GET request in this manner, but just be aware that there are services out there that do not adhere to the REST standards. The following code makes a request to Apple's iTunes search API and then prints the results to the console:

public typealias dataFromURLCompletionClosure = (URLResponse?, Data?) -> Void

public func sendGetRequest(_ handler: @escaping dataFromURLCompletionClosure) {
    let sessionConfiguration = URLSessionConfiguration.default
    let urlString = "https://itunes.apple.com/search?term=jimmy+buffett"
    if let encodeString = urlString.addingPercentEncoding(
           withAllowedCharacters: CharacterSet.urlQueryAllowed),
       let url = URL(string: encodeString) {
        var request = URLRequest(url: url)
        request.httpMethod = "GET"
        let urlSession = URLSession(
            configuration: sessionConfiguration,
            delegate: nil,
            delegateQueue: nil)
        let sessionTask = urlSession.dataTask(with: request) {
            (data, response, error) in
            handler(response, data)
        }
        sessionTask.resume()
    }
}

We start off by creating a type alias named dataFromURLCompletionClosure. The dataFromURLCompletionClosure type will be used for both the GET and POST examples of this article. If you are not familiar with using a typealias to define a closure type, it simply gives the closure signature a reusable name that can be referenced wherever that closure type is needed. We then create a function named sendGetRequest(), which will be used to make the GET request to Apple's iTunes API.
This function accepts one argument named handler, which is a closure that conforms to the dataFromURLCompletionClosure type. The handler closure will be used to return the results from the request.

Starting with Swift 3, the default for closure arguments to functions is non-escaping, which means that, by default, the closure argument cannot escape the function body. A closure is considered to escape a function when that closure, which is passed as an argument to the function, is called after the function returns. Since our closure will be called after the function returns, we use the @escaping attribute before the parameter type to indicate that it is allowed to escape.

Within our sendGetRequest() method, we begin by creating an instance of the URLSessionConfiguration class using the default settings. If we need to, we can modify the session configuration properties after we create it, but in this example, the default configuration is what we want.

After we create our session configuration, we create the URL string. This is the URL of the service we are connecting to. With a GET request, we put our parameters in the URL itself. In this specific example, https://itunes.apple.com/search is the URL of the web service. We then follow the web service URL with a question mark (?), which indicates that the rest of the URL string consists of parameters for the web service. The parameters take the form of key/value pairs, which means that each parameter has a key and a value. The key and value of a parameter, in a URL, are separated by an equals sign (=). In our example, the key is term and the value is jimmy+buffett. Next, we run the URL string that we just created through the addingPercentEncoding() method to make sure our URL string is encoded properly. We use the CharacterSet.urlQueryAllowed character set with this method to ensure we have a valid URL string.

Next, we use the URL string that we just built to create a URL instance named url. Since we are making a GET request, this URL instance will represent both the location of the web service and the parameters that we are sending to it. We create an instance of the URLRequest class using the URL instance that we just created. In this example, we set the httpMethod property; however, we can also set other properties, such as the timeout interval, or add items to our HTTP header.

Now, we use the sessionConfiguration constant that we created at the beginning of the sendGetRequest() function to create an instance of the URLSession class. The URLSession class provides the API that we will use to connect to Apple's iTunes search API. In this example, we use the dataTask(with:) method of the URLSession instance to return an instance of the URLSessionDataTask type named sessionTask. The sessionTask instance is what makes the request to the iTunes search API. When we receive the response from the service, we use the handler callback to return both the URLResponse object and the Data object. The URLResponse contains information about the response, and the Data instance contains the body of the response. Finally, we call the resume() method of the URLSessionDataTask instance to make the request to the web service. Remember, as we mentioned earlier, a URLSessionTask instance will not send the request to the service until we call the resume() method.

Now, let's look at how we would call the sendGetRequest() function.
The first thing we need to do is to create a closure that will be passed to the sendGetRequest() function and called when the response from the web service is received. In this example, we will simply print the response to the console. Since the response is in the JSON format, we could use the JSONSerialization class to parse it (a brief sketch of that follows after this example); however, since this section is on networking, we will simply print the response to the console. Here is the code:

let printResultsClosure: dataFromURLCompletionClosure = {
    if let data = $1,
       let sString = String(data: data, encoding: .utf8) {
        print(sString)
    } else {
        print("Data is nil")
    }
}

We define this closure, named printResultsClosure, to be an instance of the dataFromURLCompletionClosure type. Within the closure, we unwrap the second parameter ($1, the Data? instance) and, if it is not nil, convert it to an instance of the String class, which is then printed to the console. Now, let's call the sendGetRequest() method with the following code:

let aConnect = HttpConnect()
aConnect.sendGetRequest(printResultsClosure)

This code creates an instance of the HttpConnect class and then calls the sendGetRequest() method, passing the printResultsClosure closure as the only parameter. If we run this code while we are connected to the Internet, we will receive a JSON response that contains a list of items related to Jimmy Buffett on iTunes.
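As mentioned above, the JSON that comes back could also be parsed rather than just printed. The following is a minimal sketch and is not taken from the original text: it assumes the same dataFromURLCompletionClosure type, and it assumes that the iTunes search API returns a dictionary with resultCount and results keys (with trackName entries inside results), which reflects that API's documented response format but should be treated as an assumption here.

import Foundation

// A minimal sketch (assumption, not from the original text): parsing the
// iTunes search response with JSONSerialization instead of printing raw text.
// The "resultCount", "results", and "trackName" keys are assumed based on the
// iTunes search API's response format.
let parseResultsClosure: dataFromURLCompletionClosure = { _, data in
    guard let data = data,
          let json = try? JSONSerialization.jsonObject(with: data, options: []),
          let dict = json as? [String: Any] else {
        print("Unable to parse response")
        return
    }
    print("Result count: \(dict["resultCount"] ?? 0)")
    if let results = dict["results"] as? [[String: Any]] {
        for item in results {
            // Print the track name for each item, if present
            print(item["trackName"] ?? "unknown track")
        }
    }
}

This closure could be passed to sendGetRequest() in place of printResultsClosure; everything else stays the same.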
Now that we have seen how to make a simple HTTP GET request, let's look at how we would make an HTTP POST request to a web service.

Making an HTTP POST request

Since Apple's iTunes APIs use GET requests to retrieve data, in this section we will use the free http://httpbin.org service to show you how to make a POST request. The POST service that http://httpbin.org provides can be found at http://httpbin.org/post. This service will echo back the parameters that it receives so that we can verify that our request was made properly.

When we make a POST request, we generally have some data that we want to send or post to the server. This data takes the form of key/value pairs. These pairs are separated by an ampersand (&) symbol, and each key is separated from its value by an equals sign (=). As an example, let's say that we want to submit the following data to our service:

firstname: Jon
lastname: Hoffman
age: 47 years

The body of the POST request would take the following format:

firstname=Jon&lastname=Hoffman&age=47

Once we have the data in the proper format, we will use the data(using:) method to properly encode the POST data. Since the data going to the server is in the key/value format, the most appropriate way to store this data, prior to sending it to the service, is with a Dictionary object. With this in mind, we will need to create a method that will take a Dictionary object and return a string object that can be used for the POST request. The following code will do that:

private func dictionaryToQueryString(_ dict: [String : String]) -> String {
    var parts = [String]()
    for (key, value) in dict {
        let part: String = key + "=" + value
        parts.append(part)
    }
    return parts.joined(separator: "&")
}

This function loops through each key/value pair of the Dictionary object and creates a String object that contains the key and the value separated by the equals sign (=). We then use the joined(separator:) method to join each item in the array, separated by the specified string. In our case, we want to separate each string with the ampersand symbol (&). We then return this newly created string to the code that called it.

Now, let's create the sendPostRequest() function that will send the POST request to the http://httpbin.org post service. We will see a lot of similarities between this sendPostRequest() function and the sendGetRequest() function, which we showed you in the Making an HTTP GET request section. Let's take a look at the following code:

public func sendPostRequest(_ handler: @escaping dataFromURLCompletionClosure) {
    let sessionConfiguration = URLSessionConfiguration.default
    let urlString = "http://httpbin.org/post"
    if let encodeString = urlString.addingPercentEncoding(
           withAllowedCharacters: CharacterSet.urlQueryAllowed),
       let url = URL(string: encodeString) {
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        let params = dictionaryToQueryString(["One": "1 and 1", "Two": "2 and 2"])
        request.httpBody = params.data(
            using: String.Encoding.utf8, allowLossyConversion: true)
        let urlSession = URLSession(
            configuration: sessionConfiguration,
            delegate: nil,
            delegateQueue: nil)
        let sessionTask = urlSession.dataTask(with: request) {
            (data, response, error) in
            handler(response, data)
        }
        sessionTask.resume()
    }
}

This code is very similar to the sendGetRequest() function that we saw earlier in this section. The two main differences are that the httpMethod of the URLRequest is set to POST rather than GET, and how we set the parameters. In this function, we set the httpBody property of the URLRequest instance to the parameters we are submitting. Now that we have seen how to use the URL loading system, let's look at how we can use formatters.

Formatter

Formatter is an abstract class that declares an interface for an object that creates, converts, or validates a human readable form of some data. The types that subclass the Formatter class are generally used when we want to take a particular object, like an instance of the Date class, and present the value in a form that the user of our application can understand. Apple has provided several concrete implementations of the Formatter class, and in this section we will look at two of them. It is important to remember that the formatters that Apple provides will use the proper format for the default locale of the device the application is running on. We will start off by looking at the DateFormatter type.

DateFormatter

The DateFormatter class is a subclass of the Formatter abstract class that can be used to convert a Date object into a human readable string. It can also be used to convert a String representation of a date into a Date object. We will look at both of these use cases in this section. Let's begin by seeing how we could convert a Date object into a human readable string. The DateFormatter type has five predefined styles that we can use when we are converting a Date object to a human readable string. The following chart shows what the five styles look like for an en-US locale.
DateFormatter style | Date | Time
.none | No format | No format
.short | 12/25/16 | 6:00 AM
.medium | Dec 25, 2016 | 6:00:00 AM
.long | December 25, 2016 | 6:00:00 AM EST
.full | Sunday, December 25, 2016 | 6:00:00 AM Eastern Standard Time
The following code shows how we would use the predefined DateFormatter styles: let now = Date() let formatter = DateFormatter() formatter.dateStyle = .short formatter.timeStyle = .medium let dateStr = formatter.string(from: now) We use the string(from:) method to convert the now date to a human readable string. In this example, for the en-US locale, the dateStr constant would contain text similar to "8/19/16, 6:40:33 PM". There are numerous times when the predefined styles do not meet our needs. For those times, we can define our own styles using a custom format string. This string is a series of characters that the DateFormatter type knows are stand-ins for the values we want to show. The DateFormatter instance will replace these stand-ins with the appropriate values. The following table shows some of the formatting values that we can use to format our Date objects:
Stand-in format | Description | Example output
yy | Two-digit year | 16, 14, 04
yyyy | Four-digit year | 2016, 2014, 2004
MM | Two-digit month | 06, 08, 12
MMM | Three-letter month | Jul, Aug, Dec
MMMM | Full month name | July, August
dd | Two-digit day | 10, 11, 30
EEE | Three-letter day | Mon, Sat, Sun
EEEE | Full day name | Monday, Sunday
a | Period of day | AM, PM
hh | Two-digit hour | 02, 03, 04
HH | Two-digit hour (24-hour clock) | 11, 14, 16
mm | Two-digit minute | 30, 35, 45
ss | Two-digit second | 30, 35, 45
The following code shows how we would use these custom formatters: let now = Date() let formatter2 = DateFormatter() formatter2.dateFormat = "yyyy-MM-dd HH:mm:ss" let dateStr2 = formatter2.string(from: now) We use the string(from:) method to convert the now date to a human readable string. In this example, for the en-US locale, the dateStr2 constant would contain text similar to 2016-08-19 19:03:23. Now let's look at how we would take a formatted date string and convert it to a Date object. We will use the same format string that we used in our last example: formatter2.dateFormat = "yyyy-MM-dd HH:mm:ss" let dateStr3 = "2016-08-19 16:32:02" let date = formatter2.date(from: dateStr3) In this example, we took the human readable date string and converted it to a Date object using the date(from:) method. If the format of the human readable date string does not match the format specified in the DateFormatter instance, then the conversion will fail and return nil. Now let's take a look at the NumberFormatter type. NumberFormatter The NumberFormatter class is a subclass of the Formatter abstract class that can be used to convert a number into a human readable string with a specified format. This formatter is especially useful when we want to display a currency string, since it will convert the number to the proper currency format for the current locale. Let's begin by looking at how we would convert a number into a currency string: let formatter1 = NumberFormatter() formatter1.numberStyle = .currency let num1 = formatter1.string(from: 23.99) In the previous code, we define our number style to be .currency, which tells our formatter that we want to convert our number to a currency string. We then use the string(from:) method to convert the number to a string. In this example, for the en-US locale, the num1 constant would contain the string "$23.99". Now let's see how we would round a number using the NumberFormatter type. 
The following code will round the number to two decimal places: let formatter2 = NumberFormatter() formatter2.numberStyle = .decimal formatter2.maximumFractionDigits = 2 let num2 = formatter2.string(from: 23.2357) In this example, we set the numberStyle property to the .decimal style and we also define the maximum number of decimal digits to be two using the maximumFractionDigits property. We use the string(from:) method to convert the number to a string. In this example, the num2 constant will contain the string "23.24". Now let's see how we can spell out a number using the NumberFormatter type. The following code will take a number and spell out the number: let formatter3 = NumberFormatter() formatter3.numberStyle = .spellOut let num3 = formatter3.string(from: 2015) In this example, we set the numberStyle property to the .spellOut style. This style will spell out the number. For this example, the num3 constant will contain the string two thousand fifteen. Now let's look at how we can manipulate the file system using the FileManager class. FileManager The file system is a complex topic and how we manipulate it within our applications is generally specific to the operating system that our application is running on. This can be an issue when we are trying to port code from one operating system to another. Apple has addressed this issue by putting the FileManager object in the core libraries. The FileManager object lets us examine and make changes to the file system in a uniform manner across all operating systems that Swift supports. The FileManager class provides us with a shared instance that we can use. This instance should be suitable for most of our file system related tasks. We can access this shared instance using the default property. When we use the FileManager object we can provide paths as either an instance of the URL or String types. In this section, all of our paths will be String types for consistency purposes. Let's start off by seeing how we could list the contents of a directory using the FileManager object. The following code shows how to do this: let fileManager = FileManager.default do { let path = "/Users/jonhoffman/" let dirContents = try fileManager.contentsOfDirectory(atPath: path) for item in dirContents { print(item) } } catch let error { print("Failed reading contents of directory: \(error)") } We start off by getting the shared instance of the FileManager object using the default property. We will use this same shared instance for all of our examples in this section rather than redefining it for each example. Next, we define our path and use it with the contentsOfDirectory(atPath:) method. This method returns an array of String types that contains the names of the items in the path. We use a for loop to list these items. Next let's look at how we would create a directory using the file manager. The following code shows how to do this: do { let path = "/Users/jonhoffman/masteringswift/test/dir" try fileManager.createDirectory(atPath: path, withIntermediateDirectories: true) } catch let error { print("Failed creating directory, \(error) ") } In this example, we use the createDirectory(atPath: withIntermediateDirectories:) method to create the directory. When we set the withIntermediateDirectories parameter to true, this method will create any parent directories that are missing. When this parameter is set to false, if any parent directories are missing the operation will fail. 
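As a quick, optional check that the directory was actually created, the shared file manager also offers the fileExists(atPath:isDirectory:) method. The following is a small sketch of how it might be used; the path is simply the hypothetical one from the previous example:

import Foundation

let fileManager = FileManager.default
let path = "/Users/jonhoffman/masteringswift/test/dir"

// Check whether the item exists and whether it is a directory
var isDirectory: ObjCBool = false
if fileManager.fileExists(atPath: path, isDirectory: &isDirectory) {
    print("Item exists, directory: \(isDirectory.boolValue)")
} else {
    print("Item does not exist")
}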
Now let's look at how we would copy an item from one location to another: do { let pathOrig = "/Users/jonhoffman/masteringswift/" let pathNew = "/Users/jonhoffman/masteringswift2/" try fileManager.copyItem(atPath: pathOrig, toPath: pathNew) } catch let error { print("Failed copying directory, \(error) ") } In this example, we use the copyItem(atPath: toPath:) method to copy one item to another location. This method can be used to copy either directories or files. If it is used for directories, the entire path structure below the directory specified by the path is copied. The file manager also has a method that will let us move an item rather than copying it. Let's see how to do this: do { let pathOrig = "/Users/jonhoffman/masteringswift2/" let pathNew = "/Users/jonhoffman/masteringswift3/" try fileManager.moveItem(atPath: pathOrig, toPath: pathNew) } catch let error { print("Failed moving directory, \(error) ") } To move an item, we use the moveItem(atPath: toPath:) method. Just like the copy example we just saw, the move method can be used for either files or directories. If the path specifies a directory, then the entire directory structure below that path will be moved. Now let's see how we can delete an item from the file system: do { let path = "/Users/jonhoffman/masteringswift/" try fileManager.removeItem(atPath: path) } catch let error { print("Failed removing directory, \(error) ") } In this example, we use the removeItem(atPath:) method to remove the item from the file system. A word of warning: once you delete something, it is gone and there is no getting it back. Next let's look at how we can read permissions for items in our file system. For this we will need to create a file named test.file, which our path will point to: let path = "/Users/jonhoffman/masteringswift3/test.file" if fileManager.fileExists(atPath: path) { let isReadable = fileManager.isReadableFile(atPath: path) let isWriteable = fileManager.isWritableFile(atPath: path) let isExecutable = fileManager.isExecutableFile(atPath: path) let isDeleteable = fileManager.isDeletableFile(atPath: path) print("can read \(isReadable)") print("can write \(isWriteable)") print("can execute \(isExecutable)") print("can delete \(isDeleteable)") } In this example, we use four different methods to read the file system permissions for the file system item. These methods are:
isReadableFile(atPath:): true if the file is readable
isWritableFile(atPath:): true if the file is writable
isExecutableFile(atPath:): true if the file is executable
isDeletableFile(atPath:): true if the file is deletable
If we wanted to read or write text files, we could use methods provided by the String type rather than the FileManager type. Even though we do not use the FileManager class for this example, we wanted to show how to read and write text files. Let's see how we would write some text to a file: let filePath = "/Users/jonhoffman/Documents/test.file" let outString = "Write this text to the file" do { try outString.write(toFile: filePath, atomically: true, encoding: .utf8) } catch let error { print("Failed writing to path: \(error)") } In this example, we start off by defining our path as a String type just as we did in the previous examples. We then create another instance of the String type that contains the text we want to put in our file. To write the text to the file we use the write(toFile: atomically: encoding:) method of the String instance. This method will create the file if needed and write the contents of the String instance to the file. 
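Another quick way to confirm the write succeeded, shown here purely as an aside, is to ask the file manager for the file's attributes using the attributesOfItem(atPath:) method; the path below is the same hypothetical one used in the write example:

import Foundation

let fileManager = FileManager.default
let filePath = "/Users/jonhoffman/Documents/test.file"

do {
    // Returns a dictionary keyed by FileAttributeKey
    let attributes = try fileManager.attributesOfItem(atPath: filePath)
    if let size = attributes[.size] as? Int {
        print("File size in bytes: \(size)")
    }
    if let modified = attributes[.modificationDate] as? Date {
        print("Last modified: \(modified)")
    }
} catch let error {
    print("Failed reading attributes: \(error)")
}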
It is just as easy to read a text file using the String type. The following example shows how to do this: let filePath = "/Users/jonhoffman/Documents/test.file" var inString = "" do { inString = try String(contentsOfFile: filePath) } catch let error { print("Failed reading from path: \(error)") } print("Text Read: \(inString)") In this example, we use the contentsOfFile: initializer to create an instance of the String type that contains the contents of the file specified by the file path. Now that we have seen how to use the FileManager type, let's look at how we would serialize and deserialize JSON documents using the JSONSerialization type. Summary In this article, we looked at some of the libraries that make up the Swift Core Libraries. While these libraries are only a small portion of the libraries that make up the Swift Core Libraries, they are arguably some of the most useful. To explore the other libraries, you can refer to Apple's GitHub page: https://github.com/apple/swift-corelibs-foundation. Resources for Article: Further resources on this subject: Flappy Swift [article] Network Development with Swift [article] Your First Swift 2 Project [article]

IoT and Decision Science

Packt
13 Oct 2016
10 min read
In this article by Jojo Moolayil, author of the book Smarter Decisions - The Intersection of Internet of Things and Decision Science, you will learn that the Internet of Things (IoT) and Decision Science have been among the hottest topics in the industry for a while now. You would have heard about IoT and wanted to learn more about it, but unfortunately you would have come across multiple names and definitions over the Internet with hazy differences between them. Also, Decision Science has grown from a nascent domain to become one of the fastest-growing and most widespread horizontals in the industry in recent years. With the ever-increasing volume, variety, and veracity of data, decision science has become more and more valuable for the industry. Using data to uncover latent patterns and insights to solve business problems has made it easier for businesses to take actions with better impact and accuracy. (For more resources related to this topic, see here.) Data is the new oil for the industry, and with the boom of IoT, we are in a world where more and more devices are getting connected to the Internet, with sensors capturing more and more vital, granular details that had never been captured before. The IoT is a game changer with a plethora of devices connected to each other; the industry is eagerly attempting to tap the huge potential that it can deliver. The true value and impact of IoT is delivered with the help of Decision Science. IoT has inherently generated an ocean of data where you can swim to gather insights and take smarter decisions with the intersection of Decision Science and IoT. In this book, you will learn about IoT and Decision Science in detail by solving real-life IoT business problems using a structured approach. In this article, we will begin by understanding the fundamental basics of IoT and Decision Science problem solving. You will learn the following concepts:
- Understanding IoT and demystifying Machine to Machine (M2M), IoT, Internet of Everything (IoE), and Industrial IoT (IIoT)
- Digging deeper into the logical stack of IoT
- Studying the problem life cycle
- Exploring the problem landscape
- The art of problem solving
- The problem solving framework
It is highly recommended that you explore this article in depth. It focuses on the basics and concepts required to build problems and use cases. Understanding the IoT To get started with the IoT, let's first try to understand it using the easiest constructs. Internet and Things; we have two simple words here that help us understand the entire concept. So what is the Internet? It is basically a network of computing devices. Similarly, what is a Thing? It could be any real-life entity featuring Internet connectivity. So now, what do we decipher from IoT? It is a network of connected Things that can transmit and receive data from other things once connected to the network. This is how we describe the Internet of Things in a nutshell. Now, let's take a glance at the definition. IoT can be defined as the ever-growing network of Things (entities) that feature Internet connectivity and the communication that occurs between them and other Internet-enabled devices and systems. The Things in IoT are enabled with sensors that capture vital information from the device during its operations, and the device features Internet connectivity that helps it transfer and communicate to other devices and the network. 
Today, when we discuss IoT, there are so many other similar terms that come into the picture, such as Industrial Internet, M2M, IoE, and a few more, and we find it difficult to understand the differences between them. Before we begin delineating the differences between these hazy terms and understand how IoT evolved in the industry, let's first take a simple real-life scenario to understand what exactly IoT looks like. IoT in a real-life scenario Let's take a simple example to understand how IoT works. Consider a scenario where you are a father in a family with a working mother and a 10-year-old son studying in school. You and your wife work in different offices. Your house is equipped with quite a few smart devices, say, a smart microwave, smart refrigerator, and smart TV. You are currently in office and you get notified on your smartphone that your son, Josh, has reached home from school. (He used his personal smart key to open the door.) You then use your smartphone to turn on the microwave at home to heat the sandwiches kept in it. Your son gets notified on the smart home controller that you have hot sandwiches ready for him. He quickly finishes them and starts preparing for a math test at school, and you resume your work. After a while, you get notified again that your wife has also reached home (She also uses a similar smart key.) and you suddenly realize that you need to reach home to help your son with his math test. You again use your smartphone and change the air conditioner settings for three people and set the refrigerator to defrost using the app. In another 15 minutes, you are home and the air conditioning temperature is well set for three people. You then grab a can of juice from the refrigerator and discuss some math problems with your son on the couch. Intuitive, isn't it? How did this happen, and how did you access and control everything right from your phone? Well, this is how IoT works! Devices can talk to each other and also take actions based on the signals received: The IoT scenario Let's take a closer look at the same scenario. You are sitting in the office and you could access the air conditioner, microwave, refrigerator, and home controller through your smartphone. Yes, the devices feature Internet connectivity and once connected to the network, they can send and receive data from other devices and take actions based on signals. A simple protocol helps these devices understand and send data and signals to a plethora of heterogeneous devices connected to the network. We will get into the details of the protocol and how these devices talk to each other soon. However, before that, we will get into some details of how this technology started and why we have so many different names today for IoT. Demystifying M2M, IoT, IIoT, and IoE So now that we have a general understanding of what IoT is, let's try to understand how it all started. A few questions that we will try to understand are: Is IoT very new in the market?, When did this start?, How did this start?, What's the difference between M2M, IoT, IoE, and all those different names?, and so on. If we try to understand the fundamentals of IoT, that is, machines or devices connected to each other in a network, which isn't something really new and radically challenging, then what is this buzz all about? The buzz about machines talking to each other started long before most of us thought of it, and back then it was called Machine to Machine Data. 
In early 1950, a lot of machinery deployed for aerospace and military operations required automated communication and remote access for service and maintenance. Telemetry was where it all started. It is a process in which a highly automated communication was established from which data is collected by making measurements at remote or inaccessible geographical areas and then sent to a receiver through a cellular or wired network where it was monitored for further actions. To understand this better, lets take an example of a manned space shuttle sent for space exploration. A huge number of sensors are installed in such a space shuttle to monitor the physical condition of astronauts, the environment, and also the condition of the space shuttle. The data collected through these sensors is then sent back to the substation located on Earth, where a team would use this data to analyze and take further actions. During the same time, industrial revolution peaked and a huge number of machines were deployed in various industries. Some of these industries where failures could be catastrophic also saw the rise in machine-to-machine communication and remote monitoring: Telemetry Thus, machine-to-machine data a.k.a. M2M was born and mainly through telemetry. Unfortunately, it didnt scale to the extent that it was supposed to and this was largely because of the time it was developed in. Back then, cellular connectivity was not widespread and affordable, and installing sensors and developing the infrastructure to gather data from them was a very expensive deal. Therefore, only a small chunk of business and military use cases leveraged this. As time passed, a lot of changes happened. The Internet was born and flourished exponentially. The number of devices that got connected to the Internet was colossal. Computing power, storage capacities, and communication and technology infrastructure scaled massively. Additionally, the need to connect devices to other devices evolved, and the cost of setting up infrastructure for this became very affordable and agile. Thus came the IoT. The major difference between M2M and IoT initially was that the latter used the Internet (IPV4/6) as the medium whereas the former used cellular or wired connection for communication. However, this was mainly because of the time they evolved in. Today, heavy engineering industries have machinery deployed that communicate over the IPV4/6 network and is called Industrial IoT or sometimes M2M. The difference between the two is bare minimum and there are enough cases where both are used interchangeably. Therefore, even though M2M was actually the ancestor of IoT, today both are pretty much the same. M2M or IIoT are nowadays aggressively used to market IoT disruptions in the industrial sector. IoE or Internet of Everything was a term that surfaced on the media and Internet very recently. The term was coined by Cisco with a very intuitive definition. It emphasizes Humans as one dimension in the ecosystem. It is a more organized way of defining IoT. The IoE has logically broken down the IoT ecosystem into smaller components and simplified the ecosystem in an innovative way that was very much essential. IoE divides its ecosystem into four logical units as follows: People Processes Data Devices Built on the foundation of IoT, IoE is defined as The networked connection of People, Data, Processes, and Things. 
Overall, all these different terms in the IoT fraternity have more similarities than differences and, at the core, they are the same, that is, devices connecting to each other over a network. The names are then stylized to give a more intrinsic connotation of the business they refer to, such as Industrial IoT and Machine to Machine for B2B heavy engineering, manufacturing, and energy verticals, Consumer IoT for the B2C industries, and so on. Summary In this article, we learned how to get started with the IoT. The Internet is basically a network of computing devices, and a Thing can be any real-life entity featuring Internet connectivity. The IoT, then, is a network of connected Things that can transmit and receive data from other Things once connected to the network. This is how we describe the Internet of Things in a nutshell. Resources for Article: Further resources on this subject: Machine Learning Tasks [article] Welcome to Machine Learning Using the .NET Framework [article] Why Big Data in the Financial Sector? [article]

Reconstructing 3D Scenes

Packt
13 Oct 2016
25 min read
In this article by Robert Laganiere, the author of the book OpenCV 3 Computer Vision Application Programming Cookbook Third Edition, has covered the following recipes: Calibrating a camera Recovering camera pose (For more resources related to this topic, see here.) Digital image formation Let us now redraw a new version of the figure describing the pin-hole camera model. More specifically, we want to demonstrate the relation between a point in 3D at position (X,Y,Z) and its image (x,y), on a camera specified in pixel coordinates. Note the changes that have been made to the original figure. First, a reference frame was positioned at the center of the projection; then, the Y-axis was alligned to point downwards to get a coordinate system that is compatible with the usual convention, which places the image origin at the upper-left corner of the image. Finally, we have identified a special point on the image plane, by considering the line coming from the focal point that is orthogonal to the image plane. The point (u0,v0) is the pixel position at which this line pierces the image plane and is called the principal point. It would be logical to assume that this principal point is at the center of the image plane, but in practice, this one might be off by few pixels depending on the precision of the camera. Since we are dealing with digital images, the number of pixels on the image plane (its resolution) is another important characteristic of a camera. We learned previously that a 3D point (X,Y,Z) will be projected onto the image plane at (fX/Z,fY/Z). Now, if we want to translate this coordinate into pixels, we need to divide the 2D image position by the pixel width (px) and then the height (py). Note that by dividing the focal length given in world units (generally given in millimeters) by px, we obtain the focal length expressed in (horizontal) pixels. We will then define this term as fx. Similarly, fy =f/py is defined as the focal length expressed in vertical pixel unit. The complete projective equation is therefore as shown: We know that (u0,v0) is the principal point that is added to the result in order to move the origin to the upper-left corner of the image. Also, the physical size of a pixel can be obtained by dividing the size of the image sensor (generally in millimeters) by the number of pixels (horizontally or vertically). In modern sensor, pixels are generally of square shape, that is, they have the same horizontal and vertical size. The preceding equations can be rewritten in matrix form. Here is the complete projective equation in its most general form: Calibrating a camera Camera calibration is the process by which the different camera parameters (that is, the ones appearing in the projective equation) are obtained. One can obviously use the specifications provided by the camera manufacturer, but for some tasks, such as 3D reconstruction, these specifications are not accurate enough. However, accurate calibration information can be obtained by undertaking an appropriate camera calibration step. An active camera calibration procedure will proceed by showing known patterns to the camera and analyzing the obtained images. An optimization process will then determine the optimal parameter values that explain the observations. This is a complex process that has been made easy by the availability of OpenCV calibration functions. How to do it... To calibrate a camera, the idea is to show it a set of scene points for which the 3D positions are known. 
Then, you need to observe where these points project on the image. With the knowledge of a sufficient number of 3D points and associated 2D image points, the exact camera parameters can be inferred from the projective equation. Obviously, for accurate results, we need to observe as many points as possible. One way to achieve this would be to take a picture of a scene with known 3D points, but in practice, this is rarely feasible. A more convenient way is to take several images of a set of 3D points from different viewpoints. This approach is simpler, but it requires you to compute the position of each camera view in addition to the computation of the internal camera parameters, which is fortunately feasible. OpenCV proposes that you use a chessboard pattern to generate the set of 3D scene points required for calibration. This pattern creates points at the corners of each square, and since this pattern is flat, we can freely assume that the board is located at Z=0, with the X and Y axes well-aligned with the grid. In this case, the calibration process simply consists of showing the chessboard pattern to the camera from different viewpoints. The following is an example of a calibration pattern image made of 7x5 inner corners as captured during the calibration step: The good thing is that OpenCV has a function that automatically detects the corners of this chessboard pattern. You simply provide an image and the size of the chessboard used (the number of horizontal and vertical inner corner points). The function will return the position of these chessboard corners on the image. If the function fails to find the pattern, then it simply returns false, as shown: //output vectors of image points std::vector<cv::Point2f> imageCorners; //number of inner corners on the chessboard cv::Size boardSize(7,5); //Get the chessboard corners bool found = cv::findChessboardCorners(image, // image of chessboard pattern boardSize, // size of pattern imageCorners); // list of detected corners The output parameter, imageCorners, will simply contain the pixel coordinates of the detected inner corners of the shown pattern. Note that this function accepts additional parameters if you need to tune the algorithm, which are not discussed here. There is also a special function that draws the detected corners on the chessboard image, with lines connecting them in a sequence: //Draw the corners cv::drawChessboardCorners(image, boardSize, imageCorners, found); // corners have been found The following image is obtained: The lines that connect the points show the order in which the points are listed in the vector of detected image points. To perform a calibration, we now need to specify the corresponding 3D points. You can specify these points in the units of your choice (for example, in centimeters or in inches); however, the simplest is to assume that each square represents one unit. In that case, the coordinates of the first point would be (0,0,0) (assuming that the board is located at a depth of Z=0), the coordinates of the second point would be (1,0,0), and so on, the last point being located at (6,4,0). There are a total of 35 points in this pattern, which is too less to obtain an accurate calibration. To get more points, you need to show more images of the same calibration pattern from various points of view. To do so, you can either move the pattern in front of the camera or move the camera around the board; from a mathematical point of view, this is completely equivalent. 
The OpenCV calibration function assumes that the reference frame is fixed on the calibration pattern and will calculate the rotation and translation of the camera with respect to the reference frame. Let's now encapsulate the calibration process in a CameraCalibrator class. The attributes of this class are as follows: // input points: // the points in world coordinates // (each square is one unit) std::vector<std::vector<cv::Point3f>> objectPoints; // the image point positions in pixels std::vector<std::vector<cv::Point2f>> imagePoints; // output Matrices cv::Mat cameraMatrix; cv::Mat distCoeffs; // flag to specify how calibration is done int flag; Note that the input vectors of the scene and image points are in fact made of std::vector of point instances; each vector element is a vector of the points from one view. Here, we decided to add the calibration points by specifying a vector of the chessboard image filenames as input; the method will take care of extracting the point coordinates from the images: // Open chessboard images and extract corner points int CameraCalibrator::addChessboardPoints(const std::vector<std::string>& filelist, // list of filenames cv::Size & boardSize) { // calibration board size // the points on the chessboard std::vector<cv::Point2f> imageCorners; std::vector<cv::Point3f> objectCorners; // 3D Scene Points: // Initialize the chessboard corners // in the chessboard reference frame // The corners are at 3D location (X,Y,Z)= (i,j,0) for (int i=0; i<boardSize.height; i++) { for (int j=0; j<boardSize.width; j++) { objectCorners.push_back(cv::Point3f(i, j, 0.0f)); } } // 2D Image points: cv::Mat image; // to contain chessboard image int successes = 0; // for all viewpoints for (int i=0; i<filelist.size(); i++) { // Open the image image = cv::imread(filelist[i],0); // Get the chessboard corners bool found = cv::findChessboardCorners(image, //image of chessboard pattern boardSize, // size of pattern imageCorners); // list of detected corners // Get subpixel accuracy on the corners if (found) { cv::cornerSubPix(image, imageCorners, cv::Size(5, 5), // half size of search window cv::Size(-1, -1), cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 30, // max number of iterations 0.1)); // min accuracy } // If we have a good board, add it to our data if (imageCorners.size() == boardSize.area()) { // Add image and scene points from one view addPoints(imageCorners, objectCorners); successes++; } } return successes; } The first loop inputs the 3D coordinates of the chessboard, and the corresponding image points are the ones provided by the cv::findChessboardCorners function; this is done for all the available viewpoints. Moreover, in order to obtain a more accurate image point location, the cv::cornerSubPix function can be used, and as the name suggests, the image points will then be localized at subpixel accuracy. The termination criterion that is specified by the cv::TermCriteria object defines the maximum number of iterations and the minimum accuracy in subpixel coordinates. The first of these two conditions that is reached will stop the corner refinement process. When a set of chessboard corners have been successfully detected, these points are added to the vectors of the image and scene points using our addPoints method. 
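As a quick illustration of how this method might be called, here is a minimal sketch that builds a list of image filenames and feeds them to addChessboardPoints(). The filenames, the 20-image count, and the 7x5 board size are assumptions for this example, and the CameraCalibrator class shown above is assumed to be available with a default constructor:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Hypothetical calibration image filenames
    std::vector<std::string> filelist;
    for (int i = 1; i <= 20; i++) {
        filelist.push_back("chessboard" + std::to_string(i) + ".jpg");
    }

    cv::Size boardSize(7, 5); // number of inner corners per row and column
    CameraCalibrator calibrator;
    int successes = calibrator.addChessboardPoints(filelist, boardSize);
    std::cout << "Corners detected in " << successes
              << " of " << filelist.size() << " images" << std::endl;
    return 0;
}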
Once a sufficient number of chessboard images have been processed (and consequently, a large number of 3D scene point / 2D image point correspondences are available), we can initiate the computation of the calibration parameters as shown: // Calibrate the camera // returns the re-projection error double CameraCalibrator::calibrate(cv::Size &imageSize){ //Output rotations and translations std::vector<cv::Mat> rvecs, tvecs; // start calibration return calibrateCamera(objectPoints, // the 3D points imagePoints, // the image points imageSize, // image size cameraMatrix, // output camera matrix distCoeffs, // output distortion matrix rvecs, tvecs, // Rs, Ts flag); // set options } In practice, 10 to 20 chessboard images are sufficient, but these must be taken from different viewpoints at different depths. The two important outputs of this function are the camera matrix and the distortion parameters. These will be described in the next section. How it works... In order to explain the result of the calibration, we need to go back to the projective equation presented in the introduction of this article. This equation describes the transformation of a 3D point into a 2D point through the successive application of two matrices. The first matrix includes all of the camera parameters, which are called the intrinsic parameters of the camera. This 3x3 matrix is one of the output matrices returned by the cv::calibrateCamera function. There is also a function called cv::calibrationMatrixValues that explicitly returns the value of the intrinsic parameters given by a calibration matrix. The second matrix is there to have the input points expressed into camera-centric coordinates. It is composed of a rotation vector (a 3x3 matrix) and a translation vector (a 3x1 matrix). Remember that in our calibration example, the reference frame was placed on the chessboard. Therefore, there is a rigid transformation (made of a rotation component represented by the matrix entries r1 to r9 and a translation represented by t1, t2, and t3) that must be computed for each view. These are in the output parameter list of the cv::calibrateCamera function. The rotation and translation components are often called the extrinsic parameters of the calibration and they are different for each view. The intrinsic parameters remain constant for a given camera/lens system. The calibration results provided by the cv::calibrateCamera are obtained through an optimization process. This process aims to find the intrinsic and extrinsic parameters that minimizes the difference between the predicted image point position, as computed from the projection of the 3D scene points, and the actual image point position, as observed on the image. The sum of this difference for all the points specified during the calibration is called the re-projection error. The intrinsic parameters of our test camera obtained from a calibration based on the 27 chessboard images are fx=409 pixels; fy=408 pixels; u0=237; and v0=171. Our calibration images have a size of 536x356 pixels. From the calibration results, you can see that, as expected, the principal point is close to the center of the image, but yet off by few pixels. The calibration images were taken using a Nikon D500 camera with an 18mm lens. Looking at the manufacturer specifitions, we find that the sensor size of this camera is 23.5mm x 15.7mm which gives us a pixel size of 0.0438mm. 
The estimated focal length is expressed in pixels, so multiplying the result by the pixel size gives us an estimated focal length of 17.8mm, which is consistent with the actual lens we used. Let us now turn our attention to the distortion parameters. So far, we have mentioned that under the pin-hole camera model, we can neglect the effect of the lens. However, this is only possible if the lens that is used to capture an image does not introduce important optical distortions. Unfortunately, this is not the case with lower quality lenses or with lenses that have a very short focal length. Even the lens we used in this experiment introduced some distortion, that is, the edges of the rectangular board are curved in the image. Note that this distortion becomes more important as we move away from the center of the image. This is a typical distortion observed with a fish-eye lens and is called radial distortion. It is possible to compensate for these deformations by introducing an appropriate distortion model. The idea is to represent the distortions induced by a lens by a set of mathematical equations. Once established, these equations can then be reverted in order to undo the distortions visible on the image. Fortunately, the exact parameters of the transformation, which will correct the distortions, can be obtained together with the other camera parameters during the calibration phase. Once this is done, any image from the newly calibrated camera will be undistorted. Therefore, we have added an additional method to our calibration class. //remove distortion in an image (after calibration) cv::Mat CameraCalibrator::remap(const cv::Mat &image) { cv::Mat undistorted; if (mustInitUndistort) { //called once per calibration cv::initUndistortRectifyMap(cameraMatrix, // computed camera matrix distCoeffs, // computed distortion matrix cv::Mat(), // optional rectification (none) cv::Mat(), // camera matrix to generate undistorted image.size(), // size of undistorted CV_32FC1, // type of output map map1, map2); // the x and y mapping functions mustInitUndistort= false; } // Apply mapping functions cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR); // interpolation type return undistorted; } Running this code on one of our calibration image results in the following undistorted image: To correct the distortion, OpenCV uses a polynomial function that is applied to the image points in order to move them at their undistorted position. By default, five coefficients are used; a model made of eight coefficients is also available. Once these coefficients are obtained, it is possible to compute two cv::Mat mapping functions (one for the x coordinate and one for the y coordinate) that will give the new undistorted position of an image point on a distorted image. This is computed by the cv::initUndistortRectifyMap function, and the cv::remap function remaps all the points of an input image to a new image. Note that because of the nonlinear transformation, some pixels of the input image now fall outside the boundary of the output image. You can expand the size of the output image to compensate for this loss of pixels, but you now obtain output pixels that have no values in the input image (they will then be displayed as black pixels). There‘s more... More options are available when it comes to camera calibration. Calibration with known intrinsic parameters When a good estimate of the camera’s intrinsic parameters is known, it could be advantageous to input them in the cv::calibrateCamera function. 
They will then be used as initial values in the optimization process. To do so, you just need to add the cv::CALIB_USE_INTRINSIC_GUESS flag and input these values in the calibration matrix parameter. It is also possible to impose a fixed value for the principal point (cv::CALIB_FIX_PRINCIPAL_POINT), which can often be assumed to be the central pixel. You can also impose a fixed ratio for the focal lengths fx and fy (cv::CALIB_FIX_RATIO); in which case, you assume that the pixels have a square shape. Using a grid of circles for calibration Instead of the usual chessboard pattern, OpenCV also offers the possibility to calibrate a camera by using a grid of circles. In this case, the centers of the circles are used as calibration points. The corresponding function is very similar to the function we used to locate the chessboard corners, for example: cv::Size boardSize(7,7); std::vector<cv::Point2f> centers; bool found = cv:: findCirclesGrid(image, boardSize, centers); See also The A flexible new technique for camera calibration article by Z. Zhang  in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no 11, 2000, is a classic paper on the problem of camera calibration Recovering camera pose When a camera is calibrated, it becomes possible to relate the captured with the outside world. If the 3D structure of an object is known, then one can predict how the object will be imaged on the sensor of the camera. The process of image formation is indeed completely described by the projective equation that was presented at the beginning of this article. When most of the terms of this equation are known, it becomes possible to infer the value of the other elements (2D or 3D) through the observation of some images. In this recipe, we will look at the camera pose recovery problem when a known 3D structure is observed. How to do it... Lets consider a simple object here, a bench in a park. We took an image of it using the camera/lens system calibrated in the previous recipe. We have manually identified 8 distinct image points on the bench that we will use for our camera pose estimation. Having access to this object makes it possible to make some physical measurements. The bench is composed of a seat of size 242.5cmx53.5cmx9cm and a back of size 242.5cmx24cmx9cm that is fixed 12cm over the seat. Using this information, we can then easily derive the 3D coordinates of the eight identified points in an object-centric reference frame (here we fixed the origin at the left extremity of the intersection between the two planes). We can then create a vector of cv::Point3f containing these coordinates. //Input object points std::vector<cv::Point3f> objectPoints; objectPoints.push_back(cv::Point3f(0, 45, 0)); objectPoints.push_back(cv::Point3f(242.5, 45, 0)); objectPoints.push_back(cv::Point3f(242.5, 21, 0)); objectPoints.push_back(cv::Point3f(0, 21, 0)); objectPoints.push_back(cv::Point3f(0, 9, -9)); objectPoints.push_back(cv::Point3f(242.5, 9, -9)); objectPoints.push_back(cv::Point3f(242.5, 9, 44.5)); objectPoints.push_back(cv::Point3f(0, 9, 44.5)); The question now is where the camera was with respect to these points when the shown picture was taken. Since the coordinates of the image of these known points on the 2D image plane are also known, it becomes easy to answer this question using the cv::solvePnP function. 
Here, the correspondence between the 3D and the 2D points has been established manually, but as a reader of this book, you should be able to come up with methods that would allow you to obtain this information automatically. //Input image points std::vector<cv::Point2f> imagePoints; imagePoints.push_back(cv::Point2f(136, 113)); imagePoints.push_back(cv::Point2f(379, 114)); imagePoints.push_back(cv::Point2f(379, 150)); imagePoints.push_back(cv::Point2f(138, 135)); imagePoints.push_back(cv::Point2f(143, 146)); imagePoints.push_back(cv::Point2f(381, 166)); imagePoints.push_back(cv::Point2f(345, 194)); imagePoints.push_back(cv::Point2f(103, 161)); // Get the camera pose from 3D/2D points cv::Mat rvec, tvec; cv::solvePnP(objectPoints, imagePoints, // corresponding 3D/2D pts cameraMatrix, cameraDistCoeffs, // calibration rvec, tvec); // output pose // Convert to 3D rotation matrix cv::Mat rotation; cv::Rodrigues(rvec, rotation); This function computes the rigid transformation (rotation and translation) that brings the object coordinates in the camera-centric reference frame (that is, the ones that has its origin at the focal point). It is also important to note that the rotation computed by this function is given in the form of a 3D vector. This is a compact representation in which the rotation to apply is described by a unit vector (an axis of rotation) around which the object is rotated by a certain angle. This axis-angle representation is also called the Rodrigues’ rotation formula. In OpenCV, the angle of rotation corresponds to the norm of the output rotation vector, which is later aligned with the axis of rotation. This is why the cv::Rodrigues function is used to obtain the 3D matrix of rotation that appears in our projective equation. The pose recovery procedure described here is simple, but how do we know we obtained the right camera/object pose information. We can visually assess the quality of the results using the cv::viz module that gives us the ability to visualize 3D information. The use of this module is explained in the last section of this recipe. For now, lets display a simple 3D representation of our object and the camera that captured it: It might be difficult to judge of the quality of the pose recovery just by looking at this image but if you test the example of this recipe on your computer, you will have the possibility to move this representation in 3D using your mouse which should give you a better sense of the solution obtained. How it works... In this recipe, we assumed that the 3D structure of the object was known as well as the correspondence between sets of object points and image points. The camera’s intrinsic parameters were also known through calibration. If you look at our projective equation presented at the end of the Digital image formation section of the introduction of this article, this means that we have points for which coordinates (X,Y,Z) and (x,y) are known. We also have the elements of first matrix known (the intrinsic parameters). Only the second matrix is unknown; this is the one that contains the extrinsic parameters of the camera that is the camera/object pose information. Our objective is to recover these unknown parameters from the observation of 3D scene points. This problem is known as the Perspective-n-Point problem or PnP problem. Rotation has three degrees of freedom (for example, angle of rotation around the three axes) and translation also has three degrees of freedom. We therefore have a total of 6 unknowns. 
For each object point/image point correspondence, the projective equation gives us three algebraic equations, but since the projective equation is up to a scale factor, we only have two independent equations. A minimum of three points is therefore required to solve this system of equations. Obviously, more points provide a more reliable estimate. In practice, many different algorithms have been proposed to solve this problem and OpenCV proposes a number of different implementations in its cv::solvePnP function. The default method consists of optimizing what is called the reprojection error. Minimizing this type of error is considered to be the best strategy to get accurate 3D information from camera images. In our problem, it corresponds to finding the optimal camera position that minimizes the 2D distance between the projected 3D points (as obtained by applying the projective equation) and the observed image points given as input. Note that OpenCV also has a cv::solvePnPRansac function. As the name suggests, this function uses the RANSAC algorithm in order to solve the PnP problem. This means that some of the object points/image points correspondences may be wrong and the function will return which ones have been identified as outliers. This is very useful when these correspondences have been obtained through an automatic process that can fail for some points. There's more... When working with 3D information, it is often difficult to validate the solutions obtained. To this end, OpenCV offers a simple yet powerful visualization module that facilitates the development and debugging of 3D vision algorithms. It allows inserting points, lines, cameras, and other objects in a virtual 3D environment that you can interactively visualize from various points of view. cv::Viz, a 3D Visualizer module cv::Viz is an extra module of the OpenCV library that is built on top of the VTK open source library. This Visualization Toolkit (VTK) is a powerful framework used for 3D computer graphics. With cv::viz, you create a 3D virtual environment to which you can add a variety of objects. A visualization window is created that displays the environment from a given point of view. You saw in this recipe an example of what can be displayed in a cv::viz window. This window responds to mouse events that are used to navigate inside the environment (through rotations and translations). This section describes the basic use of the cv::viz module. The first thing to do is to create the visualization window. Here we use a white background: // Create a viz window cv::viz::Viz3d visualizer("Viz window"); visualizer.setBackgroundColor(cv::viz::Color::white()); Next, you create your virtual objects and insert them into the scene. There is a variety of predefined objects. One of them is particularly useful for us; it is the one that creates a virtual pin-hole camera: // Create a virtual camera cv::viz::WCameraPosition cam(cMatrix, // matrix of intrinsics image, // image displayed on the plane 30.0, // scale factor cv::viz::Color::black()); // Add the virtual camera to the environment visualizer.showWidget("Camera", cam); The cMatrix variable is a cv::Matx33d (that is, a cv::Matx<double,3,3>) instance containing the intrinsic camera parameters as obtained from calibration. By default this camera is inserted at the origin of the coordinate system. To represent the bench, we used two rectangular cuboid objects. 
// Create a virtual bench from cuboids cv::viz::WCube plane1(cv::Point3f(0.0, 45.0, 0.0), cv::Point3f(242.5, 21.0, -9.0), true, // show wire frame cv::viz::Color::blue()); plane1.setRenderingProperty(cv::viz::LINE_WIDTH, 4.0); cv::viz::WCube plane2(cv::Point3f(0.0, 9.0, -9.0), cv::Point3f(242.5, 0.0, 44.5), true, // show wire frame cv::viz::Color::blue()); plane2.setRenderingProperty(cv::viz::LINE_WIDTH, 4.0); // Add the virtual objects to the environment visualizer.showWidget("top", plane1); visualizer.showWidget("bottom", plane2); This virtual bench is also added at the origin; it then needs to be moved to its camera-centric position as found from our cv::solvePnP function. It is the responsibility of the setWidgetPose method to perform this operation. This one simply applies the rotation and translation components of the estimated motion. cv::Mat rotation; // convert vector-3 rotation // to a 3x3 rotation matrix cv::Rodrigues(rvec, rotation); // Move the bench cv::Affine3d pose(rotation, tvec); visualizer.setWidgetPose("top", pose); visualizer.setWidgetPose("bottom", pose); The final step is to create a loop that keeps displaying the visualization window. The 1ms pause is there to listen to mouse events. // visualization loop while(cv::waitKey(100)==-1 && !visualizer.wasStopped()) { visualizer.spinOnce(1, // pause 1ms true); // redraw } This loop will stop when the visualization window is closed or when a key is pressed over an OpenCV image window. Try applying some motion to an object inside this loop (using setWidgetPose); this is how animation can be created. See also Model-based object pose in 25 lines of code by D. DeMenthon and L. S. Davis, in European Conference on Computer Vision, 1992, pp. 335-343 is a famous method for recovering camera pose from scene points. Summary This article teaches us how, under specific conditions, the 3D structure of the scene and the 3D pose of the cameras that captured it can be recovered. We have seen how a good understanding of projective geometry concepts allows us to devise methods enabling 3D reconstruction. Resources for Article: Further resources on this subject: OpenCV: Image Processing using Morphological Filters [article] Learn computer vision applications in Open CV [article] Cardboard is Virtual Reality for Everyone [article]

Customizing the Player Character

Packt
13 Oct 2016
18 min read
One of the key features of an RPG is to be able to customize your player character. In this article by Vahe Karamian, author of the book Building an RPG with Unity 5.x, we will take a look at how we can provide a means to achieve this. (For more resources related to this topic, see here.) Once again, the approach and concept are universal, but the actual implementation might be a little different based on your model structure. Create a new scene and name it Character Customization. Create a Cube prefab and set it to the origin. Change the Scale of the cube to <5, 0.1, 5>; you can also change the name of the GameObject to Base. This will be the platform that our character model stands on while the player customizes his/her character before game play. Drag and drop the fbx file representing your character model into the Scene View. The next few steps will entirely depend on your model hierarchy and structure as designed by the modeller. To illustrate the point, I have placed the same model in the scene twice. The one on the left is the model that has been configured to display only the basics; the model on the right is the model in its original state, as shown in the figure below: Notice that this particular model I am using has everything attached. These include the different types of weapons, shoes, helmets, and armour. The instantiated prefab on the left-hand side has turned off all of the extras from the GameObject's hierarchy. Here is how the hierarchy looks in the Hierarchy View: The model has a very extensive hierarchy in its structure; the figure above is a small snippet to demonstrate that you will need to navigate the structure and manually identify and enable or disable the mesh representing a particular part of the model. Customizable Parts Based on my model, I can customize a few things on my 3D model. I can customize the shoulder pads, I can customize the body type, I can customize the weapons and armor it has, I can customize the helmet and shoes, and finally I can also customize the skin texture to give it different looks. Let's get a listing of all the different customizable items we have for our character:
- Shoulder Shields: there are four types
- Body Type: there are three body types; skinny, buff, and chubby
- Armor: knee pad, leg plate
- Shields: there are two types of shields
- Boots: there are two types of boots
- Helmet: there are four types of helmets
- Weapons: there are 13 different types of weapons
- Skins: there are 13 different types of skins
User Interface Now that we know what our options are for customizing our player character, we can start thinking about the User Interface (UI) that will be used to enable the customization of the character. To design our UI, we will need to create a Canvas GameObject; this is done by right-clicking in the Hierarchy View and selecting Create | UI | Canvas. This will place a Canvas GameObject and an EventSystem GameObject in the Hierarchy View. It is assumed that you already know how to create a UI in Unity. I am going to use a Panel to group the customizable items. For the moment I will be using checkboxes for some items and scroll bars for the weapons and skin texture. The following figure will illustrate how my UI for customization looks: These UI elements will need to be integrated with Event Handlers that will perform the necessary actions for enabling or disabling certain parts of the character model. 
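One possible way to do this wiring from code, rather than through each control's On Value Changed field in the Inspector, is sketched below. This helper component and its field names are assumptions for illustration; it simply forwards UI events to the SetShoulderPad() and SetWeaponType() handlers of the CharacterCustomization script that we will write next.

using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper that forwards UI events to the CharacterCustomization script
public class CustomizationUIWiring : MonoBehaviour
{
    public CharacterCustomization customizer;  // the script attached to the Base GameObject
    public Toggle shoulderPadToggle;           // for example, the "SP-01" toggle
    public Slider weaponSlider;                // the weapon selection scroll bar

    void Start()
    {
        // Forward the Toggle and Slider change events to the customization handlers
        shoulderPadToggle.onValueChanged.AddListener(isOn => customizer.SetShoulderPad(shoulderPadToggle));
        weaponSlider.onValueChanged.AddListener(value => customizer.SetWeaponType(weaponSlider));
    }
}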
For instance, using the UI I can select Shoulder Pad 4, Buff Body Type, move the scroll bar until the Hammer weapon shows up, selecting the second Helmet checkbox, selecting Shield 1 and Boot 2, my character will look like the figure below.We need a way to refer to each one of the meshes representing the different types of customizable objects on the model. This will be done through a C# script. The script will need to keep track of all the parts we are going to be managing for customization. Some models will not have the extra meshes attached. You can always create empty GameObjects at a particular location on the model, and you can dynamically instantiate the prefab representing your custom object at the given point. This can also be done for our current model, for instance, if we have a special space weapon that somehow gets dropped by the aliens in the game world, we can attach the weapon to our model through C# code. The important thing is to understand the concept, and the rest is up to you! The Code for Character Customization Things don't happen automatically. So we need to create some C# code that will handle the customization of our character model. The script we create here will handle the UI events that will drive the enabling and disabling of different parts of the model mesh. Create a new C# script and call itCharacterCustomization.cs. This script will be attached to theBaseGameObject in the scene. Here is a listing of the script: using UnityEngine; using UnityEngine.UI; using System.Collections; using UnityEngine.SceneManagement; public class CharacterCustomization : MonoBehaviour { public GameObject PLAYER_CHARACTER; public Material[] PLAYER_SKIN; public GameObject CLOTH_01LOD0; public GameObject CLOTH_01LOD0_SKIN; public GameObject CLOTH_02LOD0; public GameObject CLOTH_02LOD0_SKIN; public GameObject CLOTH_03LOD0; public GameObject CLOTH_03LOD0_SKIN; public GameObject CLOTH_03LOD0_FAT; public GameObject BELT_LOD0; public GameObject SKN_LOD0; public GameObject FAT_LOD0; public GameObject RGL_LOD0; public GameObject HAIR_LOD0; public GameObject BOW_LOD0; // Head Equipment public GameObject GLADIATOR_01LOD0; public GameObject HELMET_01LOD0; public GameObject HELMET_02LOD0; public GameObject HELMET_03LOD0; public GameObject HELMET_04LOD0; // Shoulder Pad - Right Arm / Left Arm public GameObject SHOULDER_PAD_R_01LOD0; public GameObject SHOULDER_PAD_R_02LOD0; public GameObject SHOULDER_PAD_R_03LOD0; public GameObject SHOULDER_PAD_R_04LOD0; public GameObject SHOULDER_PAD_L_01LOD0; public GameObject SHOULDER_PAD_L_02LOD0; public GameObject SHOULDER_PAD_L_03LOD0; public GameObject SHOULDER_PAD_L_04LOD0; // Fore Arm - Right / Left Plates public GameObject ARM_PLATE_R_1LOD0; public GameObject ARM_PLATE_R_2LOD0; public GameObject ARM_PLATE_L_1LOD0; public GameObject ARM_PLATE_L_2LOD0; // Player Character Weapons public GameObject AXE_01LOD0; public GameObject AXE_02LOD0; public GameObject CLUB_01LOD0; public GameObject CLUB_02LOD0; public GameObject FALCHION_LOD0; public GameObject GLADIUS_LOD0; public GameObject MACE_LOD0; public GameObject MAUL_LOD0; public GameObject SCIMITAR_LOD0; public GameObject SPEAR_LOD0; public GameObject SWORD_BASTARD_LOD0; public GameObject SWORD_BOARD_01LOD0; public GameObject SWORD_SHORT_LOD0; // Player Character Defense Weapons public GameObject SHIELD_01LOD0; public GameObject SHIELD_02LOD0; public GameObject QUIVER_LOD0; public GameObject BOW_01_LOD0; // Player Character Calf - Right / Left public GameObject KNEE_PAD_R_LOD0; public GameObject 
LEG_PLATE_R_LOD0; public GameObject KNEE_PAD_L_LOD0; public GameObject LEG_PLATE_L_LOD0; public GameObject BOOT_01LOD0; public GameObject BOOT_02LOD0; // Use this for initialization void Start() { } public bool ROTATE_MODEL = false; // Update is called once per frame void Update() { if (Input.GetKeyUp(KeyCode.R)) { this.ROTATE_MODEL = !this.ROTATE_MODEL; } if (this.ROTATE_MODEL) { this.PLAYER_CHARACTER.transform.Rotate(new Vector3(0, 1, 0), 33.0f * Time.deltaTime); } if (Input.GetKeyUp(KeyCode.L)) { Debug.Log(PlayerPrefs.GetString("NAME")); } } public void SetShoulderPad(Toggle id) { switch (id.name) { case "SP-01": { this.SHOULDER_PAD_R_01LOD0.SetActive(id.isOn); this.SHOULDER_PAD_R_02LOD0.SetActive(false); this.SHOULDER_PAD_R_03LOD0.SetActive(false); this.SHOULDER_PAD_R_04LOD0.SetActive(false); this.SHOULDER_PAD_L_01LOD0.SetActive(id.isOn); this.SHOULDER_PAD_L_02LOD0.SetActive(false); this.SHOULDER_PAD_L_03LOD0.SetActive(false); this.SHOULDER_PAD_L_04LOD0.SetActive(false); PlayerPrefs.SetInt("SP-01", 1); PlayerPrefs.SetInt("SP-02", 0); PlayerPrefs.SetInt("SP-03", 0); PlayerPrefs.SetInt("SP-04", 0); break; } case "SP-02": { this.SHOULDER_PAD_R_01LOD0.SetActive(false); this.SHOULDER_PAD_R_02LOD0.SetActive(id.isOn); this.SHOULDER_PAD_R_03LOD0.SetActive(false); this.SHOULDER_PAD_R_04LOD0.SetActive(false); this.SHOULDER_PAD_L_01LOD0.SetActive(false); this.SHOULDER_PAD_L_02LOD0.SetActive(id.isOn); this.SHOULDER_PAD_L_03LOD0.SetActive(false); this.SHOULDER_PAD_L_04LOD0.SetActive(false); PlayerPrefs.SetInt("SP-01", 0); PlayerPrefs.SetInt("SP-02", 1); PlayerPrefs.SetInt("SP-03", 0); PlayerPrefs.SetInt("SP-04", 0); break; } case "SP-03": { this.SHOULDER_PAD_R_01LOD0.SetActive(false); this.SHOULDER_PAD_R_02LOD0.SetActive(false); this.SHOULDER_PAD_R_03LOD0.SetActive(id.isOn); this.SHOULDER_PAD_R_04LOD0.SetActive(false); this.SHOULDER_PAD_L_01LOD0.SetActive(false); this.SHOULDER_PAD_L_02LOD0.SetActive(false); this.SHOULDER_PAD_L_03LOD0.SetActive(id.isOn); this.SHOULDER_PAD_L_04LOD0.SetActive(false); PlayerPrefs.SetInt("SP-01", 0); PlayerPrefs.SetInt("SP-02", 0); PlayerPrefs.SetInt("SP-03", 1); PlayerPrefs.SetInt("SP-04", 0); break; } case "SP-04": { this.SHOULDER_PAD_R_01LOD0.SetActive(false); this.SHOULDER_PAD_R_02LOD0.SetActive(false); this.SHOULDER_PAD_R_03LOD0.SetActive(false); this.SHOULDER_PAD_R_04LOD0.SetActive(id.isOn); this.SHOULDER_PAD_L_01LOD0.SetActive(false); this.SHOULDER_PAD_L_02LOD0.SetActive(false); this.SHOULDER_PAD_L_03LOD0.SetActive(false); this.SHOULDER_PAD_L_04LOD0.SetActive(id.isOn); PlayerPrefs.SetInt("SP-01", 0); PlayerPrefs.SetInt("SP-02", 0); PlayerPrefs.SetInt("SP-03", 0); PlayerPrefs.SetInt("SP-04", 1); break; } } } public void SetBodyType(Toggle id) { switch (id.name) { case "BT-01": { this.RGL_LOD0.SetActive(id.isOn); this.FAT_LOD0.SetActive(false); break; } case "BT-02": { this.RGL_LOD0.SetActive(false); this.FAT_LOD0.SetActive(id.isOn); break; } } } public void SetKneePad(Toggle id) { this.KNEE_PAD_R_LOD0.SetActive(id.isOn); this.KNEE_PAD_L_LOD0.SetActive(id.isOn); } public void SetLegPlate(Toggle id) { this.LEG_PLATE_R_LOD0.SetActive(id.isOn); this.LEG_PLATE_L_LOD0.SetActive(id.isOn); } public void SetWeaponType(Slider id) { switch (System.Convert.ToInt32(id.value)) { case 0: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); 
this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 1: { this.AXE_01LOD0.SetActive(true); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 2: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(true); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 3: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(true); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 4: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(true); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 5: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(true); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 6: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(true); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 7: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(true); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); 
this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 8: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(true); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 9: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(true); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 10: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(true); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 11: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(true); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 12: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(true); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 13: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(true); break; } } } public void SetHelmetType(Toggle id) { switch (id.name) { case "HL-01": { this.HELMET_01LOD0.SetActive(id.isOn); this.HELMET_02LOD0.SetActive(false); this.HELMET_03LOD0.SetActive(false); this.HELMET_04LOD0.SetActive(false); break; } case "HL-02": { this.HELMET_01LOD0.SetActive(false); this.HELMET_02LOD0.SetActive(id.isOn); this.HELMET_03LOD0.SetActive(false); this.HELMET_04LOD0.SetActive(false); break; } case "HL-03": { this.HELMET_01LOD0.SetActive(false); this.HELMET_02LOD0.SetActive(false); 
this.HELMET_03LOD0.SetActive(id.isOn); this.HELMET_04LOD0.SetActive(false); break; } case "HL-04": { this.HELMET_01LOD0.SetActive(false); this.HELMET_02LOD0.SetActive(false); this.HELMET_03LOD0.SetActive(false); this.HELMET_04LOD0.SetActive(id.isOn); break; } } } public void SetShieldType(Toggle id) { switch (id.name) { case "SL-01": { this.SHIELD_01LOD0.SetActive(id.isOn); this.SHIELD_02LOD0.SetActive(false); break; } case "SL-02": { this.SHIELD_01LOD0.SetActive(false); this.SHIELD_02LOD0.SetActive(id.isOn); break; } } } public void SetSkinType(Slider id) { this.SKN_LOD0.GetComponent<Renderer>().material = this.PLAYER_SKIN[System.Convert.ToInt32(id.value)]; this.FAT_LOD0.GetComponent<Renderer>().material = this.PLAYER_SKIN[System.Convert.ToInt32(id.value)]; this.RGL_LOD0.GetComponent<Renderer>().material = this.PLAYER_SKIN[System.Convert.ToInt32(id.value)]; } public void SetBootType(Toggle id) { switch (id.name) { case "BT-01": { this.BOOT_01LOD0.SetActive(id.isOn); this.BOOT_02LOD0.SetActive(false); break; } case "BT-02": { this.BOOT_01LOD0.SetActive(false); this.BOOT_02LOD0.SetActive(id.isOn); break; } } } }

This is a long script, but it is straightforward. At the top of the script we have defined all of the variables that reference the different meshes in our character model. All variables are of type GameObject, with the exception of the PLAYER_SKIN variable, which is an array of the Material data type. The array is used to store the different types of texture created for the character model. There are a few functions defined that are called by the UI event handlers. These functions are: SetShoulderPad(Toggle id); SetBodyType(Toggle id); SetKneePad(Toggle id); SetLegPlate(Toggle id); SetWeaponType(Slider id); SetHelmetType(Toggle id); SetShieldType(Toggle id); SetBootType(Toggle id); SetSkinType(Slider id). All of the functions take a parameter that identifies which specific item they should enable or disable. A BIG NOTE HERE! You can also use the system we just built to create all of the different variations of your Non-Player Character models and store them as prefabs! Wow! This will save you so much time and effort in creating your characters representing different barbarians!!!

Preserving Our Character State

Now that we have spent the time to customize our character, we need to preserve it and use it in our game. In Unity, there is a function called DontDestroyOnLoad(). This is a great function that can be utilized at this time. What does it do? It keeps the specified GameObject in memory going from one scene to the next. We can use this mechanism for now; eventually, though, you will want to create a system that can save and load your user data. Go ahead and create a new C# script and call it DoNotDestroy.cs. This script is going to be very simple. Here is the listing: using UnityEngine; using System.Collections; public class DoNotDestroy : MonoBehaviour { // Use this for initialization void Start() { DontDestroyOnLoad(this); } // Update is called once per frame void Update() { } } After you create the script, go ahead and attach it to your character model prefab in the scene.
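The CharacterCustomization script already writes the shoulder-pad choices to PlayerPrefs, and DoNotDestroy keeps the customized model alive across scene loads. When you eventually build a proper save/load system, the same PlayerPrefs keys can be read back when the game starts. The sketch below is one possible approach, not the book's implementation: the class name CustomizationPersistence is hypothetical, the keys "SP-01" through "SP-04" mirror the ones written by SetShoulderPad, and "level 1" stands in for whatever your first gameplay scene is actually called.

using UnityEngine;
using UnityEngine.SceneManagement;

// Hedged sketch: restore the saved shoulder-pad selection on start and load the
// first level when the Save button is pressed. Attach alongside
// CharacterCustomization on the Base object.
public class CustomizationPersistence : MonoBehaviour
{
    public CharacterCustomization customization;

    void Start()
    {
        // Re-enable whichever shoulder pad was stored by SetShoulderPad().
        bool sp1 = PlayerPrefs.GetInt("SP-01", 0) == 1;
        bool sp2 = PlayerPrefs.GetInt("SP-02", 0) == 1;
        bool sp3 = PlayerPrefs.GetInt("SP-03", 0) == 1;
        bool sp4 = PlayerPrefs.GetInt("SP-04", 0) == 1;
        customization.SHOULDER_PAD_R_01LOD0.SetActive(sp1);
        customization.SHOULDER_PAD_L_01LOD0.SetActive(sp1);
        customization.SHOULDER_PAD_R_02LOD0.SetActive(sp2);
        customization.SHOULDER_PAD_L_02LOD0.SetActive(sp2);
        customization.SHOULDER_PAD_R_03LOD0.SetActive(sp3);
        customization.SHOULDER_PAD_L_03LOD0.SetActive(sp3);
        customization.SHOULDER_PAD_R_04LOD0.SetActive(sp4);
        customization.SHOULDER_PAD_L_04LOD0.SetActive(sp4);
    }

    // Hook this up to the Save button's OnClick event in the Canvas.
    public void SaveAndPlay()
    {
        PlayerPrefs.Save();                 // flush the stored choices to disk
        SceneManager.LoadScene("level 1");  // hypothetical scene name
    }
}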
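Earlier we also mentioned that a model without the extra meshes attached can still be customized by instantiating a prefab at an empty GameObject that marks an attachment point, for example a special weapon dropped in the game world. A minimal sketch of that idea follows; the names WeaponAttacher, spaceWeaponPrefab, and rightHandMount are hypothetical placeholders for whatever weapon asset and attachment point your own model uses.

using UnityEngine;

// Minimal sketch: attach a weapon prefab to a marker transform on the model.
// Assign your own prefab and an empty GameObject placed on the model's hand
// in the Inspector.
public class WeaponAttacher : MonoBehaviour
{
    public GameObject spaceWeaponPrefab;
    public Transform rightHandMount;

    public GameObject AttachWeapon()
    {
        // Spawn the weapon at the mount and parent it so it follows the hand's animation.
        GameObject weapon = (GameObject)Instantiate(spaceWeaponPrefab, rightHandMount.position, rightHandMount.rotation);
        weapon.transform.SetParent(rightHandMount, true);
        return weapon;
    }
}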
Not bad, let's do a quick recap of what we have done so far.

Recap

By now you should have three scenes that are functional: the scene that represents the main menu, the scene that represents our initial level, and the scene we just created for character customization. Here is the flow of our game thus far: we start the game, see the main menu, select the Start Game button to enter the character customization scene, do our customization, and when we click the Save button we load level 1. For this to work, we have created the following C# scripts:

GameMaster.cs: used as the main script to keep track of our game state
CharacterCustomization.cs: used exclusively for customizing our character
DoNotDestroy.cs: used to preserve the state of a given object across scene loads
CharacterController.cs: used to control the motion of our character
IKHandle.cs: used to implement inverse kinematics for the foot

When you combine all of this together, you now have a good framework and flow that can be extended and improved as we go along.

Summary

We covered some very important topics and concepts in this article that can be used and enhanced for your games. We started the article by looking into how to customize your player character; the concepts you take away can be applied to a wide variety of scenarios. We looked at how to understand the structure of your character model so that you can better determine the customization options: the different types of weapons, clothing, armour, shields, and so on. We then looked at how to create a user interface to enable the customization of our player character during gameplay. We also learned that the tool we developed can be used to quickly create several different customized character models and store them as prefabs for later use! Great time saver!!! Finally, we learned how to preserve the state of our player character after customization for gameplay. You should now have an idea of how to approach your project. Resources for Article: Further resources on this subject: Animations Sprites [article] Development Tricks with Unreal Engine 4 [article] The Game World [article]

Offloading work from the UI Thread on Android

Packt
13 Oct 2016
8 min read
In this article, Helder Vasconcelos, the author of the Asynchronous Android Programming book - Second Edition, presents the most common asynchronous techniques used on Android to develop an application that, using the multicore CPUs available on the most recent devices, is able to deliver up-to-date results quickly, respond to user interactions immediately, and produce smooth transitions between the different UI states without draining the device battery. (For more resources related to this topic, see here.)

Several reports have shown that an efficient application that provides a great user experience earns better review ratings, higher user engagement, and is able to achieve higher revenues.

Why do we need Asynchronous Programming on Android?

The Android system, by default, executes the UI rendering pipeline, base component (Activity, Fragment, Service, BroadcastReceiver, ...) lifecycle callback handling, and UI interaction processing on a single thread, known as the UI thread or main thread. The main thread handles its work sequentially, collecting it from a queue of tasks (the Message Queue) that are queued to be processed by a particular application component. When any unit of work, such as an I/O operation, takes a significant period of time to complete, it will block and prevent the main thread from handling the next tasks waiting on the main thread queue to be processed. Besides that, most Android devices refresh the screen 60 times per second, so every 16 milliseconds (1s/60 frames) a UI rendering task has to be processed by the main thread in order to draw and update the device screen. The next figure shows a typical main thread timeline that runs the UI rendering task every 16ms. When a long-lasting operation prevents the main thread from executing frame rendering in time, the current frame drawing is deferred or some frame drawings are missed, generating a UI glitch noticeable to the application user. Typical long-lasting operations include:

Network data communication: HTTP REST requests, SOAP service access, file upload or backup
Reading or writing files on the filesystem: shared preferences files, file cache access
Internal database reading or writing
Camera, image, video, and binary file processing

A user interface glitch produced by dropped frames is known on Android as jank. The Android SDK command systrace (https://developer.android.com/studio/profile/systrace.html) comes with the ability to measure the performance of UI rendering and then diagnose and identify problems that may arise from the various threads running in the application process. In the next image we illustrate a typical main thread timeline when a blocking operation dominates the main thread for more than 2 frame rendering windows: As you can see, 2 frames are dropped and 1 UI rendering frame was deferred because our blocking operation took approximately 35ms to finish. When a long-running operation on the main thread does not complete within 5 seconds, the Android system displays an "Application Not Responding" (ANR) dialog, giving the user the option to close the application. Hence, in order to execute compute-intensive or blocking I/O operations without dropping UI frames, generating UI glitches, or degrading the application responsiveness, we have to offload the task execution to a background thread, with lower priority, that runs concurrently and asynchronously in an independent line of execution, as shown in the following picture.
Although the use of asynchronous and multithreaded techniques always introduces complexity to the application, the Android SDK and some well-known open source libraries provide high-level asynchronous constructs that allow us to perform reliable asynchronous work that relieves the main thread from the hard work. Each asynchronous construct has advantages and disadvantages, and choosing the right construct for your requirements can make your code more reliable, simpler, easier to maintain, and less error prone. Let's enumerate the most common techniques that are covered in detail in the "Asynchronous Android Programming" book.

AsyncTask

AsyncTask is a simple construct available on the Android platform since Android Cupcake (API Level 3) and is the most widely used asynchronous construct. AsyncTask was designed to run short background operations that, once finished, update the UI. The AsyncTask construct performs the time-consuming operation, defined in the doInBackground function, on a global static thread pool of background threads. Once doInBackground terminates with success, the AsyncTask construct switches the processing back to the main thread (onPostExecute), delivering the operation result for further processing. This technique, if not used properly, can lead to memory leaks or inconsistent results.

public class DownloadImageTask extends AsyncTask<URL, Integer, Bitmap> { protected Bitmap doInBackground(URL... urls) {} protected void onProgressUpdate(Integer... progress) {} protected void onPostExecute(Bitmap image) {} }

HandlerThread

The HandlerThread is a Thread that incorporates a message queue and an Android Looper that runs continuously, waiting for incoming operations to execute. To submit new work to the thread, we have to instantiate a Handler that is attached to the HandlerThread's Looper.

HandlerThread handlerThread = new HandlerThread("MyHandlerThread"); handlerThread.start(); Looper looper = handlerThread.getLooper(); Handler handler = new Handler(looper); handler.post(new Runnable(){ @Override public void run() {} });

The Handler interface allows us to submit a Message or a Runnable subclass object that can aggregate data and a chunk of work to be executed.

Loader

The Loader construct allows us to run asynchronous operations that load content from a content provider or a data source, such as an internal database or an HTTP service. The API can load data asynchronously, detect data changes, cache data, and is aware of the Fragment and Activity lifecycle. The Loader API was introduced to the Android platform at API level 11, but is available for backwards compatibility through the Android Support libraries.

public static class TextLoader extends AsyncTaskLoader<String> { @Override public String loadInBackground() { // Background work } @Override public void deliverResult(String text) {} @Override protected void onStartLoading() {} @Override protected void onStopLoading() {} @Override public void onCanceled(String text) {} @Override protected void onReset() {} }

IntentService

The IntentService class is a specialized subclass of Service that implements a background work queue using a single HandlerThread. When work is submitted to an IntentService, it is queued for processing by a HandlerThread and processed in order of submission.
public class BackupService extends IntentService { public BackupService() { super("BackupService"); } @Override protected void onHandleIntent(Intent workIntent) { // Background Work } }

JobScheduler

The JobScheduler API allows us to execute jobs in the background when several prerequisites are fulfilled, taking into account the energy and network context of the device. This technique allows us to defer and batch job executions until the device is charging or an unmetered network is available.

JobScheduler scheduler = (JobScheduler) getSystemService( Context.JOB_SCHEDULER_SERVICE ); JobInfo.Builder builder = new JobInfo.Builder(JOB_ID, serviceComponent); builder.setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED); builder.setRequiresCharging(true); scheduler.schedule(builder.build());

RxJava

RxJava is an implementation of the Reactive Extensions (ReactiveX) on the JVM, developed by Netflix, used to compose asynchronous event processing that reacts to an observable source of events. The framework extends the Observer pattern by allowing us to create a stream of events that can be intercepted by operators (input/output) that modify the original stream of events and deliver the result or an error to a final Observer. The RxJava Schedulers allow us to control on which thread our Observable begins operating and on which thread events are delivered to the final Observer or Subscriber.

Subscription subscription = getPostsFromNetwork() .map(new Func1<Post, Post>() { @Override public Post call(Post post) { ... return result; } }) .subscribeOn(Schedulers.io()) .observeOn(AndroidSchedulers.mainThread()) .subscribe(new Observer<Post>() { @Override public void onCompleted() {} @Override public void onError(Throwable e) {} @Override public void onNext(Post post) { // Process the result } });

Summary

As you have seen, there are several high-level asynchronous constructs available to offload long computing or blocking tasks from the UI thread, and it is up to the developer to choose the right construct for each situation, because there is no ideal choice that can be applied everywhere. Asynchronous multithreaded programming that produces reliable results is a difficult and error-prone task, so using a high-level technique tends to simplify the application source code and the multithreading processing logic required to scale the application over the available CPU cores. Remember that keeping your application responsive and smooth is essential to delight your users and increase your chances of creating a noteworthy mobile application. The techniques and asynchronous constructs summarized in the previous paragraphs are covered in detail in the Asynchronous Android Programming book published by Packt Publishing. Resources for Article: Further resources on this subject: Getting started with Android Development [article] Practical How-To Recipes for Android [article] Speeding up Gradle builds for Android [article]


Server-side Swift: Building a Slack Bot, Part 2

Peter Zignego
13 Oct 2016
5 min read
In Part 1 of this series, I introduced you to SlackKit and Zewo, which allows us to build and deploy a Slack bot written in Swift to a Linux server. Here in Part 2, we will finish the app, showing all of the Swift code. We will also show how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it. Show Me the Swift Code! Finally, some Swift code! To create our bot, we need to edit our main.swift file to contain our bot logic: import String importSlackKit class Leaderboard: MessageEventsDelegate { // A dictionary to hold our leaderboard var leaderboard: [String: Int] = [String: Int]() letatSet = CharacterSet(characters: ["@"]) // A SlackKit client instance let client: SlackClient // Initalize the leaderboard with a valid Slack API token init(token: String) { client = SlackClient(apiToken: token) client.messageEventsDelegate = self } // Enum to hold commands the bot knows enum Command: String { case Leaderboard = "leaderboard" } // Enum to hold logic that triggers certain bot behaviors enum Trigger: String { casePlusPlus = "++" caseMinusMinus = "--" } // MARK: MessageEventsDelegate // Listen to the messages that are coming in over the Slack RTM connection funcmessageReceived(message: Message) { listen(message: message) } funcmessageSent(message: Message){} funcmessageChanged(message: Message){} funcmessageDeleted(message: Message?){} // MARK: Leaderboard Internal Logic privatefunc listen(message: Message) { // If a message contains our bots user ID and a recognized command, handle that command if let id = client.authenticatedUser?.id, text = message.text { iftext.lowercased().contains(query: Command.Leaderboard.rawValue) &&text.contains(query: id) { handleCommand(command: .Leaderboard, channel: message.channel) } } // If a message contains a trigger value, handle that trigger ifmessage.text?.contains(query: Trigger.PlusPlus.rawValue) == true { handleMessageWithTrigger(message: message, trigger: .PlusPlus) } ifmessage.text?.contains(query: Trigger.MinusMinus.rawValue) == true { handleMessageWithTrigger(message: message, trigger: .MinusMinus) } } // Text parsing can be messy when you don't have Foundation... privatefunchandleMessageWithTrigger(message: Message, trigger: Trigger) { if let text = message.text, start = text.index(of: "@"), end = text.index(of: trigger.rawValue) { let string = String(text.characters[start...end].dropLast().dropFirst()) let users = client.users.values.filter{$0.id == self.userID(string: string)} // If the receiver of the trigger is a user, use their user ID ifusers.count> 0 { letidString = userID(string: string) initalizationForValue(dictionary: &leaderboard, value: idString) scoringForValue(dictionary: &leaderboard, value: idString, trigger: trigger) // Otherwise just store the receiver value as is } else { initalizationForValue(dictionary: &leaderboard, value: string) scoringForValue(dictionary: &leaderboard, value: string, trigger: trigger) } } } // Handle recognized commands privatefunchandleCommand(command: Command, channel:String?) 
{ switch command { case .Leaderboard: // Send message to the channel with the leaderboard attached if let id = channel { client.webAPI.sendMessage(channel:id, text: "Leaderboard", linkNames: true, attachments: [constructLeaderboardAttachment()], success: {(response) in }, failure: { (error) in print("Leaderboard failed to post due to error:(error)") }) } } } privatefuncinitalizationForValue(dictionary: inout [String: Int], value: String) { if dictionary[value] == nil { dictionary[value] = 0 } } privatefuncscoringForValue(dictionary: inout [String: Int], value: String, trigger: Trigger) { switch trigger { case .PlusPlus: dictionary[value]?+=1 case .MinusMinus: dictionary[value]?-=1 } } // MARK: Leaderboard Interface privatefuncconstructLeaderboardAttachment() -> Attachment? { let Great! But we’ll need to replace the dummy API token with the real deal before anything will work. Getting an API Token We need to create a bot integration in Slack. You’ll need a Slack instance that you have administrator access to. If you don’t already have one of those to play with, go sign up. Slack is free for small teams: Create a new bot here. Enter a name for your bot. I’m going to use “leaderbot”. Click on “Add Bot Integration”. Copy the API token that Slack generates and replace the placeholder token at the bottom of main.swift with it. Testing 1,2,3… Now that we have our API token, we’re ready to do some local testing. Back in Xcode, select the leaderbot command-line application target and run your bot (⌘+R). When we go and look at Slack, our leaderbot’s activity indicator should show that it’s online. It’s alive! To ensure that it’s working, we should give our helpful little bot some karma points: @leaderbot++ And ask it to see the leaderboard: @leaderbot leaderboard Head in the Clouds Now that we’ve verified that our leaderboard bot works locally, it’s time to deploy it. We are deploying on Heroku, so if you don’t have an account, go and sign up for a free one. First, we need to add a Procfile for Heroku. Back in the terminal, run: echo slackbot: .build/debug/leaderbot > Procfile Next, let’s check in our code: git init git add . git commit -am’leaderbot powering up’ Finally, we’ll setup Heroku: Install the Heroku toolbelt. Log in to Heroku in your terminal: heroku login Create our application on Heroku and set our buildpack: heroku create --buildpack https://github.com/pvzig/heroku-buildpack-swift.git leaderbot Set up our Heroku remote: heroku git:remote -a leaderbot Push to master: git push heroku master Once you push to master, you’ll see Heroku going through the process of building your application. Launch! When the build is complete, all that’s left to do is to run our bot: heroku run:detached slackbot Like when we tested locally, our bot should become active and respond to our commands! You’re Done! Congratulations, you’ve successfully built and deployed a Slack bot written in Swift onto a Linux server! Built With: Jay: Pure-Swift JSON parser and formatterkylef’s Heroku buildpack for Swift Open Swift: Open source cross project standards for Swift SlackKit: A Slack client library Zewo: Open source libraries for modern server software Disclaimer The linux version of SlackKit should be considered an alpha release. It’s a fun tech demo to show what’s possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across. About the author Peter Zignego is an iOS developer in Durham, North Carolina, USA. 
He writes at bytesized.co, tweets at @pvzig, and freelances at Launch Software.