Storing and Generating Graphs of the Stored Data

Packt
01 Oct 2015
20 min read
In this article by Matthijs Kooijman, the author of the book Building Wireless Sensor Networks Using Arduino, we will explore some of the ways to persistently store the collected sensor data and visualize the data using convenient graphs. First, you will see how to connect your coordinator to the Internet and send its data to the Beebotte cloud platform. You will learn how to create a custom dashboard in this platform that can show you the collected data in a convenient graph format. Second, you will see how you can collect and visualize the data on your own computer instead of sending it to the Internet directly. (For more resources related to this topic, see here.) For the first part, you will need some shield to connect your coordinator Arduino to the Internet in addition to the hardware that has been recommended for the coordinator in the previous chapter. This article provides examples for the following two shields: The Arduino Ethernet shield (https://www.arduino.cc/en/Main/ArduinoEthernetShield) The Adafruit CC3000 Wi-Fi shield (https://www.adafruit.com/products/1491) If possible, the Ethernet shield is recommended as its library is small, and it is easier to keep a reliable connection using a wired connection. Additionally, the CC3000 shield conflicts with the SparkFun XBee shield and this would require some modification on the latter's part to make them work together. For the second part, no additional hardware is needed. There are other Ethernet or Wi-Fi shields that you can use. Storing your data in the cloud When it comes to storing your data online somewhere, there are literally dozens of online platforms that offer some kind of data storage service aimed at collecting sensor data. Each of these has different features, complexity, and cost, and you are encouraged to have a look at what is available. Even though a lot of platforms are available, almost none of them are really suited for a hobby sensor network like the one presented in this article. Most platforms support the basic collection of data and offer some web API to access the data but there were two requirements that ruled out most of the platforms: It has to be affordable for a home user with just a bit of data. Ideally, there is a free version to get started. It has to support creating a dashboard that can show data and graphs and also show input elements that can be used to talk back to the network. (This will be used in the next chapter to create an online thermostat). When this article was written, only two platforms seemed completely suitable: Beebotte (https://beebotte.com/) and Adafruit IO (https://io.adafruit.com/). The examples in this article use Beebotte because at the time of writing, Adafruit IO was not publicly available yet and Beebotte currently has some additional features. However, you are encouraged to check out Adafruit IO as an alternative. As both the platforms use the MQTT protocol (explained in the next section), you should be able to reuse the example code with just minimal changes for Adafruit IO. Introducing Beebotte Beebotte, like most of these services, can be seen as a big online database that stores any data you send to it and allows retrieving any data you are interested in. Additionally, you can easily create dashboards that allow you to look at your data and even interact with it through various configurable widgets. 
By the end of this chapter, you might have a dashboard that looks like this: Before showing you how to talk to Beebotte from your Arduino, some important concepts in the Beebotte system will be introduced: channels, resources, security tokens, and access protocols. The examples in this article serve to get started with Beebotte, but will certainly not cover all features and details. Be sure to check out the extensive documentation on the Beebotte site at https://beebotte.com/overview. Channels and resources All data collected by Beebotte is organized into resources, each representing a single series of data. All data stored in a resource signifies the same thing, such as the temperature in your living room or on/off status of the air conditioner but at different points in time. This kind of data is also often referred to as time-series data. To keep your data organized, Beebotte supports the concept of channels. Essentially, a channel is just a group of resources that somehow belong together. Typically, a channel represents a single device or data source but you are free to group your resources in any way that you see fit. In this example, every sensor module in the network will get its own channel, each containing a resource to store the temperature data and a resource to store the humidity data. Security To be able to access the data stored in your Beebotte account or publish new data, every connection needs to be authenticated. This happens using a secret token or key (similar to a password) that you configure in your Arduino code and proves to the Beebotte server that you are allowed to access the data. There are two kinds of secrets currently supported by Beebotte: Your account secret key: This is a single key for your account, which allows access to all resources and all channels in your account. It additionally allows the creation and deletion of the channels and resources. Channel tokens: Each channel has an associated channel token, which allows reading and writing data from that channel only. Additionally, the channel token can be regenerated if the token is ever compromised. This example uses the account secret key to authenticate the connection. It would be better to use a more limited channel token (to limit the consequences if the token is leaked) but in this example, as the coordinator forwards data for multiple sensor nodes (each of which have their own channel), a channel token does not provide enough access. As an alternative, you could consider using a single channel containing all the resources (named, for example, "Livingroom_Temperature" to still allow grouping) so that you can use the slightly more secure channel tokens. In the future, Beebotte might also support more flexible limited tokens that support writing to more than one channel. The examples in this article use an unencrypted connection, so make sure at least your Wi-Fi connection is encrypted (using WPA or WPA2). If you are working with particularly sensitive information, be sure to consider using SSL/TLS for the connection. Due to limited microcontroller speeds, running SSL/TLS directly on the microcontroller does not seem feasible, so this would need external cryptographic hardware or support on the Wi-Fi / Ethernet shield that is being used. 
At the time of writing, there does not seem to be any shield that directly supports this, but it seems that at least the ESP8266-based shields and the Arduino Yún could be made to support it, and the upcoming Arduino Wi-Fi Shield 101 might support it as well (this is out of scope for this article).

Access protocols

To store new data and access the existing data over the Internet, a few different access methods are supported by Beebotte:

HTTP / REST: The hypertext transfer protocol (HTTP) is the protocol that powers the web. Originally, it was used to let a web browser request a web page from a server, but now HTTP is also commonly used to let all kinds of devices send and request arbitrary data (instead of webpages) to and from servers as well. In this case, the server is commonly said to export an HTTP or REST (Representational State Transfer) API. HTTP APIs are convenient, since HTTP is a very widespread protocol and HTTP libraries are available for most programming languages and environments.

WebSockets: A downside of HTTP is that it is not very convenient for sending events from the server to the client. A server can only send data to the client after the client sends a request, which means the client must poll for new events continuously. To overcome this, the WebSockets standard was created. WebSockets is a protocol on top of HTTP that keeps a connection (socket) open indefinitely, ready for the server to send new data whenever it wants to, instead of having to wait for the client to request new data.

MQTT: The message queuing telemetry transport (MQTT) protocol is a so-called publish/subscribe (pubsub) protocol. The idea is that multiple devices can connect to a central server and each can publish a message to a given topic and also subscribe to any number of topics. Whenever a message is published to a topic, it is automatically forwarded to all the devices that have subscribed to the same topic. MQTT, like WebSockets, keeps a connection open continuously so that both the client and server can send data at any time, making this protocol especially suitable for real-time data and events. MQTT cannot be used to access historical data, though.

A lot of alternative platforms only support the HTTP access protocol, which works fine to push and access the data and would be suitable for the examples in this chapter, but it is less suitable for controlling your network from the Internet, as used in the next chapter. To prepare for this, the examples in this chapter will already use the MQTT protocol, which supports both use cases efficiently.

Sending your data to Beebotte

Now that you have learned about some important Beebotte concepts, you are ready to send your collected sensor data to Beebotte. First, you will prepare Beebotte and your Arduino to connect to each other. Then, you will write a sketch for your coordinator to send the data. Finally, you will see how to access and visualize the stored data.

Preparing Beebotte

Before you can start sending data to Beebotte, you will have to prepare the proper channels and resources to store the data. This example uses two channels: "Livingroom" and "Study", referring to the rooms where the sensors have been placed. You should, of course, use names that reflect your setup and adapt things if you have more or fewer sensors. The first step is to register an account on beebotte.com.
Once you have done this, you can access your Control Panel, which will initially show you an empty list of channels. You can create a new channel by clicking the Create New channel button. In the resulting window, you can fill in a name and description for the channel and define the resources. After creating a channel for every sensor node that you have built, you have prepared Beebotte to receive all the sensor data. The next step is to modify the coordinator sketch to actually send the data.

Connecting your Arduino to the Internet

In order to let your coordinator send its data to Beebotte, it must be connected to the Internet somehow. There are plenty of shields out there that add wired Ethernet or wireless Wi-Fi connectivity to your Arduino. Wi-Fi shields are typically a lot more expensive than the Ethernet ones, but the recently introduced, cheap ESP8266 Wi-Fi chipset will likely change that. (However, at the time of writing, no ready-to-use Arduino shield based on it was available yet.)

As the code to connect your Arduino to the Internet will be significantly different for each shield, not every part of the code will be discussed in this article. Instead, we will focus on the code that connects to the MQTT server and publishes the data, assuming that the Internet connection is already set up. In the code bundle for this article, two complete examples are available for use with the Arduino Ethernet shield and the Adafruit CC3000 shield. These examples should be usable as a starting point for other hardware as well. Some things to keep in mind when selecting your hardware are listed as follows:

Check carefully for conflicting pins. For example, the Adafruit CC3000 Wi-Fi shield uses pins 2 and 3 to communicate with the shield, but you might be using these pins to talk to the XBee module as well (particularly when using the SparkFun XBee shield).

The libraries for the various Wi-Fi shields take up a lot of code space on the Arduino. For example, using the SparkFun or Adafruit CC3000 library along with the Adafruit MQTT library fills up most of the available code space on an Arduino Uno. The Ethernet library is a bit (but not much) smaller.

It is not always easy to keep a reliable Wi-Fi connection. In theory, this is a matter of simply reconnecting when the Wi-Fi connection fails, but in practice it can be tricky to implement this completely reliably. Again, this is easier with wired Ethernet as it does not have the same disconnection issues as Wi-Fi.

If you use different hardware than recommended (including the recently announced Arduino Ethernet Shield 2), you will likely need a different Arduino library and will need to make changes to the example code provided.

Writing the sketch

To send the data to Beebotte, the Coordinator.ino sketch from the previous chapter needs to be modified. As noted before, only the MQTT code will be shown here. However, the code to establish the Internet connection is included in the full examples in the code bundle (in the Coordinator_Ethernet.ino and Coordinator_CC3000.ino sketches). This example uses the Adafruit MQTT library for the MQTT protocol, which can be installed through the Arduino library manager or can be found here: https://github.com/adafruit/Adafruit_MQTT_Library. Depending on the Internet shield, you might need more libraries as well (see the comments in the example sketches). Do not forget to add the appropriate includes for the libraries that you are using.
For the MQTT library, this is:

#include <Adafruit_MQTT.h>
#include <Adafruit_MQTT_Client.h>

To set up the MQTT library, you first need to define some settings:

const char MQTT_SERVER[] PROGMEM = "mqtt.beebotte.com";
const char MQTT_CLIENTID[] PROGMEM = "";
const char MQTT_USERNAME[] PROGMEM = "your_key_here";
const char MQTT_PASSWORD[] PROGMEM = "";
const int MQTT_PORT = 1883;

This defines the connection settings to use for the MQTT connection. These are appropriate for Beebotte; if you are using some other platform, check its documentation for the appropriate settings. Note that here the username must be set to your account's secret key, for example:

const char MQTT_USERNAME[] PROGMEM = "840f626930a07c87aa315e27b22448468844edcad03fe34f551ac747533f544f";

If you are using a channel token, it must be set to the token prefixed with "token:", such as:

const char MQTT_USERNAME[] PROGMEM = "token:1438006626319_UNJoxdmKBoMFIPt7";

The password is unused and can be empty, just like the client identifier. The port is just the default MQTT port number, 1883.

Note that all the string variables are marked with the PROGMEM keyword. This tells the compiler to store the string variables in program memory, just like the F() macro you have seen before does (which also uses the PROGMEM keyword under the hood). However, the F() macro can only be used inside functions, which is why these variables use the PROGMEM keyword directly. This also means that the extra checking offered by the F() macro is not available, so be careful not to mix up normal and PROGMEM strings here, as this will not result in any compiler error; instead, things will break when you run the code.

Using the configuration constants defined earlier, you can now define the main mqtt object:

Adafruit_MQTT_Client mqtt(&client, MQTT_SERVER, MQTT_PORT, MQTT_CLIENTID, MQTT_USERNAME, MQTT_PASSWORD);

There are a few different flavors of this object. For example, there are flavors optimized for specific hardware and the corresponding libraries, and there is also a generic flavor that works with any hardware whose library exposes a generic Client object (as most libraries currently do). The latter flavor, Adafruit_MQTT_Client, is used in this example and should be fine. The &client part of this line refers to a previously created Client object (not shown here as it depends on the Internet shield used), which is used by the MQTT library to set up the MQTT connection.

To actually connect to the MQTT server, a function called connect() will be defined. This function is called to connect once at startup and to reconnect whenever the publishing of some data fails. In the CC3000 version, this function associates with the Wi-Fi access point and then sets up the connection to the MQTT server. In the Ethernet version, where the network is always available after the initial setup, the connect() function only sets up the MQTT connection. The latter version is shown as follows:

void connect() {
  client.stop(); // Ensure any old connection is closed
  uint8_t ret = mqtt.connect();
  if (ret == 0)
    DebugSerial.println(F("MQTT connected"));
  else
    DebugSerial.println(mqtt.connectErrorString(ret));
}

This calls mqtt.connect() to connect to the MQTT server and writes a debug message to report the success or failure. Note that mqtt.connect() returns a number as an error code (with 0 meaning OK), which is translated to a human-readable error message using the mqtt.connectErrorString() function.
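The client object itself is created by the networking library for whichever shield you are using, which is why it is not shown here. As a rough sketch only (the complete, tested version is in the Coordinator_Ethernet.ino sketch in the code bundle), the Ethernet variant could look something like the following, assuming DebugSerial is the debug serial alias used elsewhere in the coordinator sketch:

#include <SPI.h>
#include <Ethernet.h>

// Any MAC address that is unique on your network will do; newer Ethernet
// shields have one printed on a sticker.
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };

// This is the Client object passed to Adafruit_MQTT_Client as &client.
EthernetClient client;

void setupNetwork() {
  // Calling Ethernet.begin() with just a MAC address uses DHCP and
  // returns 0 when no lease could be obtained.
  if (Ethernet.begin(mac) == 0) {
    DebugSerial.println(F("Failed to configure Ethernet using DHCP"));
    while (true); // without a network there is nothing useful left to do
  }
  connect(); // set up the MQTT connection defined earlier
}

The CC3000 version differs mainly in that it also has to associate with the Wi-Fi access point before the MQTT connection can be set up.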
Now, to actually publish a single value, there is the publish() function:

void publish(const __FlashStringHelper *resource, float value) {
  // Use JSON to wrap the data, so Beebotte will remember the data
  // (instead of just publishing it to whoever is currently listening).
  String data;
  data += "{\"data\": ";
  data += value;
  data += ", \"write\": true}";

  DebugSerial.print(F("Publishing "));
  DebugSerial.print(data);
  DebugSerial.print(F(" to "));
  DebugSerial.println(resource);

  // Publish data and try to reconnect when publishing data fails
  if (!mqtt.publish(resource, data.c_str())) {
    DebugSerial.println(F("Failed to publish, trying reconnect..."));
    connect();
    if (!mqtt.publish(resource, data.c_str()))
      DebugSerial.println(F("Still failed to publish data"));
  }
}

This function takes two parameters: the name of the resource to publish to and the value to publish. Note the type of the resource parameter: __FlashStringHelper* is similar to the more common char* string type, but it indicates that the string is stored in program memory instead of RAM. This is also the type returned by the F() macro that you have seen before. Just like the MQTT server configuration values that used the PROGMEM keyword before, the MQTT library also expects the MQTT topic names to be stored in program memory.

The actual value is sent using the JSON (JavaScript Object Notation) format. For example, for a temperature of 20 degrees, it constructs {"data": 20.00, "write": true}. In addition to transmitting the value, this format indicates that Beebotte should store the value so that it can be retrieved later. If the write value is false, or not present, Beebotte will only forward the value to any other devices currently subscribed to the appropriate topic, without saving it for later.

This example uses some quick and dirty string concatenation to generate the JSON. If you want something more robust and elegant, have a look at the ArduinoJson library at https://github.com/bblanchon/ArduinoJson.

If publishing the data fails, it is likely that the Wi-Fi or MQTT connection has failed, so it attempts to reconnect and publish the data once more.

As before, there is a processRxPacket() function that gets called when a radio packet is received through the XBee module:

void processRxPacket(ZBRxResponse& rx, uintptr_t) {
  Buffer b(rx.getData(), rx.getDataLength());
  uint8_t type = b.remove<uint8_t>();
  XBeeAddress64 addr = rx.getRemoteAddress64();
  if (addr == 0x0013A20040DADEE0 && type == 1 && b.len() == 8) {
    publish(F("Livingroom/Temperature"), b.remove<float>());
    publish(F("Livingroom/Humidity"), b.remove<float>());
    return;
  }
  if (addr == 0x0013A20040E2C832 && type == 1 && b.len() == 8) {
    publish(F("Study/Temperature"), b.remove<float>());
    publish(F("Study/Humidity"), b.remove<float>());
    return;
  }
  DebugSerial.println(F("Unknown or invalid packet"));
  printResponse(rx, DebugSerial);
}

Instead of simply printing the packet contents as before, it figures out who the sender of the packet is and which Beebotte resource corresponds to it, and then calls the publish() function that was defined earlier. As you can see, the Beebotte resources are identified using "Channel/Resource", resulting in a unique identifier for each resource (which is later used in the MQTT message as the topic identifier). Note that the F() macro is used for the resource names to store them in program memory, as this is what publish() and the MQTT library expect.
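If you prefer to avoid the manual string concatenation and use the ArduinoJson library suggested earlier, the JSON-building part of publish() could be rewritten along the following lines. This is only a sketch, and it assumes version 5 of ArduinoJson (the version that was current when this article was written); later versions changed the API, so check the library's documentation for the version you install:

#include <ArduinoJson.h>

void publish(const __FlashStringHelper *resource, float value) {
  // Build {"data": <value>, "write": true} without manual escaping
  StaticJsonBuffer<64> jsonBuffer;
  JsonObject& root = jsonBuffer.createObject();
  root["data"] = value;
  root["write"] = true; // ask Beebotte to store the value

  char payload[64];
  root.printTo(payload, sizeof(payload));

  // mqtt, connect() and DebugSerial are the objects defined earlier
  if (!mqtt.publish(resource, payload)) {
    DebugSerial.println(F("Failed to publish, trying reconnect..."));
    connect();
    if (!mqtt.publish(resource, payload))
      DebugSerial.println(F("Still failed to publish data"));
  }
}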
If you run the resulting sketch and everything connects correctly, the coordinator will forward any sensor values that it receives to the Beebotte server. If you wait for (at most) five minutes to pass (or reset the sensor Arduino to have it send a reading right away) and then go to the appropriate channel in your Beebotte control panel, it should look something like this: Visualizing your data To easily allow the visualizing of your data, Beebotte supports dashboards. A dashboard is essentially a web page where you can add graphs, gauges, tables, buttons, and so on (collectively called widgets). These widgets can then display or control the data in one or more previously defined resources. To create such a dashboard, head over to the My Dashboards section of your control panel and click Create Dashboard to start building one. Once you set a name for the dashboard, you can start adding widgets to it. To display the temperature and humidity for all the sensors that you are using, you could use the Multi-line Chart widget. As the temperature and humidity values will be fairly far apart, it makes sense to put them each in a separate chart. Adding the temperature chart could look like this: If you also add a chart for the humidity, it should look like this: Here, only the living room sensor has been powered on, so no data is shown for the study yet. Also, to make the graphs a bit more interesting, some warm and humid air was breathed onto the sensor, causing the big spike in both the charts. There are plenty of other widget types that will prove useful to you. The Beebotte documentation provides information on the supported types, but you are encouraged to just play with the widgets a bit to see what they can add to your project. In the next chapter, you will see how to use the control widgets, which allow sending events and data back to the coordinator to control it. Accessing your data You have been accessing your data through a Beebotte dashboard so far. However, when these dashboards are not powerful enough for your needs or you want to access your data from an existing application, you can also access the recent and historical data through the Beebotte's HTTP or WebSocket API's. This gives you full control over the processing and display of the data without being limited in any way by what Beebotte offers in its dashboards. As creating a custom (web) application is out of the scope of this article, the HTTP and WebSocket API will not be discussed in detail. Instead, you should be able to find extensive documentation on this API on the Beebotte site at https://beebotte.com/overview. Summary In this article we learnt some of the ways to store the collected sensor data and visualize the data using convenient graphs. We also learnt as to how to save our data on the Cloud. Resources for Article: Further resources on this subject: TV Set Constant Volume Controller[article] Internet Connected Smart Water Meter[article] Getting Started with Arduino [article]

Using Underscore.js with Collections

Packt
01 Oct 2015
21 min read
In this article Alex Pop, the author of the book, Learning Underscore.js, we will explore Underscore functionality for collections using more in-depth examples. Some of the more advanced concepts related to Underscore functions such as scope resolution and execution context will be explained. The topics of the article are as follows: Key Underscore functions revisited Searching and filtering This article assumes that you are familiar with JavaScript fundamentals such as prototypical inheritance and the built-in data types. The source code for the examples from this article is hosted online at https://github.com/popalexandruvasile/underscorejs-examples/tree/master/collections, and you can execute the examples using the Cloud9 IDE at the address https://ide.c9.io/alexpop/underscorejs-examples from the collections folder. (For more resources related to this topic, see here.) Key Underscore functions – each, map, and reduce This flexible approach means that some Underscore functions can operate over collections: an Underscore-specific term for arrays, array like objects, and objects (where the collection represents the object properties). We will refer to the elements within these collections as collection items. By providing functions that operate over object properties Underscore expands JavaScript reflection like capabilities. Reflection is a programming feature for examining the structure of a computer program, especially during program execution. JavaScript is a dynamic language without static type system support (as of ES6). This makes it convenient to use a technique named duck typing when working with objects that share similar behaviors. Duck typing is a programming technique used in dynamic languages where objects are identified through their structure represented by properties and methods rather than their type (the name of duck typing is derived from the phrase "if it walks like a duck, swims like a duck, and quacks like a duck, then it is a duck"). Underscore itself uses duck typing to assert that an object is an array by checking for a property called length of type Number. Applying reflection techniques We will build an example that demonstrates duck typing and reflection techniques through a function that will extract object properties so that they can be persisted to a relational database. Usually relational database stores objects represented as a data row with columns types that map to regular SQL data types. We will use the _.each() function to iterate over object properties and extract those of type boolean, number, string and Date as they be easily mapped to SQL data type and ignore everything else: var propertyExtractor = (function() { "use strict" return { extractStorableProperties: function(source) { var storableProperties = {}; if (!source || source.id !== +source.id) { return storableProperties; } _.each(source, function(value, key) { var isDate = typeof value === 'object' && value instanceof Date; if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') { storableProperties[key] = value; } }); return storableProperties; } }; }()); You can find the example in the propertyExtractor.js file within the each-with-properties-and-context folder from the source code for this article. The first highlighted code snippet checks whether the object passed to the extractStorableProperties() function has a property called id that is a number. 
The + sign converts the id property to a number and the non-identity operator !== compares the result of this conversion with the unconverted original value. The non-identity operator returns true only if the type of the compared objects is different or they are of the same type and have different values. This was a duck typing technique used by Underscore up until version 1.7 to assert whether it deals with an array-like instance or an object instance in its collections related functions. Underscore collection related functions operate over array-like objects as they do not strictly check for the built in Array object. These functions can also work with the arguments objects or the HTML DOM NodeList objects. The last highlighted code snippet is the _.each() function that operates over object properties using an iteration function that receives the property value as its first argument and the property name as the optional second argument. If a property has a null or undefined value it will not appear in the returned object. The extractStorableProperties() function will return a new object with all the storable properties. The return value is used in the test specifications to assert that, given a sample object, the function behaves as expected: describe("Given propertyExtractor", function() { describe("when calling extractStorableProperties()", function() { var storableProperties; beforeEach(function() { var source = { id: 2, name: "Blue lamp", description: null, ui: undefined, price: 10, purchaseDate: new Date(2014, 10, 1), isInUse: true, }; storableProperties = propertyExtractor.extractStorableProperties(source); }); it("then the property count should be correct", function() { expect(Object.keys(storableProperties).length).toEqual(5); }); it("then the 'price' property should be correct", function() { expect(storableProperties.price).toEqual(10); }); it("then the 'description' property should not be defined", function() { expect(storableProperties.description).toEqual(undefined); }); }); }); Notice how we used the propertyExtractor global instance to access the function under test, and then, we used the ES5 function Object.keys to assert that the number of returned properties has the correct size. In a production ready application, we need to ensure that the global objects names do not clash among other best practices. You can find the test specification in the spec/propertyExtractorSpec.js file and execute them by browsing the SpecRunner.html file from the example source code folder. There is also an index.html file that will display the results of the example rendered in the browser using the index.js file. Manipulating the this variable Many Underscore functions have a similar signature with _.each(list, iteratee, [context]),where the optional context parameter will be used to set the this value for the iteratee function when it is called for each collection item. In JavaScript, the built in this variable will be different depending on the context where it is used. When the this variable is used in the global scope context, and in a browser environment, it will return the native window object instance. If this is used in a function scope, then the variable will have different values: If the function is an object method or an object constructor, then this will return the current object instance. 
Here is a short example code for this scenario: var item1 = { id: 1, name: "Item1", getInfo: function(){ return "Object: " + this.id + "-" + this.name; } }; console.log(item1.getInfo()); // -> “Object: 1-Item1” If the function does not belong to an object, then this will be undefined in the JavaScript strict mode. In the non-strict mode, this will return its global scope value. With a library such as Underscore that favors a functional style, we need to ensure that the functions used as parameters are using the this variable correctly. Let's assume that you have a function that references this (maybe it was used as an object method) and you want to use it with one of the Underscore functions such as _.each.. You can still use the function as is and provide the desired this value as the context parameter value when calling each. I have rewritten the previous example function to showcase the use of the context parameter: var propertyExtractor = (function() { "use strict"; return { extractStorablePropertiesWithThis: function(source) { var storableProperties = {}; if (!source || source.id !== +source.id) { return storableProperties; } _.each(source, function(value, key) { var isDate = typeof value === 'object' && value instanceof Date; if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') { this[key] = value; } }, storableProperties); return storableProperties; } }; }()); The first highlighted snippet shows the use of this, which is typical for an object method. The last highlighted snippet shows the context parameter value that this was set to. The storableProperties value will be passed as this for each iteratee function call. The test specifications for this example are identical with the previous example, and you can find them in the same folder each-with-properties-and-context from the source code for this article. You can use the optional context parameter in many of the Underscore functions where applicable and is a useful technique when working with functions that rely on a specific this value. Using map and reduce with object properties In the previous example, we had some user interface-specific code in the index.js file that was tasked with displaying the results of the propertyExtractor.extractStorableProperties() call in the browser. Let's pull this functionality in another example and imagine that we need a new function that, given an object, will transform its properties in a format suitable for displaying in a browser by returning an array of formatted text for each property. To achieve this, we will use the Underscore _.map() function over object properties as demonstrated in the next example: var propertyFormatter = (function() { "use strict"; return { extractPropertiesForDisplayAsArray: function(source) { if (!source || source.id !== +source.id) { return []; } return _.map(source, function(value, key) { var isDate = typeof value === 'object' && value instanceof Date; if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') { return "Property: " + key + " of type: " + typeof value + " has value: " + value; } return "Property: " + key + " cannot be displayed."; }); } }; }()); With Underscore, we can write compact and expressive code that manipulates these properties with little effort. 
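As a quick usage sketch (not part of the article's code bundle), calling this function with an object similar to the one used in the test specifications yields one formatted line per property:

var source = {
  id: 2,
  name: "Blue lamp",
  price: 10,
  purchaseDate: new Date(2014, 10, 1),
  description: null
};

var lines = propertyFormatter.extractPropertiesForDisplayAsArray(source);
// lines[0] -> "Property: id of type: number has value: 2"
// lines[4] -> "Property: description cannot be displayed."
console.log(lines.join("\n"));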
The test specifications for the extractPropertiesForDisplayAsArray() function are using Jasmine regular expression matchers to assert the test conditions in the highlighted code snippets from the following example: describe("Given propertyFormatter", function() { describe("when calling extractPropertiesForDisplayAsArray()", function() { var propertiesForDisplayAsArray; beforeEach(function() { var source = { id: 2, name: "Blue lamp", description: null, ui: undefined, price: 10, purchaseDate: new Date(2014, 10, 1), isInUse: true, }; propertiesForDisplayAsArray = propertyFormatter.extractPropertiesForDisplayAsArray(source); }); it("then the returned property count should be correct", function() { expect(propertiesForDisplayAsArray.length).toEqual(7); }); it("then the 'price' property should be displayed", function() { expect(propertiesForDisplayAsArray[4]).toMatch("price.+10"); }); it("then the 'description' property should not be displayed", function() { expect(propertiesForDisplayAsArray[2]).toMatch("cannot be displayed"); }); }); }); The following example shows how _.reduce() is used to manipulate object properties. This will transform the properties of an object in a format suitable for browser display by returning a string value that contains all the properties in a convenient format: extractPropertiesForDisplayAsString: function(source) { if (!source || source.id !== +source.id) { return []; } return _.reduce(source, function(memo, value, key) { if (memo && memo !== "") { memo += "<br/>"; } var isDate = typeof value === 'object' && value instanceof Date; if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') { return memo + "Property: " + key + " of type: " + typeof value + " has value: " + value; } return memo + "Property: " + key + " cannot be displayed."; }, ""); } The example is almost identical with the previous one with the exception of the memo accumulator used to build the returned string value. The test specifications for the extractPropertiesForDisplayAsString() function are using a regular expression matcher and can be found in the spec/propertyFormatterSpec.js file: describe("when calling extractPropertiesForDisplayAsString()", function() { var propertiesForDisplayAsString; beforeEach(function() { var source = { id: 2, name: "Blue lamp", description: null, ui: undefined, price: 10, purchaseDate: new Date(2014, 10, 1), isInUse: true, }; propertiesForDisplayAsString = propertyFormatter.extractAllPropertiesForDisplay(source); }); it("then the returned string has expected length", function() { expect(propertiesForDisplayAsString.length).toBeGreaterThan(0); }); it("then the 'price' property should be displayed", function() { expect(propertiesForDisplayAsString).toMatch("<br/>Property: price of type: number has value: 10<br/>"); }); }); The examples from this subsection can be found within the map.reduce-with-properties folder from the source code for this article. Searching and filtering The _.find(list, predicate, [context]) function is part of the Underscore comprehensive functionality for searching and filtering collections represented by object properties and array like objects. We will make a distinction between search and filter functions with the former tasked with finding one item in a collection and the latter tasked with retrieving a subset of the collection (although sometimes, you will find the distinction between these functions thin and blurry). 
We will revisit the find function and the other search- and filtering-related functions using an example with slightly more diverse data that is suitable for database persistence. We will use the problem domain of a bicycle rental shop and build an array of bicycle objects with the following structure: var getBicycles = function() { return [{ id: 1, name: "A fast bike", type: "Road Bike", quantity: 10, rentPrice: 20, dateAdded: new Date(2015, 1, 2) }, { ... }, { id: 12, name: "A clown bike", type: "Children Bike", quantity: 2, rentPrice: 12, dateAdded: new Date(2014, 11, 1) }]; }; Each bicycle object has an id property, and we will use the propertyFormatter object built in the previous section to display the examples results in the browser for your convenience. The code was shortened here for brevity (you can find its full version alongside the other examples from this section within the searching and filtering folders from the source code for this article). All the examples are covered by tests and these are the recommended starting points if you want to explore them in detail. Searching For the first example of this section, we will define a bicycle-related requirement where we need to search for a bicycle of a specific type and with a rental price under a maximum value. Compared to the previous _.find() example, we will start with writing the tests specifications first for the functionality that is yet to be implemented. This is a test-driven development approach where we will define the acceptance criteria for the function under test first followed by the actual implementation. Writing the tests first forces us to think about what the code should do, rather than how it should do it, and this helps eliminate waste by writing only the code required to make the tests pass. Underscore find The test specifications for our initial requirement are as follows: describe("Given bicycleFinder", function() { describe("when calling findBicycle()", function() { var bicycle; beforeEach(function() { bicycle = bicycleFinder.findBicycle("Urban Bike", 16); }); it("then it should return an object", function() { expect(bicycle).toBeDefined(); }); it("then the 'type' property should be correct", function() { expect(bicycle.type).toEqual("Urban Bike"); }); it("then the 'rentPrice' property should be correct", function() { expect(bicycle.rentPrice).toEqual(15); }); }); }); The highlighted function call bicyleFinder.findBicycle() should return one bicycle object of the expected type and price as asserted by the tests. Here is the implementation that satisfies the test specifications: var bicycleFinder = (function() { "use strict"; var getBicycles = function() { return [{ id: 1, name: "A fast bike", type: "Road Bike", quantity: 10, rentPrice: 20, dateAdded: new Date(2015, 1, 2) }, { ... }, { id: 12, name: "A clown bike", type: "Children Bike", quantity: 2, rentPrice: 12, dateAdded: new Date(2014, 11, 1) }]; }; return { findBicycle: function(type, maxRentPrice) { var bicycles = getBicycles(); return _.find(bicycles, function(bicycle) { return bicycle.type === type && bicycle.rentPrice <= maxRentPrice; }); } }; }()); The code returns the first bicycle that satisfies the search criteria ignoring the rest of the bicycles that might meet the same criteria. You can browse the index.html file from the searching folder within the source code for this article to see the result of calling the bicyleFinder.findBicycle() function displayed on the browser via the propertyFormatter object. 
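As a small usage sketch (not taken from the code bundle), the sample data starts with a road bike that has an id of 1 and a rent price of 20, so searching for a road bike with a maximum rent price of 25 returns that first match:

var bicycle = bicycleFinder.findBicycle("Road Bike", 25);
console.log(bicycle.id);        // -> 1
console.log(bicycle.rentPrice); // -> 20

// _.find() returns undefined when no item matches the predicate, so guard
// the result before using it if the criteria might not match anything.
if (!bicycleFinder.findBicycle("Road Bike", 0)) {
  console.log("No bicycle matched the criteria.");
}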
Underscore some There is a closely related function to _.find() with the signature _.some(list, [predicate], [context]). This function will return true if at least one item of the list collection satisfies the predicate function. The predicate parameter is optional, and if it is not specified, the _.some() function will return true if at least one item of the collection is not null. This makes the function a good candidate for implementing guard clauses. A guard clause is a function that ensures that a variable (usually a parameter) satisfies a specific condition before it is being used any further. The next example shows how _.some() is used to perform checks that are typical for a guard clause: var list1 = []; var list2 = [null, , undefined, {}]; var object1 = {}; var object2 = { property1: null, property3: true }; if (!_.some(list1) && !_.some(object1)) { alert("Collections list1 and object1 are not valid when calling _.some() over them."); } if(_.some(list2) && _.some(object2)){ alert("Collections list2 and object2 have at least one valid item and they are valid when calling _.some() over them."); } If you execute this code in a browser, you will see both alerts being displayed. The first alert gets triggered when an empty array or an object without any properties defined are found. The second alert appears when we have an array with at least one element that is not null and is not undefined or when we have an object that has at least one property that evaluates as true. Going back to our bicycle data, we will define a new requirement to showcase the use of _.some() in this context. We will implement a function that will ensure that we can find at least one bicycle of a specific type and with a maximum rent price. The code is very similar to the bicycleFinder.findBicycle() implementation with the difference that the new function returns true if the specific bicycle is found (rather than the actual object): hasBicycle: function(type, maxRentPrice) { var bicycles = getBicycles(); return _.some(bicycles, function(bicycle) { return bicycle.type === type && bicycle.rentPrice <= maxRentPrice; }); } You can find the tests specifications for this function in the spec/bicycleFinderSpec.js file from the searching example folder. Underscore findWhere Another function similar to _.find() has the signature _.findWhere(list, properties). This compares the property key-value pairs of each collection item from list with the property key-value pairs found on the properties object parameter. Usually, the properties parameter is an object literal that contains a subset of the properties of a collection item. The _.findWhere() function is useful when we need to extract a collection item matching an exact value compared to _.find() that can extract a collection item that matches a range of values or more complex criteria. To showcase the function, we will implement a requirement that needs to search a bicycle that has a specific id value. 
This is how the test specifications look like: describe("when calling findBicycleById()", function() { var bicycle; beforeEach(function() { bicycle = bicycleFinder.findBicycleById(6); }); it("then it should return an object", function() { expect(bicycle).toBeDefined(); }); it("then the 'id' property should be correct", function() { expect(bicycle.id).toEqual(6); }); }); And the next code snippet from the bicycleFinder.js file contains the actual implementation: findBicycleById: function(id){ var bicycles = getBicycles(); return _.findWhere(bicycles, {id: id}); } Underscore contains In a similar vein, with the _.some() function, there is a _.contains(list, value) function that will return true if there is at least one item from the list collection that is equal to the value parameter. The equality check is based on the strict comparison operator === where the operands will be checked for both type and value equality. We will implement a function that checks whether a bicycle with a specific id value exists in our collection: hasBicycleWithId: function(id) { var bicycles = getBicycles(); var bicycleIds = _.pluck(bicycles,"id"); return _.contains(bicycleIds, id); } Notice how the _.pluck(list, propertyName) function was used to create an array that stores the id property value for each collection item. In its implementation, _.pluck() is actually using _.map(), acting like a shortcut function for it. Filtering As we mentioned at the beginning of this section, Underscore provides powerful filtering functions, which are usually tasked with working on a subsection of a collection. We will reuse the same example data as before, and we will build some new functions to explore this functionality. Underscore filter We will start by defining a new requirement for our data where we need to build a function that retrieves all bicycles of a specific type and with a maximum rent price. This is how the test specifications looks like for the yet to be implemented function bicycleFinder.filterBicycles(type, maxRentPrice): describe("when calling filterBicycles()", function() { var bicycles; beforeEach(function() { bicycles = bicycleFinder.filterBicycles("Urban Bike", 16); }); it("then it should return two objects", function() { expect(bicycles).toBeDefined(); expect(bicycles.length).toEqual(2); }); it("then the 'type' property should be correct", function() { expect(bicycles[0].type).toEqual("Urban Bike"); expect(bicycles[1].type).toEqual("Urban Bike"); }); it("then the 'rentPrice' property should be correct", function() { expect(bicycles[0].rentPrice).toEqual(15); expect(bicycles[1].rentPrice).toEqual(14); }); }); The test expectations are assuming the function under test filterBicycles() returns an array, and they are asserting against each element of this array. To implement the new function, we will use the _.filter(list, predicate, [context]) function that returns an array with all the items from the list collection that satisfy the predicate function. Here is our example implementation code: filterBicycles: function(type, maxRentPrice) { var bicycles = getBicycles(); return _.filter(bicycles, function(bicycle) { return bicycle.type === type && bicycle.rentPrice <= maxRentPrice; }); } The usage of the _.filter() function is very similar to the _.find() function with the only difference in the return type of these functions. You can find this example together with the rest of examples from this subsection within the filtering folder from the source code for this article. 
Underscore where Underscore defines a shortcut function for _.filter() which is _.where(list, properties). This function is similar to the _.findWhere() function, and it uses the properties object parameter to compare and retrieve all the items from the list collection with matching properties. To showcase the function, we defined a new requirement for our example data where we need to retrieve all bicycles of a specific type. This is the code that implements the requirement: filterBicyclesByType: function(type) { var bicycles = getBicycles(); return _.where(bicycles, { type: type }); } By using _.where(), we are in fact using a more compact and expressive version of _.filter() in scenarios where we need to perform exact value matches. Underscore reject and partition Underscore provides a useful function which is the opposite for _.filter() and has a similar signature: _.reject(list, predicate, [context]). Calling the function will return an array of values from the list collection that do not satisfy the predicate function. To show its usage we will implement a function that retrieves all bicycles with a rental price less than or equal with a given value. Here is the function implementation: getAllBicyclesForSetRentPrice: function(setRentPrice) { var bicycles = getBicycles(); return _.reject(bicycles, function(bicycle) { return bicycle.rentPrice > setRentPrice; }); } Using the _.filter() function alongside the _.reject() function with the same list collection and predicate function will allow us to partition the collection in two arrays. One array holds items that do satisfy the predicate function while the other holds items that do not satisfy the predicate function. Underscore has a more convenient function that achieves the same result and this is _.partition(list, predicate). It returns an array that has two array elements: the first has the values that would be returned by calling _.filter() using the same input parameters and the second has the values for calling _.reject(). Underscore every We mentioned _.some() as being a great function for implementing guard clauses. It is also worth mentioning another closely related function _.every(list, [predicate], [context]). The function will check every item of the list collection and will return true if every item satisfies the predicate function or if list is null, undefined or empty. If the predicate function is not specified the value of each item will be evaluated instead. 
If we use the same data from the guard clause example for _.some() we will get the opposite results as shown in the next example: var list1 = []; var list2 = [null, , undefined, {}]; var object1 = {}; var object2 = { property1: null, property3: true }; if (_.every(list1) && _.every(object1)) { alert("Collections list1 and object1 are valid when calling _.every() over them."); } if(!_.every(list2) && !_.every(object2)){ alert("Collections list2 and object2 do not have all items valid so they are not valid when calling _.every() over them."); } To ensure a collection is not null, undefined, or empty and each item is also not null or undefined we should use both _.some() and _.every() as part of the same check as shown in the next example: var list1 = [{}]; var object1 = { property1: {}}; if (_.every(list1) && _.every(object1) && _.some(list1) && _.some(object1)) { alert("Collections list1 and object1 are valid when calling both _some() and _.every() over them."); } If the list1 object is an empty array or an empty object literal calling _.every() for it returns true while calling _some() returns false hence the need to use both functions when validating a collection. These code examples demonstrate how you can build your own guard clauses or data validation rules by using simple Underscore functions. Summary In this article, we explored many of the collection specific functions provided by Underscore and demonstrated additional functionality. We continued with searching and filtering functions. Resources for Article: Further resources on this subject: Packaged Elegance[article] Marshalling Data Services with Ext.Direct[article] Understanding and Developing Node Modules [article]

Apps for Different Platforms

Packt
01 Oct 2015
9 min read
In this article by Hoc Phan, the author of the book Ionic Cookbook, we will cover tasks related to building and publishing apps, such as: Building and publishing an app for iOS Building and publishing an app for Android Using PhoneGap Build for cross–platform (For more resources related to this topic, see here.) Introduction In the past, it used to be very cumbersome to build and successfully publish an app. However, there are many documentations and unofficial instructions on the Internet today that can pretty much address any problem you may run into. In addition, Ionic also comes with its own CLI to assist in this process. This article will guide you through the app building and publishing steps at a high level. You will learn how to: Build iOS and Android app via Ionic CLI Publish iOS app using Xcode via iTunes Connect Build Windows Phone app using PhoneGap Build The purpose of this article is to provide ideas on what to look for and some "gotchas". Apple, Google, and Microsoft are constantly updating their platforms and processes so the steps may not look exactly the same over time. Building and publishing an app for iOS Publishing on App Store could be a frustrating process if you are not well prepared upfront. In this section, you will walk through the steps to properly configure everything in Apple Developer Center, iTunes Connect and local Xcode Project. Getting ready You must register for Apple Developer Program in order to access https://developer.apple.com and https://itunesconnect.apple.com because those websites will require an approved account. In addition, the instructions given next use the latest version of these components: Mac OS X Yosemite 10.10.4 Xcode 6.4 Ionic CLI 1.6.4 Cordova 5.1.1 How to do it Here are the instructions: Make sure you are in the app folder and build for the iOS platform. $ ionic build ios Go to the ios folder under platforms/ to open the .xcodeproj file in Xcode. Go through the General tab to make sure you have correct information for everything, especially Bundle Identifier and Version. Change and save as needed. Visit Apple Developer website and click on Certificates, Identifiers & Profiles. For iOS apps, you just have to go through the steps in the website to fill out necessary information. The important part you need to do correctly here is to go to Identifiers | App IDs because it must match your Bundle Identifier in Xcode. Visit iTunes Connect and click on the My Apps button. Select the Plus (+) icon to click on New iOS App. Fill out the form and make sure to select the right Bundle Identifier of your app. There are several additional steps to provide information about the app such as screenshots, icons, addresses, and so on. If you just want to test the app, you could just provide some place holder information initially and come back to edit later. That's it for preparing your Developer and iTunes Connect account. Now open Xcode and select iOS Device as the archive target. Otherwise, the archive feature will not turn on. You will need to archive your app before you can submit it to the App Store. Navigate to Product | Archive in the top menu. After the archive process completed, click on Submit to App Store to finish the publishing process. At first, the app could take an hour to appear in iTunes Connect. However, subsequent submission will go faster. You should look for the app in the Prerelease tab in iTunes Connect. iTunes Connect has very nice integration with TestFlight to test your app. You can switch on and off this feature. 
Note that for each publish, you have to change the version number in Xcode so that it won't conflict with existing version in iTunes Connect. For publishing, select Submit for Beta App Review. You may want to go through other tabs such as Pricing and In-App Purchases to configure your own requirements. How it works Obviously this section does not cover every bit of details in the publishing process. In general, you just need to make sure your app is tested thoroughly, locally in a physical device (either via USB or TestFlight) before submitting to the App Store. If for some reason the Archive feature doesn't build, you could manually go to your local Xcode folder to delete that specific temporary archived app to clear cache: ~/Library/Developer/Xcode/Archives See also TestFlight is a separate subject by itself. The benefit of TestFlight is that you don't need your app to be approved by Apple in order to install the app on a physical device for testing and development. You can find out more information about TestFlight here: https://developer.apple.com/library/prerelease/ios/documentation/LanguagesUtilities/Conceptual/iTunesConnect_Guide/Chapters/BetaTestingTheApp.html Building and publishing an app for Android Building and publishing an Android app is a little more straightforward than iOS because you just interface with the command line to build the .apk file and upload to Google Play's Developer Console. Ionic Framework documentation also has a great instruction page for this: http://ionicframework.com/docs/guide/publishing.html. Getting ready The requirement is to have your Google Developer account ready and login to https://play.google.com/apps/publish. Your local environment should also have the right SDK as well as keytool, jarsigner, and zipalign command line for that specific version. How to do it Here are the instructions: Go to your app folder and build for Android using this command: $ ionic build --release android You will see the android-release-unsigned.apk file in the apk folder under /platforms/android/build/outputs. Go to that folder in the Terminal. If this is the first time you create this app, you must have a keystore file. This file is used to identify your app for publishing. If you lose it, you cannot update your app later on. To create a keystore, type the following command in the command line and make sure it's the same keytool version of the SDK: $ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000 Once you fill out the information in the command line, make a copy of this file somewhere safe because you will need it later. The next step is to use that file to sign your app so it will create a new .apk file that Google Play allow users to install: $ jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore HelloWorld-release-unsigned.apk alias_name To prepare for final .apk before upload, you must package it using zipalign: $ zipalign -v 4 HelloWorld-release-unsigned.apk HelloWorld.apk Log in to Google Developer Console and click on Add new application. Fill out as much information as possible for your app using the left menu. Now you are ready to upload your .apk file. First is to perform a beta testing. Once you are completed with beta testing, you can follow Developer Console instructions to push the app to Production. How it works This section does not cover other Android marketplaces such as Amazon Appstore because each of them has different processes. 
Regardless of the marketplace, the common idea is the same: build the unsigned version of the .apk, sign it using an existing or new keystore file, and finally zipalign it to prepare for upload.

Using PhoneGap Build for cross-platform
Adobe PhoneGap Build is a very useful product that provides build-as-a-service in the cloud. If you have trouble building the app locally on your computer, you can upload the entire Ionic project to PhoneGap Build and it will build the app for Apple, Android, and Windows Phone automatically.

Getting ready
Go to https://build.phonegap.com and register for a free account. You will be able to build one private app for free. For additional private apps, there is a monthly fee associated with the account.

How to do it
Here are the instructions:
Zip your entire /www folder and replace cordova.js with phonegap.js in index.html, as described in http://docs.build.phonegap.com/en_US/introduction_getting_started.md.html#Getting%20Started%20with%20Build.
You may have to edit config.xml to ensure all plugins are included. Detailed changes are described in the PhoneGap documentation: http://docs.build.phonegap.com/en_US/configuring_plugins.md.html#Plugins.
Select Upload a .zip file under the private tab.
Upload the .zip file of the www folder.
Make sure to upload the appropriate key for each platform. For Windows Phone, upload the publisher ID file.
After that, you just build the app and download the completed build file for each platform.

How it works
In a nutshell, PhoneGap Build is a convenient option when you are only familiar with one platform during the development process but want your app built quickly for the other platforms. Under the hood, PhoneGap Build has its own environment to automate the process for each user. However, the user still owns the responsibility of providing the key file for signing the app; PhoneGap Build just helps attach the key to your app.

See also
The most common issue people face when using PhoneGap Build is a failed build. You may want to refer to the documentation for troubleshooting: http://docs.build.phonegap.com/en_US/support_failed-builds.md.html#Failed%20Builds

Summary
This article provided general information about the tasks involved in building and publishing apps for iOS, for Android, and cross-platform using PhoneGap Build, and showed how to publish an app in places such as the App Store and Google Play.

Resources for Article: Further resources on this subject: Our App and Tool Stack[article] Directives and Services of Ionic[article] AngularJS Project [article]

Deploying on your own server

Packt
30 Sep 2015
16 min read
In this article by Jack Stouffer, the author of the book Mastering Flask, you will learn how to deploy and host your application on the different options available, and the advantages and disadvantages related to them. The most common way to deploy any web app is to run it on a server that you have control over. Control in this case means access to the terminal on the server with an administrator account. This type of deployment gives you the most amount of freedom out of the other choices as it allows you to install any program or tool you wish. This is in contrast to other hosting solutions where the web server and database are chosen for you. This type of deployment also happens to be the least expensive option. The downside to this freedom is that you take the responsibility of keeping the server up, backing up user data, keeping the software on the server up to date to avoid security issues, and so on. Entire books have been written on good server management, so if this is not a responsibility that you believe you or your company can handle, it would be best if you choose one of the other deployment options. This section will be based on a Debian Linux-based server, as Linux is far and away the most popular OS for running web servers, and Debian is the most popular Linux distro (a particular combination of software and the Linux kernel released as a package). Any OS with Bash and a program called SSH (which will be introduced in the next section) will work for this article, the only differences will be the command-line programs to install software on the server. (For more resources related to this topic, see here.) Each of these web servers will use a protocol named Web Server Gateway Interface (WSGI), which is a standard designed to allow Python web applications to easily communicate with web servers. We will never directly work with WSGI. However, most of the web server interfaces we will be using will have WSGI in their name, and it can be confusing if you don't know what the name is. Pushing code to your server with fabric To automate the process of setting up and pushing our application code to the server, we will use a Python tool called fabric. Fabric is a command-line program that reads and executes Python scripts on remote servers using a tool called SSH. SSH is a protocol that allows a user of one computer to remotely log in to another computer and execute commands on the command line, provided that the user has an account on the remote machine. To install fabric, we will use pip: $ pip install fabric Fabric commands are collections of command-line programs to be run on the remote machine's shell, in this case, Bash. We are going to make three different commands: one to run our unit tests, one to set up a brand new server to our specifications, and one to have the server update its copy of the application code with git. We will store these commands in a new file at the root of our project directory called fabfile.py. As it's the easiest to create, let's make the test command first: from fabric.api import local def test(): local('python -m unittest discover') To run this function from the command line, we can use fabric's command-line interface by passing the name of the command to run: $ fab test [localhost] local: python -m unittest discover ..... --------------------------------------------------------------------- Ran 5 tests in 6.028s OK Fabric has three main commands: local, run, and sudo. 
The local function, as seen in the preceding function, runs commands on the local computer. The run and sudo functions run commands on a remote machine, but sudo runs commands as an administrator. All of these functions notify fabric if the command ran successfully or not. If a command didn't run successfully, meaning that our tests failed in this case, any other commands in the function will not be run. This is useful for our commands because it allows us to force ourselves not to push any code to the server that does not pass our tests. Now we need to create the command to set up a new server from scratch. What this command will do is install the software our production environment needs as well as downloads the code from our centralized git repository. It will also create a new user that will act as the runner of the web server as well as the owner of the code repository. Do not run your webserver or have your code deployed by the root user. This opens your application to a whole host of security vulnerabilities. This command will differ based on your operating system, and we will be adding to this command in the rest of the article based on what server you choose: from fabric.api import env, local, run, sudo, cd env.hosts = ['deploy@[your IP]'] def upgrade_libs(): sudo("apt-get update") sudo("apt-get upgrade") def setup(): test() upgrade_libs() # necessary to install many Python libraries sudo("apt-get install -y build-essential") sudo("apt-get install -y git") sudo("apt-get install -y python") sudo("apt-get install -y python-pip") # necessary to install many Python libraries sudo("apt-get install -y python-all-dev") run("useradd -d /home/deploy/ deploy") run("gpasswd -a deploy sudo") # allows Python packages to be installed by the deploy user sudo("chown -R deploy /usr/local/") sudo("chown -R deploy /usr/lib/python2.7/") run("git config --global credential.helper store") with cd("/home/deploy/"): run("git clone [your repo URL]") with cd('/home/deploy/webapp'): run("pip install -r requirements.txt") run("python manage.py createdb") There are two new fabric features in this script. One is the env.hosts assignment, which tells fabric the user and IP address of the machine it should be logging in to. Second, there is the cd function used in conjunction with the with keyword, which executes any functions in the context of that directory instead of the home directory of the deploy user. The line that modifies the git configuration is there to tell git to remember your repository's username and password, so you do not have to enter it every time you wish to push code to the server. Also, before the server is set up, we make sure to update the server's software to keep the server up to date. Finally, we have the function to push our new code to the server. In time, this command will also restart the web server and reload any configuration files that come from our code. But this depends on the server you choose, so this is filled out in the subsequent sections: def deploy(): test() upgrade_libs() with cd('/home/deploy/webapp'): run("git pull") run("pip install -r requirements.txt") So, if we were to begin working on a new server, all we would need to do to set it up is to run the following commands: $ fabric setup $ fabric deploy Running your web server with supervisor Now that we have automated our updating process, we need some program on the server to make sure that our web server, and database if you aren't using SQLite, is running. 
To do this, we will use a simple program called supervisor. All that supervisor does is automatically run command-line programs in background processes and allows you to see the status of running programs. Supervisor also monitors all of the processes its running, and if the process dies, it tries to restart it. To install supervisor, we need to add it to the setup command in our fabfile.py: def setup(): … sudo("apt-get install -y supervisor") To tell supervisor what to do, we need to create a configuration file and then copy it to the /etc/supervisor/conf.d/ directory of our server during the deploy fabric command. Supervisor will load all of the files in this directory when it starts and attempt to run them. In a new file in the root of our project directory named supervisor.conf, add the following: [program:webapp] command= directory=/home/deploy/webapp user=deploy [program:rabbitmq] command=rabbitmq-server user=deploy [program:celery] command=celery worker -A celery_runner directory=/home/deploy/webapp user=deploy This is the bare minimum configuration needed to get a web server up and running. But, supervisor has a lot more configuration options. To view all of the customizations, go to the supervisor documentation at http://supervisord.org/. This configuration tells supervisor to run a command in the context of /home/deploy/webapp under the deploy user. The right hand of the command value is empty because it depends on what server you are running and will be filled in for each section. Now we need to add a sudo call in the deploy command to copy this configuration file to the /etc/supervisor/conf.d/ directory: def deploy(): … with cd('/home/deploy/webapp'): … sudo("cp supervisord.conf /etc/supervisor/conf.d/webapp.conf") sudo('service supervisor restart') A lot of projects just create the files on the server and forget about them, but having the configuration file stored in our git repository and copied on every deployment gives several advantages. First, this means that it easy to revert changes if something goes wrong using git. Second, it means that we don't have to log in to our server in order to make changes to the files. Don't use the Flask development server in production. Not only it fails to handle concurrent connections, but it also allows arbitrary Python code to be run on your server. Gevent The simplest option to get a web server up and running is to use a Python library called gevent to host your application. Gevent is a Python library that adds an alternative way of doing concurrent programming outside of the Python threading library called coroutines. Gevent has an interface for running WSGI applications that is both simple and has good performance. A simple gevent server can easily handle hundreds of concurrent users, which is more in number than 99 percent of websites on the Internet will ever have. The downside to this option is that its simplicity means a lack of configuration options. There is no way, for example, to add rate limiting to the server or to add HTTPS traffic. This deployment option is purely for sites that you don't expect to receive a huge amount of traffic. Remember YAGNI (short for You Aren't Gonna Need It); only upgrade to a different web server if you really need to. Coroutines are a bit outside of the scope of this book, so a good explanation can be found at https://en.wikipedia.org/wiki/Coroutine. 
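If coroutines are new to you, a tiny standalone example can make the idea concrete before it is wired into Flask. The following sketch is purely illustrative and assumes gevent is already installed (the next step covers installing it); it spawns two greenlets that take turns sleeping instead of blocking one another:

import gevent

def worker(name, delay):
    # Each gevent.sleep() yields control back to the event loop,
    # which is how one process can juggle many waiting connections.
    for i in range(3):
        print('%s: tick %d' % (name, i))
        gevent.sleep(delay)

# Spawn two cooperative greenlets and wait for both to finish.
jobs = [gevent.spawn(worker, 'a', 0.1), gevent.spawn(worker, 'b', 0.1)]
gevent.joinall(jobs)

The output interleaves the two workers even though no threads are involved, which is the same trick gevent's WSGI server uses to serve many clients from a single process.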
To install gevent, we will use pip: $ pip install gevent In a new file in the root of the project directory named gserver.py, add the following: from gevent.wsgi import WSGIServer from webapp import create_app app = create_app('webapp.config.ProdConfig') server = WSGIServer(('', 80), app) server.serve_forever() To run the server with supervisor, just change the command value to the following: [program:webapp] command=python gserver.py directory=/home/deploy/webapp user=deploy Now when you deploy, gevent will be automatically installed for you by running your requirements.txt on every deployment, that is, if you are properly pip freeze–ing after every new dependency is added. Tornado Tornado is another very simple way to deploy WSGI apps purely with Python. Tornado is a web server that is designed to handle thousands of simultaneous connections. If your application needs real-time data, Tornado also supports websockets for continuous, long-lived connections to the server. Do not use Tornado in production on a Windows server. The Windows version of Tornado is not only much slower, but it is considered beta quality software. To use Tornado with our application, we will use Tornado's WSGIContainer in order to wrap the application object to make it Tornado compatible. Then, Tornado will start to listen on port 80 for requests until the process is terminated. In a new file named tserver.py, add the following: from tornado.wsgi import WSGIContainer from tornado.httpserver import HTTPServer from tornado.ioloop import IOLoop from webapp import create_app app = WSGIContainer(create_app("webapp.config.ProdConfig")) http_server = HTTPServer(app) http_server.listen(80) IOLoop.instance().start() To run the Tornado with supervisor, just change the command value to the following: [program:webapp] command=python tserver.py directory=/home/deploy/webapp user=deploy Nginx and uWSGI If you need more performance or customization, the most popular way to deploy a Python web application is to use the web server Nginx as a frontend for the WSGI server uWSGI by using a reverse proxy. A reverse proxy is a program in networks that retrieves contents for a client from a server as if they returned from the proxy itself as shown in the following figure: Nginx and uWSGI are used in this way because we get the power of the Nginx frontend while having the customization of uWSGI. Nginx is a very powerful web server that became popular by providing the best combination of speed and customization. Nginx is consistently faster than other web severs, such as Apache httpd, and has native support for WSGI applications. The way it achieves this speed is several good architecture decisions as well as the decision early on that they were not going to try to cover a large amount of use cases like Apache does. Having a smaller feature set makes it much easier to maintain and optimize the code. From a programmer's perspective, it is also much easier to configure Nginx, as there is no giant default configuration file (httpd.conf) that needs to be overridden with .htaccess files in each of your project directories. One downside is that Nginx has a much smaller community than Apache, so if you have an obscure problem, you are less likely to be able to find answers online. Also, it's possible that a feature that most programmers are used to in Apache isn't supported in Nginx. uWSGI is a web server that supports several different types of server interfaces, including WSGI. 
uWSGI handles severing the application content as well as things such as load balancing traffic across several different processes and threads. To install uWSGI, we will use pip in the following way: $ pip install uwsgi In order to run our application, uWSGI needs a file with an accessible WSGI application. In a new file named wsgi.py in the top level of the project directory, add the following: from webapp import create_app app = create_app("webapp.config.ProdConfig") To test uWSGI, we can run it from the command line with the following: $ uwsgi --socket 127.0.0.1:8080 --wsgi-file wsgi.py --callable app --processes 4 --threads 2 If you are running this on your server, you should be able to access port 8080 and see your app (if you don't have a firewall that is). What this command does is load the app object from the wsgi.py file and makes it accessible from localhost on port 8080. It also spawns four different processes with two threads each, which are automatically load balanced by a master process. This amount of processes is the overkill for the vast, vast majority of websites. To start off, use a single process with two threads and scale up from there. Instead of adding all of the configuration options on the command line, we can create a text file to hold our configuration, which brings the same benefits for configuration that were listed in the section on supervisor. In a new file in the root of the project directory named uwsgi.ini, add the following: [uwsgi] socket = 127.0.0.1:8080 wsgi-file = wsgi.py callable = app processes = 4 threads = 2 uWSGI supports hundreds of configuration options as well as several official and unofficial plugins. To leverage the full power of uWSGI, you can explore the documentation at http://uwsgi-docs.readthedocs.org/. Let's run the server now from supervisor: [program:webapp] command=uwsgi uwsgi.ini directory=/home/deploy/webapp user=deploy We also need to install Nginx during the setup function: def setup(): … sudo("apt-get install -y nginx") Because we are installing Nginx from the OS's package manager, the OS will handle running Nginx for us. At the time of writing, the Nginx version in the official Debian package manager is several years old. To install the most recent version, follow the instructions here: http://wiki.nginx.org/Install. Next, we need to create an Nginx configuration file and then copy it to the /etc/nginx/sites-available/ directory when we push the code. In a new file in the root of the project directory named nginx.conf, add the following server { listen 80; server_name your_domain_name; location / { include uwsgi_params; uwsgi_pass 127.0.0.1:8080; } location /static { alias /home/deploy/webapp/webapp/static; } } What this configuration file does is tell Nginx to listen for incoming requests on port 80 and forward all requests to the WSGI application that is listening on port 8080. Also, it makes an exception for any requests for static files and instead sends those requests directly to the file system. Bypassing uWSGI for static files gives a great performance boost, as Nginx is really good at serving static files quickly. Finally, in the fabfile.py file: def deploy(): … with cd('/home/deploy/webapp'): … sudo("cp nginx.conf " "/etc/nginx/sites-available/[your_domain]") sudo("ln -sf /etc/nginx/sites-available/your_domain " "/etc/nginx/sites-enabled/[your_domain]") sudo("service nginx restart") Apache and uWSGI Using Apache httpd with uWSGI has mostly the same setup. 
First off, we need an Apache configuration file in a new file in the root of our project directory named apache.conf:
<VirtualHost *:80>
    <Location />
        ProxyPass / uwsgi://127.0.0.1:8080/
    </Location>
</VirtualHost>
This file just tells Apache to pass all requests on port 80 to the uWSGI web server listening on port 8080. However, this functionality requires an extra Apache plugin from uWSGI called mod_proxy_uwsgi. We can install this, as well as Apache itself, in the setup command:
def setup():
    …
    sudo("apt-get install -y apache2")
    sudo("apt-get install -y libapache2-mod-proxy-uwsgi")
Finally, in the deploy command, we need to copy our Apache configuration file into Apache's configuration directory:
def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp apache.conf "
             "/etc/apache2/sites-available/[your_domain]")
        sudo("ln -sf /etc/apache2/sites-available/[your_domain] "
             "/etc/apache2/sites-enabled/[your_domain]")
        sudo("service apache2 restart")
Summary
In this article, you learned that there are many different options for hosting your application, each with its own pros and cons. Deciding on one depends on the amount of time and money you are willing to spend, as well as the total number of users you expect.
Resources for Article: Further resources on this subject: Handling sessions and users[article] Snap – The Code Snippet Sharing Application[article] Man, Do I Like Templates! [article]

Overview of Physics Bodies and Physics Materials

Packt
30 Sep 2015
14 min read
In this article by Katax Emperor and Devin Sherry, author of the book Unreal Engine Physics Essentials, we will take a deeper look at Physics Bodies in Unreal Engine 4. We will also look at some of the detailed properties available to these assets. In addition, we will discuss the following topics: Physical materials – an overview For the purposes of this article, we will continue to work with Unreal Engine 4 and the Unreal_PhyProject. Let's begin by discussing Physics Bodies in Unreal Engine 4. (For more resources related to this topic, see here.) Physics Bodies – an overview When it comes to creating Physics Bodies, there are multiple ways to go about it (most of which we have covered up to this point), so we will not go into much detail about the creation of Physics Bodies. We can have Static Meshes react as Physics Bodies by checking the Simulate Physics property of the asset when it is placed in our level: We can also create Physics Bodies by creating Physics Assets and Skeletal Meshes, which automatically have the properties of physics by default. Lastly, Shape Components in blueprints, such as spheres, boxes, and capsules will automatically gain the properties of a Physics Body if they are set for any sort of collision, overlap, or other physics simulation events. As always, remember to ensure that our asset has a collision applied to it before attempting to simulate physics or establish Physics Bodies, otherwise the simulation will not work. When you work with the properties of Physics on Static Meshes or any other assets that we will attempt to simulate physics with, we will see a handful of different parameters that we can change in order to produce the desired effect under the Details panel. Let's break down these properties: Simulate Physics: This parameter allows you to enable or simulate physics with the asset you have selected. When this option is unchecked, the asset will remain static, and once enabled, we can edit the Physics Body properties for additional customization. Auto Weld: When this property is set to True, and when the asset is attached to a parent object, such as in a blueprint, the two bodies are merged into a single rigid body. Physics settings, such as collision profiles and body settings, are determined by Root Component. Start Awake: This parameter determines whether the selected asset will Simulate Physics at the start once it is spawned or whether it will Simulate Physics at a later time. We can change this parameter with the level and actor blueprints. Override Mass: When this property is checked and set to True, we can then freely change the Mass of our asset using kilograms (kg). Otherwise, the Mass in Kg parameter will be set to a default value that is based on a computation between the physical material applied and the mass scale value. Mass in Kg: This parameter determines the Mass of the selected asset using kilograms. This is important when you work with different sized physics objects and want them to react to forces appropriately. Locked Axis: This parameter allows you to lock the physical movement of our object along a specified axis. We have the choice to lock the default axes as specified in Project Settings. We also have the choice to lock physical movement along the individual X, Y, and Z axes. We can have none of the axes either locked in translation or rotation, or we can customize each axis individually with the Custom option. Enable Gravity: This parameter determines whether the object should have the force of gravity applied to it. 
The force of gravity can be altered in the World Settings properties of the level or in the Physics section of the Engine properties in Project Settings. Use Async Scene: This property allows you to enable the use of Asynchronous Physics for the specified object. By default, we cannot edit this property. In order to do so, we must navigate to Project Settings and then to the Physics section. Under the advanced Simulation tab, we will find the Enable Async Scene parameter. In an asynchronous scene, objects (such as Destructible actors) are simulated, and a Synchronous scene is where classic physics tasks, such as a falling crate, take place. Override Walkable Slope on Instance: This parameter determines whether or not we can customize an object's walkable slope. In general, we would use this parameter for our player character, but this property enables the customization of how steep a slope is that an object can walk on. This can be controlled specifically by the Walkable Slope Angle parameter and the Walkable Slope Behavior parameter. Override Max Depenetration Velocity: This parameter allows you to customize Max Depenetration Velocity of the selected physics body. Center of Mass Offset: This property allows you to specify a specific vector offset for the selected objects' center of mass from the calculated location. Being able to know and even modify the center of the mass for our objects can be very useful when you work with sensitive physics simulations (such as flight). Sleep Family: This parameter allows you to control the set of functions that the physics object uses when in a sleep mode or when the object is moving and slowly coming to a stop. The SF Sensitive option contains values with a lower sleep threshold. This is best used for objects that can move very slowly or for improved physics simulations (such as billiards). The SF Normal option contains values with a higher sleep threshold, and objects will come to a stop in a more abrupt manner once in motion as compared to the SF Sensitive option. Mass Scale: This parameter allows you to scale the mass of our object by multiplying a scalar value. The lower the number, the lower the mass of the object will become, whereas the larger the number, the larger the mass of the object will become. This property can be used in conjunction with the Mass in Kg parameter to add more customization to the mass of the object. Angular Damping: This property is a modifier of the drag force that is applied to the object in order to reduce angular movement, which means to reduce the rotation of the object. We will go into more detail regarding Angular Damping. Linear Damping: This property is used to simulate the different types of friction that can assist in the game world. This modifier adds a drag force to reduce linear movement, reducing the translation of the object. We will go into more detail regarding Linear Damping. Max Angular Velocity: This parameter limits Max Angular Velocity of the selected object in order to prevent the object from rotating at high rates. By increasing this value, the object will spin at very high speeds once it is impacted by an outside force that is strong enough to reach the Max Angular Velocity value. By decreasing this value, the object will not rotate as fast, and it will come to a halt much faster depending on the angular damping applied. 
Position Solver Iteration Count: This parameter reflects the physics body's solver iteration count for its position; the solver iteration count is responsible for periodically checking the physics body's position. Increasing this value will be more CPU intensive, but better stabilized. Velocity Solver Iteration Count: This parameter reflects the physics body's solver iteration count for its velocity; the solver iteration count is responsible for periodically checking the physics body's velocity. Increasing this value will be more CPU intensive, but better stabilized. Now that we have discussed all the different parameters available to Physics Bodies in Unreal Engine 4, feel free to play around with these values in order to obtain a stronger grasp of what each property controls and how it affects the physical properties of the object. As there are a handful of properties, we will not go into detailed examples of each, but the best way to learn more is to experiment with these values. However, we will work with how to create various examples of physics bodies in order to explore Physics Damping and Friction. Physical Materials – an overview Physical Materials are assets that are used to define the response of a physics body when you dynamically interact with the game world. When you first create Physical Material, you are presented with a set of default values that are identical to the default Physical Material that is applied to all physics objects. To create Physical Material, let's navigate to Content Browser and select the Content folder so that it is highlighted. From here, we can right-click on the Content folder and select the New Folder option to create a new folder for our Physical Material; name this new folder PhysicalMaterials. Now, in the PhysicalMaterials folder, right-click on the empty area of Content Browser and navigate to the Physics section and select Physical Material. Make sure to name this new asset PM_Test. Double-click on the new Physical Material asset to open Generic Asset Editor and we should see the following values that we can edit in order to make our physics objects behave in certain ways: Let's take a few minutes to break down each of these properties: Friction: This parameter controls how easily objects can slide on this surface. The lower the friction value, the more slippery the surface. The higher the friction value, the less slippery the surface. For example, ice would have a Friction surface value of .05, whereas a Friction surface value of 1 would cause the object not to slip as much once moved. Friction Combine Mode: This parameter controls how friction is computed for multiple materials. This property is important when it comes to interactions between multiple physical materials and how we want these calculations to be made. Our choices are Average, Minimum, Maximum, and Multiply. Override Friction Combine Mode: This parameter allows you to set the Friction Combine Mode parameter instead of using Friction Combine Mode, found in the Project Settings | Engine | Physics section. Restitution: This parameter controls how bouncy the surface is. The higher the value, the more bouncy the surface will become. Density: This parameter is used in conjunction with the shape of the object to calculate its mass properties. The higher the number, the heavier the object becomes (in grams per cubic centimeter). Raise Mass to Power: This parameter is used to adjust the way in which the mass increases as the object gets larger. 
This is applied to the mass that is calculated based on a solid object. In actuality, larger objects do not tend to be solid and become more like shells (such as a vehicle). The values are clamped to 1 or less. Destructible Damage Threshold Scale: This parameter is used to scale the damage threshold for the destructible objects that this physical material is applied to. Surface Type: This parameter is used to describe what type of real-world surface we are trying to imitate for our project. We can edit these values by navigating to the Project Settings | Physics | Physical Surface section. Tire Friction Scale: This parameter is used as the overall tire friction scalar for every type of tire and is multiplied by the parent values of the tire. Tire Friction Scales: This parameter is almost identical to the Tire Friction Scale parameter, but it looks for a Tire Type data asset to associate it to. Tire Types can be created through the use of Data Assets by right-clicking on the Content Browser | Miscellaneous | Data Asset | Tire Type section. Now that we have briefly discussed how to create Physical Materials and what their properties are, let's take a look at how to apply Physical Materials to our physics bodies. In FirstPersonExampleMap, we can select any of the physics body cubes throughout the level and in the Details panel under Collision, we will find the Phys Material Override parameter. It is here that we can apply our Physical Material to the cube and view how it reacts to our game world. For the sake of an example, let's return to the Physical Material, PM_Test, that we created earlier, change the Friction property from 0.7 to 0.2, and save it. With this change in place, let's select a physics body cube in FirstPersonExampleMap and apply the Physical Material, PM_Test, to the Phys Material Override parameter of the object. Now, if we play the game, we will see that the cube we applied the Physical Material, PM_Test, to will start to slide more once shot by the player than it did when it had a Friction value of 0.7. We can also apply this Physical Material to the floor mesh in FirstPersonExampleMap to see how it affects the other physics bodies in our game world. From here, feel free to play around with the Physical Material parameters to see how we can affect the physics bodies in our game world. Lastly, let's briefly discuss how to apply Physical Materials to normal Materials, Material Instances, and Skeletal Meshes. To apply Physical Material to a normal material, we first need to either create or open an already created material in Content Browser. To create a material, just right-click on an empty area of Content Browser and select Material from the drop-down menu.Double-click on Material to open Material Editor, and we will see the parameter for Phys Material under the Physical Material section of Details panel in the bottom-left of Material Editor: To apply Physical Material to Material Instance, we first need to create Material Instance by navigating to Content Browser and right-clicking on an empty area to bring up the context drop-down menu. Under the Materials & Textures section, we will find an option for Material Instance. Double-click on this option to open Material Instance Editor. Under the Details panel in the top-left corner of this editor, we will find an option to apply Phys Material under the General section: Lastly, to apply Physical Material to Skeletal Mesh, we need to either create or open an already created Physics Asset that contains Skeletal Mesh. 
In the First Person Shooter Project template, we can find TutorialTPP_PhysicsAsset under the Engine Content folder. If the Engine Content folder is not visible by default in Content Browser, we need to simply navigate to View Options in the bottom-right corner of Content Browser and check the Show Engine Content parameter. Under the Engine Content folder, we can navigate to the Tutorial folder and then to the TutorialAssets folder to find the TutorialTPP_PhysicsAsset asset. Double-click on this asset to open Physical Asset Tool. Now, we can click on any of the body parts found on Skeletal Mesh to highlight it. Once this is highlighted, we can view the option for Simple Collision Physical Material in the Details panel under the Physics section. Here, we can apply any of our Physical Materials to this body part. Summary In this article, we discussed what Physics Bodies are and how they function in Unreal Engine 4. Moreover, we looked at the properties that are involved in Physics Bodies and how these properties can affect the behavior of these bodies in the game. Additionally, we briefly discussed Physical Materials, how to create them, and what their properties entail when it comes to affecting its behavior in the game. We then reviewed how to apply Physical Materials to static meshes, materials, material instances, and skeletal meshes. Now that we have a stronger understanding of how Physics Bodies work in the context of angular and linear velocities, momentum, and the application of damping, we can move on and explore in detail how Physical Materials work and how they are implemented. Resources for Article: Further resources on this subject: Creating a Brick Breaking Game[article] Working with Away3D Cameras[article] Replacing 2D Sprites with 3D Models [article]

Collaboration Using the GitHub Workflow

Packt
30 Sep 2015
12 min read
In this article by Achilleas Pipinellis, the author of the book GitHub Essentials, has come up with a workflow based on the features it provides and the power of Git. It has named it the GitHub workflow (https://guides.github.com/introduction/flow). In this article, we will learn how to work with branches and pull requests, which is the most powerful feature of GitHub. (For more resources related to this topic, see here.) Learn about pull requests Pull request is the number one feature in GitHub that made it what it is today. It was introduced in early 2008 and is being used extensively among projects since then. While everything else can be pretty much disabled in a project's settings (such as issues and the wiki), pull requests are always enabled. Why pull requests are a powerful asset to work with Whether you are working on a personal project where you are the sole contributor or on a big open source one with contributors from all over the globe, working with pull requests will certainly make your life easier. I like to think of pull requests as chunks of commits, and the GitHub UI helps you visualize clearer what is about to be merged in the default branch or the branch of your choice. Pull requests are reviewable with an enhanced diff view. You can easily revert them with a simple button on GitHub and they can be tested before merging, if a CI service is enabled in the project. The connection between branches and pull requests There is a special connection between branches and pull requests. In this connection, GitHub will automatically show you a button to create a new pull request if you push a new branch in your repository. As we will explore in the following sections, this is tightly coupled to the GitHub workflow, and GitHub uses some special words to describe the from and to branches. As per GitHub's documentation: The base branch is where you think changes should be applied, the head branch is what you would like to be applied. So, in GitHub terms, head is your branch, and base the branch you would like to merge into. Create branches directly in a project – the shared repository model The shared repository model, as GitHub aptly calls it, is when you push new branches directly to the source repository. From there, you can create a new pull request by comparing between branches, as we will see in the following sections. Of course, in order to be able to push to a repository you either have to be the owner or a collaborator; in other words you must have write access. Create branches in your fork – the fork and pull model Forked repositories are related to their parent in a way that GitHub uses in order to compare their branches. The fork and pull model is usually used in projects when one does not have write access but is willing to contribute. After forking a repository, you push a branch to your fork and then create a pull request in the source repository asking its maintainer to merge the changes. This is common practice to contribute to open source projects hosted on GitHub. You will not have access to their repository, but being open source, you can fork the public repository and work on your own copy. How to create and submit a pull request There are quite a few ways to initiate the creation of a pull request, as we you will see in the following sections. The most common one is to push a branch to your repository and let GitHub's UI guide you. Let's explore this option first. 
Use the Compare & pull request button Whenever a new branch is pushed to a repository, GitHub shows a quick button to create a pull request. In reality, you are taken to the compare page, as we will explore in the next section, but some values are already filled out for you. Let's create, for example, a new branch named add_gitignore where we will add a .gitignore file with the following contents: git checkout -b add_gitignore echo -e '.bundlen.sass-cachen.vendorn_site' > .gitignore git add .gitignore git commit -m 'Add .gitignore' git push origin add_gitignore Next, head over your repository's main page and you will notice the Compare & pull request button, as shown in the following screenshot: From here on, if you hit this button you will be taken to the compare page. Note that I am pushing to my repository following the shared repository model, so here is how GitHub greets me: What would happen if I used the fork and pull repository model? For this purpose, I created another user to fork my repository and followed the same instructions to add a new branch named add_gitignore with the same changes. From here on, when you push the branch to your fork, the Compare & pull request button appears whether you are on your fork's page or on the parent repository. Here is how it looks if you visit your fork: The following screenshot will appear, if you visit the parent repository: In the last case (captured in red), you can see from which user this branch came from (axil43:add_gitignore). In either case, when using the fork and pull model, hitting the Compare & pull request button will take you to the compare page with slightly different options: Since you are comparing across forks, there are more details. In particular, you can see the base fork and branch as well as the head fork and branch that are the ones you are the owner of. GitHub considers the default branch set in your repository to be the one you want to merge into (base) when the Create Pull Request button appears. Before submitting it, let's explore the other two options that you can use to create a pull request. You can jump to the Submit a pull request section if you like. Use the compare function directly As mentioned in the previous section, the Compare & pull request button gets you on the compare page with some predefined values. The button appears right after you push a new branch and is there only for a few moments. In this section, we will see how to use the compare function directly in order to create a pull request. You can access the compare function by clicking on the green button next to the branch drop-down list on a repository's main page: This is pretty powerful as one can compare across forks or, in the same repository, pretty much everything—branches, tags, single commits and time ranges. The default page when you land on the compare page is like the following one; you start by comparing your default branch with GitHub, proposing a list of recently created branches to choose from and compare: In order to have something to compare to, the base branch must be older than what you are comparing to. From here, if I choose the add_gitignore branch, GitHub compares it to a master and shows the diff along with the message that it is able to be merged into the base branch without any conflicts. Finally, you can create the pull request: Notice that I am using the compare function while I'm at my own repository. 
When comparing in a repository that is a fork of another, the compare function slightly changes and automatically includes more options as we have seen in the previous section. As you may have noticed the Compare & pull request quick button is just a shortcut for using compare manually. If you want to have more fine-grained control on the repositories and the branches compared, use the compare feature directly. Use the GitHub web editor So far, we have seen the two most well-known types of initiating a pull request. There is a third way as well: using entirely the web editor that GitHub provides. This can prove useful for people who are not too familiar with Git and the terminal, and can also be used by more advanced Git users who want to propose a quick change. As always, according to the model you are using (shared repository or fork and pull), the process is a little different. Let's first explore the shared repository model flow using the web editor, which means editing files in a repository that you own. The shared repository model Firstly, make sure you are on the branch that you wish to branch off; then, head over a file you wish to change and press the edit button with the pencil icon: Make the change you want in that file, add a proper commit message, and choose Create a new branch giving the name of the branch you wish to create. By default, the branch name is username-patch-i, where username is your username and i is an increasing integer starting from 1. Consecutive edits on files will create branches such as username-patch-1, username-patch-2, and so on. In our example, I decided to give the branch a name of my own: When ready, press the Propose file change button. From this moment on, the branch is created with the file edits you made. Even if you close the next page, your changes will not be lost. Let's skip the pull request submission for the time being and see how the fork and pull model works. The fork and pull model In the fork and pull model, you fork a repository and submit a pull request from the changes you make in your fork. In the case of using the web editor, there is a caveat. In order to get GitHub automatically recognize that you wish to perform a pull request in the parent repository, you have to start the web editor from the parent repository and not your fork. In the following screenshot, you can see what happens in this case: GitHub informs you that a new branch will be created in your repository (fork) with the new changes in order to submit a pull request. Hitting the Propose file change button will take you to the form to submit the pull request: Contrary to the shared repository model, you can now see the base/head repositories and branches that are compared. Also, notice that the default name for the new branch is patch-i, where i is an increasing integer number. In our case, this was the first branch created that way, so it was named patch-1. If you would like to have the ability to name the branch the way you like, you should follow the shared repository model instructions as explained in preceding section. Following that route, edit the file in your fork where you have write access, add your own branch name, hit the Propose file change button for the branch to be created, and then abort when asked to create the pull request. You can then use the Compare & pull request quick button or use the compare function directly to propose a pull request to the parent repository. 
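As an aside, everything the compare page does by hand can also be driven from a script. GitHub exposes pull requests through its REST API, and the following Python sketch (assuming the requests library is installed; the token, repository, and base branch are placeholders you would replace) opens the same kind of pull request, using the user:branch head notation we saw above for forks:

import requests

TOKEN = 'your-personal-access-token'   # a GitHub token with repo scope (placeholder)
REPO = 'owner/repository'              # the parent (base) repository (placeholder)

payload = {
    'title': 'Add .gitignore',
    'head': 'axil43:add_gitignore',    # user:branch when the branch lives in a fork
    'base': 'master',                  # assumed default branch of the parent repository
    'body': 'Same change as before, proposed through the API instead of the web UI.',
}

response = requests.post(
    'https://api.github.com/repos/%s/pulls' % REPO,
    json=payload,
    headers={'Authorization': 'token %s' % TOKEN},
)
print(response.status_code, response.json().get('html_url'))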
One last thing to consider when using the web editor, is the limitation of editing one file at a time. If you wish to include more changes in the same branch that GitHub created for you when you first edited a file, you must first change to that branch and then make any subsequent changes. How to change the branch? Simply choose it from the drop-down menu as shown in the following screenshot: Submit a pull request So far, we have explored the various ways to initiate a pull request. In this section, we will finally continue to submit it as well. The pull request form is identical to the form when creating a new issue. If you have write access to the repository that you are making the pull request to, then you are able to set labels, milestone, and assignee. The title of the pull request is automatically filled by the last commit message that the branch has, or if there are multiple commits, it will just fill in the branch name. In either case, you can change it to your liking. In the following image, you can see the title is taken from the branch name after GitHub has stripped the special characters. In a sense, the title gets humanized: You can add an optional description and images if you deem proper. Whenever ready, hit the Create pull request button. In the following sections, we will explore how the peer review works. Peer review and inline comments The nice thing about pull requests is that you have a nice and clear view of what is about to get merged. You can see only the changes that matter, and the best part is that you can fire up a discussion concerning those changes. In the previous section, we submitted the pull request so that it can be reviewed and eventually get merged. Suppose that we are collaborating with a team and they chime in to discuss the changes. Let's first check the layout of a pull request. Summary In this article, we explored the GitHub workflow and the various ways to perform a pull request, as well as the many features GitHub provides to make that workflow even smoother. This is how the majority of open source projects work when there are dozens of contributors involved. Resources for Article: Further resources on this subject: Git Teaches – Great Tools Don't Make Great Craftsmen[article] Maintaining Your GitLab Instance[article] Configuration [article]

Designing and Building a vRealize Automation 6.2 Infrastructure

Packt
29 Sep 2015
16 min read
 In this article by J. Powell, the author of the book Mastering vRealize Automation 6.2, we put together a design and build vRealize Automation 6.2 from POC to production. With the knowledge gained from this article, you should feel comfortable installing and configuring vRA. In this article, we will be covering the following topics: Proving the technology Proof of Concept Proof of Technology Pilot Designing the vRealize Automation architecture (For more resources related to this topic, see here.) Proving the technology In this section, we are going to discuss how to approach a vRealize Automation 6.2 project. This is a necessary component in order to assure a successful project, and it is specifically necessary when we discuss vRA, due to the sheer amount of moving parts that comprise the software. We are going to focus on the end users, whether they are individuals or business units, such as your company's software development department. These are the people that will be using vRA to provide the speed and agility necessary to deliver results that drive the business and make money. If we take this approach and treat our co-workers as customers, we can give them what they need to perform their jobs as opposed to what we perceive they need from an IT perspective. Designing our vRA deployment around the user and business requirements, first gives us a better plan to implement the backend infrastructure as well as the service offerings within the vRA web portal. This allows us to build a business case for vRealize Automation and will help determine which of the three editions will make sense to meet these needs. Once we have our business case created, validated, and approved, we can start testing vRealize Automation. There are three common phases to a testing cycle: Proof of Concept Proof of Technology Pilot implementation We will cover these phases in the following sections and explore whether you need them for your vRealize Automation 6.2 deployment. Proof of Concept A POC is typically an abbreviated version of what you hope to achieve during production. It is normally spun up in a lab, using old hardware, with a limited number of test users. Once your POC is set up, one of two things happen. First, nothing happens or it gets decommissioned. After all, it's just the IT department getting their hands dirty with new technology. This also happens when there is not a clear business driver, which provides a reason to have the technology in a production environment. The second thing that could happen is that the technology is proven, and it moves into a pilot phase. Of course, this is completely up to you. Perhaps, a demonstration of the technology will be enough, or testing some limited outcomes in VMware's HOL for vRealize Automation 6.2 will do the trick. Due to the number of components and features within vRA, it is strongly recommended that you create a POC, documenting the process along the way. This will give you a strong base if you take the project from POC to production. Proof of Technology The object of a POT project is to determine whether the proposed solution or technology will integrate in your existing IT landscape and add value. This is the stage where it is important to document any technical issues you encounter in your individual environment. There is no need to involve pilot users in this process as it is specifically to validate the technical merits of the software. 
Pilot implementation A pilot is a small scale and targeted roll out of the technology in a production environment. Its scope is limited, typically by a number of users and systems. This is to allow testing, so as to make sure the technology works as expected and designed. It also limits the business risk. A pilot deployment in terms of vRA is also a way to gain feedback from the users who will ultimately use it on a regular basis. vRealize Automation 6.2 is a product that empowers the end users to provision everything as a service. How the users feel about the layout of the web portal, user experience, and automated feedback from the system directly impacts how well the product will work in a full-blown production scenario. This also gives you time to make any necessary modifications to the vRA environment before providing access to additional users. When designing the pilot infrastructure, you should use the same hardware that is used during production. This includes ESXi hosts, storage, fiber or Internet Small Computer System Interface (iSCSI) connectivity, and vCenter versions. This will take into account any variances between platforms and configurations that could affect performance. Even at this stage, design, attention to detail, and following VMware best practices is key. Often, pilot programs get rolled straight into production. Adhering to these concepts will put you on the right path to a successful deployment. To get a better understanding, let's look at some of the design elements that should be considered: Size of the deployment: A small deployment will support 10,000 managed machines, 500 catalog items, and 10 concurrent deployments. Concurrent provisioning: Only two concurrent provisions per endpoint are allowed by default. You may want to increase this limit to suit your requirements. Hardware sizing: This refers to the number of servers, the CPU, and the memory. Scale: This refers to whether there will be multiple Identity and vRealize Automation vApps. Storage: This refers to pools of storage from Storage Area Network (SAN) or Network Attached Storage (NAS) and tiers of storage for performance requirements. Network: This refers to LANs, load balancing, internal versus external access to web portals, and IP pools for use with the infrastructure provisioned through vRA. Firewall: This refers to knowing what ports need to be opened between the various components that make up vRA, as well as the other endpoint that may fall under vRA's purview. Portal layout: This refers to the items you want to provide to the end user and the manner in which you categorize them for future growth. IT Business Management Suite Standard Edition: If you are going to implement this product, it can scale up to 20,000 VMs across four vCenter servers. Certificates: Appliances can be self-signed, but it is recommended to use an internal Certificate Authority for vRA components and an externally signed certificate to use on the vRA web portal if it is going to be exposed to the public Internet. VMware has published a Technical White Paper that covers all the details and considerations when deploying vRA. You can download the paper by visiting http://www.vmware.com/files/pdf/products/vCloud/VMware-vCloud-Automation-Center-61-Reference-Architecture.pdf. VMware provides the following general recommendation when deploying vRealize Automation: keep all vRA components in the same time zone with their clocks synced. 
If you plan on using VMware IT Business Management Suite Standard Edition, deploy it in the same LAN as vCenter. You can deploy Worker DEMs and proxy agents over the WAN, but all other components should not go over the WAN, as to prevent performance degradation. Here is a diagram of the pilot process: Designing the vRealize Automation architecture We have discussed the components that comprise vRealize Automation as well as some key design elements. Now, let's see some of the scenarios at a high level. Keep in mind that vRA is designed to manage tens of thousands of VMs in an infrastructure. Depending on your environment, you may never exceed the limitations of what VMware considers to be a small deployment. The following diagram displays the minimum footprint needed for small deployment architecture: A medium deployment can support up to 30,000 managed machines, 1,000 catalog items, and 50 concurrent deployments. The following diagram shows you the minimum required footprint for a medium deployment: Large deployments support 50,000 managed machines, 2,500 catalog items, and 100 concurrent deployments. The following diagram shows you the minimum required footprint for a large deployment: Design considerations Now that we understand the design elements for a small, medium, and large infrastructure, let's explore the components of vRA and build an example design, based on the small infrastructure requirements from VMware. Since there are so many options and components, we have broken them down into easily digestible components. Naming conventions It is important to give some thought to naming conventions for different aspects of the vRA web portal. Your company has probably set a naming convention for servers and environments, and we will have to make sure items provisioned from vRA adhere to those standards. It is important to name the different components of vRealize Automation in a method that makes sense for what your end goal may be regarding what vRA will do. This is necessary because it is not easy (and in some cases not possible) to rename the elements of the vRA web portal once you have implemented them. Compute resources Compute resources in terms of vRA refers to an object that represents a host, host cluster, virtual data center, or a public Cloud region, such as Amazon, where machines and applications can be provisioned. For example, compute resources can refer to vCenter, Hyper-V, or Amazon AWS. This list grows with each subsequent release of vRA. Business and Fabric groups A Business group in the vRA web portal is a set of services and resources assigned to a set of users. Quite simply, it is a way to align a business department or unit with the resources it needs. For example, you may have a Business group named Software Developers, and you would want them to be able to provision SQL 2012 and 2014 on Windows 2012 R2 servers. Fabric groups enable IT administrators to provide resources from your infrastructure. You can add users or groups to the Fabric group in order to manage the infrastructure resources you have assigned. For example, if you have a software development cluster in vCenter, you could create a Fabric group that contains the users responsible for the management of this cluster to oversee the cluster resources. Endpoints and credentials Endpoints can represent anything from vCenter, to storage, physical servers, and public Cloud offerings, such as Amazon AWS. 
The platform address is defined with the endpoint (in terms of being accessed through a web browser) along with the credentials needed to manage them. Reservations Reservations refer to how we provide a portion of our total infrastructure that is to be used for consumption by end users. It is a key design element in the vRealize Automation 6.2 infrastructure design. Each reservation created will need to define the disk, memory, networking, and priority. The lower their number, the higher will be the priority. This is to resolve conflicts in case there are multiple matching reservations. If the priorities of the multiple reservations are equal, vRA will choose a reservation in a round-robin style order: In the preceding diagram, on the far right-hand side, we can see that we have Shared Infrastructure composed of Private Physical and Private Virtual space, as well as a portion of a Public Cloud offering. By creating different reservations, we can assure that there is enough infrastructure for the business, while providing a dedicated portion of the total infrastructure to our end users. Reservation policies A reservation policy is a set of reservations that you can select from a blueprint to restrict provisioning only to specific reservations. Reservation policies are then attached to a reservation. An example of reservations policies can be taken when using them to create different storage policies. You can create a separate Bronze, Silver, and Gold policy to reflect the type of disk available on our SAN (such as SATA, SAS, and SDD). Network profiles By default, vRA will assign an IP address from a DHCP server to all the machines it provisions. However, most production environments do not use DHCP for their servers. A network profile will need to be created to allocate and assign static IPs to these servers. Network profile options consist of external, private, NAT (short for Network Address Translation), and routed. For the scope of our examples, we will focus on the external option. Compute resources Compute resources are tied in with Fabric groups, endpoints, storage reservation policies, and cost profiles. You must have these elements created before you can configure compute resources, although some components, such as storage and cost profiles, are optional. An example of a compute resource is a vCenter cluster. It is created automatically when you add an endpoint to the vRA web portal. Blueprints Blueprints are instruction sets to build virtual, physical, and Cloud-based machines, as well vApps. Blueprints define a machine or a set of application properties, the way it is provisioned, and its policy and management settings. For an end user, a blueprint is listed as an item in the Service Catalog tab. The user can request the item, and vRA would use the blueprint to provision the user's request. Blueprints also provide a way to prompt the user making the request for additional items, such as more compute resources, application or machine names, as well as network information. Of course, this can be automated as well and will probably be the preferred method in your environment. Blueprints also contain workflow logic. vRealize Automation contains built-in workflows for cloning snapshots, Kickstart, ISO, SCCM, and WIM deployments. You can define a minimum and maximum for CPU, memory, and storage. This will give end users the option to customize their machines to match their individual needs. 
It is a best practice to define the minimum for servers with very low resources, such as 1 vCPU and 512 MB for memory. It is easy to hot add these resources if the end user needs more compute after an initial request. However, if you set the minimum resources too high in the blueprint, you cannot lower the value. You will have to create a new blueprint. You can also define customized properties in the blueprints. For example, if you want to provide a VM with a defined MAC address or without a virtual CD-ROM attached, you can do so. VMware has published a detailed guide of the Custom Properties and their values. You can find it at http://pubs.vmware.com/vra-62/topic/com.vmware.ICbase/PDF/vrealize-automation-62-custom-properties.pdf. Custom Properties are case sensitive. It is recommended to test Custom Properties individually until you are comfortable using them. For example, a blueprint referencing an ISO workflow would fail if you have a Custom Property to remove the CD-ROM. Users and groups Users and groups are defined in the Administration section of the vRA web portal. This is where we would assign vRA specific roles to groups. It is worth mentioning when you login to the vRA web portal and click on users, it is blank. This is because of the sheer number of users that could be potentially allowed to access the portal and would slow the load time. In our examples, we will focus on users and groups from our Identity Appliance that ties in to Active Directory. Catalog management Catalog management consists of services, catalog items, actions, and entitlement. We will discuss them in more detail in the following sections. Services Services are another key design element and are defined by the vRA administrators to help group subsets of your environment. For example, you may have services defined for applications, where you would list items, such as SQL and Oracle databases. You could also create a service called OperatingSystems where you would group catalog items, such as Linux and Windows. You can make these services active or inactive, and also define maintenance windows when catalog items under this category would be unavailable for provisioning. Catalog items Catalog items are essentially links back to blueprints. These items are tied back to a service that you previously defined and helped shape the Service Catalog tab that the end user will use to provision machines and applications. Also, you will entitle users to use the catalog item. Entitlements As mentioned previously, entitlements are how we link business users and groups to services, catalog items, and actions. Actions Actions are a list of operations that gives a user the ability to perform certain tasks with services and catalog items. There are over 30 out of the box action items that come with vRA. This includes creating and destroying VMs, changing the lease time, as well as adding additional compute resources. You also have the option of creating custom actions as well. Approval policies Approval policies are the sets of rules that govern the use of catalog items. They can be used in the pre or post configuration life cycle of an item. Let's say, as an example, we have a Red Hat Linux VM that a user can provision. We have set the minimum vCPU to 1, but have defined a maximum of 4. We would want to notify the user's manager and the IT team when a request to provision the VM exceeds the minimum vCPU we have defined. 
We could create an approval policy to perform a pre-check to see if the user is requesting more than one vCPU. If the threshold is exceeded, an e-mail will be sent out to approve the additional vCPU resources. Until the notification is approved, the VM will not be provisioned. Advanced services Advanced services is an area of the vRA web portal where we can tie in customized workflows from vRealize Orchestrator. For example, we may need to check for a file in the VM's operating system once it has been deployed. We need to do this to make sure that an application has been deployed successfully or a baseline compliance is in order. We can present vRealize Orchestrator workflows for end users to leverage in almost the same manner as we do IaaS. Summary In this article, we covered the design and build principles of vRealize Automation 6.2. We discussed how to prove the technology by performing due diligence checks with the business users and creating a case to implement a POC. We detailed considerations when rolling out vRA in a pilot program and showed you how to gauge its success. Lastly, we detailed the components that comprise the design and build of vRealize Automation, while introducing additional elements. Resources for Article: Further resources on this subject: vROps – Introduction and Architecture[article] An Overview of Horizon View Architecture and its Components[article] Master Virtual Desktop Image Creation [article]

Data Around Us

Packt
29 Sep 2015
25 min read
In this article by Gergely Daróczi, author of the book Mastering Data Analysis with R, we will discuss spatial data, also known as geospatial data, which identifies geographic locations, such as natural or constructed features around us. Although all observations have some spatial content, such as the location of the observation, this is beyond the reach of most data analysis tools due to the complex nature of spatial information; alternatively, the spatiality might not be that interesting (at first sight) for the given research topic. On the other hand, analyzing spatial data can reveal some very important underlying structures of the data, and it is well worth spending time visualizing the differences and similarities between close or far data points. In this article, we are going to help with this and will use a variety of R packages to: Retrieve geospatial information from the Internet Visualize points and polygons on a map (For more resources related to this topic, see here.) Geocoding We will use the hflights dataset to demonstrate how one can deal with data bearing spatial information. To this end, let's aggregate our dataset, but instead of generating daily data, let's view the aggregated characteristics of the airports. For the sake of performance, we will use the data.table package: > library(hflights) > library(data.table) > dt <- data.table(hflights)[, list( + N = .N, + Cancelled = sum(Cancelled), + Distance = Distance[1], + TimeVar = sd(ActualElapsedTime, na.rm = TRUE), + ArrDelay = mean(ArrDelay, na.rm = TRUE)) , by = Dest] So we have loaded and then immediately transformed the hflights dataset into a data.table object. At the same time, we aggregated by the destination of the flights to compute: The number of rows The number of cancelled flights The distance The standard deviation of the elapsed time of the flights The arithmetic mean of the delays The resulting R object looks like this: > str(dt) Classes 'data.table' and 'data.frame': 116 obs. of 6 variables: $ Dest : chr "DFW" "MIA" "SEA" "JFK" ... $ N : int 6653 2463 2615 695 402 6823 4893 5022 6064 ... $ Cancelled: int 153 24 4 18 1 40 40 27 33 28 ... $ Distance : int 224 964 1874 1428 3904 305 191 140 1379 862 ... $ TimeVar : num 10 12.4 16.5 19.2 15.3 ... $ ArrDelay : num 5.961 0.649 9.652 9.859 10.927 ... - attr(*, ".internal.selfref")=<externalptr> So we have 116 observations from all around the world and five variables describing those. Although this seems to be a spatial dataset, we have no geospatial identifiers that a computer can understand per se, so let's fetch the geocodes of these airports from the Google Maps API via the ggmap package. First, let's see how it works when we are looking for the geo-coordinates of Houston: > library(ggmap) > (h <- geocode('Houston, TX')) Information from URL : http://maps.googleapis.com/maps/api/geocode/json?address=Houston,+TX&sensor=false lon lat 1 -95.3698 29.76043 So the geocode function returns the matched latitude and longitude of the string we sent to Google. Now let's do the very same thing for all flight destinations: > dt[, c('lon', 'lat') := geocode(Dest)] Well, this took some time, as we had to make 116 separate queries to the Google Maps API. Please note that Google limits you to 2,500 queries a day without authentication, so do not run this on a large dataset. There is a helper function in the package, called geocodeQueryCheck, which can be used to check the remaining number of free queries for the day. 
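For example, a quick way to see how many free geocoding requests you have left for the day is to call this helper without any arguments; the number reported below is only an illustration and will depend on how many queries you have already issued:
> geocodeQueryCheck()
2497 geocoding queries remaining.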
Some of the methods and functions we plan to use in some later sections of this article do not support data.table, so let's fall back to the traditional data.frame format and also print the structure of the current object: > str(setDF(dt)) 'data.frame': 116 obs. of 8 variables: $ Dest : chr "DFW" "MIA" "SEA" "JFK" ... $ N : int 6653 2463 2615 695 402 6823 4893 5022 6064 ... $ Cancelled: int 153 24 4 18 1 40 40 27 33 28 ... $ Distance : int 224 964 1874 1428 3904 305 191 140 1379 862 ... $ TimeVar : num 10 12.4 16.5 19.2 15.3 ... $ ArrDelay : num 5.961 0.649 9.652 9.859 10.927 ... $ lon : num -97 136.5 -122.3 -73.8 -157.9 ... $ lat : num 32.9 34.7 47.5 40.6 21.3 ... This was pretty quick and easy, wasn't it? Now that we have the longitude and latitude values of all the airports, we can try to show these points on a map. Visualizing point data in space For the first time, let's keep it simple and load some package-bundled polygons as the base map. To this end, we will use the maps package. After loading it, we use the map function to render the polygons of the United States of America, add a title, and then some points for the airports and also for Houston with a slightly modified symbol: > library(maps) > map('state') > title('Flight destinations from Houston,TX') > points(h$lon, h$lat, col = 'blue', pch = 13) > points(dt$lon, dt$lat, col = 'red', pch = 19) And showing the airport names on the plot is pretty easy as well: we can use the well-known functions from the base graphics package. Let's pass the three character names as labels to the text function with a slightly increased y value to shift the preceding text the previously rendered data points: > text(dt$lon, dt$lat + 1, labels = dt$Dest, cex = 0.7) Now we can also specify the color of the points to be rendered. This feature can be used to plot our first meaningful map to highlight the number of flights in 2011 to different parts of the USA: > map('state') > title('Frequent flight destinations from Houston,TX') > points(h$lon, h$lat, col = 'blue', pch = 13) > points(dt$lon, dt$lat, pch = 19, + col = rgb(1, 0, 0, dt$N / max(dt$N))) > legend('bottomright', legend = round(quantile(dt$N)), pch = 19, + col = rgb(1, 0, 0, quantile(dt$N) / max(dt$N)), box.col = NA) So the intensity of red shows the number of flights to the given points (airports); the values range from 1 to almost 10,000. Probably it would be more meaningful to compute these values on a state level, as there are many airports, very close to each other, that might be better aggregated at a higher administrative area level. To this end, we load the polygon of the states, match the points of interest (airports) with the overlaying polygons (states), and render the polygons as a thematic map instead of points, like we did on the previous pages. Finding polygon overlays of point data We already have all the data we need to identify the parent state of each airport. The dt dataset includes the geo-coordinates of the locations, and we managed to render the states as polygons with the map function. Actually, this latter function can return the underlying dataset without rendering a plot: > str(map_data <- map('state', plot = FALSE, fill = TRUE)) List of 4 $ x : num [1:15599] -87.5 -87.5 -87.5 -87.5 -87.6 ... $ y : num [1:15599] 30.4 30.4 30.4 30.3 30.3 ... $ range: num [1:4] -124.7 -67 25.1 49.4 $ names: chr [1:63] "alabama" "arizona" "arkansas" "california" ... 
- attr(*, "class")= chr "map" So we have around 16,000 points describing the boundaries of the US states, but this map data is more detailed than we actually need (see for example the name of the polygons starting with Washington): > grep('^washington', map_data$names, value = TRUE) [1] "washington:san juan island" "washington:lopez island" [3] "washington:orcas island" "washington:whidbey island" [5] "washington:main" In short, the non-connecting parts of a state are defined as separate polygons. To this end, let's save a list of the state names without the string after the colon: > states <- sapply(strsplit(map_data$names, ':'), '[[', 1) We will use this list as the basis of aggregation from now on. Let's transform this map dataset into another class of object, so that we can use the powerful features of the sp package. We will use the maptools package to do this transformation: > library(maptools) > us <- map2SpatialPolygons(map_data, IDs = states, + proj4string = CRS("+proj=longlat +datum=WGS84")) An alternative way of getting the state polygons might be to directly load those instead of transforming from other data formats as described earlier. To this end, you may find the raster package especially useful to download free map shapefiles from gadm.org via the getData function. Although these maps are way too detailed for such a simple task, you can always simplify those—for example, with the gSimplify function of the rgeos package. So we have just created an object called us, which includes the polygons of map_data for each state with the given projection. This object can be shown on a map just like we did previously, although you should use the general plot method instead of the map function: > plot(us) Besides this, however, the sp package supports so many powerful features! For example, it's very easy to identify the overlay polygons of the provided points via the over function. As this function name conflicts with the one found in the grDevices package, it's better to refer to the function along with the namespace using a double colon: > library(sp) > dtp <- SpatialPointsDataFrame(dt[, c('lon', 'lat')], dt, + proj4string = CRS("+proj=longlat +datum=WGS84")) > str(sp::over(us, dtp)) 'data.frame': 49 obs. of 8 variables: $ Dest : chr "BHM" "PHX" "XNA" "LAX" ... $ N : int 2736 5096 1172 6064 164 NA NA 2699 3085 7886 ... $ Cancelled: int 39 29 34 33 1 NA NA 35 11 141 ... $ Distance : int 562 1009 438 1379 926 NA NA 1208 787 689 ... $ TimeVar : num 10.1 13.61 9.47 15.16 13.82 ... $ ArrDelay : num 8.696 2.166 6.896 8.321 -0.451 ... $ lon : num -86.8 -112.1 -94.3 -118.4 -107.9 ... $ lat : num 33.6 33.4 36.3 33.9 38.5 ... What happened here? First, we passed the coordinates and the whole dataset to the SpatialPointsDataFrame function, which stored our data as spatial points with the given longitude and latitude values. Next we called the over function to left-join the values of dtp to the US states. An alternative way of identifying the state of a given airport is to ask for more detailed information from the Google Maps API. By changing the default output argument of the geocode function, we can get all address components for the matched spatial object, which of course includes the state as well. Look for example at the following code snippet: geocode('LAX','all')$results[[1]]$address_components Based on this, you might want to get a similar output for all airports and filter the list for the short name of the state. 
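Just to sketch that idea in base R before moving on (treating the address_components structure, the administrative_area_level_1 type, and the short_name field below as assumptions about the Google Maps API response, rather than guaranteed fields):
> res <- geocode('LAX', output = 'all')$results[[1]]$address_components
> state <- Filter(function(x)
+ 'administrative_area_level_1' %in% unlist(x$types), res)
> state[[1]]$short_name # expected to return "CA" for Los Angeles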
The rlist package would be extremely useful in this task, as it offers some very convenient ways of manipulating lists in R. The only problem here is that we matched only one airport to the states, which is definitely not okay. See for example the fourth column in the earlier output: it shows LAX as the matched airport for California (returned by states[4]), although there are many others there as well. To overcome this issue, we can do at least two things. First, we can use the returnList argument of the over function to return all matched rows of dtp, and we will then post-process that data: > str(sapply(sp::over(us, dtp, returnList = TRUE), + function(x) sum(x$Cancelled))) Named int [1:49] 51 44 34 97 23 0 0 35 66 149 ... - attr(*, "names")= chr [1:49] "alabama" "arizona" "arkansas" ... So we created and called an anonymous function that will sum up the Cancelled values of the data.frame in each element of the list returned by over. Another, probably cleaner, approach is to redefine dtp to only include the related values and pass a function to over to do the summary: > dtp <- SpatialPointsDataFrame(dt[, c('lon', 'lat')], + dt[, 'Cancelled', drop = FALSE], + proj4string = CRS("+proj=longlat +datum=WGS84")) > str(cancels <- sp::over(us, dtp, fn = sum)) 'data.frame': 49 obs. of 1 variable: $ Cancelled: int 51 44 34 97 23 NA NA 35 66 149 ... Either way, we have a vector to merge back to the US state names: > val <- cancels$Cancelled[match(states, row.names(cancels))] And to update all missing values to zero (as the number of cancelled flights in a state without any airport is not missing data, but exactly zero for sure): > val[is.na(val)] <- 0 Plotting thematic maps Now we have everything to create our first thematic map. Let's pass the val vector to the previously used map function (or plot it using the us object), specify a plot title, add a blue point for Houston, and then create a legend, which shows the quantiles of the overall number of cancelled flights as a reference: > map("state", col = rgb(1, 0, 0, sqrt(val/max(val))), fill = TRUE) > title('Number of cancelled flights from Houston to US states') > points(h$lon, h$lat, col = 'blue', pch = 13) > legend('bottomright', legend = round(quantile(val)), + fill = rgb(1, 0, 0, sqrt(quantile(val)/max(val))), box.col = NA) Please note that, instead of a linear scale, we decided to compute the square root of the relative values to define the intensity of the fill color, so that we can visually highlight the differences between the states. This was necessary as most flight cancellations happened in Texas (748), and there were no more than 150 cancelled flights in any other state (with the average being around 45). You can also easily load ESRI shape files or other geospatial vector data formats into R as points or polygons with a bunch of packages already discussed and a few others as well, such as the maptools, rgdal, dismo, raster, or shapefile packages. Another, probably easier, way to generate country-level thematic maps, especially choropleth maps, is to load the rworldmap package made by Andy South, and rely on the convenient mapCountryData function. Rendering polygons around points Besides thematic maps, another really useful way of presenting spatial data is to draw artificial polygons around the data points based on the data values. This is especially useful if there is no available polygon shape file to be used to generate a thematic map. 
A level plot, contour plot, or isopleths, might be an already familiar design from tourist maps, where the altitude of the mountains is represented by a line drawn around the center of the hill at the very same levels. This is a very smart approach having maps present the height of hills—projecting this third dimension onto a 2-dimensional image. Now let's try to replicate this design by considering our data points as mountains on the otherwise flat map. We already know the heights and exact geo-coordinates of the geometric centers of these hills (airports); the only challenge here is to draw the actual shape of these objects. In other words: Are these mountains connected? How steep are the hillsides? Should we consider any underlying spatial effects in the data? In other words, can we actually render these as mountains with a 3D shape instead of plotting independent points in space? If the answer for the last question is positive, then we can start trying to answer the other questions by fine-tuning the plot parameters. For now, let's simply suppose that there is a spatial effect in the underlying data, and it makes sense to visualize the data in such a way. Later, we will have the chance to disprove or support this statement either by analyzing the generated plots, or by building some geo-spatial models—some of these will be discussed later, in the Spatial Statistics section. Contour lines First, let's expand our data points into a matrix with the fields package. The size of the resulting R object is defined arbitrarily but, for the given number of rows and columns, which should be a lot higher to generate higher resolution images, 256 is a good start: > library(fields) > out <- as.image(dt$ArrDelay, x = dt[, c('lon', 'lat')], + nrow = 256, ncol = 256) The as.image function generates a special R object, which in short includes a 3‑dimensional matrix-like data structure, where the x and y axes represent the longitude and latitude ranges of the original data respectively. To simplify this even more, we have a matrix with 256 rows and 256 columns, where each of those represents a discrete value evenly distributed between the lowest and highest values of the latitude and longitude. And on the z axis, we have the ArrDelay values—which are in most cases of course missing: > table(is.na(out$z)) FALSE TRUE 112 65424 What does this matrix look like? It's better to see what we have at the moment: > image(out) Well, this does not seem to be useful at all. What is shown there? We rendered the x and y dimensions of the matrix with z colors here, and most tiles of this map are empty due to the high amount of missing values in z. Also, it's pretty straightforward now that the dataset included many airports outside the USA as well. How does it look if we focus only on the USA? > image(out, xlim = base::range(map_data$x, na.rm = TRUE), + ylim = base::range(map_data$y, na.rm = TRUE)) An alternative and more elegant approach to rendering only the US part of the matrix would be to drop the non-US airports from the database before actually creating the out R object. Although we will continue with this example for didactic purposes, with real data make sure you concentrate on the target subset of your data instead of trying to smooth and model unrelated data points as well. A lot better! So we have our data points as a tile, now let's try to identify the slope of these mountain peaks, to be able to render them on a future map. 
This can be done by smoothing the matrix: > look <- image.smooth(out, theta = .5) > table(is.na(look$z)) FALSE TRUE 14470 51066 As can be seen in the preceding table, this algorithm successfully eliminated many missing values from the matrix. The image.smooth function basically reused our initial data point values in the neighboring tiles, and computed some kind of average for the conflicting overrides. This smoothing algorithm results in the following arbitrary map, which does not respect any political or geographical boundaries: > image(look) It would be really nice to plot these artificial polygons along with the administrative boundaries, so let's clear out all cells that do not belong to the territory of the USA. We will use the point.in.polygon function from the sp package to do so: > usa_data <- map('usa', plot = FALSE, region = 'main') > p <- expand.grid(look$x, look$y) > library(sp) > n <- which(point.in.polygon(p$Var1, p$Var2, + usa_data$x, usa_data$y) == 0) > look$z[n] <- NA In a nutshell, we have loaded the main polygon of the USA without any sub-administrative areas, and verified our cells in the look object, if those are overlapping the polygon. Then we simply reset the value of the cell, if not. The next step is to render the boundaries of the USA, plot our smoothed contour plot, then add some eye-candy in the means of the US states and, the main point of interest, the airport: > map("usa") > image(look, add = TRUE) > map("state", lwd = 3, add = TRUE) > title('Arrival delays of flights from Houston') > points(dt$lon, dt$lat, pch = 19, cex = .5) > points(h$lon, h$lat, pch = 13) Now this is pretty neat, isn't it? Voronoi diagrams An alternative way of visualizing point data with polygons is to generate Voronoi cells between them. In short, the Voronoi map partitions the space into regions around the data points by aligning all parts of the map to one of the regions to minimize the distance from the central data points. This is extremely easy to interpret, and also to implement in R. The deldir package provides a function with the very same name for Delaunay triangulation: > library(deldir) > map("usa") > plot(deldir(dt$lon, dt$lat), wlines = "tess", lwd = 2, + pch = 19, col = c('red', 'darkgray'), add = TRUE) Here, we represented the airports with red dots, as we did before, but also added the Dirichlet tessellation (Voronoi cells) rendered as dark-gray dashed lines. For more options on how to fine-tune the results, see the plot.deldir method. In the next section, let's see how to improve this plot by adding a more detailed background map to it. Satellite maps There are many R packages on CRAN that can fetch data from Google Maps, Stamen, Bing, or OpenStreetMap—even some of the packages we previously used in this article, like the ggmap package, can do this. Similarly, the dismo package also comes with both geo-coding and Google Maps API integration capabilities, and there are some other packages focused on that latter, such as the RgoogleMaps package. Now we will use the OpenStreetMap package, mainly because it supports not only the awesome OpenStreetMap database back-end, but also a bunch of other formats as well. 
For example, we can render really nice terrain maps via Stamen: > library(OpenStreetMap) > map <- openmap(c(max(map_data$y, na.rm = TRUE), + min(map_data$x, na.rm = TRUE)), + c(min(map_data$y, na.rm = TRUE), + max(map_data$x, na.rm = TRUE)), + type = 'stamen-terrain') So we defined the left upper and right lower corners of the map we need, and also specified the map style to be a satellite map. As the data by default arrives from the remote servers with the Mercator projections, we first have to transform that to WGS84 (we used this previously), so that we can render the points and polygons on the top of the fetched map: > map <- openproj(map, + projection = '+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs') And Showtime at last: > plot(map) > plot(deldir(dt$lon, dt$lat), wlines = "tess", lwd = 2, + col = c('red', 'black'), pch = 19, cex = 0.5, add = TRUE) This seems to be a lot better compared to the outline map we created previously. Now you can try some other map styles as well, such as mapquest-aerial, or some of the really nice-looking cloudMade designs. Interactive maps Besides being able to use Web-services to download map tiles for the background of the maps created in R, we can also rely on some of those to generate truly interactive maps. One of the best known related services is the Google Visualization API, which provides a platform for hosting visualizations made by the community; you can also use it to share maps you've created with others. Querying Google Maps In R, you can access this API via the googleVis package written and maintained by Markus Gesmann and Diego de Castillo. Most functions of the package generate HTML and JavaScript code that we can directly view in a Web browser as an SVG object with the base plot function; alternatively, we can integrate them in a Web page, for example via the IFRAME HTML tag. The gvisIntensityMap function takes a data.frame with country ISO or USA state codes and the actual data to create a simple intensity map. We will use the cancels dataset we created in the Finding Polygon Overlays of Point Data section but, before that, we have to do some data transformations. Let's add the state name as a new column to the data.frame, and replace the missing values with zero: > cancels$state <- rownames(cancels) > cancels$Cancelled[is.na(cancels$Cancelled)] <- 0 Now it's time to load the package and pass the data along with a few extra parameters, signifying that we want to generate a state-level US map: > library(googleVis) > plot(gvisGeoChart(cancels, 'state', 'Cancelled', + options = list( + region = 'US', + displayMode = 'regions', + resolution = 'provinces'))) The package also offers opportunities to query the Google Map API via the gvisMap function. We will use this feature to render the airports from the dt dataset as points on a Google Map with an auto-generated tooltip of the variables. But first, as usual, we have to do some data transformations again. The location argument of the gvisMap function takes the latitude and longitude values separated by a colon: > dt$LatLong <- paste(dt$lat, dt$lon, sep = ':') We also have to generate the tooltips as a new variable, which can be done easily with an apply call. 
We will concatenate the variable names and actual values separated by a HTML line break: > dt$tip <- apply(dt, 1, function(x) + paste(names(dt), x, collapse = '<br/ >')) And now we just pass these arguments to the function for an instant interactive map: > plot(gvisMap(dt, 'LatLong', tipvar = 'tip')) Another nifty feature of the googleVis package is that you can easily merge the different visualizations into one by using the gvisMerge function. The use of this function is quite simple: specify any two gvis objects you want to merge, and also whether they are to be placed horizontally or vertically. JavaScript mapping libraries The great success of the trending JavaScript data visualization libraries is only partly due to their great design. I suspect other factors also contribute to the general spread of such tools: it's very easy to create and deploy full-blown data models, especially since the release and on-going development of Mike Bostock's D3.js. Although there are also many really useful and smart R packages to interact directly with D3 and topojson (see for example my R user activity compilation at http://bit.ly/countRies). Now we will only focus on how to use Leaflet— probably the most used JavaScript library for interactive maps. What I truly love in R is that there are many packages wrapping other tools, so that R users can rely on only one programming language, and we can easily use C++ programs and Hadoop MapReduce jobs or build JavaScript-powered dashboards without actually knowing anything about the underlying technology. This is especially true when it comes to Leaflet! There are at least two very nice packages that can generate a Leaflet plot from the R console, without a single line of JavaScript. The Leaflet reference class of the rCharts package was developed by Ramnath Vaidyanathan, and includes some methods to create a new object, set the viewport and zoom level, add some points or polygons to the map, and then render or print the generated HTML and JavaScript code to the console or to a file. Unfortunately, this package is not on CRAN yet, so you have to install it from GitHub: > devtools::install_github('ramnathv/rCharts') As a quick example, let's generate a Leaflet map of the airports with some tooltips, like we did with the Google Maps API in the previous section. As the setView method expects numeric geo-coordinates as the center of the map, we will use Kansas City's airport as a reference: > library(rCharts) > map <- Leaflet$new() > map$setView(as.numeric(dt[which(dt$Dest == 'MCI'), + c('lat', 'lon')]), zoom = 4) > for (i in 1:nrow(dt)) + map$marker(c(dt$lat[i], dt$lon[i]), bindPopup = dt$tip[i]) > map$show() Similarly, RStudio's leaflet package and the more general htmlwidgets package also provide some easy ways to generate JavaScript-powered data visualizations. Let's load the library and define the steps one by one using the pipe operator from the magrittr package, which is pretty standard for all packages created or inspired by RStudio or Hadley Wickham: > library(leaflet) > leaflet(us) %>% + addProviderTiles("Acetate.terrain") %>% + addPolygons() %>% + addMarkers(lng = dt$lon, lat = dt$lat, popup = dt$tip) I especially like this latter map, as we can load a third-party satellite map in the background, then render the states as polygons; we also added the original data points along with some useful tooltips on the very same map with literally a one-line R command. 
We could even color the state polygons based on the aggregated results we computed in the previous sections! Ever tried to do the same in Java? Alternative map designs Besides being able to use some third-party tools, another main reason why I tend to use R for all my data analysis tasks is that R is extremely powerful in creating custom data exploration, visualization, and modeling designs. As an example, let's create a flow-map based on our data, where we will highlight the flights from Houston based on the number of actual and cancelled flights. We will use lines and circles to render these two variables on a 2-dimensional map, and we will also add a contour plot in the background based on the average time delay. But, as usual, let's do some data transformations first! To keep the number of flows at a minimal level, let's get rid of the airports outside the USA at last: > dt <- dt[point.in.polygon(dt$lon, dt$lat, + usa_data$x, usa_data$y) == 1, ] We will need the diagram package (to render curved arrows from Houston to the destination airports) and the scales package to create transparent colors: > library(diagram) > library(scales) Then let's render the contour map described in the Contour Lines section: > map("usa") > title('Number of flights, cancellations and delays from Houston') > image(look, add = TRUE) > map("state", lwd = 3, add = TRUE) And then add a curved line from Houston to each of the destination airports, where the width of the line represents the number of cancelled flights and the diameter of the target circles shows the number of actual flights: > for (i in 1:nrow(dt)) { + curvedarrow( + from = rev(as.numeric(h)), + to = as.numeric(dt[i, c('lon', 'lat')]), + arr.pos = 1, + arr.type = 'circle', + curve = 0.1, + arr.col = alpha('black', dt$N[i] / max(dt$N)), + arr.length = dt$N[i] / max(dt$N), + lwd = dt$Cancelled[i] / max(dt$Cancelled) * 25, + lcol = alpha('black', + dt$Cancelled[i] / max(dt$Cancelled))) + } Well, this article ended up being about visualizing spatial data, and not really about analyzing spatial data by fitting models and filtering raw data. Summary In case you are interested in knowing other R-related books that Packt has in store for you, here is the link: R for Data Science Practical Data Science Cookbook Resources for Article: Further resources on this subject: R ─ Classification and Regression Trees[article] An overview of common machine learning tasks[article] Reduction with Principal Component Analysis [article]

Lights and Effects

Packt
29 Sep 2015
27 min read
 In this article by Matt Smith and Chico Queiroz, authors of Unity 5.x Cookbook, we will cover the following topics: Using lights and cookie textures to simulate a cloudy day Adding a custom Reflection map to a scene Creating a laser aim with Projector and Line Renderer Reflecting surrounding objects with Reflection Probes Setting up an environment with Procedural Skybox and Directional Light (For more resources related to this topic, see here.) Introduction Whether you're willing to make a better-looking game, or add interesting features, lights and effects can boost your project and help you deliver a higher quality product. In this article, we will look at the creative ways of using lights and effects, and also take a look at some of Unity's new features, such as Procedural Skyboxes, Reflection Probes, Light Probes, and custom Reflection Sources. Lighting is certainly an area that has received a lot of attention from Unity, which now features real-time Global Illumination technology provided by Enlighten. This new technology provides better and more realistic results for both real-time and baked lighting. For more information on Unity's Global Illumination system, check out its documentation at http://docs.unity3d.com/Manual/GIIntro.html. The big picture There are many ways of creating light sources in Unity. Here's a quick overview of the most common methods. Lights Lights are placed into the scene as game objects, featuring a Light component. They can function in Realtime, Baked, or Mixed modes. Among the other properties, they can have their Range, Color, Intensity, and Shadow Type set by the user. There are four types of lights: Directional Light: This is normally used to simulate the sunlight Spot Light: This works like a cone-shaped spot light Point Light: This is a bulb lamp-like, omnidirectional light Area Light: This baked-only light type is emitted in all directions from a rectangle-shaped entity, allowing for a smooth, realistic shading For an overview of the light types, check Unity's documentation at http://docs.unity3d.com/Manual/Lighting.html. Different types of lights Environment Lighting Unity's Environment Lighting is often achieved through the combination of a Skybox material and sunlight defined by the scene's Directional Light. Such a combination creates an ambient light that is integrated into the scene's environment, and which can be set as Realtime or Baked into Lightmaps. Emissive materials When applied to static objects, materials featuring the Emission colors or maps will cast light over surfaces nearby, in both real-time and baked modes, as shown in the following screenshot: Projector As its name suggests, a Projector can be used to simulate projected lights and shadows, basically by projecting a material and its texture map onto the other objects. Lightmaps and Light Probes Lightmaps are basically texture maps generated from the scene's lighting information and applied to the scene's static objects in order to avoid the use of processing-intensive real-time lighting. Light Probes are a way of sampling the scene's illumination at specific points in order to have it applied onto dynamic objects without the use of real-time lighting. The Lighting window The Lighting window, which can be found through navigating to the Window | Lighting menu, is the hub for setting and adjusting the scene's illumination features, such as Lightmaps, Global Illumination, Fog, and much more. 
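Although this article configures most of these settings through the Lighting window itself, many of the scene-wide values it exposes can also be read and changed from a script via Unity's RenderSettings and DynamicGI classes. The following short C# sketch is purely illustrative (the script name and the values assigned are arbitrary choices, not part of this article's recipes):
using UnityEngine;
// Illustrative sketch: adjust a few scene-wide lighting settings that
// mirror fields found in the Lighting window.
public class LightingTweaks : MonoBehaviour {
    void Start () {
        RenderSettings.fog = true;               // toggle scene fog
        RenderSettings.fogColor = Color.gray;    // match an overcast look
        RenderSettings.ambientIntensity = 1.2f;  // boost the ambient lighting
        DynamicGI.UpdateEnvironment();           // refresh the environment lighting
    }
}
Attaching such a script to any GameObject in the scene would let you tweak these values at runtime without reopening the Lighting window.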
It's strongly recommended that you take a look at Unity's documentation on the subject, which can be found at http://docs.unity3d.com/Manual/GlobalIllumination.html. Using lights and cookie textures to simulate a cloudy day As it can be seen in many first-person shooters and survival horror games, lights and shadows can add a great deal of realism to a scene, helping immensely to create the right atmosphere for the game. In this recipe, we will create a cloudy outdoor environment using cookie textures. Cookie textures work as masks for lights. It functions by adjusting the intensity of the light projection to the cookie texture's alpha channel. This allows for a silhouette effect (just think of the bat-signal) or, as in this particular case, subtle variations that give a filtered quality to the lighting. Getting ready If you don't have access to an image editor, or prefer to skip the texture map elaboration in order to focus on the implementation, please use the image file called cloudCookie.tga, which is provided inside the 1362_06_01 folder. How to do it... To simulate a cloudy outdoor environment, follow these steps: In your image editor, create a new 512 x 512 pixel image. Using black as the foreground color and white as the background color, apply the Clouds filter (in Photoshop, this is done by navigating to the Filter | Render | Clouds menu). Learning about the Alpha channel is useful, but you could get the same result without it. Skip steps 3 to 7, save your image as cloudCookie.png and, when changing texture type in step 9, leave Alpha from Greyscale checked. Select your entire image and copy it. Open the Channels window (in Photoshop, this can be done by navigating to the Window | Channels menu). There should be three channels: Red, Green, and Blue. Create a new channel. This will be the Alpha channel. In the Channels window, select the Alpha 1 channel and paste your image into it. Save your image file as cloudCookie.PSD or TGA. Import your image file to Unity and select it in the Project view. From the Inspector view, change its Texture Type to Cookie and its Light Type to Directional. Then, click on Apply, as shown: We will need a surface to actually see the lighting effect. You can either add a plane to your scene (via navigating to the GameObject | 3D Object | Plane menu), or create a Terrain (menu option GameObject | 3D Object | Terrain) and edit it, if you so you wish. Let's add a light to our scene. Since we want to simulate sunlight, the best option is to create a Directional Light. You can do this through the drop-down menu named Create | Light | Directional Light in the Hierarchy view. Using the Transform component of the Inspector view, reset the light's Position to X: 0, Y: 0, Z: 0 and its Rotation to X: 90; Y: 0; Z: 0. In the Cookie field, select the cloudCookie texture that you imported earlier. Change the Cookie Size field to 80, or a value that you feel is more appropriate for the scene's dimension. Please leave Shadow Type as No Shadows. Now, we need a script to translate our light and, consequently, the Cookie projection. Using the Create drop-down menu in the Project view, create a new C# Script named MovingShadows.cs. 
Open your script and replace everything with the following code: using UnityEngine; using System.Collections; public class MovingShadows : MonoBehaviour{ public float windSpeedX; public float windSpeedZ; private float lightCookieSize; private Vector3 initPos; void Start(){ initPos = transform.position; lightCookieSize = GetComponent<Light>().cookieSize; } void Update(){ Vector3 pos = transform.position; float xPos= Mathf.Abs (pos.x); float zPos= Mathf.Abs (pos.z); float xLimit = Mathf.Abs(initPos.x) + lightCookieSize; float zLimit = Mathf.Abs(initPos.z) + lightCookieSize; if (xPos >= xLimit) pos.x = initPos.x; if (zPos >= zLimit) pos.z = initPos.z; transform.position = pos; float windX = Time.deltaTime * windSpeedX; float windZ = Time.deltaTime * windSpeedZ; transform.Translate(windX, 0, windZ, Space.World); } } Save your script and apply it to the Directional Light. Select the Directional Light. In the Inspector view, change the parameters Wind Speed X and Wind Speed Z to 20 (you can change these values as you wish, as shown). Play your scene. The shadows will be moving. How it works... With our script, we are telling the Directional Light to move across the X and Z axis, causing the Light Cookie texture to be displaced as well. Also, we reset the light object to its original position whenever it traveled a distance that was either equal to or greater than the Light Cookie Size. The light position must be reset to prevent it from traveling too far, causing problems in real-time render and lighting. The Light Cookie Size parameter is used to ensure a smooth transition. The reason we are not enabling shadows is because the light angle for the X axis must be 90 degrees (or there will be a noticeable gap when the light resets to the original position). If you want dynamic shadows in your scene, please add a second Directional Light. There's more... In this recipe, we have applied a cookie texture to a Directional Light. But what if we were using the Spot or Point Lights? Creating Spot Light cookies Unity documentation has an excellent tutorial on how to make the Spot Light cookies. This is great to simulate shadows coming from projectors, windows, and so on. You can check it out at http://docs.unity3d.com/Manual/HOWTO-LightCookie.html. Creating Point Light Cookies If you want to use a cookie texture with a Point Light, you'll need to change the Light Type in the Texture Importer section of the Inspector. Adding a custom Reflection map to a scene Whereas Unity Legacy Shaders use individual Reflection Cubemaps per material, the new Standard Shader gets its reflection from the scene's Reflection Source, as configured in the Scene section of the Lighting window. The level of reflectiveness for each material is now given by its Metallic value or Specular value (for materials using Specular setup). This new method can be a real time saver, allowing you to quickly assign the same reflection map to every object in the scene. Also, as you can imagine, it helps keep the overall look of the scene coherent and cohesive. In this recipe, we will learn how to take advantage of the Reflection Source feature. Getting ready For this recipe, we will prepare a Reflection Cubemap, which is basically the environment to be projected as a reflection onto the material. It can be made from either six or, as shown in this recipe, a single image file. 
To help us with this recipe, a Unity package has been provided, containing a prefab made of a 3D object and a basic Material (using a TIFF as the Diffuse map), and also a JPG file to be used as the reflection map. All these files are inside the 1362_06_02 folder. How to do it... To add Reflectiveness and Specularity to a material, follow these steps: Import batteryPrefab.unitypackage into a new project. Then, select the battery_prefab object from the Assets folder, in the Project view. From the Inspector view, expand the Material component and observe the asset preview window. Thanks to the Specular map, the material already features a reflective look. However, it looks as if it is reflecting the scene's default Skybox, as shown: Import the CustomReflection.jpg image file. From the Inspector view, change its Texture Type to Cubemap, its Mapping to Latitude - Longitude Layout (Cylindrical), and check the boxes for Glossy Reflection and Fixup Edge Seams. Finally, change its Filter Mode to Trilinear and click on the Apply button, shown as follows: Let's replace the Scene's Skybox with our newly created Cubemap as the Reflection map for our scene. In order to do this, open the Lighting window by navigating to the Window | Lighting menu. Select the Scene section and use the drop-down menu to change the Reflection Source to Custom. Finally, assign the newly created CustomReflection texture as the Cubemap, shown as follows: Check out the new reflections on the battery_prefab object. How it works... While it is the material's specular map that allows for a reflective look, including the intensity and smoothness of the reflection, the reflection itself (that is, the image you see in the reflection) is given by the Cubemap that we have created from the image file. There's more... Reflection Cubemaps can be created in many ways and have different mapping properties. Mapping coordinates The Cylindrical mapping that we applied was well suited to the photograph that we used. However, depending on how the reflection image is generated, a Cubic or Spheremap-based mapping can be more appropriate. Also, note that the Fixup Edge Seams option will try to make the image seamless. Sharp reflections You might have noticed that the reflection is somewhat blurry compared to the original image; this is because we have ticked the Glossy Reflections box. To get a sharper-looking reflection, deselect this option; in that case, you can also leave the Filter Mode option at its default (Bilinear). Maximum size At 512 x 512 pixels, our reflection map will probably run fine on lower-end machines. However, if the quality of the reflection map is not so important in your game's context, and the original image dimensions are big (say, 4096 x 4096), you might want to change the texture's Max Size in the Import Settings to a lower number. Creating a laser aim with Projector and Line Renderer Although using GUI elements, such as a crosshair, is a valid way to allow players to aim, replacing (or combining) it with a projected laser dot might be a more interesting approach. In this recipe, we will use the Projector and Line Renderer components to implement this concept. Getting ready To help us with this recipe, a Unity package has been provided, containing a sample scene featuring a character holding a laser pointer, and also a texture map named LineTexture. All files are inside the 1362_06_03 folder. Also, we'll make use of the Effects assets package provided by Unity (which you should have installed when installing Unity). 
How to do it... To create a laser dot aim with a Projector, follow these steps: Import BasicScene.unitypackage to a new project. Then, open the scene named BasicScene. This is a basic scene, featuring a player character whose aim is controlled via mouse. Import the Effects package by navigating to the Assets | Import Package | Effects menu. If you want to import only the necessary files within the package, deselect everything in the Importing package window by clicking on the None button, and then check the Projectors folder only. Then, click on Import, as shown: From the Inspector view, locate the ProjectorLight shader (inside the Assets | Standard Assets | Effects | Projectors | Shaders folder). Duplicate the file and name the new copy as ProjectorLaser. Open ProjectorLaser. From the first line of the code, change Shader "Projector/Light" to Shader "Projector/Laser". Then, locate the line of code – Blend DstColor One and change it to Blend One One. Save and close the file. The reason for editing the shader for the laser was to make it stronger by changing its blend type to Additive. However, if you want to learn more about it, check out Unity's documentation on the subject, which is available at http://docs.unity3d.com/Manual/SL-Reference.html. Now that we have fixed the shader, we need a material. From the Project view, use the Create drop-down menu to create a new Material. Name it LaserMaterial. Then, select it from the Project view and, from the Inspector view, change its Shader to Projector/Laser. From the Project view, locate the Falloff texture. Open it in your image editor and, except for the first and last columns column of pixels that should be black, paint everything white. Save the file and go back to Unity. Change the LaserMaterial's Main Color to red (RGB: 255, 0, 0). Then, from the texture slots, select the Light texture as Cookie and the Falloff texture as Falloff. From the Hierarchy view, find and select the pointerPrefab object (MsLaser | mixamorig:Hips | mixamorig:Spine | mixamorig:Spine1 | mixamorig:Spine2 | mixamorig:RightShoulder | mixamorig:RightArm | mixamorig:RightForeArm | mixamorig:RightHand | pointerPrefab). Then, from the Create drop-down menu, select Create Empty Child. Rename the new child of pointerPrefab as LaserProjector. Select the LaserProjector object. Then, from the Inspector view, click the Add Component button and navigate to Effects | Projector. Then, from the Projector component, set the Orthographic option as true and set Orthographic Size as 0.1. Finally, select LaserMaterial from the Material slot. Test the scene. You will be able to see the laser aim dot, as shown: Now, let's create a material for the Line Renderer component that we are about to add. From the Project view, use the Create drop-down menu to add a new Material. Name it as Line_Mat. From the Inspector view, change the shader of the Line_Mat to Particles/Additive. Then, set its Tint Color to red (RGB: 255;0;0). Import the LineTexture image file. Then, set it as the Particle Texture for the Line_Mat, as shown: Use the Create drop-down menu from Project view to add a C# script named LaserAim. Then, open it in your editor. 
Replace everything with the following code: using UnityEngine; using System.Collections; public class LaserAim : MonoBehaviour { public float lineWidth = 0.2f; public Color regularColor = new Color (0.15f, 0, 0, 1); public Color firingColor = new Color (0.31f, 0, 0, 1); public Material lineMat; private Vector3 lineEnd; private Projector proj; private LineRenderer line; void Start () { line = gameObject.AddComponent<LineRenderer>(); line.material = lineMat; line.material.SetColor("_TintColor", regularColor); line.SetVertexCount(2); line.SetWidth(lineWidth, lineWidth); proj = GetComponent<Projector> (); } void Update () { RaycastHit hit; Vector3 fwd = transform.TransformDirection(Vector3.forward); if (Physics.Raycast (transform.position, fwd, out hit)) { lineEnd = hit.point; float margin = 0.5f; proj.farClipPlane = hit.distance + margin; } else { lineEnd = transform.position + fwd * 10f; } line.SetPosition(0, transform.position); line.SetPosition(1, lineEnd); if(Input.GetButton("Fire1")){ float lerpSpeed = Mathf.Sin (Time.time * 10f); lerpSpeed = Mathf.Abs(lerpSpeed); Color lerpColor = Color.Lerp(regularColor, firingColor, lerpSpeed); line.material.SetColor("_TintColor", lerpColor); } if(Input.GetButtonUp("Fire1")){ line.material.SetColor("_TintColor", regularColor); } } } Save your script and attach it to the LaserProjector game object. Select the LaserProjector GameObject. From the Inspector view, find the Laser Aim component and fill the Line Material slot with the Line_Mat material, as shown: Play the scene. The laser aim is ready, and looks as shown: In this recipe, the width of the laser beam and its aim dot have been exaggerated. Should you need a more realistic thickness for your beam, change the Line Width field of the Laser Aim component to 0.05, and the Orthographic Size of the Projector component to 0.025. Also, remember to make the beam more opaque by setting the Regular Color of the Laser Aim component brighter. How it works... The laser aim effect was achieved by combining two different effects: a Projector and Line Renderer. A Projector, which can be used to simulate light, shadows, and more, is a component that projects a material (and its texture) onto other game objects. By attaching a projector to the Laser Pointer object, we have ensured that it will face the right direction at all times. To get the right, vibrant look, we have edited the projector material's Shader, making it brighter. Also, we have scripted a way to prevent projections from going through objects, by setting its Far Clip Plane on approximately the same level of the first object that is receiving the projection. The line of code that is responsible for this action is—proj.farClipPlane = hit.distance + margin;. Regarding the Line Renderer, we have opted to create it dynamically, via code, instead of manually adding the component to the game object. The code is also responsible for setting up its appearance, updating the line vertices position, and changing its color whenever the fire button is pressed, giving it a glowing/pulsing look. For more details on how the script works, don't forget to check out the commented code, available within the 1362_06_03 | End folder. Reflecting surrounding objects with Reflection Probes If you want your scene's environment to be reflected by game objects, featuring reflective materials (such as the ones with high Metallic or Specular levels), then you can achieve such effect using Reflection Probes. 
They allow for real-time, baked, or even custom reflections through the use of Cubemaps. Real-time reflections can be expensive in terms of processing; in which case, you should favor baked reflections, unless it's really necessary to display dynamic objects being reflected (mirror-like objects, for instance). Still, there are some ways real-time reflections can be optimized. In this recipe, we will test three different configurations for reflection probes: Real-time reflections (constantly updated) Real-time reflections (updated on-demand) via script Baked reflections (from the Editor) Getting ready For this recipe, we have prepared a basic scene, featuring three sets of reflective objects: one is constantly moving, one is static, and one moves whenever it is interacted with. The Probes.unitypackage package that is containing the scene can be found inside the 1362_06_04 folder. How to do it... To reflect the surrounding objects using the Reflection probes, follow these steps: Import Probes.unitypackage to a new project. Then, open the scene named Probes. This is a basic scene featuring three sets of reflective objects. Play the scene. Observe that one of the systems is dynamic, one is static, and one rotates randomly, whenever a key is pressed. Stop the scene. First, let's create a constantly updated real-time reflection probe. From the Create drop-down button of the Hierarchy view, add a Reflection Probe to the scene (Create | Light | Reflection Probe). Name it as RealtimeProbe and make it a child of the System 1 Realtime | MainSphere game object. Then, from the Inspector view, the Transform component, change its Position to X: 0; Y: 0; Z: 0, as shown: Now, go to the Reflection Probe component. Set Type as Realtime; Refresh Mode as Every Frame and Time Slicing as No time slicing, shown as follows: Play the scene. The reflections will be now be updated in real time. Stop the scene. Observe that the only object displaying the real-time reflections is System 1 Realtime | MainSphere. The reason for this is the Size of the Reflection Probe. From the Reflection Probe component, change its Size to X: 25; Y: 10; Z: 25. Note that the small red spheres are now affected as well. However, it is important to notice that all objects display the same reflection. Since our reflection probe's origin is placed at the same location as the MainSphere, all reflective objects will display reflections from that point of view. If you want to eliminate the reflection from the reflective objects within the reflection probe, such as the small red spheres, select the objects and, from the Mesh Renderer component, set Reflection Probes as Off, as shown in the following screenshot: Add a new Reflection Probe to the scene. This time, name it OnDemandProbe and make it a child of the System 2 On Demand | MainSphere game object. Then, from the Inspector view, Transform component, change its Position to X: 0; Y: 0; Z: 0. Now, go to the Reflection Probe component. Set Type as Realtime, Refresh Mode as Via scripting, and Time Slicing as Individual faces, as shown in the following screenshot: Using the Create drop-down menu in the Project view, create a new C# Script named UpdateProbe. 
Open your script and replace everything with the following code: using UnityEngine; using System.Collections; public class UpdateProbe : MonoBehaviour { private ReflectionProbe probe; void Awake () { probe = GetComponent<ReflectionProbe> (); probe.RenderProbe(); } public void RefreshProbe(){ probe.RenderProbe(); } } Save your script and attach it to the OnDemandProbe. Now, find the script named RandomRotation, which is attached to the System 2 On Demand | Spheres object, and open it in the code editor. Right before the Update() function, add the following lines: private GameObject probe; private UpdateProbe up; void Awake(){ probe = GameObject.Find("OnDemandProbe"); up = probe.GetComponent<UpdateProbe>(); } Now, locate the line of code called transform.eulerAngles = newRotation; and, immediately after it, add the following line: up.RefreshProbe(); Save the script and test your scene. Observe how the Reflection Probe is updated whenever a key is pressed. Stop the scene. Add a third Reflection Probe to the scene. Name it as CustomProbe and make it a child of the System 3 On Custom | MainSphere game object. Then, from the Inspector view, the Transform component, change its Position to X: 0; Y: 0; Z: 0. Go to the Reflection Probe component. Set Type as Custom and click on the Bake button, as shown: A Save File dialog window will show up. Save the file as CustomProbe-reflectionHDR.exr. Observe that the reflection map does not include the reflection of red spheres on it. To change this, you have two options: set the System 3 On Custom | Spheres GameObject (and all its children) as Reflection Probe Static or, from the Reflection Probe component of the CustomProbe GameObject, check the Dynamic Objects option, as shown, and bake the map again (by clicking on the Bake button). If you want your reflection Cubemap to be dynamically baked while you edit your scene, you can set the Reflection Probe Type to Baked, open the Lighting window (the Assets | Lighting menu), access the Scene section, and check the Continuous Baking option as shown. Please note that this mode won't include dynamic objects in the reflection, so be sure to set System 3 Custom | Spheres and System 3 Custom | MainSphere as Reflection Probe Static. How it works... The Reflection Probes element act like omnidirectional cameras that render Cubemaps and apply them onto the objects within their constraints. When creating Reflection Probes, it's important to be aware of how the different types work: Real-time Reflection Probes: Cubemaps are updated at runtime. The real-time Reflection Probes have three different Refresh Modes: On Awake (Cubemap is baked once, right before the scene starts); Every frame (Cubemap is constantly updated); Via scripting (Cubemap is updated whenever the RenderProbe function is used).Since Cubemaps feature six sides, the Reflection Probes features Time Slicing, so each side can be updated independently. There are three different types of Time Slicing: All Faces at Once (renders all faces at once and calculates mipmaps over 6 frames. Updates the probe in 9 frames); Individual Faces (each face is rendered over a number of frames. It updates the probe in 14 frames. The results can be a bit inaccurate, but it is the least expensive solution in terms of frame-rate impact); No Time Slicing (The Probe is rendered and mipmaps are calculated in one frame. It provides high accuracy, but it also the most expensive in terms of frame-rate). Baked: Cubemaps are baked during editing the screen. 
Cubemaps can be either manually or automatically updated, depending whether the Continuous Baking option is checked (it can be found at the Scene section of the Lighting window). Custom: The Custom Reflection Probes can be either manually baked from the scene (and even include Dynamic objects), or created from a premade Cubemap. There's more... There are a number of additional settings that can be tweaked, such as Importance, Intensity, Box Projection, Resolution, HDR, and so on. For a complete view on each of these settings, we strongly recommend that you read Unity's documentation on the subject, which is available at http://docs.unity3d.com/Manual/class-ReflectionProbe.html. Setting up an environment with Procedural Skybox and Directional Light Besides the traditional 6 Sided and Cubemap, Unity now features a third type of skybox: the Procedural Skybox. Easy to create and setup, the Procedural Skybox can be used in conjunction with a Directional Light to provide Environment Lighting to your scene. In this recipe, we will learn about different parameters of the Procedural Skybox. Getting ready For this recipe, you will need to import Unity's Standard Assets Effects package, which you should have installed when installing Unity. How to do it... To set up an Environment Lighting using the Procedural Skybox and Directional Light, follow these steps: Create a new scene inside a Unity project. Observe that a new scene already includes two objects: the Main Camera and a Directional Light. Add some cubes to your scene, including one at Position X: 0; Y: 0; Z: 0 scaled to X: 20; Y: 1; Z: 20, which is to be used as the ground, as shown: Using the Create drop-down menu from the Project view, create a new Material and name it MySkybox. From the Inspector view, use the appropriate drop-down menu to change the Shader of MySkybox from Standard to Skybox/Procedural. Open the Lighting window (menu Window | Lighting), access the Scene section. At the Environment Lighting subsection, populate the Skybox slot with the MySkybox material, and the Sun slot with the Directional Light from the Scene. From the Project view, select MySkybox. Then, from the Inspector view, set Sun size as 0.05 and Atmosphere Thickness as 1.4. Experiment by changing the Sky Tint color to RGB: 148; 128; 128, and the Ground color to a value that resembles the scene cube floor's color (such as RGB: 202; 202; 202). If you feel the scene is too bright, try bringing the Exposure level down to 0.85, shown as follows: Select the Directional Light and change its Rotation to X: 5; Y: 170; Z: 0. Note that the scene should resemble a dawning environment, something like the following scene: Let's make things even more interesting. Using the Create drop-down menu in the Project view, create a new C# Script named RotateLight. Open your script and replace everything with the following code: using UnityEngine; using System.Collections; public class RotateLight : MonoBehaviour { public float speed = -1.0f; void Update () { transform.Rotate(Vector3.right * speed * Time.deltaTime); } } Save it and add it as a component to the Directional Light. Import the Effects Assets package into your project (via the Assets | Import Package | Effects menu). Select the Directional Light. Then, from Inspector view, Light component, populate the Flare slot with the Sun flare. From the Scene section of the Lighting window, find the Other Settings subsection. Then, set Flare Fade Speed as 3 and Flare Strength as 0.5, shown as follows: Play the scene. 
You will see the sun rising and the Skybox colors changing accordingly. How it works... Ultimately, the appearance of Unity's native Procedural Skyboxes depends on the five parameters that make them up: Sun size: The size of the bright yellow sun that is drawn onto the skybox is located according to the Directional Light's Rotation on the X and Y axes. Atmosphere Thickness: This simulates how dense the atmosphere is for this skybox. Lower values (less than 1.0) are good for simulating the outer space settings. Moderate values (around 1.0) are suitable for the earth-based environments. Values that are slightly above 1.0 can be useful when simulating air pollution and other dramatic settings. Exaggerated values (like more than 2.0) can help to illustrate extreme conditions or even alien settings. Sky Tint: It is the color that is used to tint the skybox. It is useful for fine-tuning or creating stylized environments. Ground: This is the color of the ground. It can really affect the Global Illumination of the scene. So, choose a value that is close to the level's terrain and/or geometry (or a neutral one). Exposure: This determines the amount of light that gets in the skybox. The higher levels simulate overexposure, while the lower values simulate underexposure. It is important to notice that the Skybox appearance will respond to the scene's Directional Light, playing the role of the Sun. In this case, rotating the light around its X axis can create dawn and sunset scenarios, whereas rotating it around its Y axis will change the position of the sun, changing the cardinal points of the scene. Also, regarding the Environment Lighting, note that although we have used the Skybox as the Ambient Source, we could have chosen a Gradient or a single Color instead—in which case, the scene's illumination wouldn't be attached to the Skybox appearance. Finally, also regarding the Environment Lighting, please note that we have set the Ambient GI to Realtime. The reason for this was to allow the real-time changes in the GI, promoted by the rotating Directional Light. In case we didn't need these changes at runtime, we could have chosen the Baked alternative. Summary In this article you have learned and had hands-on approach to a number Unity's lighting system features, such as cookie textures, Reflection maps, Lightmaps, Light and Reflection probes, and Procedural Skyboxes. The article also demonstrated the use of Projectors. Resources for Article: Further resources on this subject: Animation features in Unity 5[article] Scripting Strategies[article] Editor Tool, Prefabs, and Main Menu [article]

Oracle API Management Implementation 12c

Packt
29 Sep 2015
5 min read
 This article by Luis Augusto Weir, the author of the book Oracle API Management 12c Implementation, gives you a gist of what is covered in the book. At present, digital transformation is essential to the business strategy of any organization, regardless of the industry it belongs to. (For more resources related to this topic, see here.) Companies that embark on a journey of digital transformation become able to create innovative and disruptive solutions, and to deliver a much richer, unified, and personalized user experience at a lower cost. These organizations are able to address customers dynamically and across a wide variety of channels, such as mobile applications, highly responsive websites, and social networks. Ultimately, companies that align their business models with digital innovation acquire a considerable competitive advantage over those that do not. The main trigger for this transformation is the ability to expose, and make available, business information and key technological capabilities that are often buried in the organization's enterprise information systems (EIS), or in integration components that are only visible internally. In the digital economy, it is highly desirable to realize those assets in a standardized way through APIs, and to do so, of course, in a controlled, scalable, and secure environment. The lightweight nature of these APIs, and the ease of finding and using them, greatly facilitates their adoption as the essential mechanism for exposing and/or consuming various capabilities from a multichannel environment. API Management is the discipline that governs the development cycle of APIs, defining the tools and processes needed to build, publish, and operate them, and also including the management of the developer communities around them. Our recent book, Oracle API Management 12c Implementation (Luis Weir, Andrew Bell, Rolando Carrasco, Arturo Viveros), is a very comprehensive and detailed guide to implementing API Management in an organization. The book explains, in great detail, the relationship that this discipline has with concepts such as SOA Governance and DevOps. In particular, the convergence of API Management with SOA and the governance of such services is addressed in order to explain and shape the concept of Application Services Governance (ASG). The book also features case studies based on real scenarios, with multiple examples that demonstrate the correct definition and implementation of a robust strategy supported by the Oracle API Management solution. The book begins by describing a number of key concepts about API Management and contextualizing the complementary disciplines, such as SOA Governance, DevOps, and Enterprise Architecture (EA), in order to clear up any confusion about how these topics relate to each other. Then, all of these concepts are put into practice by defining the case study of an organization that had previously succeeded in implementing a governed service-oriented architecture, and that now has the need, and the opportunity, to extend its technology platform by implementing an API Management strategy. 
Throughout the narrative of the case study, the following are also described: the business requirements justifying the adoption of API Management; the potential impact of the proposed solution on the organization; the steps required to design and implement the strategy; the definition and implementation of the maturity assessment (API Readiness) and a gap analysis in terms of people, tools, and technology; the evaluation and selection of products, explaining the choice of Oracle as the most appropriate solution; and the API Management implementation roadmap. In later chapters, the various steps needed to solve the proposed scenario are addressed one by one, by implementing the following reference architecture for API Management, based on the components of the Oracle solution: API Catalog, API Manager, and API Gateway. In short, the book will enable the reader to acquire advanced knowledge on the following topics: API Management, its definition, concepts, and objectives Differences and similarities between API Management and SOA Governance; where and how these two disciplines converge in the concept of Application Services Governance (ASG), and how to define a framework aimed at ASG Definition and implementation of the maturity assessment for API Management Criteria for the selection and evaluation of tools; why the Oracle API Management Suite? Implementation of Oracle API Catalog (OAC), including OAC harvesting by bootstrapping, ANT scripts, and JDev, the OAC Console, user creation and management, the metadata API, API Discovery, and how to extend the functionality of OAC through the REX API General API Management challenges, and the implementation of Oracle API Manager (OAPIM), including the creation, publishing, monitoring, subscription, and life cycle management of APIs through the OAPIM Portal Common scenarios for the adoption/implementation of API Management and how to solve them Implementation of Oracle API Gateway (OAG), including the creation of policies with different filters, OAuth authentication, integration with LDAP, SOAP/REST API conversions, and testing Defining the deployment topology for the Oracle API Management Suite Installing and configuring OAC, OAPIM, and OAG 12c Oracle API Management 12c Implementation is designed for the following audience: Enterprise Architects, Solution Architects, Technical Leads, and SOA and API professionals seeking to thoroughly understand and successfully implement the Oracle API Management solution. Summary In this article, we looked at Oracle API Management Implementation 12c in brief. More information on this is provided in the book. Resources for Article: Further resources on this subject: Oracle 12c SQL and PL/SQL New Features[article] Securing Data at Rest in Oracle 11g[article] Getting Started with Oracle Primavera P6 [article]

Creating TFS Scheduled Jobs

Packt
28 Sep 2015
12 min read
In this article by Gordon Beeming, the author of the book Team Foundation Server 2015 Customization, we are going to cover TFS scheduled jobs. The topics that we are going to cover include: Writing a TFS Job Deploying a TFS Job Removing a TFS Job You would want to write a scheduled job for any logic that needs to be run at specific times, whether it is at certain increments or at specific times of the day. A scheduled job is not the place to put logic that you would like to run as soon as some other event, such as a check-in or a work item change, occurs. The example job that we will build in this article will automatically link changesets to work items based on the check-in comments. (For more resources related to this topic, see here.) The project setup First off, we'll start with our project setup. This time, we'll create a Windows console application. Creating a new Windows console application The references that we'll need this time around are: Microsoft.VisualStudio.Services.WebApi.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.Framework.Server.dll All of these can be found in C:\Program Files\Microsoft Team Foundation Server 14.0\Application Tier\TFSJobAgent on the TFS server. That's all the setup that is required for your TFS job project. Any class that inherits ITeamFoundationJobExtension can be used as a TFS job. Writing the TFS job So, as mentioned, we are going to need a class that inherits from ITeamFoundationJobExtension. Let's create a class called TfsCommentsToChangeSetLinksJob and inherit from ITeamFoundationJobExtension. As part of this, we will need to implement the Run method, which is part of the interface, like this: public class TfsCommentsToChangeSetLinksJob : ITeamFoundationJobExtension { public TeamFoundationJobExecutionResult Run( TeamFoundationRequestContext requestContext, TeamFoundationJobDefinition jobDefinition, DateTime queueTime, out string resultMessage) { throw new NotImplementedException(); } } Then, we also add the using statement: using Microsoft.TeamFoundation.Framework.Server; Now, for this specific extension, we'll need to add references to the following: Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.VersionControl.Client.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll All of these can be found in C:\Program Files\Microsoft Team Foundation Server 14.0\Application Tier\TFSJobAgent. Now, for the logic of our plugin, we use the following code inside the Run method as a basic shell, in which we'll then place the specific logic for this plugin. This basic shell adds a try catch block; at the end of the try block, it returns a successful job run, while the catch block appends the thrown exception to the job message and returns that the job failed: resultMessage = string.Empty; try { // place logic here return TeamFoundationJobExecutionResult.Succeeded; } catch (Exception ex) { resultMessage += "Job Failed: " + ex.ToString(); return TeamFoundationJobExecutionResult.Failed; } Along with this code, you will need the following using statements: using Microsoft.TeamFoundation; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.VersionControl.Client; using Microsoft.TeamFoundation.WorkItemTracking.Client; using System.Linq; using System.Text.RegularExpressions; So next, we need to place some logic specific to this job in the try block. 
First, let's create a connection to TFS for version control: TfsTeamProjectCollection tfsTPC = TfsTeamProjectCollectionFactory.GetTeamProjectCollection( new Uri("http://localhost:8080/tfs")); VersionControlServer vcs = tfsTPC.GetService<VersionControlServer>(); Then, we will query the work item store's history and get the last 25 check-ins: WorkItemStore wis = tfsTPC.GetService<WorkItemStore>(); // get the last 25 check ins foreach (Changeset changeSet in vcs.QueryHistory("$/", RecursionType.Full, 25)) { // place the next logic here } Now that we have the changeset history, we are going to check the comments for any references to work items using a simple regex expression: //try match the regex for a hash number in the comment foreach (Match match in Regex.Matches((changeSet.Comment ?? string.Empty), @"#d{1,}")) { // place the next logic here } Getting into this loop, we'll know that we have found a valid number in the comment and that we should attempt to link the check-in to that work item. But just the fact that we have found a number doesn't mean that the work item exists, so let's try find a work item with the found number: int workItemId = Convert.ToInt32(match.Value.TrimStart('#')); var workItem = wis.GetWorkItem(workItemId); if (workItem != null) { // place the next logic here } Here, we are checking to make sure that the work item exists so that if the workItem variable is not null, then we'll proceed to check whether a relationship for this changeSet and workItem function already exists: //now create the link ExternalLink changesetLink = new ExternalLink( wis.RegisteredLinkTypes[ArtifactLinkIds.Changeset], changeSet.ArtifactUri.AbsoluteUri); //you should verify if such a link already exists if (!workItem.Links.OfType<ExternalLink>() .Any(l => l.LinkedArtifactUri == changeSet.ArtifactUri.AbsoluteUri)) { // place the next logic here } If a link does not exist, then we can add a new link: changesetLink.Comment = "Change set " + $"'{changeSet.ChangesetId}'" + " auto linked by a server plugin"; workItem.Links.Add(changesetLink); workItem.Save(); resultMessage += $"Linked CS:{changeSet.ChangesetId} " + $"to WI:{workItem.Id}"; We just have the extra bit here so as to get the last 25 change sets. If you were using this for production, you would probably want to store the last change set that you processed and then get history up until that point, but I don't think it's needed to illustrate this sample. Then, after getting the list of change sets, we basically process everything 100 percent as before. We check whether there is a comment and whether that comment contains a hash number that we can try linking to a changeSet function. We then check whether a workItem function exists for the number that we found. Next, we add a link to the work item from the changeSet function. Then, for each link we add to the overall resultMessage string so that when we look at the results from our job running, we can see which links were added automatically for us. As you can see, with this approach, we don't interfere with the check-in itself but rather process this out-of-hand way of linking changeSet to work with items at a later stage. Deploying our TFS Job Deploying the code is very simple; change the project's Output type to Class Library. This can be done by going to the project properties, and then in the Application tab, you will see an Output type drop-down list. Now, build your project. 
Then, copy the TfsJobSample.dll and TfsJobSample.pdb output files to the scheduled job plugins folder, which is C:Program FilesMicrosoft Team Foundation Server 14.0Application TierTFSJobAgentPlugins. Unfortunately, simply copying the files into this folder won't make your scheduled job automatically installed, and the reason for this is that as part of the interface of the scheduled job, you don't specify when to run your job. Instead, you register the job as a separate step. Change Output type back to Console Application option for the next step. You can, and should, split the TFS job from its installer into different projects, but in our sample, we'll use the same one. Registering, queueing, and deregistering a TFS Job If you try install the job the way you used to in TFS 2013, you will now get the TF400444 error: TF400444: The creation and deletion of jobs is no longer supported. You may only update the EnabledState or Schedule of a job. Failed to create, delete or update job id 5a7a01e0-fff1-44ee-88c3-b33589d8d3b3 This is because they have made some changes to the job service, for security reasons, and these changes prevent you from using the Client Object Model. You are now forced to use the Server Object Model. The code that you have to write is slightly more complicated and requires you to copy your executable to multiple locations to get it working properly. Place all of the following code in your program.cs file inside the main method. We start off by getting some arguments that are passed through to the application, and if we don't get at least one argument, we don't continue: #region Collect commands from the args if (args.Length != 1 && args.Length != 2) { Console.WriteLine("Usage: TfsJobSample.exe <command "+ "(/r, /i, /u, /q)> [job id]"); return; } string command = args[0]; Guid jobid = Guid.Empty; if (args.Length > 1) { if (!Guid.TryParse(args[1], out jobid)) { Console.WriteLine("Job Id not a valid Guid"); return; } } #endregion We then wrap all our logic in a try catch block, and for our catch, we only write the exception that occurred: try { // place logic here } catch (Exception ex) { Console.WriteLine(ex.ToString()); } Place the next steps inside the try block, unless asked to do otherwise. As part of using the Server Object Model, you'll need to create a DeploymentServiceHost. This requires you to have a connection string to the TFS Configuration database, so make sure that the connection string set in the following is valid for you. 
We also need some other generic path information, so we'll mimic what we could expect the job agents' paths to be: #region Build a DeploymentServiceHost string databaseServerDnsName = "localhost"; string connectionString = $"Data Source={databaseServerDnsName};"+ "Initial Catalog=TFS_Configuration;Integrated Security=true;"; TeamFoundationServiceHostProperties deploymentHostProperties = new TeamFoundationServiceHostProperties(); deploymentHostProperties.HostType = TeamFoundationHostType.Deployment | TeamFoundationHostType.Application; deploymentHostProperties.Id = Guid.Empty; deploymentHostProperties.PhysicalDirectory = @"C:Program FilesMicrosoft Team Foundation Server 14.0"+ @"Application TierTFSJobAgent"; deploymentHostProperties.PlugInDirectory = $@"{deploymentHostProperties.PhysicalDirectory}Plugins"; deploymentHostProperties.VirtualDirectory = "/"; ISqlConnectionInfo connInfo = SqlConnectionInfoFactory.Create(connectionString, null, null); DeploymentServiceHost host = new DeploymentServiceHost(deploymentHostProperties, connInfo, true); #endregion Now that we have a TeamFoundationServiceHost function, we are able to create a TeamFoundationRequestContext function . We'll need it to call methods such as UpdateJobDefinitions, which adds and/or removes our job, and QueryJobDefinition, which is used to queue our job outside of any schedule: using (TeamFoundationRequestContext requestContext = host.CreateSystemContext()) { TeamFoundationJobService jobService = requestContext.GetService<TeamFoundationJobService>() // place next logic here } We then create a new TeamFoundationJobDefinition instance with all of the information that we want for our TFS job, including the name, schedule, and enabled state: var jobDefinition = new TeamFoundationJobDefinition( "Comments to Change Set Links Job", "TfsJobSample.TfsCommentsToChangeSetLinksJob"); jobDefinition.EnabledState = TeamFoundationJobEnabledState.Enabled; jobDefinition.Schedule.Add(new TeamFoundationJobSchedule { ScheduledTime = DateTime.Now, PriorityLevel = JobPriorityLevel.Normal, Interval = 300, }); Once we have the job definition, we check what the command was and then execute the code that will relate to that command. For the /r command, we will just run our TFS job outside of the TFS job agent: if (command == "/r") { string resultMessage; new TfsCommentsToChangeSetLinksJob().Run(requestContext, jobDefinition, DateTime.Now, out resultMessage); } For the /i command, we will install the TFS job: else if (command == "/i") { jobService.UpdateJobDefinitions(requestContext, null, new[] { jobDefinition }); } For the /u command, we will uninstall the TFS Job: else if (command == "/u") { jobService.UpdateJobDefinitions(requestContext, new[] { jobid }, null); } Finally, with the /q command, we will queue the TFS job to be run inside the TFS job agent and outside of its schedule: else if (command == "/q") { jobService.QueryJobDefinition(requestContext, jobid); } Now that we have this code in the program.cs file, we need to compile the project and then copy TfsJobSample.exe and TfsJobSample.pdb to the TFS Tools folder, which is C:Program FilesMicrosoft Team Foundation Server 14.0Tools. Now open a cmd window as an administrator. Change the directory to the Tools folder and then run your application with a /i command, as follows: Installing the TFS Job Now, you have successfully installed the TFS Job. To uninstall it or force it to be queued, you will need the job ID. 
But basically you have to run /u with the job ID to uninstall, like this: Uninstalling the TFS Job You will be following the same approach as prior for queuing, simply specifying the /q command and the job ID. How do I know whether my TFS Job is running? The easiest way to check whether your TFS Job is running or not is to check out the job history table in the configuration database. To do this, you will need the job ID (we spoke about this earlier), which you can obtain by running the following query against the TFS_Configuration database: SELECT JobId FROM Tfs_Configuration.dbo.tbl_JobDefinition WITH ( NOLOCK ) WHERE JobName = 'Comments to Change Set Links Job' With this JobId, we will then run the following lines to query the job history: SElECT * FROM Tfs_Configuration.dbo.tbl_JobHistory WITH (NOLOCK) WHERE JobId = '<place the JobId from previous query here>' This will return you a list of results about the previous times the job was run. If you see that your job has a Result of 6 which is extension not found, then you will need to stop and restart the TFS job agent. You can do this by running the following commands in an Administrator cmd window: net stop TfsJobAgent net start TfsJobAgent Note that when you stop the TFS job agent, any jobs that are currently running will be terminated. Also, they will not get a chance to save their state, which, depending on how they were written, could lead to some unexpected situations when they start again. After the agent has started again, you will see that the Result field is now different as it is a job agent that will know about your job. If you prefer browsing the web to see the status of your jobs, you can browse to the job monitoring page (_oi/_jobMonitoring#_a=history), for example, http://gordon-lappy:8080/tfs/_oi/_jobMonitoring#_a=history. This will give you all the data that you can normally query but with nice graphs and grids. Summary In this article, we looked at how to write, install, uninstall, and queue a TFS Job. You learned that the way we used to install TFS Jobs will no longer work for TFS 2015 because of a change in the Client Object Model for security. Resources for Article: Further resources on this subject: Getting Started with TeamCity[article] Planning for a successful integration[article] Work Item Querying [article]

Learning RethinkDB

Jonathan Pollack
28 Sep 2015
6 min read
RethinkDB is a relatively new, fully open-source NoSQL database, featuring: ridiculously easy sharding, replication, and database management, table joins (that's right!), geospatial and time-series support, and real-time monitoring of complicated queries. I think the feature list alone makes this a piece of tech worth looking further into, to say nothing of the fact that we'll likely be seeing an explosion of apps that use RethinkDB as their fundamental database, so developers, get ready to have to learn about yet another database. That said, like any tool, you should consult your doctor when deciding if RethinkDB is right for you. When to avoid Like most NoSQL offerings, RethinkDB makes a few conscious trade-offs in its design, most notably when it comes to ACID compliance and the CAP theorem. If you need a fully ACID-compliant database, or strong type checking across your schema, you would be better served by a traditional SQL database. If you absolutely need write availability over data consistency, RethinkDB is not a good fit: it favors consistency. Also, because of how queries are performed and returned, "big data" use cases are probably not a great fit for this database, specifically if you want to handle results larger than 64 MB, or are performing computationally intensive work on your stored data. When to consider You want a great web-based management console for data-center configuration (sharding, replication, etc.), database monitoring, and testing queries. You want the flexibility of a schema-less database, with the ability to easily express relationships via table joins. You need to perform geospatial queries (e.g. find all documents with locations within 5 km of a given point). You deal with time-series data, especially across various time zones. You need to push data to your clients based off of real-time changes to your data, as a result of complex queries. Management console The web console is insanely easy to use, and gives you all of the control you need for administering your data-center, even if it is only a data-center of one database. Setting up a data-center is just a matter of pointing your new database to an existing node in a cluster. Once that's done, you can use the web console to shard (and re-shard) your data, as well as determine how many replicas you want floating around. You can also run queries (and profile those queries) against your databases straight from the web console, giving you quick access to your data and performance. Table joins (capturing data relations) One of the best pieces of syntactic sugar that RethinkDB provides, in my opinion, is the ability to do table joins. While, certainly, this isn't that magical (what we're doing is essentially a nested query via a specified field to be used as the nested lookup's primary key), it really does make queries easy to read and compose. r.table("table1").eq_join("doc_field_as_table2_primary_key", r.table("table2")).zip().run() Even more awesomely, the JavaScript ORM Thinky allows for very slick, seamless query-level joins, based on the same principle. Geospatial primitives Given that location-aware queries are becoming more and more popular, if not downright necessary, it's great to see that RethinkDB comes with support for the following geometric primitives: point, line, polygon (at least 3 sided), circle, and polygonSub (subtract one polygon from the larger, enclosing polygon). It allows for the following types of queries: distance, intersects, includes, getIntersecting, and getNearest. 
For example, you can find all of the documents within 5 km of the point where the prime meridian crosses the equator: r.table("table1").getNearest(r.point(0,0), {index: "table1_geo_index", maxDist: 5, unit: "km"}).run() Time-series support (sane date & time primitives) Official drivers do native conversions for you, which means timezone-aware, context-driven queries can be made that allow you to find documents that occurred at a given time on a given day in a given timezone. Some other cool features: Times can be used as indexes. Time operations are handled on the database, allowing them to be executed across the cluster effortlessly. Take, for example, the desire to figure out how many customer support tickets were coming in outside of business hours (before 9 am or after 5 pm), every day. We don't want to have to figure out how to offset the time-stamp on each document, given that the timezones could each be different. Thankfully, RethinkDB will do this accounting, and spread out the computation across the cluster without asking us for a thing. r.table('customer-support-tickets').filter(function (ticket) { // ticket('time').hours() is automatically dealt with in its own timezone return ticket('time').hours().lt(9).or( ticket('time').hours().ge(17)); }).count().run(); Realtime query result monitoring (change feeds) Probably by far and away the most impressive feature of RethinkDB has to be change feeds. You can turn almost every practical query that you would want to monitor into a live stream of changes just by chaining the function call changes() to the end. For example, monitor the changes to a given table: r.table("table1").changes().run() or to a given query (the ordering of a table, for instance): r.table("table1").orderBy("key").changes().run() And of course, the queries can be made more complicated, but these examples above should blow your mind. No more pulling, no more having to come up with the data diffs yourself before pushing them to the client. RethinkDB will do the diff for you, and push the results straight to your server. There is one caveat here, however; while this is fine for on the order of 10 clients, it is more efficient to couple your change feeds to a pub-sub service when pushing to many clients. Conclusion RethinkDB has a lot of cool things to be excited about: ReQL (its readable, highly functional syntax), cluster management, primitives for 21st-century applications, and change feeds. And you know what, if RethinkDB only had change feeds, I would still be extremely excited about it; think of all that time you no longer have to spend banging your head against the wall trying to deal with consistency and concurrency issues! If you are thinking about starting a new project, or are tired of fighting with your current NoSQL database, and don't have any requirements in the "avoid" camp, you should highly consider using RethinkDB. About the author Jonathan Pollack is a full stack developer living in Berlin. He previously worked as a web developer at a public shoe company, and prior to that, worked at a startup that's trying to build the world's best pan-cloud virtualization layer. He can be found on Twitter @murphydanger.

The Dashboard Design – Best Practices

Packt
25 Sep 2015
11 min read
 In this article by Julian Villafuerte, author of the book Creating Stunning Dashboards with QlikView, you will learn more about best practices for dashboard design. (For more resources related to this topic, see here.) Data visualization is a field that is constantly evolving. However, some concepts have proven their value time and again through the years and have become what we call best practices. These notions should not be seen as strict rules that must be applied without any further consideration, but as a series of tips that will help you create better applications. If you are a beginner, try to stick to them as much as you can. These best practices will save you a lot of trouble and will greatly enhance your first endeavors. On the other hand, if you are an advanced developer, combine them with your personal experiences in order to build the ultimate dashboard. Some guidelines in this article come from widely known figures in the field of data visualization, such as Stephen Few, Edward Tufte, John Tukey, Alberto Cairo, and Nathan Yau. So, if a concept catches your attention, I strongly recommend that you read more about it in their books. Throughout this article, we will review some useful recommendations that will help you create not only engaging, but also effective and user-friendly dashboards. Remember that they may apply differently depending on the information displayed and the audience you are working with. Nevertheless, they are great guidelines in the field of data visualization, so do not hesitate to consider them in all of your developments. Gestalt principles In the early 1900s, the Gestalt school of psychology conducted a series of studies on human perception in order to understand how our brain interprets forms and recognizes patterns. Understanding these principles may help you create a better structure for your dashboard and make your charts easier to interpret: Proximity: When we see multiple elements located near one another, we tend to see them as groups. For example, we can visually distinguish clusters in a scatter plot by grouping the dots according to their position. Similarity: Our brain associates elements that are similar to each other (in terms of shape, size, color, or orientation). For example, in color-coded bar charts, we can associate the bars that share the same color even if they are not grouped. Enclosure: If a border surrounds a series of objects, we perceive them as part of a group. For example, if a scatter plot has reference lines that wrap the elements between 20 and 30 percent, we will automatically see them as a cluster. Closure: When we detect a figure that looks incomplete, we tend to perceive it as a closed structure. For example, even if we discard the borders of a bar chart, the axes will form a region that our brain will isolate without needing the extra lines. Continuity: If a number of objects are aligned, we will perceive them as a continuous body. For example, when you indent QlikView script, the different blocks of code are perceived as one continuous body of code. Connection: If objects are connected by a line, we will see them as a group. For example, we tend to associate the dots connected by lines on a scatter plot with lines and symbols. Giving context to the data When it comes to analyzing data, context is everything. If you present isolated figures, the users will have a hard time trying to find the story hidden behind them. 
For example, if I told you that the gross margin of our company was 16.5 percent during the first quarter of 2015, would you evaluate it as a positive or negative sign? This is pretty difficult, right? However, what if we added some extra information to complement this KPI? Then, the following image would make a lot more sense: As you can see, adding context to the data can make the landscape look quite different. Now, it is easy to see that even though the gross margin has substantially improved during the last year, our company has some work to do in order to be competitive and surpass the industry standard. The appropriate references may change depending on the KPI you are dealing with and the goals of the organization, but some common examples are as follows: Last year's performance The quota, budget, or objective Comparison with the closest competitor, product, or employee The market share The industry standards Another good tip in this regard is to anticipate the comparisons. If you display figures regarding the monthly quota and the actual sales, you can save the users the mental calculations by including complementary indicators, such as the gap between them and the percentage of completion. Data-Ink Ratio One of the most interesting principles in the field of data visualization is Data-Ink Ratio, introduced by Edward R. Tufte in his book, The Visual Display of Quantitative Information, which must be read by every designer. In this publication, he states that there are two different types of ink (or in our case, pixels) in any chart, as follows: Data-ink: This includes all the nonerasable portions of graphic that are used to represent the actual data. These pixels are at the core of the visualization and cannot be removed without losing some of its content. Non-data-ink: This includes any element that is not directly related to the data or doesn't convey anything meaningful to the reader. Based on these concepts, he defined the Data Ink Ratio as the proportion of the graphic's ink that is devoted to the nonredundant display of data information: Data Ink Ratio = Data Ink / Total Ink As you can imagine, our goal is to maximize this number by decreasing the non-data-ink used in our dashboards. For example, the chart to the left has a low data-ink ratio due to the usage of 3D effects, shadows, backgrounds, and multiple grid lines. On the contrary, the chart to the right presents a higher ratio as most of the pixels are data-related. Avoiding chart junk Chart junk is another term coined by Tufte that refers to all the elements that distract the viewer from the actual information in a graphic. Evidently, chart junk is considered as non-data-ink and comprises of features such as heavy gridlines, frames, redundant labels, ornamental axes, backgrounds, overly complex fonts, shadows, images, or other effects included only as decoration. Take for instance the following charts: As you can see, by removing all the unnecessary elements in a chart, it becomes easier to interpret and looks much more elegant. Balance Colors, icons, reference lines, and other visual cues can be very useful to help the users focus on the most important elements in a dashboard. However, misusing or overusing these features can be a real hazard, so try to find the adequate balance for each of them. Excessive precision QlikView applications should use the appropriate language for each audience. When designing, think about whether precise figures will be useful or if they are going to become a distraction. 
Most of the time, dashboards show high-level KPIs, so it may be more comfortable for certain users to see rounded numbers, as in the following image: 3D charts One of Microsoft Excel's greatest wrongdoings is making everyone believe that 3D charts are good for data analysis. For some reason, people seem to love them; but, believe me, they are a real threat to business analysts. Despite their visual charm, these representations can easily hide some parts of the information and convey wrong perceptions depending on their usage of colors, shadows, and axis inclination. I strongly recommend you to avoid them in any context. Sorting Whether you are working with a list box, a bar chart, or a straight table, sorting an object is always advisable, as it adds context to the data. It can help you find the most commonly selected items in a list box, distinguish which slice is bigger on a pie chart when the sizes are similar, or easily spot the outliners in other graphic representations. Alignment and distribution Most of my colleagues argue that I am on the verge of an obsessive-compulsive disorder, but I cannot stand an application with unaligned objects. (Actually, I am still struggling with the fact that the paragraphs in this book are not justified, but anyway...). The design toolbar offers useful options in this regard, so there is no excuse for not having a tidy dashboard. If you take care of the quadrature of all the charts and filters, your interface will display a clean and professional look that every user will appreciate: Animations I have a rule of thumb regarding chart animation in QlikView—If you are Hans Rosling, go ahead. If not, better think it over twice. Even though they can be very illustrative, chart animations end up being a distraction rather than a tool to help us visualize data most of the time, so be conservative about their use. For those of you who do not know him, Hans Rosling is a Swedish professor of international health who works in Stockholm. However, he is best known for his amazing way of presenting data with GapMinder, a simple piece of software that allows him to animate a scatter plot. If you are a data enthusiast, you ought to watch his appearances in TED Talks. Avoiding scroll bars Throughout his work, Stephen Few emphasizes that all the information in a dashboard must fit on a single screen. Whilst I believe that there is no harm in splitting the data in multiple sheets, it is undeniable that scroll bars reduce the overall usability of an application. If the user has to continuously scroll right and left to read all the figures in a table, or if she must go up and down to see the filter panel, she will end up getting tired and eventually discard your dashboard. Consistency If you want to create an easy way to navigate your dashboard, you cannot forget about consistency. Locating standard objects (such as Current Selections Box, Search Object, and Filter Panels) in the same area in every tab will help the users easily find all the items they need. In addition, applying the same style, fonts, and color palettes in all your charts will make your dashboard look more elegant and professional. White space The space between charts, tables, and filters is often referred to as white space, and even though you may not notice it, it is a vital part of any dashboard. Displaying dozens of objects without letting them breathe makes your interface look cluttered and, therefore, harder to understand. 
Some of the benefits of using white space adequately are: The improvement in readability It focuses and emphasizes the important objects It guides the users' eyes, creating a sense of hierarchy in the dashboard It fosters a balanced layout, making your interface look clear and sophisticated Applying makeup Every now and then, you stumble upon delicate situations where some business users try their best to hide certain parts of the data. Whether it is about low sales or the insane amount of defective products, they often ask you to remove a few charts or avoid visual cues so that those numbers go unnoticed. Needless to say, dashboards are tools intended to inform and guide the decisions of the viewers, so avoid presenting misleading visualizations. Meaningless variety As a designer, you will often hesitate to use the same chart type multiple times in your application fearing that the users will get bored of it. Though this may be a haunting perception, if you present valuable data in an adequate format, there is no need to add new types of charts just for variety's sake. We want to keep the users engaged with great analyses, not just with pretty graphics. Summary In this article, you learned all about the best practices to be followed in Qlikview. Resources for Article: Further resources on this subject: Analyzing Financial Data in QlikView[article] Securing QlikView Documents[article] Common QlikView script errors [article]

Introducing R, RStudio, and Shiny

Packt
25 Sep 2015
9 min read
 In this article, by Hernán G. Resnizky, author of the book Learning Shiny, the main objective will be to learn how to install all the needed components to build an application in R with Shiny. Additionally, some general ideas about what R is will be covered in order to be able to dive deeper into programming using R. The following topics will be covered: A brief introduction to R, RStudio, and Shiny Installation of R and Shiny General tips and tricks (For more resources related to this topic, see here.) About R As stated on the R-project main website: "R is a language and environment for statistical computing and graphics." R is a successor of S and is a GNU project. This means, briefly, that anyone can have access to its source codes and can modify or adapt it to their needs. Nowadays, it is gaining territory over classic commercial software, and it is, along with Python, the most used language for statistics and data science. Regarding R's main characteristics, the following can be considered: Object oriented: R is a language that is composed mainly of objects and functions. Can be easily contributed to: Similar to GNU projects, R is constantly being enriched by user's contributions either by making their codes accessible via "packages" or libraries, or by editing/improving its source code. There are actually almost 7000 packages in the common R repository, Comprehensive R Archive Network (CRAN). Additionally, there are R repositories of public access, such as bioconductor project that contains packages for bioinformatics. Runtime execution: Unlike C or Java, R does not need compilation. This means that you can, for instance, write 2 + 2 in the console and it will return the value. Extensibility: The R functionalities can be extended through the installation of packages and libraries. Standard proven libraries can be found in CRAN repositories and are accessible directly from R by typing install.packages(). Installing R R can be installed in every operating system. It is highly recommended to download the program directly from http://cran.rstudio.com/ when working on Windows or Mac OS. On Ubuntu, R can be easily installed from the terminal as follows: sudo apt-get update sudo apt-get install r-base sudo apt-get install r-base-dev The installation of r-base-dev is highly recommended as it is a package that enables users to compile the R packages from source, that is, maintain the packages or install additional R packages directly from the R console using the install.packages() command. To install R on other UNIX-based operating systems, visit the following links: http://cran.rstudio.com/ http://cran.r-project.org/doc/manuals/r-release/R-admin.html#Obtaining-R A quick guide to R When working on Windows, R can be launched via its application. After the installation, it is available as any other program on Windows. When opening the program, a window like this will appear: When working on Linux, you can access the R console directly by typing R on the command line: In both the cases, R executes in runtime. This means that you can type in code, press Enter, and the result will be given immediately as follows: > 2+2 [1] 4 The R application in any operating system does not provide an easy environment to develop code. For this reason, it is highly recommended (not only to write web applications in R with Shiny, but for any task you want to perform in R) to use an Integrated Development Environment (IDE). 
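Before moving on to RStudio, it may help to see the characteristics listed above in action. The following is a minimal, illustrative R session of our own (the ggplot2 package is used only as an example of installing from CRAN; any package would do):

# Everything in R is an object, and functions are objects too
x <- c(4, 8, 15, 16, 23, 42)   # a numeric vector
mean(x)                        # calls a built-in function; returns 18
square <- function(n) n ^ 2    # a user-defined function is just another object
square(x)                      # vectorized: squares every element of x

# Extensibility: install and load a package from CRAN
# (ggplot2 is only an example; an internet connection is required)
install.packages("ggplot2")
library(ggplot2)

Because R executes at runtime, each of these lines can be typed directly into the console and the result is returned immediately, just like the 2+2 example shown earlier.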
About RStudio
As with other programming languages, there is a huge variety of IDEs available for R. IDEs are applications that make code development easier and clearer for the programmer. RStudio is one of the most important IDEs for R, and it is especially recommended for writing web applications in R with Shiny because it contains features designed specifically for working with R. Additionally, RStudio provides facilities to write C++, LaTeX, or HTML documents and integrates them with the R code. RStudio also provides version control, project management, and debugging features, among many others.
Installing RStudio
RStudio for desktop computers can be downloaded from its official website at http://www.rstudio.com/products/rstudio/download/ where you can get versions of the software for Windows, Mac OS X, Ubuntu, Debian, and Fedora.
Quick guide to RStudio
Before installing and running RStudio, it is important to have R installed. As RStudio is an IDE and not the programming language itself, it will not work at all without R. The following screenshot shows RStudio's starting view:
At first glance, the following four main windows are available:
Text editor: This provides facilities to write R scripts, such as highlighting and a code completer (when hitting Tab, you can see the available options to complete the code written). It is also possible to include R code in an HTML, LaTeX, or C++ piece of code.
Environment and history: They are defined as follows:
In the Environment section, you can see the active objects in each environment. By clicking on Global Environment (which is the environment shown by default), you can change the environment and see its active objects.
In the History tab, the pieces of code executed are stored line by line. You can select one or more lines and send them either to the editor or to the console. In addition, you can look up a specific piece of code by typing it in the textbox in the top-right part of this window.
Console: This is an exact equivalent of the R console, as described in A quick guide to R.
Tabs: The different tabs are defined as follows:
Files: This consists of a file browser with several additional features (renaming, deleting, and copying). Clicking on a file will open it in the editor or the Environment tab depending on the type of the file. If it is a .rda or .RData file, it will open in both. If it is a text file, it will open in one of them.
Plots: Whenever a plot is executed, it will be displayed in this tab.
Packages: This shows a list of available and active packages. When a package is active, it will appear as checked. Packages can also be installed interactively by clicking on Install Packages.
Help: This is a window to look up and read the documentation of the active packages.
Viewer: This enables us to see HTML-generated content within RStudio.
Along with numerous features, RStudio also provides keyboard shortcuts. A few of them are listed as follows:
Complete the code: Tab (Windows/Linux), Tab (OS X)
Run the selected piece of code; if no piece of code is selected, the active line is run: Ctrl + Enter (Windows/Linux), ⌘ + Enter (OS X)
Comment the selected block of code: Ctrl + Shift + C (Windows/Linux), ⌘ + / (OS X)
Create a section of code, which can be expanded or collapsed by clicking on the arrow to the left and accessed by clicking on it in the bottom-left menu: ##### (Windows/Linux), ##### (OS X)
Find and replace: Ctrl + F (Windows/Linux), ⌘ + F (OS X)
The following screenshots show how a block of code can be collapsed by clicking on the arrow and how it can be accessed quickly by clicking on its name in the bottom-left part of the window:
Clicking on the circled arrow will collapse the Section 1 block, as follows:
The full list of shortcuts can be found at https://support.rstudio.com/hc/en-us/articles/200711853-Keyboard-Shortcuts. For further information about other RStudio features, the full documentation is available at https://support.rstudio.com/hc/en-us/categories/200035113-Documentation.
About Shiny
Shiny is a package created by RStudio that makes it easy to interface R with a web browser. As stated in its official documentation, Shiny is a web application framework for R that makes it incredibly easy to build interactive web applications with R. One of its main advantages is that there is no need to combine R code with HTML/JavaScript code, as the framework already contains prebuilt features that cover the functionalities most commonly used in an interactive web application.
There is a wide range of software with web application functionalities, especially oriented towards interactive data visualization. What are the advantages of using R/Shiny then, you ask? They are as follows:
It is free not only in terms of money but, as with all GNU projects, in terms of freedom. As stated on the GNU main page: To understand the concept (GNU), you should think of free as in free speech, not as in free beer. Free software is a matter of the users' freedom to run, copy, distribute, study, change, and improve the software.
All the possibilities of a powerful language such as R are available. Thanks to its contributive essence, you can develop a web application that can display any R-generated output. This means that you can, for instance, run complex statistical models and return the output in a friendly way in the browser, obtain and integrate data from various sources and formats (for instance, SQL, XML, JSON, and so on) the way you need, and subset, process, and dynamically aggregate the data the way you want. These options are not available (or are much more difficult to accomplish) under most commercial BI tools.
Installing and loading Shiny
As with any other package available in the CRAN repositories, the easiest way to install Shiny is by executing install.packages("shiny"). The following output should appear on the console:
Due to R's extensibility, many of its packages use elements (mostly functions) from other packages. For this reason, these packages are loaded or installed when the package that depends on them is loaded or installed. This is called a dependency. Shiny (as of version 0.10.2.1) depends on Rcpp, httpuv, mime, htmltools, and R6.
An R session starts with only the minimal packages loaded, so if functions from other packages are used, they need to be loaded before using them. The corresponding command for this is as follows:
library(shiny)
When installing a package, the package name must be quoted, but when loading the package, it must be unquoted.
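With Shiny installed and loaded, you can check that everything works by running a minimal application. The following is a rough sketch rather than code from the book; the input ID n, the output name scatter, and the random data are invented for the illustration:
library(shiny)

# User interface: a slider and a plot placeholder
ui <- fluidPage(
  sliderInput("n", "Number of points", min = 10, max = 100, value = 50),
  plotOutput("scatter")
)

# Server logic: redraw the plot whenever the slider changes
server <- function(input, output) {
  output$scatter <- renderPlot({
    plot(rnorm(input$n), rnorm(input$n))
  })
}

# Launch the application in the browser (or RStudio's viewer)
shinyApp(ui = ui, server = server)
If a browser window opens and the plot reacts to the slider, the installation is complete.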
Summary
After these instructions, the reader should be able to install all the fundamental elements needed to create a web application with Shiny. Additionally, he or she should have acquired at least a general idea of what R and the R project are.
Resources for Article:
Further resources on this subject:
R ─ Classification and Regression Trees [article]
An overview of common machine learning tasks [article]
Taking Control of Reactivity, Inputs, and Outputs [article]

article-image-tv-set-constant-volume-controller
Packt
25 Sep 2015
19 min read
Save for later

TV Set Constant Volume Controller

Packt
25 Sep 2015
19 min read
In this article by Fabrizio Boco, author of Arduino iOS Blueprints, we learn how to control a TV set's volume using Arduino and iOS.
I don't watch TV much, but when I do, I usually completely relax and fall asleep. I know that TV is not meant to put you to sleep, but it does this to me. Unfortunately, commercials are transmitted at a very high volume and they wake me up. How can I relax if commercials wake me up every five minutes? Can you believe it?
During one of my naps between two commercials, I came up with a solution based on iOS and Arduino. It's nothing complex. An iOS device listens to the TV set's audio, and when the audio level becomes higher than a preset threshold, the iOS device sends a message (via Bluetooth) to Arduino, which controls the TV set volume, emulating the traditional IR remote control. Exactly the same happens when the volume drops below another threshold. The final result is that the TV set volume is almost constant, independent of what is on the air. This helps me sleep longer!
The techniques that you are going to learn in this article are useful in many different ways. You can use an IR remote control for any purpose, or you can control many different devices, such as a CD/DVD player, a stereo set, an Apple TV, a projector, and so on, directly from an Arduino and an iOS device. As always, it is up to your imagination.
(For more resources related to this topic, see here.)
Constant Volume Controller requirements
Our aim is to design an Arduino-based device, which can keep the TV set's volume almost constant by emulating the traditional remote controller, and an iOS application, which monitors the TV and decides when to decrease or increase the TV set's volume.
Hardware
Most TV sets can be controlled by an IR remote controller, which sends signals to control the volume, change the channel, and control all the other TV set functions. IR remote controllers use a carrier signal (usually at 38 kHz) that is easy to isolate from noise and disturbances. The carrier signal is turned on and off following different rules (encodings) in order to transmit the 0 and 1 digital values. The IR receiver removes the carrier signal (with a low-pass filter) and decodes the remaining signal, returning a clean sequence of 0s and 1s.
The IR remote control theory
You can find more information about the IR remote control at http://bit.ly/1UjhsIY.
Our circuit will emulate the IR remote controller by using an IR LED, which will send specific signals that can be interpreted by our TV set. On the other hand, we can receive an IR signal with a phototransistor and decode it into an understandable sequence of numbers by designing a demodulator and a decoder. Nowadays, electronics makes this very simple; an IR receiver module (Vishay TSOP4838) will manage the complexity of signal demodulation, noise cancellation, triggering, and decoding. It can be directly connected to Arduino, making everything very easy. In the project in this article, we need an IR receiver to discover the coding rules that are used by our own IR remote controller (and the TV set).
Additional electronic components
In this project, we need the following additional components:
IR LED Vishay TSAL6100
IR receiver module Vishay TSOP4838
Resistor 100Ω
Resistor 680Ω
Electrolytic capacitor 0.1μF
Electronic circuit
The following picture shows the electrical diagram of the circuit that we need for the project:
The IR receiver will be used only to capture the TV set's remote controller signals so that our circuit can emulate them.
However, an IR LED is constantly used to send commands to the TV set. The other two LEDs will show when Arduino increases or decreases the volume; they are optional and can be omitted. As usual, the Bluetooth device is used to receive commands from the iOS device.
Powering the IR LED within the current limits of Arduino
From the datasheet of the TSAL6100, we know that the forward voltage is 1.35V. The voltage drop across R1 is then 5 - 1.35 = 3.65V, and the current provided by Arduino to power the LED is about 3.65/680 = 5.3 mA. The maximum current allowed for each pin is 40 mA (the recommended value is 20 mA), so we are within the limits. In case your TV set is far from the LED, you may need to reduce the R1 resistor in order to get more current (and more IR light). Use the new value of R1 in the previous calculations to check whether you are still within the Arduino limits. For more information about the Arduino pin current, check out http://bit.ly/1JosGac.
The following diagram shows how to mount the circuit on a breadboard:
Arduino code
The entire code of this project can be downloaded from https://www.packtpub.com/books/content/support. To better understand the explanations in the following paragraphs, open the downloaded code while reading them.
In this project, we are going to use the IRremote library, which helps us code and decode IR signals. The library can be downloaded from http://bit.ly/1Isd8Ay and installed by using the following procedure:
1. Navigate to the release page of http://bit.ly/1Isd8Ay in order to get the latest release and download the IRremote.zip file.
2. Unzip the file wherever you like.
3. Open the Finder and then the Applications folder (Shift + Command + A).
4. Locate the Arduino application.
5. Right-click on it and select Show Package Contents.
6. Locate the Java folder and then the libraries folder.
7. Copy the IRremote folder (unzipped in step 2) into the libraries folder.
8. Restart Arduino if you have it running.
In this project, we need the following two Arduino programs:
One is used to acquire the codes that your IR remote controller sends to increase and decrease the volume
The other is the main program that Arduino has to run to automatically control the TV set volume
Let's start with the code that is used to acquire the IR remote controller codes.
Decoder setup code
In this section, we will be referring to the downloaded Decode.ino program, which is used to discover the codes used by your remote controller. Since the setup code is quite simple, it doesn't require a detailed explanation; it just initializes the library to receive and decode messages.
Decoder main program
In this section, we will be referring to the downloaded Decode.ino program; the main code receives signals from the TV remote controller and dumps the appropriate code, which will be included in the main program to emulate the remote controller itself. Once the program is running, if you press any button on the remote controller, the console will show the following:
For IR Scope: +4500 -4350 …
For Arduino sketch: unsigned int raw[68] = {4500,4350,600,1650,600,1600,600,1600,…};
The second row is what we need. Please refer to the Testing and tuning section for a detailed description of how to use this data.
Now, we will take a look at the main code that will be running on Arduino all the time.
Setup code
In this section, we will be referring to the Arduino_VolumeController.ino program. The setup function initializes the nRF8001 board and configures the pins for the optional monitoring LEDs.
Main program
The loop function just calls the pollACI function to allow the correct management of incoming messages from the nRF8001 board. The program accepts the following two messages from the iOS device (refer to the rxCallback function):
D to decrease the volume
I to increase the volume
The following two functions perform the actual increasing and decreasing of the volume by sending the up and down buffers through the IR LED:
void volumeUp() {
  irsend.sendRaw(up, VOLUME_UP_BUFFER_LEN, 38);
  delay(20);
}

void volumeDown() {
  irsend.sendRaw(down, VOLUME_DOWN_BUFFER_LEN, 38);
  delay(20);
  irsend.sendRaw(down, VOLUME_DOWN_BUFFER_LEN, 38);
  delay(20);
}
The up and down buffers, VOLUME_UP_BUFFER_LEN and VOLUME_DOWN_BUFFER_LEN, are prepared with the help of the Decode.ino program (see the Testing and tuning section).
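The downloaded Arduino_VolumeController.ino already contains the complete logic; the following is only a simplified sketch of how the pieces described above could fit together. It assumes the Adafruit_BLE_UART library for the nRF8001 and uses made-up pin numbers and placeholder IR codes; the 38 passed to sendRaw is the 38 kHz carrier frequency.
#include <SPI.h>
#include <Adafruit_BLE_UART.h>
#include <IRremote.h>

// Made-up pin assignments; use the ones matching your own wiring (REQ, RDY, RST)
Adafruit_BLE_UART BTLEserial = Adafruit_BLE_UART(10, 2, 9);
IRsend irsend;  // the IR LED pin is fixed by the IRremote library (PWM pin 3 on an Uno)

// Placeholder codes; replace them with the values dumped by Decode.ino
unsigned int up[68]   = {4500, 4350};
unsigned int down[68] = {4500, 4350};

void volumeUp()   { irsend.sendRaw(up, 68, 38); delay(20); }
void volumeDown() { irsend.sendRaw(down, 68, 38); delay(20); }

// Called by the library whenever data arrives from the iOS device
void rxCallback(uint8_t *buffer, uint8_t len) {
  for (uint8_t i = 0; i < len; i++) {
    if (buffer[i] == 'D') volumeDown();
    if (buffer[i] == 'I') volumeUp();
  }
}

void setup() {
  BTLEserial.setRXcallback(rxCallback);
  BTLEserial.begin();
}

void loop() {
  // Let the nRF8001 state machine process pending BLE events
  BTLEserial.pollACI();
}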
iOS code
In this article, we are going to look at the iOS application that monitors the TV set volume and sends the volume down or volume up commands to the Arduino board in order to maintain the volume at the desired value. The full code of this project can be downloaded from https://www.packtpub.com/books/content/support. To better understand the explanations in the following paragraphs, open the downloaded code while reading them.
Create the Xcode project
We will create a new project as we already did previously. The following are the parameters for the new project:
Project Type: Tabbed application
Product Name: VolumeController
Language: Objective-C
Devices: Universal
To set a capability for this project, perform the following steps:
1. Select the project in the left pane of Xcode.
2. Select Capabilities in the right pane.
3. Turn on the Background Modes option and select Audio and AirPlay (refer to the following picture). This allows the iOS device to keep listening to audio even when its screen goes off or the app goes into the background:
Since the structure of this project is very close to the Pet Door Locker, we can reuse part of the user interface and the code by performing the following steps:
1. Select FirstViewController.h and FirstViewController.m, right-click on them, click on Delete, and select Move to Trash.
2. With the same procedure, delete SecondViewController and Main.storyboard.
3. Open the PetDoorLocker project in Xcode.
4. Select the following files and drag and drop them into this project (refer to the following picture): BLEConnectionViewController.h, BLEConnectionViewController.m, and Main.storyboard. Ensure that Copy items if needed is selected and then click on Finish.
5. Copy the icon that was used for the BLEConnectionViewController view controller.
6. Create a new View Controller class and name it VolumeControllerViewController.
7. Open Main.storyboard and locate the main View Controller.
8. Delete all the graphical components.
9. Open the Identity Inspector and change the Class to VolumeControllerViewController.
Now, we are ready to create what we need for the new application.
Design the user interface for VolumeControllerViewController
This view controller is the main view controller of the application and contains just the following components:
The switch that turns the volume control on and off
The slider that sets the desired volume of the TV set
Once you have added the components and their layout constraints, you will end up with something that looks like the following screenshot:
Once the GUI components are linked with the code of the view controller, we end up with the following code:
@interface VolumeControllerViewController ()
@property (strong, nonatomic) IBOutlet UISlider *volumeSlider;
@end
and with:
- (IBAction)switchChanged:(UISwitch *)sender {
…
}
- (IBAction)volumeChanged:(UISlider *)sender {
…
}
Writing code for BLEConnectionViewController
Since we copied this view controller from the Pet Door Locker project, we don't need to change it apart from replacing the key that is used to store the peripheral UUID, from PetDoorLockerDevice to VolumeControllerDevice. We saved some work! Now, we are ready to work on VolumeControllerViewController, which is much more interesting.
Writing code for VolumeControllerViewController
This is the main part of the application; almost everything happens here. We need some properties, as follows:
@interface VolumeControllerViewController ()
@property (strong, nonatomic) IBOutlet UISlider *volumeSlider;
@property (strong, nonatomic) CBCentralManager *centralManager;
@property (strong, nonatomic) CBPeripheral *arduinoDevice;
@property (strong, nonatomic) CBCharacteristic *sendCharacteristic;
@property (nonatomic, strong) AVAudioEngine *audioEngine;
@property float actualVolumeDb;
@property float desiredVolumeDb;
@property float desiredVolumeMinDb;
@property float desiredVolumeMaxDb;
@property NSUInteger increaseVolumeDelay;
@end
Some of them are used to manage the Bluetooth communication and don't need much explanation. The audioEngine property is an instance of AVAudioEngine, which allows us to transform the audio signal captured by the iOS device's microphone into numeric samples. By analyzing these samples, we can obtain the power of the signal, which is directly related to the TV set's volume (the higher the volume, the greater the signal power).
Analog-to-digital conversion
The operation of transforming an analog signal into a digital sequence of numbers, which represent the amplitude of the signal itself at different times, is called analog-to-digital conversion. Arduino analog inputs perform exactly the same operation. Together with digital-to-analog conversion, it is a basic operation of digital signal processing and of storing music on our devices and playing it back with reasonable quality. For more details, visit http://bit.ly/1N1QyXp.
The actualVolumeDb property stores the actual volume of the signal measured in dB (short for decibel).
Decibel (dB)
The decibel (dB) is a logarithmic unit that expresses the ratio between two values of a physical quantity. Referring to the power of a signal, its value in decibel is calculated with the following formula:
PdB = 10 * log10(P / P0)
Here, P is the power of the signal and P0 is a reference power. You can find out more about the decibel at http://bit.ly/1LZQM0m.
We have to point out that if P < P0, the value of PdB is lower than zero. So, decibel values are usually negative, and 0dB indicates the maximum power of the signal.
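As a quick numeric illustration (values invented for the example): if the measured power P is one hundredth of the reference power P0, then PdB = 10 * log10(0.01) = -20 dB; if P equals P0, then PdB = 10 * log10(1) = 0 dB, the maximum. The closer the value gets to 0 dB, the louder the signal.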
The desiredVolumeDb property stores the desired volume measured in dB, and the user controls this value through the volume slider in the main tab of the app; desiredVolumeMinDb and desiredVolumeMaxDb are derived from desiredVolumeDb.
The most significant part of the code is in the viewDidLoad method (refer to the downloaded code). First, we instantiate the AVAudioEngine and get the default input node, which is the microphone, as follows:
_audioEngine = [[AVAudioEngine alloc] init];
AVAudioInputNode *input = [_audioEngine inputNode];
AVAudioEngine is a very powerful class, which allows digital audio signal processing. We are just going to scratch the surface of its capabilities.
AVAudioEngine
You can find out more about AVAudioEngine by visiting http://apple.co/1kExe35 (AVAudioEngine in Practice) and http://apple.co/1WYG6Tp.
AVAudioEngine and the other functions that we are going to use require that we add the following imports:
#import <AVFoundation/AVFoundation.h>
#import <Accelerate/Accelerate.h>
By installing an audio tap on the bus for our input node, we can get the numeric representation of the signal that the iOS device is listening to, as follows:
[input installTapOnBus:0 bufferSize:8192 format:[input inputFormatForBus:0] block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
  …
  …
}];
As soon as a new buffer of data is available, the code block is called and the data can be processed. Now, we can take a look at the code that transforms the audio data samples into actual commands to control the TV set:
for (UInt32 i = 0; i < buffer.audioBufferList->mNumberBuffers; i++) {
  Float32 *data = buffer.audioBufferList->mBuffers[i].mData;
  UInt32 numFrames = buffer.audioBufferList->mBuffers[i].mDataByteSize / sizeof(Float32);

  // Square all the data values
  vDSP_vsq(data, 1, data, 1, numFrames*buffer.audioBufferList->mNumberBuffers);

  // Mean value of the squared data values: power of the signal
  float meanVal = 0.0;
  vDSP_meanv(data, 1, &meanVal, numFrames*buffer.audioBufferList->mNumberBuffers);

  // Signal power in decibel
  float meanValDb = 10 * log10(meanVal);
  _actualVolumeDb = _actualVolumeDb + 0.2*(meanValDb - _actualVolumeDb);

  if (fabsf(_actualVolumeDb) < _desiredVolumeMinDb && _centralManager.state == CBCentralManagerStatePoweredOn && _sendCharacteristic != nil) {
    //printf("Decrease volume");
    NSData *data = [@"D" dataUsingEncoding:NSUTF8StringEncoding];
    [_arduinoDevice writeValue:data forCharacteristic:_sendCharacteristic type:CBCharacteristicWriteWithoutResponse];
    _increaseVolumeDelay = 0;
  }
  if (fabsf(_actualVolumeDb) > _desiredVolumeMaxDb && _centralManager.state == CBCentralManagerStatePoweredOn && _sendCharacteristic != nil) {
    _increaseVolumeDelay++;
  }
  if (_increaseVolumeDelay > 10) {
    //printf("Increase volume");
    _increaseVolumeDelay = 0;
    NSData *data = [@"I" dataUsingEncoding:NSUTF8StringEncoding];
    [_arduinoDevice writeValue:data forCharacteristic:_sendCharacteristic type:CBCharacteristicWriteWithoutResponse];
  }
}
In our case, the for cycle is executed just once, because we have just one buffer and we are using only one channel.
The power of a signal represented by N samples can be calculated by using the following formula, that is, the mean of the squared sample values:
P = (1/N) * (v(1)^2 + v(2)^2 + … + v(N)^2)
Here, v(n) is the value of the nth signal sample.
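To make the role of the two vDSP calls clearer, here is a rough plain-C equivalent of the same power computation (an illustrative sketch, not part of the project; the real code uses the Accelerate Framework because it is much faster):
#include <math.h>

// Mean of the squared samples, converted to decibel
float signalPowerDb(const float *samples, unsigned int n) {
    float sum = 0.0f;
    for (unsigned int i = 0; i < n; i++) {
        sum += samples[i] * samples[i];   // square each sample (what vDSP_vsq does)
    }
    float meanVal = sum / n;              // mean of the squares (what vDSP_meanv does)
    return 10.0f * log10f(meanVal);       // power expressed in dB
}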
Because the power calculation has to be performed in real time, we are going to use the following functions, which are provided by the Accelerate Framework:
vDSP_vsq: This function calculates the square of each input vector element
vDSP_meanv: This function calculates the mean value of the input vector elements
The Accelerate Framework
The Accelerate Framework is an essential tool for digital signal processing. It saves you time by implementing the most commonly used algorithms, and it provides implementations that are optimized in terms of memory footprint and performance. More information on the Accelerate Framework can be found at http://apple.co/1PYIKE8 and http://apple.co/1JCJWYh.
Eventually, the signal power is stored in _actualVolumeDb. When the modulus of _actualVolumeDb is lower than _desiredVolumeMinDb, the TV set's volume is too high, and we need to send a message to Arduino to reduce it. Don't forget that _actualVolumeDb is a negative number; its modulus decreases when the TV set's volume increases. Conversely, when the TV set's volume decreases, the modulus of _actualVolumeDb increases, and when it gets higher than _desiredVolumeMaxDb, we need to send a message to Arduino to increase the TV set's volume.
During pauses in dialogues, the power of the signal tends to decrease even if the volume of the speech has not changed. Without any adjustment, increase and decrease messages would be sent to the TV set continuously during dialogues. To avoid this misbehavior, we send the volume increase message only after the signal power has stayed above the threshold for some time (when _increaseVolumeDelay is greater than 10).
We can now take a look at the other view controller methods, which are not complex. When the view belonging to the view controller appears, the following method is called:
-(void)viewDidAppear:(BOOL)animated {
  [super viewDidAppear:animated];
  NSError *error = nil;
  [self connect];
  _actualVolumeDb = 0;
  [_audioEngine startAndReturnError:&error];
  if (error) {
    NSLog(@"Error %@", [error description]);
  }
}
In this method, we connect to the Arduino board and start the audio engine in order to start listening to the TV set. When the view disappears from the screen, the viewDidDisappear method is called, and we disconnect from the Arduino and stop the audio engine, as follows:
-(void)viewDidDisappear:(BOOL)animated {
  [super viewDidDisappear:animated];
  [self disconnect];
  [_audioEngine pause];
}
The method that is called when the switch is operated (switchChanged) is pretty simple:
- (IBAction)switchChanged:(UISwitch *)sender {
  NSError *error = nil;
  if (sender.on) {
    [_audioEngine startAndReturnError:&error];
    if (error) {
      NSLog(@"Error %@", [error description]);
    }
    _volumeSlider.enabled = YES;
  } else {
    [_audioEngine stop];
    _volumeSlider.enabled = NO;
  }
}
The method that is called when the volume slider changes is as follows:
- (IBAction)volumeChanged:(UISlider *)sender {
  _desiredVolumeDb = 50.*(1-sender.value);
  _desiredVolumeMaxDb = _desiredVolumeDb + 2;
  _desiredVolumeMinDb = _desiredVolumeDb - 3;
}
We just set the desired volume and the lower and upper thresholds. The other methods, which are used to manage the Bluetooth connection and data transfer, don't require any explanation because they are exactly like the ones in the previous projects.
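To see how these thresholds behave in practice, take an invented slider position of 0.6: desiredVolumeDb = 50 * (1 - 0.6) = 20 dB, so desiredVolumeMaxDb = 22 and desiredVolumeMinDb = 17. The app then asks Arduino to lower the volume (D) whenever the modulus of _actualVolumeDb drops below 17 dB (the TV is too loud) and, after the delay described above, to raise it (I) when the modulus rises above 22 dB (the TV is too quiet).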
Testing and tuning
We are now ready to test our new amazing system and spend more and more time watching TV (or taking more and more naps!). Let's perform the following procedure:
1. Load the Decode.ino sketch and open the Arduino IDE console.
2. Point your TV remote controller at the TSOP4838 receiver and press the button that increases the volume. You should see something like the following appear on the console:
For IR Scope: +4500 -4350 …
For Arduino sketch: unsigned int raw[68] = {4500,4350,600,1650,600,1600,600,1600,…};
3. Copy all the values between the curly braces.
4. Open Arduino_VolumeController.ino and paste the values into the following:
unsigned int up[68] = {9000, 4450, …..,};
5. Check whether the length of the two vectors (68 in the example) is the same and modify it, if needed.
6. Point your TV remote controller at the TSOP4838 receiver and press the button that decreases the volume.
7. Copy the values and paste them into:
unsigned int down[68] = {9000, 4400, ….,};
8. Check whether the length of the two vectors (68 in the example) is the same and modify it, if needed.
9. Upload Arduino_VolumeController.ino to Arduino and point the IR LED towards the TV set.
10. Open the iOS application, scan for the nRF8001, and then go to the main tab.
11. Tap on connect and then set the desired volume by touching the slider.
Now, you should see the blue LED and the green LED flashing, and the TV set's volume should stabilize at the desired value. To check whether everything is working properly, increase the volume of the TV set by using the remote control; you should immediately see the blue LED flashing and the volume getting lower, back to the preset value. Similarly, by decreasing the volume with the remote control, you should see the green LED flashing and the TV set's volume increasing. Take a nap, and the commercials will not wake you up!
How to go further
The following are some improvements that can be implemented in this project:
Changing channels and controlling other TV set functions
Catching handclaps to turn the TV set on or off
Adding a button to mute the TV set
Muting the TV set on receiving a phone call
Anyway, you can use the IR techniques that you have learned here for many other purposes. Take a look at the other functions provided by the IRremote library to learn about the other available options. You can find all the available functions in IRremote.h, which is stored in the IRremote library folder. On the iOS side, try to experiment with AVAudioEngine and the Accelerate Framework used to process signals.
Summary
This article focused on an easy but useful project and taught you how to use IR to transmit and receive data to and from Arduino. There are many different applications of the basic circuits and programs that you learned here. On the iOS platform, you learned the very basics of capturing sound from the device microphone and of DSP (digital signal processing). This allows you to leverage the processing capabilities of the iOS platform to expand your Arduino projects.
Resources for Article:
Further resources on this subject:
Internet Connected Smart Water Meter [article]
Getting Started with Arduino [article]
Programmable DC Motor Controller with an LCD [article]