How-To Tutorials

Combining the Blend Micro with the Bluetooth Low-Energy Module

Michael Ang
17 Apr 2015
7 min read
Have you ever wanted an easy way to connect your Arduino to your phone? The Blend Micro from RedBearLab is an Arduino-compatible development board that includes a Bluetooth Low-Energy (BLE) module for connecting with phones and computers. With BLE you can make a quick connection between your phone and Arduino to exchange simple messages like sensor readings or commands for the Arduino to execute.

[Image: Blend Micro top and bottom, showing the Bluetooth module and on-board antenna on the top side and the ATMega32u4 (Arduino) microcontroller on the bottom.]

The Blend Micro is a rather small development board, much smaller than a normal-sized Arduino. This makes it great for portable devices, just the kind we might like to connect to our phone! The larger Blend is available in the full-size Arduino format, if you have shields you'd like to connect. Bluetooth Low-Energy offers a lot of advantages over older versions of Bluetooth, particularly for battery-powered devices that aren't always transmitting data. Recent iOS / Android devices and laptops with Bluetooth 4.0 should work (there's a list of compatible devices on the Blend Micro page). I've been using the Blend Micro with my iPhone 5.

[Image: Even on a breadboard it's small enough to be portable. Coin cell battery pack on the right.]

Getting set up for development is unfortunately a bit complicated. To use the normal Arduino IDE you have to download an older version of Arduino (I'm using 1.0.6), install a few libraries, and patch a file inside the Arduino application itself (details here). Luckily that only has to be done once. A potentially easier way to get started is to use the online programming environment Codebender (quickstart instructions). One hitch with Codebender is that you may need to manually press the reset button on the Blend Micro while programming (this isn't required when programming using the normal Arduino IDE). If the Blend Micro is actively connected via Bluetooth, closing the connection on your phone or other device before programming seems to help.

Once you're set up for development, programming the board is relatively straightforward. Blinking an LED from your Arduino is cool. How about blinking an LED on an Arduino, controlled by your phone? You can load the SimpleControls example onto your Blend Micro and the BLE Controller app onto your phone (iOS, Android). Connecting to the Blend Micro is simple: with the app open, just tap "Scan" and your Blend Micro should be shown in the list of discovered devices. There's no pairing step (required by previous Bluetooth versions) so connecting is easy. The BLE Controller app lets you control a few pins on the Arduino and receive data back, all without needing any more hardware on your phone. Pretty slick!

Having the user interface to our device on our phone allows us to show a lot of information, since there's a lot of screen real estate. Since we already have our phone with us, why carry another screen for our portable device? I'm currently working on a wearable light logger that will record the intensity and color of the ambient light that people experience throughout their day. The wearable device is part of my Light Catchers project that collects our "light histories" into a public light sculpture. The wearable will have an Arduino-compatible micro-controller, RGB light sensor and data storage. For prototyping the wearable I've been using the Blend Micro to get a real-time view of the light sensor data on my phone, so I can see how the sensor reacts to different lighting conditions.
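Before looking at the light-sensor version of the code, here is what the phone-controlled LED idea might look like as a bare-bones sketch. This is a hedged sketch, not the actual SimpleControls code: the library header names and the LED pin are assumptions, and the single "command" byte is a simplification of the real app's protocol. The ble_* calls are the same ones used in the sensor code that follows.

// Hedged sketch: turn an LED on or off from the phone over BLE.
// Header names and pin number are assumptions about the RedBearLab library setup.
#include <SPI.h>
#include <boards.h>
#include <RBL_nRF8001.h>

const int LED_PIN = 13;

void setup() {
  pinMode(LED_PIN, OUTPUT);
  ble_set_name("Blinky");   // name shown when the phone scans, max. 10 chars
  ble_begin();              // start the BLE library
}

void loop() {
  while (ble_available()) {
    byte command = ble_read();                 // one byte sent from the phone
    digitalWrite(LED_PIN, command ? HIGH : LOW);
  }
  ble_do_events();          // let the library service the radio
}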
[Image: Sending live color readings under blue light.]

I started with the SimpleControls example and adapted it to send the RGB data from the light sensor. You can see the full code that runs on the Blend in my RGBLE.ino sketch. Sending the light sensor data was fairly straightforward. Let's have a quick look at the code that's needed to send data over BLE.

[Image: Color display on the iPhone. RSSI is Bluetooth signal strength.]

Inside our setup function we can set the name of our BLE device. This name will show up when we scan for the device on our phone. Then we start the BLE library.

void setup() {
  // ...
  // Set your BLE device name here, max. length 10
  ble_set_name("RGB Sensor");
  // Init. and start BLE library.
  ble_begin();
  // ...
}

Inside our loop function, we can check if data is available, and read the bytes that were sent from the phone.

void loop() {
  // ...
  // If data is ready
  while(ble_available()) {
    // read out command and data
    byte data0 = ble_read();

The RGB sensor that I'm using reads each color channel as a 10-bit value. Since the data won't fit in an 8-bit byte, the value is stored as two bytes. Sending a byte over the BLE connection is as simple as calling ble_write. I send each byte of the two-byte value separately using a little math with the shift operator (>>). I only take a reading and send the data if there is an active BLE connection.

// Check if we're connected
if (ble_connected()) {
  // Take a light reading
  // ...
  // Send reading as bytes.
  ble_write(r >> 8);
  ble_write(r);

At the end of our loop function the library needs to do some work to handle the Bluetooth data.

// Allow BLE library to send/receive data
ble_do_events();
} // end loop

The app I run on my iPhone is a customized version of the Simple Controls sample app. My app shows the received color values on-screen. RedBearLab has sample code for various platforms available on the RedBearLab github page. For prototyping my device, having an on-screen display with debugging controls is great. The small size of the Blend Micro makes it well suited for prototyping my wearable device. Range seems to be fairly limited (think inside a room rather than between rooms), but I haven't done anything to optimize the antenna placement, so your mileage may vary.

[Image: Color sensor prototype "in the field" on a sunny day at Tempelhofer Feld in Berlin.]

Battery life seems quite promising. I'm running my prototype off two 3V lithium coin cells and get several hours of life even before doing power optimization. Some Arduino boards have a power LED that's always on while the board is powered. That LED might draw 20mA of current, which is a lot when you consider that good coin cells might provide 240mAh of capacity in the best case (typical datasheet). With the Blend Micro it's easy to turn off all the onboard LEDs (see the RGBLE sketch for details). I measured the current consumption of my prototype at around 14-16mA, with peaks around 20mA when starting the Bluetooth connection to my phone. It's impressive to be sending data over the air using less power than you might use to light an LED! Accurately measuring the power consumption can be tricky, since the radio transmissions can happen in short bursts. Probably the topic of another post!

Other than some initial difficulty setting up the development environment, programming with the Blend Micro is pretty smooth.
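As a rough sanity check on those figures (treating the quoted numbers as approximate): 240 mAh of capacity divided by a 16 mA average draw gives 240 / 16 = 15 hours of theoretical runtime, so "several hours" from an unoptimized prototype, once radio bursts and other losses are taken into account, is about what you would expect.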
Connecting your Arduino to your phone over a low-power radio link opens up a lot of possibilities when you consider that your phone probably has a large touchscreen, cellular Internet connection, GPS and more. Once you try an Arduino that can wirelessly talk to your phone and computer, you'll always want it to do that!

Resources

RedBearLab Blend Micro
RGBLE Arduino sketch
(We Are) Light Catchers - wearable light logger

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering and human experience. His works use technology to enhance our understanding of natural phenomena, modulate social interaction, and bridge the divide between the virtual and physical. His Light Catchers workshops and public installations will take place in Germany for the International Year of Light 2015.

Using networking for distributed computing with openFrameworks

Packt
16 Apr 2015
16 min read
In this article by Denis Perevalov and Igor (Sodazot) Tatarnikov, authors of the book openFrameworks Essentials, we will investigate how to create a distributed project consisting of several programs working together and communicating with each other via networking. (For more resources related to this topic, see here.)

Distributed computing with networking

Networking is a way of sending and receiving data between programs, which can run on a single computer or on different computers and mobile devices. Using networking, it is possible to split a complex project into several programs working together. There are at least three reasons to create distributed projects:

The first reason is splitting to obtain better performance. For example, when creating a big interactive wall with cameras and projectors, it is possible to use two computers. The first computer (tracker) will process data from cameras and send the result to the second computer (render), which will render the picture and output it to projectors.

The second reason is creating a heterogeneous project using different development languages. For example, consider a project that generates a real-time visualization of data captured from the Web. It is easy to capture and analyze the data from the Web using a programming language like Python, but it is hard to create a rich, real-time visualization with it. On the opposite side, openFrameworks is good for real-time visualization but is not very elegant when dealing with data from the Web. So, it is a good idea to build a project consisting of two programs. The first, a Python program, will capture data from the Web, and the second, an openFrameworks program, will perform rendering.

The third reason is synchronization with, and external control of, one program by other programs/devices. For example, a video synthesizer can be controlled from other computers and mobiles via networking.

Networking in openFrameworks

openFrameworks' networking capabilities are implemented in two core addons: ofxNetwork and ofxOsc. To use an addon in your project, you need to include it in the new project when creating a project using Project Generator, or by including the addon's headers and libraries into the existing project manually. If you need to use only one particular addon, you can use an existing addon's example as a sketch for your project.

The ofxNetwork addon

The ofxNetwork addon contains classes for sending and receiving data using the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). The difference between these protocols is that TCP guarantees receiving data without losses and errors but requires the establishment of a preliminary connection (known as a handshake) between a sender and a receiver. UDP doesn't require the establishment of any preliminary connection but also doesn't guarantee delivery and correctness of the received data. Typically, TCP is used in tasks where data needs to be received without errors, such as downloading a JPEG file from a web server. UDP is used in tasks where data should be received in real time at a fast rate, such as receiving a game state 60 times per second in a networking game. The ofxNetwork addon's classes are quite generic and allow the implementation of a wide range of low-level networking tasks. In this article, we don't explore it in detail.

The ofxOsc addon

The ofxOsc addon is intended for sending and receiving messages using the Open Sound Control (OSC) protocol.
Messages of this protocol (OSC messages) are intended to store control commands and parameter values. This protocol is very popular today and is implemented in many VJ and multimedia programs and software for live electronic sound performance. All the popular programming tools support OSC too. The OSC protocol can use UDP or TCP for data transmission. Most often, as in the openFrameworks implementation, UDP is used. See details of the OSC protocol at opensoundcontrol.org/spec-1_0.

The main classes of ofxOsc are the following:

ofxOscSender: This sends OSC messages
ofxOscReceiver: This receives OSC messages
ofxOscMessage: This class is for storing a single OSC message
ofxOscBundle: This class is for storing several OSC messages, which can be sent and received as a bundle

Let's add the OSC receiver to our VideoSynth project and then create a simple OSC sender, which will send messages to the VideoSynth project.

Implementing the OSC messages receiver

To implement the receiving of OSC messages in the VideoSynth project, perform the following steps:

Include the ofxOsc addon's header in the ofApp.h file by inserting the following line after the #include "ofxGui.h" line:

#include "ofxOsc.h"

Add a declaration of the OSC receiver object to the ofApp class:

ofxOscReceiver oscReceiver;

Set up the OSC receiver in setup():

oscReceiver.setup( 12345 );

The argument of the setup() method is the networking port number. After executing this command, oscReceiver begins listening on this port for incoming OSC messages. Each received message is added to a special message queue for further processing. A networking port is a number from 0 to 65535. Ports from 10000 to 65535 normally are not used by existing operating systems, so you can use them as port numbers for OSC messages. Note that two programs receiving networking data and working on the same computer must have different port numbers.

Add the processing of incoming OSC messages to update():

while ( oscReceiver.hasWaitingMessages() ) {
    ofxOscMessage m;
    oscReceiver.getNextMessage( &m );
    if ( m.getAddress() == "/pinchY" ) {
        pinchY = m.getArgAsFloat( 0 );
    }
}

The first line is a while loop, which checks whether there are unprocessed messages in the message queue of oscReceiver. The second line declares an empty OSC message m. The third line pops the next message from the message queue and copies it to m. Now, we can process this message.

Any OSC message consists of two parts: an address and (optionally) one or several arguments. An address is a string beginning with the / character. An address denotes the name of a control command or the name of a parameter that should be adjusted. Arguments can be float, integer, or string values, which specify some parameters of the command. In our example, we want to adjust the pinchY slider with OSC commands, so we expect to have an OSC message with the address /pinchY and a first argument holding its float value. Hence, in the fourth line, we check whether the address of the m message is equal to /pinchY. If this is true, in the fifth line, we get the first argument of the message (the argument with index 0) and set the pinchY slider to this value. Of course, we could use any other address instead of /pinchY (for example, /val), but normally, it is convenient to have the address similar to the parameter's name. It is easy to control other sliders with OSC.
For example, to add control of the extrude slider, just add the following code:

if ( m.getAddress() == "/extrude" ) {
    extrude = m.getArgAsFloat( 0 );
}

After running the project, nothing new happens; it works as always. But now, the project is listening for incoming OSC messages on port 12345. To check this, let's create a tiny openFrameworks project that sends OSC messages.

Creating an OSC sender with openFrameworks

Let's create a new project, OscOF, that contains a GUI panel with one slider, and send the slider's value via OSC to the VideoSynth project. Here, we assume that the OSC sender and receiver run on the same computer. See the details on running the sender on a separate computer in the upcoming Sending OSC messages between two separate computers section. Now perform the following steps:

Create a new project using Project Generator. Namely, start Project Generator, set the project's name to OscOF (that means OSC with openFrameworks), and include the ofxGui and ofxOsc addons in the newly created project. The ofxGui addon is needed to create the GUI slider, and the ofxOsc addon is needed to send OSC messages.

Open this project in your IDE.

Include both addons' headers in the ofApp.h file by inserting the following lines (after the #include "ofMain.h" line):

#include "ofxGui.h"
#include "ofxOsc.h"

Add the declarations of the OSC sender object, the GUI panel, and the GUI slider to the ofApp class declaration:

ofxOscSender oscSender;
ofxPanel gui;
ofxFloatSlider slider;
void sliderChanged( float &value );

The last line declares a new function, which will be called by openFrameworks when the slider's value is changed. This function will send the corresponding OSC message. The symbol & before value means that the value argument is passed to the function as a reference. Using a reference here is not important for us, but is required by ofxGui; please see the information on the notion of a reference in the C++ documentation.

Set up the OSC sender, the GUI panel with the slider, and the project's window title and size by adding the following code to setup():

oscSender.setup( "localhost", 12345 );
slider.addListener( this, &ofApp::sliderChanged );
gui.setup( "Parameters" );
gui.add( slider.setup("pinchY", 0, 0, 1) );
ofSetWindowTitle( "OscOF" );
ofSetWindowShape( 300, 150 );

The first line starts the OSC sender. Here, the first argument specifies the IP address to which the OSC sender will send its messages. In our case, it is "localhost". This means the sender will send data to the same computer on which the sender runs. The second argument specifies the networking port, 12345. The difference between setting up the OSC sender and receiver is that we need to specify the address and port for the sender, and not only the port. Also, after starting, the sender does nothing until we give it the explicit command to send an OSC message. The second line starts listening to the slider's value changes. The first and second arguments of the addListener() command specify the object (this) and its member function (sliderChanged), which should be called when the slider is changed. The remaining lines set up the GUI panel, the GUI slider, and the project's window title and shape.

Now, add the sliderChanged() function definition to ofApp.cpp:

void ofApp::sliderChanged( float &value ) {
    ofxOscMessage m;
    m.setAddress( "/pinchY" );
    m.addFloatArg( value );
    oscSender.sendMessage( m );
}

This function is called when the slider value is changed, and the value parameter is its new value.
The first three lines of the function create an OSC message m, set its address to /pinchY, and add a float argument equal to value. The last line sends this OSC message. As you may see, the m message's address (/pinchY) coincides with the address implemented in the previous section, which is expected by the receiver. Also, the receiver expects that this message has a float argument, and that is true too! So, the receiver will properly interpret our messages and set its pinchY slider to the desired value.

Finally, add the command to draw the GUI to draw():

gui.draw();

On running the project, you will see its window, consisting of a GUI panel with a slider, as shown in the following screenshot:

[Screenshot: This is the OSC sender made with openFrameworks]

Don't stop this project for a while. Run the VideoSynth project and change the pinchY slider's value in the OscOF window using the mouse. The pinchY slider in VideoSynth should change accordingly. This means that the OSC transmission between the two openFrameworks programs works. If you are not interested in sending data between two separate computers, feel free to skip the following section.

Sending OSC messages between two separate computers

We have checked passing OSC messages between two programs that run on the same computer. Now let's consider a situation when an OSC sender and an OSC receiver run on two separate computers connected to the same Local Area Network (LAN) using Ethernet or Wi-Fi. If you have two laptops, most probably they are already connected to the same networking router and hence are in the same LAN. To make an OSC connection work in this case, we need to change the "localhost" value in the sender's setup command to the local IP address of the receiver's computer. Typically, this address has a form like "192.168.0.2", or it could be a name, for example, "LAPTOP3". You can get the receiver's computer IP address by opening the properties of your network adapter, or by executing the ifconfig command in the terminal window (for OS X or Linux) or ipconfig in the command prompt window (for Windows).

Connection troubleshooting

If you set the IP address in the sender's setup, but OSC messages from the OSC sender don't come to the OSC receiver, then it could be caused by the network firewall or antivirus software, which blocks transmitting data over our 12345 port. So please check the firewall and antivirus settings. To make sure that the connection between the two computers exists, use the ping command in the terminal (or the command prompt) window.

Creating OSC senders with TouchOSC and Python

At this point, we have created the OSC sender using openFrameworks and sent its data out to the VideoSynth project. But it's easy to create an OSC sender using other programming tools too. Such an opportunity can be useful for you in creating complex projects. So, let's show how to create an OSC sender on a mobile device using the TouchOSC app and also create simple senders using the Python and Max/MSP languages. If you are not interested in sending OSC from mobile devices or in Python or Max/MSP, feel free to skip the corresponding sections.

Creating an OSC sender for a mobile device using the TouchOSC app

It is very handy to control your openFrameworks project from a mobile device (or devices) using the OSC protocol. You can create a custom OSC sender by yourself, or you can use special apps made for this purpose. One such application is TouchOSC.
It's a paid application available for iOS (see hexler.net/software/touchosc) and Android (see hexler.net/software/touchosc-android). Working with TouchOSC consists of four steps: creating the GUI panel (called a layout) on the laptop, uploading it to a mobile device, setting up the OSC receiver's address and port, and working with the layout. Let's consider them in detail:

To create the layout, download, unzip, and run a special program, TouchOSC Editor, on a laptop (it's available for OS X, Windows, and Linux). Add the desired GUI elements to the layout by right-clicking on the layout.

When the layout is ready, upload it to a mobile device by running the TouchOSC app on the mobile and pressing the Sync button in TouchOSC Editor.

In the TouchOSC app, go to the settings and set up the OSC receiver's IP address and port number.

Next, open the created layout by choosing it from the list of all the existing layouts. Now, you can use the layout's GUI elements to send OSC messages to your openFrameworks project (and, of course, to any other OSC-supporting software).

Creating an OSC sender with Python

In this section, we will create a project that sends OSC messages using the Python language. Here, we assume that the OSC sender and receiver run on the same computer. See the details on running the sender on a separate computer in the previous Sending OSC messages between two separate computers section. Python is a free, interpreted language available for all operating systems. It is extremely popular nowadays in various fields, including teaching programming, developing web projects, and performing computations in natural sciences. Using Python, you can easily capture information from the Web and social networks (using their APIs) and send it to openFrameworks for further processing, such as visualization or sonification, that is, converting data to a picture or sound. Using Python, it is quite easy to create GUI applications, but here we consider creating a project without a GUI. Perform the following steps to install Python, create an OSC sender, and run it:

Install Python from www.python.org/downloads (the current version is 3.4).

Download the python-osc library from pypi.python.org/pypi/python-osc and unzip it. This library implements OSC protocol support in Python.

To install this library, open the terminal (or command prompt) window, go to the folder where you unzipped python-osc and type the following:

python setup.py install

If this doesn't work, type the following:

python3 setup.py install

Python is ready to send OSC messages. Now let's create the sender program. Using your preferred code or text editor, create the OscPython.py file and fill it with the following code:

from pythonosc import udp_client
from pythonosc import osc_message_builder
import time

if __name__ == "__main__":
    oscSender = udp_client.UDPClient("localhost", 12345)
    for i in range(10):
        m = osc_message_builder.OscMessageBuilder(address="/pinchY")
        m.add_arg(i*0.1)
        oscSender.send(m.build())
        print(i)
        time.sleep(1)

The first three lines import the udp_client, osc_message_builder, and time modules for sending the UDP data (we will send OSC messages using UDP), creating OSC messages, and working with time, respectively. The if __name__ == "__main__": line is generic for Python programs and denotes the part of the code that will be executed when the program runs from the command line. The first line of the executed code creates the oscSender object, which will send the UDP data to the localhost IP address and the 12345 port.
The second line starts a for loop, where i runs over the values 0, 1, 2, …, 9. The body of the loop consists of commands for creating an OSC message m with address /pinchY and argument i*0.1, and sending it by OSC. The last two lines print the value i to the console and delay the execution for one second.

Open the terminal (or command prompt) window, go to the folder with the OscPython.py file, and execute it with the python OscPython.py command. If this doesn't work, use the python3 OscPython.py command. The program starts and will send 10 OSC messages with the /pinchY address and the 0.0, 0.1, 0.2, …, 0.9 argument values, with 1 second of pause between the sent messages. Additionally, the program prints values from 0 to 9, as shown in the following screenshot:

[Screenshot: This is the output of an OSC sender made with Python]

Run the VideoSynth project and start our Python sender again. You will see how its pinchY slider gradually changes from 0.0 to 0.9. This means that OSC transmission from a Python program to an openFrameworks program works.

Summary

In this article, we learned how to create distributed projects using the OSC networking protocol. At first, we implemented receiving OSC in our openFrameworks project. Next, we created a simple OSC sender project with openFrameworks. Then, we considered how to create an OSC sender on mobile devices using TouchOSC and also how to build senders using the Python language. Now, we can control the video synthesizer from other computers or mobile devices via networking.

How to create a simple First Person Puzzle Game

Travis and Denny
16 Apr 2015
6 min read
So you want to make a first person puzzle game, but you're not sure where to start. Well, this post can hopefully give you a heads-up on how to start and create a game. When creating this post we wanted to make a complex system of colored keys and locked doors, but we decided on a simple trigger system moving an elevator. That may seem really simplistic, but it encompasses pretty much everything you would need to make the harder colored-key scene described, while keeping this lesson short. Let's begin.

First create a project in Unity and name it something simple, like FirstPersonPuzzle. Include the Character Controller package, as this package contains the FirstPersonController that we are going to use in this project! If this is your first time using Unity, there are some great scripts packaged with Unity. Two examples are SmoothFollow.js and SmoothLookAt.js. The first has the camera follow a specific game object you designate, without the choppy look that can come from just snapping the camera to the object. SmoothLookAt will have the camera look at a designated game object, but without a quick-cut feeling when the script is run. There are also C# versions of almost all of these scripts that you can find online through the Unity community. We don't have enough time to get into them, but we encourage you to try them for yourself!

Next we are going to make a simple plane to walk on, and give it the following information inside the transform component. Click the plane in the hierarchy view, and rename it to Ground. Hmm, it's a little dark in here, so let's quickly throw in a directional light, just to spice the place up a little bit, and put it in a nice place above us to keep it out of the way while building the scene. First we will make a material, and then drag and drop that material onto the plane in the hierarchy. Delete the Main Camera found in the hierarchy. Your hierarchy should now look like this.

Now drag the First Person Controller from the Standard Assets folder into the hierarchy. Put the controller at the following transform position and you should be ready to go walking around the scene by hitting the play button! Just remember to switch the tag of the Controller to the Player tag, as seen in the screenshot.

Next, we're going to create a little elevator script. We know: why an elevator in a puzzle game? Well, I want to show a little bit of how moving objects look, as well as triggering an action, and I wanted the single most ridiculous, jaw-dropping, out-of-this-world way to do so. Unfortunately it didn't work... so we put in an elevator.

Create a cube in the hierarchy and give it the following transform information. Now let's make another material, make it red, and place it onto the cube we just made. Rename that cube to "Elevator". Your scene should look like this:

Create another cube in the hierarchy and call it Platform, and give it the following transform attributes. Okay, lastly for objects, create another cube, and name it ElevatorTrigger. For ease, in the hierarchy, drag the ElevatorTrigger cube we created into the Elevator object, making Elevator now the parent of ElevatorTrigger as shown. Now go to the inspector, right-click the Mesh Renderer and remove the component. This will essentially leave our object invisible. Also check the box in the Box Collider called Is Trigger so that this collider will watch for trigger enters. We're going to be using this for our coding. Make sure all transform attributes are as given.
Now click create and select Create --> C# Script, and name it Elevator. This script is going to be our first and only script. Explaining code is always very hard, so we're going to try and do our best without being boring. First, the lerpToPosition and StartLerp functions are taken almost word for word from the Unity documentation for Vector3.Lerp. We did this so as to not have to explain Lerping heavily, as it's actually a fairly complex function, but all you have to know is that we are going to take a startPosition (where the elevator is currently) and an endPosition (where the elevator is going to go) and then have it travel there over a certain amount of time (which will be calculated using the speed we want to go). The magic really happens in the OnTriggerEnter method though. When a collider enters our trigger, this method is instantly called once. In it we check to see if what collided with us is the player. If so, we allow the lerp to begin. Lastly, in CheckLerpComplete, we do a little cleanup: once the position of the elevator is at the endPosition, we stop the Lerp. This will clean up a little overhead for Unity. (A sketch of what such a script might look like is included at the end of this post.) Drag this script onto the ElevatorTrigger object, give the attributes the following values, and your scene should be ready to go!

Just remember, learning is all about failing, over and over, so don't become discouraged when things fail or code you have written just doesn't seem to work. That is part of the building process, and it would be a lie to tell you that the code we wrote for this article worked the first time. It didn't, but you iterate, change the idea to something better, and come out with a working product at the end.

About the Authors

Denny is a Mobile Application Developer at Canadian Tire Development Operations. While working, Denny regularly uses Unity to create in-store experiences, but also works on other technologies like Famous, Phaser.IO, LibGDX, and CreateJS when creating game-like apps. In his own words: "I also enjoy making non-game mobile apps, but who cares about that, am I right?"

Travis is a Software Engineer, living in the bitter region of Winnipeg, Canada. His work and hobbies include Game Development with Unity or Phaser.IO, as well as Mobile App Development. He can enjoy a good video game or two, but only if he knows he'll win!
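For reference, here is a hedged reconstruction of the Elevator script described above. The original listing and inspector screenshots are not included in this extract, so the field names, default speed, and the Player tag check are assumptions; only the overall shape (StartLerp and lerpToPosition built from the Vector3.Lerp documentation example, OnTriggerEnter, and a CheckLerpComplete cleanup) follows the description.

using UnityEngine;

public class Elevator : MonoBehaviour
{
    // Assumed public fields; the original post sets its values in the Inspector.
    public Transform elevator;     // assumed to be set to the Elevator object
    public Vector3 endPosition;    // where the elevator should travel to
    public float speed = 1.0f;     // units per second (assumed default)

    private Vector3 startPosition;
    private float startTime;
    private float journeyLength;
    private bool lerping = false;

    void OnTriggerEnter(Collider other)
    {
        // Called once when a collider enters the trigger; only react to the player.
        if (other.CompareTag("Player"))
        {
            StartLerp();
        }
    }

    void StartLerp()
    {
        // Record where we started and how far we have to go,
        // following the Vector3.Lerp example in the Unity documentation.
        startPosition = elevator.position;
        startTime = Time.time;
        journeyLength = Vector3.Distance(startPosition, endPosition);
        lerping = true;
    }

    void Update()
    {
        if (lerping)
        {
            lerpToPosition();
            CheckLerpComplete();
        }
    }

    void lerpToPosition()
    {
        // Move a fraction of the way each frame, based on elapsed time and speed.
        float distCovered = (Time.time - startTime) * speed;
        float fraction = distCovered / journeyLength;
        elevator.position = Vector3.Lerp(startPosition, endPosition, fraction);
    }

    void CheckLerpComplete()
    {
        // Once the elevator reaches the end position, stop lerping to save overhead.
        if (elevator.position == endPosition)
        {
            lerping = false;
        }
    }
}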

Text Mining with R: Part 2

Robi Sen
16 Apr 2015
4 min read
In Part 1, we covered the basics of doing text mining in R by selecting data, preparing it, cleaning it, and then performing various operations on it to visualize that data. In this post we look at a simple use case showing how we can derive real meaning and value from a visualization, by seeing how a simple word cloud can help you understand the impact of an advertisement.

Building the document matrix

A common technique in text mining is using a matrix of document terms, called a document term matrix. A document term matrix is simply a matrix where columns are terms and rows are documents, and the cells contain the occurrence counts of specific terms within each document. If you reverse the order and have terms as rows and documents as columns, it's called a term document matrix. For example, let's say we have the two documents:

D1 = "I like cats"
D2 = "I hate cats"

Then the document term matrix would look like:

     I  like  hate  cats
D1   1  1     0     1
D2   1  0     1     1

For our project, to make a document term matrix in R all you need to do is use DocumentTermMatrix() like this:

tdm <- DocumentTermMatrix(mycorpus)

You can see information on your document term matrix by using print:

print(tdm)
<<DocumentTermMatrix (documents: 4688, terms: 18363)>>
Non-/sparse entries: 44400/86041344
Sparsity            : 100%
Maximal term length : 65
Weighting           : term frequency (tf)

Next, we need to sum up all the values in each term column so that we can derive the frequency of each term's occurrence. We also want to sort those values from highest to lowest. You can use this code:

m <- as.matrix(tdm)
v <- sort(colSums(m), decreasing=TRUE)

Next we will use names() to pull each term object's name, which in our case is a word. Then we want to build a dataframe from our words associated with their frequency of occurrence. Finally we want to create our word cloud, but remove any terms that occur fewer than 45 times to reduce clutter in our word cloud. You could also use max.words to limit the total number of words in your word cloud. So your final code should look like this:

words <- names(v)
d <- data.frame(word=words, freq=v)
wordcloud(d$word, d$freq, min.freq=45)

If you run this in RStudio you should see something like the figure, which shows the words with the highest occurrence in our corpus. The wordcloud object automatically scales the drawn words by the size of their frequency value. From here you can do a lot with your word cloud, including changing the scale, associating color with various values, and much more. You can read more about wordcloud here.

While word clouds are often used on the web for things like blogs, news sites, and other similar use cases, they have real value for data analysis beyond just being visual indicators for users to find terms of interest. For example, if you look at the word cloud we generated, you will notice that one of the most popular terms mentioned in tweets is chocolate. Doing a short inspection of our CSV document for the term chocolate, we find a lot of people mentioning the word in a variety of contexts, but one of the most common is in relation to a specific Super Bowl ad. For example, here is a tweet:

Alexalabesky 41673.39 Chocolate chips and peanut butter 0 0 0 Unknown Unknown Unknown Unknown Unknown

This appeared after the airing of this advertisement from Butterfinger. So even with this simple R code we can generate real meaning from social media, which is the measurable impact of an advertisement during the Super Bowl.
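As a side note, that kind of quick inspection can also be done directly in R rather than in the CSV itself. This is only a hedged sketch: it assumes the tweets were read into a data frame called tweets with a text column, neither of which is specified in the original post.

# find tweets that mention "chocolate" (case-insensitive)
chocolate_tweets <- tweets[grepl("chocolate", tweets$text, ignore.case = TRUE), ]
head(chocolate_tweets$text)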
Summary

In this post we looked at a simple use case showing how we can derive real meaning and value from a visualization, by seeing how a simple word cloud can help you understand the impact of an advertisement.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

Evenly Spaced Views with Auto Layout in iOS

Joe Masilotti
16 Apr 2015
5 min read
When the iPhone first came out there was only one screen size to worry about: 320 x 480. Then the Retina screen was introduced, doubling the screen's resolution. Apple quickly introduced the iPhone 5 and added an extra 88 points to the bottom. With the most recent line of iPhones, two more sizes were added to the mix. Before even mentioning the iPad line, that is already five different combinations of heights and widths to account for.

To help remedy this growing number of sizes and resolutions, Apple introduced Auto Layout with iOS 6. Auto Layout is a dynamic way of laying out views with constraints and rules to let the content fit on multiple screen sizes; think "responsive" for mobile. Lots of layouts are possible with Auto Layout but some require an extra bit of work. One of the more common, albeit tricky, arrangements is to have evenly spaced elements. Having the view scale up to different resolutions and look great on all devices isn't hard and can be done either in Interface Builder or manually in code. Let's walk through how to evenly space views with Auto Layout using Xcode's Interface Builder.

Using Interface Builder

The easiest way to play around and test layout in IB is to create a new Single View Application iOS project.

Open Main.storyboard and select ViewController on the left. Don't worry that it is showing a square view, since we will be laying everything out dynamically. The first addition to the view will be the three `UIView`s we will be evenly spacing. Add them along the view from left to right and assign different colors to each. This will make it easier to distinguish them later. Don't worry about where they are; we will fix the layout soon enough.

Spacer View Layout

Ideally we would be able to add constraints that evenly space these out directly. Unfortunately, you can not set *equals* restrictions on constraints, only on views. What that means is we have to create spacer views in between our content and then set equal constraints on those. Add four more views, one in each gap between the edges and the content views.

Before we add our first constraint, let's name each view so we can have a little more context when adding their attributes. One of the most frustrating things when working with Auto Layout is seeing the little red arrow telling you something is wrong. Let's try to incrementally add constraints and get back to a working state as quickly as possible. The first item we want to add will constrain the Left Content view using the spacer. Select the Left Spacer and add left 20, top 20, and bottom 20 constraints.

To fix this first error we need to assign a width to the spacer. While we will be removing it later, it makes sense to always have a clean slate when moving on to another view. Add in a width (50) constraint and let IB automatically update its frame.

Now do the same thing to the Right Spacer.

Content View Layout

We will remove the width constraints when everything else is working correctly. Consider them temporary placeholders for now. Next let's lay out the Left Content view. Add a left 0, top 20, bottom 20, width 20 constraint to it.

Follow the same method on the Right Content view.

Twice more, follow the same procedure for the Middle Spacer Views, giving them left/right 0, top 20, bottom 20, width 50 constraints.

Finally, let's constrain the Middle Content view. Add left 0, top 20, right 0, bottom 20 constraints to it and lay it out.

Final Constraints

Remember when I said it was tricky? Maybe a better way to describe this process is long and tedious.
All of the setup we have done so far was to put us in a good position to add the constraints we actually want. If you look at the view it doesn't look very special, and it won't even resize the right way yet. To start fixing this, we bring in the magic constraint of this entire example: Equal Widths on the spacer views. Go ahead and delete the four explicit Width constraints on the spacer views and add an Equal Width constraint to each. Select them all at the same time, then add the constraint so they work off of each other.

Finally, set explicit widths on the three content views. This is where you can start customizing the layout to have it look the way you want. For my view I want the three views to be 75 points wide, so I removed all of the Width constraints and added them back in for each. Now set the background color of the four spacer views to clear and hide them. Running the app on different size simulators will produce the same result: the three content views remain the same width and stay evenly spaced out along the screen. Even when you rotate the device, the views remain spaced out correctly.

Try playing around with different explicit widths of the content views. The same technique can be used to create very dynamic layouts for a variety of applications. For example, this procedure can be used to create a table cell with an image on the left, text in the middle, and a button on the right. Or it can make one row in a calculator that sizes to fit the screen of the device. What are you using it for?

About the author

Joe Masilotti is a test-driven iOS developer living in Brooklyn, NY. He contributes to open-source testing tools on GitHub and talks about development, cooking, and craft beer on Twitter.
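The same equal-width spacer idea can also be expressed entirely in code, as the introduction notes. Here is a hedged Swift sketch using the layout-anchor API (the anchor API arrived after this article's Xcode era, and the view names, sizes, and container setup are arbitrary assumptions, not from the original post):

import UIKit

// Hedged sketch: three fixed-width content views separated by four spacer
// views that share a single Equal Width constraint chain.
func layoutEvenlySpaced(in container: UIView) {
    let contents: [UIView] = (0..<3).map { _ in
        let v = UIView()
        v.translatesAutoresizingMaskIntoConstraints = false
        container.addSubview(v)
        v.widthAnchor.constraint(equalToConstant: 75).isActive = true
        v.topAnchor.constraint(equalTo: container.topAnchor, constant: 20).isActive = true
        v.bottomAnchor.constraint(equalTo: container.bottomAnchor, constant: -20).isActive = true
        return v
    }
    let spacers: [UIView] = (0..<4).map { _ in
        let v = UIView()
        v.translatesAutoresizingMaskIntoConstraints = false
        v.isHidden = true   // spacers stay invisible but still participate in layout
        container.addSubview(v)
        v.topAnchor.constraint(equalTo: container.topAnchor, constant: 20).isActive = true
        v.bottomAnchor.constraint(equalTo: container.bottomAnchor, constant: -20).isActive = true
        return v
    }
    // All spacers share the same width; this is what keeps the content evenly spaced.
    for s in spacers.dropFirst() {
        s.widthAnchor.constraint(equalTo: spacers[0].widthAnchor).isActive = true
    }
    // Chain: |[spacer][content][spacer][content][spacer][content][spacer]|
    let chain = [spacers[0], contents[0], spacers[1], contents[1],
                 spacers[2], contents[2], spacers[3]]
    chain.first!.leadingAnchor.constraint(equalTo: container.leadingAnchor).isActive = true
    chain.last!.trailingAnchor.constraint(equalTo: container.trailingAnchor).isActive = true
    for (left, right) in zip(chain, chain.dropFirst()) {
        right.leadingAnchor.constraint(equalTo: left.trailingAnchor).isActive = true
    }
}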

Visualization

Packt
15 Apr 2015
29 min read
Humans are visual creatures and have evolved to be able to quickly notice the meaning when information is presented in certain ways that cause the wiring in our brains to have the light bulb of insight turn on. This "aha" can often be reached very quickly, given the correct tools, instead of through tedious numerical analysis. Tools for data analysis, such as pandas, take advantage of this by letting the user quickly and iteratively take data, process it, and visualize the meaning. Often, much of what you will do with pandas is massaging your data to be able to visualize it in one or more visual patterns, in an attempt to get to "aha" by simply glancing at the visual representation of the information.

In this article by Michael Heydt, author of the book Learning pandas, we will cover common patterns in visualizing data with pandas. It is not meant to be exhaustive in coverage. The goal is to give you the required knowledge to create beautiful data visualizations on pandas data quickly and with very few lines of code. (For more resources related to this topic, see here.)

This article is presented in three sections. The first introduces you to the general concepts of programming visualizations with pandas, emphasizing the process of creating time-series charts. We will also dive into techniques to label axes and create legends, colors, line styles, and markers.

The second part of the article will then focus on the many types of data visualizations commonly used in pandas programs and data science, including:

Bar plots
Histograms
Box and whisker charts
Area plots
Scatter plots
Density plots
Scatter plot matrixes
Heatmaps

The final section will briefly look at creating composite plots by dividing plots into subparts and drawing multiple plots within a single graphical canvas.

Setting up the IPython notebook

The first step in plotting with pandas data is to include the appropriate libraries, primarily matplotlib. The examples in this article will all be based on the following imports, where the plotting capabilities are from matplotlib, which will be aliased with plt:

In [1]:
# import pandas, numpy and datetime
import numpy as np
import pandas as pd
# needed for representing dates and times
import datetime
from datetime import datetime
# Set some pandas options for controlling output
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 10)
pd.set_option('display.max_rows', 10)
# used for seeding random number sequences
seedval = 111111
# matplotlib
import matplotlib as mpl
# matplotlib plotting functions
import matplotlib.pyplot as plt
# we want our plots inline
%matplotlib inline

The %matplotlib inline line is the statement that tells matplotlib to produce inline graphics. This will make the resulting graphs appear either inside your IPython notebook or IPython session. All examples will seed the random number generator with 111111, so that the graphs remain the same every time they run.

Plotting basics with pandas

The pandas library itself performs data manipulation. It does not provide data visualization capabilities itself. The visualization of data in pandas data structures is handed off by pandas to other robust visualization libraries that are part of the Python ecosystem, most commonly matplotlib, which is what we will use in this article. All of the visualizations and techniques covered in this article can be performed without pandas. These techniques are all available independently in matplotlib.
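As a quick illustration of that last point, here is a hedged, pandas-free sketch that draws a simple line chart with matplotlib alone (the data here is arbitrary; outside a notebook you would also call plt.show()):

import numpy as np
import matplotlib.pyplot as plt

# an arbitrary random walk, plotted without going through pandas
y = np.random.randn(200).cumsum()
plt.plot(y)
plt.show()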
pandas tightly integrates with matplotlib, and by doing this, it is very simple to go directly from pandas data to a matplotlib visualization without having to work with intermediate forms of data. pandas does not draw the graphs, but it will tell matplotlib how to draw graphs using pandas data, taking care of many details on your behalf, such as automatically selecting Series for plots, labeling axes, creating legends, and defaulting color. Therefore, you often have to write very little code to create stunning visualizations.

Creating time-series charts with .plot()

One of the most common data visualizations is of time-series data. Visualizing a time series in pandas is as simple as calling .plot() on a DataFrame or Series object. To demonstrate, the following creates a time series representing a random walk of values over time, akin to the movements in the price of a stock:

In [2]:
# generate a random walk time-series
np.random.seed(seedval)
s = pd.Series(np.random.randn(1096),
              index=pd.date_range('2012-01-01', '2014-12-31'))
walk_ts = s.cumsum()
# this plots the walk - just that easy :)
walk_ts.plot();

The ; character at the end suppresses the generation of an IPython out tag, as well as the trace information.

It is a common practice to execute the following statement to produce plots that have a richer visual style. This sets a pandas option that makes resulting plots have a shaded background and what is considered a slightly more pleasing style:

In [3]:
# tells pandas plots to use a default style
# which has a background fill
pd.options.display.mpl_style = 'default'
walk_ts.plot();

The .plot() method on pandas objects is a wrapper function around the matplotlib library's plot() function. It makes plots of pandas data very easy to create. It is coded to know how to use the data in the pandas objects to create the appropriate plots for the data, handling many of the details of plot generation, such as selecting series, labeling, and axes generation. In this situation, the .plot() method determines that, as the Series contains dates for its index, the x axis should be formatted as dates, and it selects a default color for the data.

This example used a single series, and the result would be the same using a DataFrame with a single column. As an example, the following produces the same graph with one small difference: it adds a legend to the graph. By default, charts generated from a DataFrame object will have a legend even if there is only one series of data:

In [4]:
# a DataFrame with a single column will produce
# the same plot as plotting the Series it is created from
walk_df = pd.DataFrame(walk_ts)
walk_df.plot();

The .plot() function is smart enough to know when a DataFrame has multiple columns, in which case it creates multiple lines/series in the plot, includes a key for each, and selects a distinct color for each line.
This is demonstrated with the following example:

In [5]:
# generate two random walks, one in each of
# two columns in a DataFrame
np.random.seed(seedval)
df = pd.DataFrame(np.random.randn(1096, 2),
                  index=walk_ts.index, columns=list('AB'))
walk_df = df.cumsum()
walk_df.head()

Out [5]:
                   A         B
2012-01-01 -1.878324  1.362367
2012-01-02 -2.804186  1.427261
2012-01-03 -3.241758  3.165368
2012-01-04 -2.750550  3.332685
2012-01-05 -1.620667  2.930017

In [6]:
# plot the DataFrame, which will plot a line
# for each column, with a legend
walk_df.plot();

If you want to use one column of the DataFrame as the labels on the x axis of the plot instead of the index labels, you can use the x and y parameters of the .plot() method, giving the x parameter the name of the column to use as the x axis and the y parameter the names of the columns to be used as data in the plot. The following recreates the random walks as columns 'A' and 'B', creates a column 'C' with sequential values starting with 0, and uses these values as the x axis labels and the 'A' and 'B' column values as the two plotted lines:

In [7]:
# copy the walk
df2 = walk_df.copy()
# add a column C which is 0 .. 1096
df2['C'] = pd.Series(np.arange(0, len(df2)), index=df2.index)
# instead of dates on the x axis, use the 'C' column,
# which will label the axis with 0..1000
df2.plot(x='C', y=['A', 'B']);

The .plot() functions, provided by pandas for the Series and DataFrame objects, take care of most of the details of generating plots. However, if you want to modify characteristics of the generated plots beyond their capabilities, you can directly use the matplotlib functions or one or more of the many optional parameters of the .plot() method.

Adorning and styling your time-series plot

The built-in .plot() method has many options that you can use to change the content in the plot. We will cover several of the common options used in most plots.

Adding a title and changing axes labels

The title of the chart can be set using the title parameter of the .plot() method. Axes labels are not set with .plot(), but by directly using the plt.ylabel() and plt.xlabel() functions after calling .plot():

In [8]:
# create a time-series chart with a title and specific
# x and y axes labels
# the title is set in the .plot() method as a parameter
walk_df.plot(title='Title of the Chart')
# explicitly set the x and y axes labels after the .plot()
plt.xlabel('Time')
plt.ylabel('Money');

The labels in this plot were added after the call to .plot(). A question that may be asked is: if the plot is generated in the call to .plot(), then how are the labels and title changed on the plot afterwards? The answer is that plots in matplotlib are not displayed until either .show() is called on the plot or the code reaches the end of the execution and returns to the interactive prompt. At either of these points, any plot generated by plot commands will be flushed out to the display. In this example, although .plot() is called, the plot is not generated until the IPython notebook code section finishes executing, so the changes for labels and title are added to the plot.

Specifying the legend content and position

To change the text used in the legend (the default is the column name from the DataFrame), you can use the ax object returned from the .plot() method to modify the text using its .legend() method.
The ax object is an AxesSubplot object, which is a representation of the elements of the plot, and it can be used to change various aspects of the plot before it is generated:

In [9]:
# change the legend items to be different
# from the names of the columns in the DataFrame
ax = walk_df.plot(title='Title of the Chart')
# this sets the legend labels
ax.legend(['1', '2']);

The location of the legend can be set using the loc parameter of the .legend() method. By default, pandas sets the location to 'best', which tells matplotlib to examine the data and determine the best place to put the legend. However, you can also specify any of the following to position the legend more specifically (you can use either the string or the numeric code):

Text             Code
'best'           0
'upper right'    1
'upper left'     2
'lower left'     3
'lower right'    4
'right'          5
'center left'    6
'center right'   7
'lower center'   8
'upper center'   9
'center'         10

In our last chart, the 'best' option actually had the legend overlap the line from one of the series. We can reposition the legend in the upper center of the chart, which will prevent this and create a better chart of this data:

In [10]:
# change the position of the legend
ax = walk_df.plot(title='Title of the Chart')
# put the legend in the upper center of the chart
ax.legend(['1', '2'], loc='upper center');

Legends can also be turned off with the legend parameter:

In [11]:
# omit the legend by using legend=False
walk_df.plot(title='Title of the Chart', legend=False);

There are more possibilities for locating and controlling the content of the legend, but we leave that for you to experiment with.

Specifying line colors, styles, thickness, and markers

pandas automatically sets the colors of each series on any chart. If you would like to specify your own colors, you can do so by supplying style codes to the style parameter of the plot function. pandas has a number of built-in single-character codes for colors, several of which are listed here:

b: Blue
g: Green
r: Red
c: Cyan
m: Magenta
y: Yellow
k: Black
w: White

It is also possible to specify the color using a hexadecimal RGB code of the #RRGGBB format. To demonstrate both options, the following example sets the color of the first series to green using a single-character code and the second series to red using the hexadecimal code:

In [12]:
# change the line colors on the plot
# use character code for the first line,
# hex RGB for the second
walk_df.plot(style=['g', '#FF0000']);

Line styles can be specified using a line style code. These can be used in combination with the color style codes, following the color code. The following are examples of several useful line style codes:

'-' = solid
'--' = dashed
':' = dotted
'-.' = dot-dashed
'.' = points

The following plot demonstrates these five line styles by drawing five data series, each with one of these styles. Notice how each style item now consists of a color symbol and a line style code:

In [13]:
# show off different line styles
t = np.arange(0., 5., 0.2)
legend_labels = ['Solid', 'Dashed', 'Dotted',
                 'Dot-dashed', 'Points']
line_style = pd.DataFrame({0 : t,
                           1 : t**1.5,
                           2 : t**2.0,
                           3 : t**2.5,
                           4 : t**3.0})
# generate the plot, specifying color and line style for each line
ax = line_style.plot(style=['r-', 'g--', 'b:', 'm-.', 'k:'])
# set the legend
ax.legend(legend_labels, loc='upper left');

The thickness of lines can be specified using the lw parameter of .plot(). This can be passed a thickness for multiple lines by passing a list of widths, or a single width that is applied to all lines.
The following redraws the graph with a line width of 3, making the lines a little more pronounced:

In [14]:
# regenerate the plot, specifying color and line style
# for each line and a line width of 3 for all lines
ax = line_style.plot(style=['r-', 'g--', 'b:', 'm-.', 'k:'], lw=3)
ax.legend(legend_labels, loc='upper left');

Markers on a line can also be specified using abbreviations in the style code. There are quite a few marker types provided and you can see them all at http://matplotlib.org/api/markers_api.html. We will examine five of them in the following chart by having each series use a different marker from the following: circles, stars, triangles, diamonds, and points. The type of marker is also specified using a code at the end of the style:

In [15]:
# redraw, adding markers to the lines
ax = line_style.plot(style=['r-o', 'g--^', 'b:*',
                            'm-.D', 'k:o'], lw=3)
ax.legend(legend_labels, loc='upper left');

Specifying tick mark locations and tick labels

Every plot we have seen to this point has used the default tick marks and tick labels that pandas decides are appropriate for the plot. These can also be customized using various matplotlib functions. We will demonstrate how ticks are handled by first examining a simple DataFrame. We can retrieve the locations of the ticks that were generated on the x axis using the plt.xticks() method. This method returns two values: the locations and the actual labels:

In [16]:
# a simple plot to use to examine ticks
ticks_data = pd.DataFrame(np.arange(0,5))
ticks_data.plot()
ticks, labels = plt.xticks()
ticks

Out [16]:
array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. ])

This array contains the locations of the ticks in units of the values along the x axis. pandas has decided that a range of 0 through 4 (the min and max) and an interval of 0.5 is appropriate. If we want to use other locations, we can provide these by passing them to plt.xticks() as a list. The following demonstrates this using even integers from -1 to 5, which will both change the extents of the axis and remove non-integral labels:

In [17]:
# resize x axis to (-1, 5), and draw ticks
# only at integer values
ticks_data = pd.DataFrame(np.arange(0,5))
ticks_data.plot()
plt.xticks(np.arange(-1, 6));

Also, we can specify new labels at these locations by passing them as the second parameter. Just as an example, we can change the y axis ticks and labels to integral values and consecutive alpha characters using the following:

In [18]:
# rename y axis tick labels to A, B, C, D, and E
ticks_data = pd.DataFrame(np.arange(0,5))
ticks_data.plot()
plt.yticks(np.arange(0, 5), list("ABCDE"));

Formatting axes tick date labels using formatters

The formatting of axes labels whose underlying data type is datetime is performed using locators and formatters. Locators control the position of the ticks, and formatters control the formatting of the labels. To facilitate locating ticks and formatting labels based on dates, matplotlib provides several classes in matplotlib.dates to help with the process:

MinuteLocator, HourLocator, DayLocator, WeekdayLocator, MonthLocator, and YearLocator: These are specific locators coded to determine where the ticks for each type of date field will be found on the axis
DateFormatter: This is a class that can be used to format date objects into labels on the axis

By default, the locator and formatter are AutoDateLocator and AutoDateFormatter, respectively.
You can change these by providing different objects through the appropriate methods on the specific axis object. To demonstrate, we will use a subset of the random walk data from earlier, which represents just the data from January through February of 2014. Plotting this gives us the following output:

In [19]:
# plot January-February 2014 from the random walk
walk_df.loc['2014-01':'2014-02'].plot();

The labels on the x axis of this plot have two series of labels, the minor and the major. The minor labels in this plot contain the day of the month, and the major contain the year and month (the year only for the first month). We can set locators and formatters for each of the minor and major levels. This will be demonstrated by changing the minor labels to be located at the Monday of each week and to contain the date and day of the week (right now, the chart uses weekly and only Friday's date—without the day name). On the major labels, we will use the monthly location and always include both the month name and the year:

In [20]:
# these imports help us type less
from matplotlib.dates import WeekdayLocator, DateFormatter, MonthLocator
# plot Jan-Feb 2014
ax = walk_df.loc['2014-01':'2014-02'].plot()
# do the minor labels
weekday_locator = WeekdayLocator(byweekday=(0), interval=1)
ax.xaxis.set_minor_locator(weekday_locator)
ax.xaxis.set_minor_formatter(DateFormatter("%d\n%a"))
# do the major labels
ax.xaxis.set_major_locator(MonthLocator())
ax.xaxis.set_major_formatter(DateFormatter('\n\n\n%b\n%Y'));

This is almost what we wanted. However, note that the year is being reported as 45. This, unfortunately, seems to be an issue between pandas and the matplotlib representation of values for the year. The best reference I have on this is the following link from Stack Overflow (http://stackoverflow.com/questions/12945971/pandas-timeseries-plot-setting-x-axis-major-and-minor-ticks-and-labels). So, it appears that to create a plot with custom date-based labels, we need to avoid the pandas .plot() and drop all the way down to using matplotlib. Fortunately, this is not too hard. The following changes the code slightly and renders what we wanted:

In [21]:
# this gets around the pandas / matplotlib year issue
# need to reference the subset twice, so let's make a variable
walk_subset = walk_df['2014-01':'2014-02']
# this gets the plot so we can use it, we can ignore fig
fig, ax = plt.subplots()
# inform matplotlib that we will use the following as dates
# note we need to convert the index to a pydatetime series
ax.plot_date(walk_subset.index.to_pydatetime(), walk_subset, '-')
# do the minor labels
weekday_locator = WeekdayLocator(byweekday=(0), interval=1)
ax.xaxis.set_minor_locator(weekday_locator)
ax.xaxis.set_minor_formatter(DateFormatter('%d\n%a'))
# do the major labels
ax.xaxis.set_major_locator(MonthLocator())
ax.xaxis.set_major_formatter(DateFormatter('\n\n\n%b\n%Y'));

To add grid lines for the minor axes ticks, you can use the .grid() method of the x axis object of the plot, with the first parameter specifying whether to draw the lines and the second parameter specifying the minor or major set of ticks.
The following replots this graph without the major grid lines and with the minor grid lines:

In [22]:
# this gets the plot so we can use it, we can ignore fig
fig, ax = plt.subplots()
# inform matplotlib that we will use the following as dates
# note we need to convert the index to a pydatetime series
ax.plot_date(walk_subset.index.to_pydatetime(), walk_subset, '-')
# do the minor labels
weekday_locator = WeekdayLocator(byweekday=(0), interval=1)
ax.xaxis.set_minor_locator(weekday_locator)
ax.xaxis.set_minor_formatter(DateFormatter('%d\n%a'))
ax.xaxis.grid(True, "minor") # turn on minor tick grid lines
ax.xaxis.grid(False, "major") # turn off major tick grid lines
# do the major labels
ax.xaxis.set_major_locator(MonthLocator())
ax.xaxis.set_major_formatter(DateFormatter('\n\n\n%b\n%Y'));

The last demonstration of formatting will use only the major labels, but on a weekly basis and using a YYYY-MM-DD format. However, because these would overlap, we will specify that they should be rotated to prevent the overlap. This is done using the fig.autofmt_xdate() function:

In [23]:
# this gets the plot so we can use it, we can ignore fig
fig, ax = plt.subplots()
# inform matplotlib that we will use the following as dates
# note we need to convert the index to a pydatetime series
ax.plot_date(walk_subset.index.to_pydatetime(), walk_subset, '-')
ax.xaxis.grid(True, "major") # turn on major tick grid lines
# do the major labels
ax.xaxis.set_major_locator(weekday_locator)
ax.xaxis.set_major_formatter(DateFormatter('%Y-%m-%d'));
# rotate the date labels to prevent overlap
fig.autofmt_xdate();

Common plots used in statistical analyses

Having seen how to create, lay out, and annotate time-series charts, we will now look at creating a number of charts, other than time series, that are commonplace in presenting statistical information.

Bar plots

Bar plots are useful in order to visualize the relative differences in values of non-time-series data. Bar plots can be created using the kind='bar' parameter of the .plot() method:

In [24]:
# make a bar plot
# create a small series of 10 random values centered at 0.0
np.random.seed(seedval)
s = pd.Series(np.random.rand(10) - 0.5)
# plot the bar chart
s.plot(kind='bar');

If the data being plotted consists of multiple columns, a multiple series bar plot will be created:

In [25]:
# draw a multiple series bar chart
# generate 4 columns of 10 random values
np.random.seed(seedval)
df2 = pd.DataFrame(np.random.rand(10, 4),
                   columns=['a', 'b', 'c', 'd'])
# draw the multi-series bar chart
df2.plot(kind='bar');

If you would prefer stacked bars, you can use the stacked parameter, setting it to True:

In [26]:
# stacked bar chart
df2.plot(kind='bar', stacked=True);

If you want the bars to be horizontally aligned, you can use kind='barh':

In [27]:
# horizontal stacked bar chart
df2.plot(kind='barh', stacked=True);

Histograms

Histograms are useful for visualizing distributions of data. The following shows you a histogram of 1000 values generated from the normal distribution:

In [28]:
# create a histogram
np.random.seed(seedval)
# 1000 random numbers
dfh = pd.DataFrame(np.random.randn(1000))
# draw the histogram
dfh.hist();

The resolution of a histogram can be controlled by specifying the number of bins to allocate to the graph. The default is 10, and increasing the number of bins gives finer detail to the histogram.
The following increases the number of bins to 100:

In [29]:
# histogram again, but with more bins
dfh.hist(bins = 100);

If the data has multiple series, the histogram function will automatically generate multiple histograms, one for each series:

In [30]:
# generate a multiple histogram plot
# create DataFrame with 4 columns of 1000 random values
np.random.seed(seedval)
dfh = pd.DataFrame(np.random.randn(1000, 4),
                   columns=['a', 'b', 'c', 'd'])
# draw the chart. There are four columns so pandas draws
# four histograms
dfh.hist();

If you want to overlay multiple histograms on the same graph (to give a quick visual comparison of the distributions), you can call the pyplot.hist() function multiple times before .show() is called to render the chart:

In [31]:
# directly use pyplot to overlay multiple histograms
# generate two distributions, each with a different
# mean and standard deviation
np.random.seed(seedval)
x = [np.random.normal(3,1) for _ in range(400)]
y = [np.random.normal(4,2) for _ in range(400)]
# specify the bins (-10 to 10 with 100 bins)
bins = np.linspace(-10, 10, 100)
# generate plot x using plt.hist, 50% transparent
plt.hist(x, bins, alpha=0.5, label='x')
# generate plot y using plt.hist, 50% transparent
plt.hist(y, bins, alpha=0.5, label='y')
plt.legend(loc='upper right');

Box and whisker charts

Box plots come from descriptive statistics and are a useful way of graphically depicting the distributions of categorical data using quartiles. Each box represents the values between the first and third quartiles of the data, with a line across the box at the median. Each whisker reaches out to demonstrate the extent to 1.5 interquartile ranges below and above the first and third quartiles:

In [32]:
# create a box plot
# generate the series
np.random.seed(seedval)
dfb = pd.DataFrame(np.random.randn(10,5))
# generate the plot
dfb.boxplot(return_type='axes');

There are ways to overlay dots and show outliers, but for brevity, they will not be covered in this text.

Area plots

Area plots are used to represent cumulative totals over time, to demonstrate the change in trends over time among related attributes. They can also be "stacked" to demonstrate representative totals across all variables. Area plots are generated by specifying kind='area'. A stacked area chart is the default:

In [33]:
# create a stacked area plot
# generate a 4-column data frame of random data
np.random.seed(seedval)
dfa = pd.DataFrame(np.random.rand(10, 4),
                   columns=['a', 'b', 'c', 'd'])
# create the area plot
dfa.plot(kind='area');

To produce an unstacked plot, specify stacked=False:

In [34]:
# do not stack the area plot
dfa.plot(kind='area', stacked=False);

By default, unstacked plots have an alpha value of 0.5, so that it is possible to see how the data series overlap.

Scatter plots

A scatter plot displays the correlation between a pair of variables. A scatter plot can be created from a DataFrame using .plot() and specifying kind='scatter', as well as specifying the x and y columns from the DataFrame source:

In [35]:
# generate a scatter plot of two series of normally
# distributed random values
# we would expect this to cluster around 0,0
np.random.seed(111111)
sp_df = pd.DataFrame(np.random.randn(10000, 2),
                     columns=['a', 'b'])
sp_df.plot(kind='scatter', x='a', y='b')

We can easily create more elaborate scatter plots by dropping down a little lower into matplotlib.
The following code gets Google stock data for the year of 2011, calculates the delta in the closing price per day, and renders close versus volume as bubbles of different sizes, derived from the size of the values in the data:

In [36]:
# get Google stock data from 1/1/2011 to 12/31/2011
from pandas.io.data import DataReader
stock_data = DataReader("GOOGL", "yahoo",
                        datetime(2011, 1, 1),
                        datetime(2011, 12, 31))
# % change per day
delta = np.diff(stock_data["Adj Close"])/stock_data["Adj Close"][:-1]
# this calculates size of markers
volume = (15 * stock_data.Volume[:-2] / stock_data.Volume[0])**2
close = 0.003 * stock_data.Close[:-2] / 0.003 * stock_data.Open[:-2]
# generate scatter plot
fig, ax = plt.subplots()
ax.scatter(delta[:-1], delta[1:], c=close, s=volume, alpha=0.5)
# add some labels and style
ax.set_xlabel(r'$\Delta_i$', fontsize=20)
ax.set_ylabel(r'$\Delta_{i+1}$', fontsize=20)
ax.set_title('Volume and percent change')
ax.grid(True);

Note the nomenclature for the x and y axes labels, which creates a nice mathematical style for the labels.

Density plot

You can create kernel density estimation plots using the .plot() method and setting the kind='kde' parameter. A kernel density estimate plot, instead of being a pure empirical representation of the data, attempts to estimate the true distribution of the data, and hence smooths it into a continuous plot. The following generates a normally distributed set of numbers, displays it as a histogram, and overlays the kde plot:

In [37]:
# create a kde density plot
# generate a series of 1000 random numbers
np.random.seed(seedval)
s = pd.Series(np.random.randn(1000))
# generate the plot
s.hist(normed=True) # shows the bars
s.plot(kind='kde');

The scatter plot matrix

The final composite graph we'll look at in this article is one that is provided by pandas in its plotting tools subcomponent: the scatter plot matrix. A scatter plot matrix is a popular way of determining whether there is a linear correlation between multiple variables. The following creates a scatter plot matrix with random values, which then shows a scatter plot for each combination, as well as a kde graph for each variable:

In [38]:
# create a scatter plot matrix
# import this class
from pandas.tools.plotting import scatter_matrix
# generate DataFrame with 4 columns of 1000 random numbers
np.random.seed(111111)
df_spm = pd.DataFrame(np.random.randn(1000, 4),
                      columns=['a', 'b', 'c', 'd'])
# create the scatter matrix
scatter_matrix(df_spm, alpha=0.2, figsize=(6, 6), diagonal='kde');

Heatmaps

A heatmap is a graphical representation of data where values within a matrix are represented by colors. This is an effective means to show relationships between values that are measured at each intersection of the rows and columns of the matrix. A common scenario is to have the values in the matrix normalized to 0.0 through 1.0 and have the intersection between a row and a column represent the correlation between the two variables. Values with less correlation (0.0) are the darkest, and those with the highest correlation (1.0) are white.
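If you already have a DataFrame of observations, a matrix of this kind can be produced directly with the .corr() method; the following minimal sketch (not part of the original example) simply reuses the df_spm frame created above as a convenient stand-in for real data:

# compute the pairwise correlations between the columns of df_spm;
# the result is itself a DataFrame (values between -1.0 and 1.0)
# that can be fed to the heatmap plotting code shown next
corr_data = df_spm.corr()
corr_data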
Heatmaps are easily created with pandas and matplotlib using the .imshow() function:

In [39]:
# create a heatmap
# start with data for the heatmap
s = pd.Series([0.0, 0.1, 0.2, 0.3, 0.4],
              ['V', 'W', 'X', 'Y', 'Z'])
heatmap_data = pd.DataFrame({'A' : s + 0.0,
                             'B' : s + 0.1,
                             'C' : s + 0.2,
                             'D' : s + 0.3,
                             'E' : s + 0.4,
                             'F' : s + 0.5,
                             'G' : s + 0.6})
heatmap_data

Out [39]:
     A    B    C    D    E    F    G
V  0.0  0.1  0.2  0.3  0.4  0.5  0.6
W  0.1  0.2  0.3  0.4  0.5  0.6  0.7
X  0.2  0.3  0.4  0.5  0.6  0.7  0.8
Y  0.3  0.4  0.5  0.6  0.7  0.8  0.9
Z  0.4  0.5  0.6  0.7  0.8  0.9  1.0

In [40]:
# generate the heatmap
plt.imshow(heatmap_data, cmap='hot', interpolation='none')
plt.colorbar() # add the scale of colors bar
# set the labels
plt.xticks(range(len(heatmap_data.columns)), heatmap_data.columns)
plt.yticks(range(len(heatmap_data)), heatmap_data.index);

Multiple plots in a single chart

It is often useful to contrast data by displaying multiple plots next to each other. This is actually quite easy to do when using matplotlib. To draw multiple subplots on a grid, we can make multiple calls to plt.subplot2grid(), each time passing the size of the grid the subplot is to be located on (shape=(height, width)) and the location on the grid of the upper-left section of the subplot (loc=(row, column)). Each call to plt.subplot2grid() returns a different AxesSubplot object that can be used to reference the specific subplot and direct the rendering into it.

The following demonstrates this by creating a plot with two subplots based on a two row by one column grid (shape=(2,1)). The first subplot, referred to by ax1, is located in the first row (loc=(0,0)), and the second, referred to as ax2, is in the second row (loc=(1,0)):

In [41]:
# create two sub plots on the new plot using a 2x1 grid
# ax1 is the upper row
ax1 = plt.subplot2grid(shape=(2,1), loc=(0,0))
# and ax2 is in the lower row
ax2 = plt.subplot2grid(shape=(2,1), loc=(1,0))

The subplots have been created, but we have not drawn into either yet. The size of any subplot can be specified using the rowspan and colspan parameters in each call to plt.subplot2grid(). This actually feels a lot like placing content in HTML tables. The following demonstrates a more complicated layout of five plots, specifying different row and column spans for each:

In [42]:
# layout sub plots on a 4x4 grid
# ax1 on top row, 4 columns wide
ax1 = plt.subplot2grid((4,4), (0,0), colspan=4)
# ax2 is row 2, leftmost and 2 columns wide
ax2 = plt.subplot2grid((4,4), (1,0), colspan=2)
# ax3 is 2 cols wide and 2 rows high, starting
# on second row and the third column
ax3 = plt.subplot2grid((4,4), (1,2), colspan=2, rowspan=2)
# ax4 1 high 1 wide, in row 3 column 0
ax4 = plt.subplot2grid((4,4), (2,0))
# ax5 1 high 1 wide, in row 3 column 1
ax5 = plt.subplot2grid((4,4), (2,1));

To draw into a specific subplot using the pandas .plot() method, you can pass the specific axes into the plot function via the ax parameter. The following demonstrates this by extracting each series from the random walk we created at the beginning of this article and drawing each into a different subplot:

In [43]:
# demonstrating drawing into specific sub-plots
# generate a layout of 2 rows 1 column
# create the subplots, one on each row
ax5 = plt.subplot2grid((2,1), (0,0))
ax6 = plt.subplot2grid((2,1), (1,0))
# plot column 0 of walk_df into top row of the grid
walk_df[[0]].plot(ax = ax5)
# and column 1 of walk_df into bottom row
walk_df[[1]].plot(ax = ax6);

Using this technique, we can plot combinations of different series of data, such as a stock close versus volume graph.
Given the data we read during a previous example for Google, the following will plot the volume versus the closing price:

In [44]:
# draw the close on the top chart
top = plt.subplot2grid((4,4), (0, 0), rowspan=3, colspan=4)
top.plot(stock_data.index, stock_data['Close'], label='Close')
plt.title('Google Closing Stock Price 2011')
# draw the volume chart on the bottom
bottom = plt.subplot2grid((4,4), (3,0), rowspan=1, colspan=4)
bottom.bar(stock_data.index, stock_data['Volume'])
plt.title('Google Trading Volume')
# set the size of the plot
plt.gcf().set_size_inches(15,8)

Summary

Visualizing your data is one of the best ways to quickly understand the story that is being told with the data. Python, pandas, and matplotlib (and a few other libraries) provide a means of very quickly, and with a few lines of code, getting the gist of what you are trying to discover, as well as the underlying message (and displaying it beautifully too). In this article, we examined many of the most common means of visualizing data from pandas. There are also a lot of interesting visualizations that were not covered, and indeed, the concept of data visualization with pandas and/or Python is the subject of entire texts, but I believe this article provides a much-needed reference to get up and going with the visualizations that provide most of what is needed.

Resources for Article:

Further resources on this subject:
Prototyping Arduino Projects using Python [Article]
Classifying with Real-world Examples [Article]
Python functions – Avoid repeating code [Article]

Managing Images

Packt
14 Apr 2015
11 min read
Cats, dogs and all sorts of memes, the Internet as we know it today is dominated by images. You can open almost any web page and you'll surely find images on the page. The more interactive our web browsing experience becomes, the more images we tend to use. So, it is tremendously important to ensure that the images we use are optimized and loaded as fast as possible. We should also make sure that we choose the correct image type. In this article by Dewald Els, author of the book Responsive Design High Performance,we will talk about, why image formats are important, conditional loading, visibility for DOM elements, specifying sizes, media queries, introducing sprite sheets, and caching. Let's talk basics. (For more resources related to this topic, see here.) Choosing the correct image format Deciding what image format to use is usually the first step you take when you start your website. Take a look at this table for an overview and comparison ofthe available image formats: Format Features GIF 256 colors Support for animation Transparency PNG 256 colors True colors Transparency JPEG/JPG 256 colors True colors From the preceding listed formats, you can conclude that, if you had a complex image that was 1000 x 1000 pixels, the image in the JPEG format would be the smallest in file size. This also means that it would load the fastest. The smallest image is not always the best choice though. If you need to have images with transparent parts, you'll have to use the PNG or GIF formats and if you need an animation, you are stuck with using a GIF format or the lesser know APNG format. Optimizing images Optimizing your image can have a huge impact on your overall website performance. There are some great applications to help you with image optimization and compression. TinyPNG is a great example of a site that helps you to compress you PNG's images online for free. They also have a Photoshop plugin that is available for download at https://tinypng.com/. Another great application to help you with JPG compression is JPEGMini. Head over to http://www.jpegmini.com/ to get a copy for either Windows or Mac OS X. Another application that is worth considering is Radical Image Optimization Tool (RIOT). It is a free program and can be found at http://luci.criosweb.ro/riot/. RIOT is a Windows application. Viewing as JPEG is not the only image format that we use in the Web; you can also look at a Mac OS X application called ImageOptim (http://www.imageoptim.com) It is also a free application and compresses both JPEG and PNG images. If you are not on Mac OS X, you can head over to https://tinypng.com/. This handy little site allows you to upload your image to the site, where it is then compressed. The optimized images are then linked to the site as downloadable files. As JPEG image formats make up the majority of most web pages, with some exceptions, lets take a look at how to make your images load faster. Progressive images Most advanced image editors such as Photoshop and GIMP give you the option to encode your JPEG images using either baseline or progressive. If you Save For Web using Photoshop, you will see this section at the top of the dialog box: In most cases, for use on web pages, I would advise you to use the Progressive encoding type. When you save an image using baseline, the full image data of every pixel block is written to the file one after the other. Baseline images load gradually from the top-left corner. 
If you save an image using the Progressive option, then it saves only a part of each of these blocks to the file and then another part and so on, until the entire image's information is captured in the file. When you render a progressive image, you will see a very grainy image display and this will gradually become sharper as it loads. Progressive images are also smaller than baseline images for various technical reasons. This means that they load faster. In addition, they appear to load faster when something is displayed on the screen. Here is a typical example of the visual difference between loading a progressive and a baseline JPEG image: Here, you can clearly see how the two encodings load in a browser. On the left, the progressive image is already displayed whereas the baseline image is still loading from the top. Alright, that was some really basic stuff, but it was extremely important nonetheless. Let's move on to conditional loading. Adaptive images Adaptive images are an adaptation of Filament Group's context-aware image sizing experiment. What does it do? Well, this is what the guys say about themselves: "Adaptive images detects your visitor's screen size and automatically creates, caches, and delivers device appropriate re-scaled versions of your web page's embedded HTML images. No mark-up changes needed. It is intended for use with Responsive Designs and to be combined with Fluid Images techniques." It certainly trumps the experiment in the simplicity of implementation. So, how does it work? It's quite simple. There is no need to change any of your current code. Head over to http://adaptive-images.com/download.htm and get the latest version of adaptive images. You can place the adaptive-images.php file in the root of your site. Make sure to add the content of the .htaccess file to your own as well. Head over to the index file of your site and add this in the <head> tags: <script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script> Note that it is has to be in the <head> tag of your site. Open the adaptive-images.php file and add you media query values into the $resolutions variable. Here is a snippet of code that is pretty self-explanatory: $resolutions   = array(1382, 992, 768, 480);$cache_path   = "ai-cache";$jpg_quality   = 80;$sharpen       = TRUE;$watch_cache   = TRUE;$browser_cache = 60*60*24*7; The $resolution variable accepts the break-points that you use for your website. You can simply add the value of the screen width in pixels. So, in the the preceding example, it would read 1382 pixels as the first break-point, 992 pixels as the second one, and so on. The cache path tells adaptive images where to store the generated resized images. It's a relative path from your document root. So, in this case, your folder structure would read as document_root/a-cache/{images stored here}. The next variable, $jpg_quality, sets the quality of any generated JPGs images on a scale of 0 to 100. Shrinking images could cause blurred details. Set $sharpen to TRUE to perform a sharpening process on rescaled images. When you set $watch_cache to TRUE, you force adaptive images to check that the adapted image isn't stale; that is, it ensures that the updated source images are recached. Lastly, $browser_cache sets how long the browser cache should last for. The values are seconds, minutes, hours, days (7 days by default). You can change the last digit to modify the days. So, if you want images to be cached for two days, simply change the last value to 2. 
Then,… oh wait, that's all? It is indeed! Adaptive images will work with your existing website and they don't require any markup changes. They are also device-agnostic and follow a mobile-first philosophy. Conditional loading Responsive designs combine three main techniques, which are as follows: Fluid grids Flexible images Media queries The technique that I want to focus on in this section is media queries. In most cases, developers use media queries to change the layout, width height, padding, font size and so on, depending on conditions related to the viewport. Let's see how we can achieve conditional image loading using CSS3's image-set function: .my-background-img {background-image: image-set(url(icon1x.jpg) 1x,url(icon2x.jpg) 2x);} You can see in the preceding piece of CSS3 code that the image is loaded conditionally based on its display type. The second statement url(icon2x.jpg) 2x would load the hi-resolution image or retina image. This reduces the number of CSS rules we have to create. Maintaining a site with a lot of background images can become quite a chore if a separate rule exists for each one. Here is a simple media query example: @media screen and (max-width: 480px) {   .container {       width: 320px;   }} As I'm sure you already know, this snippet tells the browser that, for any device with a viewport of fewer than 480 pixels, any element with the class container has to be 320 pixels wide. When you use media queries, always make sure to include the viewport <meta> tag in the head of your HTML document, as follows: <meta name="viewport" content="width=device-width, initial-scale=1"> I've included this template here as I'd like to start with this. It really makes it very easy to get started with new responsive projects: /* MOBILE */@media screen and (max-width: 480px) {   .container {       width: 320px;   }}/* TABLETS */@media screen and (min-width: 481px) and (max-width: 720px) {   .container {       width: 480px;   }}/* SMALL DESKTOP OR LARGE TABLETS */@media screen and (min-width: 721px) and (max-width: 960px) {   .container {       width: 720px;   }}/* STANDARD DESKTOP */@media screen and (min-width: 961px) and (max-width: 1200px) {   .container {       width: 960px;   }}/* LARGE DESKTOP */@media screen and (min-width: 1201px) and (max-width: 1600px) {   .container {       width: 1200px;   }}/* EXTRA LARGE DESKTOP */@media screen and (min-width: 1601px) {   .container {       width: 1600px;   }} When you view a website on a desktop, it's quite a common thing to have a left and a right column. Generally, the left column contains information that requires more focus and the right column contains content with a bit less importance. In some cases, you might even have three columns. Take the social website Facebook as an example. At the time of writing this article, Facebook used a three-column layout, which is as follows:   When you view a web page on a mobile device, you won't be able to fit all three columns into the smaller viewport. So, you'd probably want to hide some of the columns and not request the data that is usually displayed in the columns that are hidden. Alright, we've done some talking. Well, you've done some reading. Now, let's get into our code! Our goal in this section is to learn about conditional development, with the focus on images. I've constructed a little website with a two-column layout. The left column houses the content and the right column is used to populate a little news feed. 
I made a simple PHP script that returns a JSON object with the news items. Here is a preview of the different screens that we will work on:   These two views are a result of the queries that are shown in the following style sheet code: /* MOBILE */@media screen and (max-width: 480px) {}/* TABLETS */@media screen and (min-width: 481px) and (max-width: 720px) {} Summary Managing images is no small feat in a website. Almost all modern websites rely heavily on images to present content to the users. In this article we looked at which image formats to use and when. We also looked at how to optimize your images for websites. We discussed the difference between progressive and optimized images as well. Conditional loading can greatly help you to load your site faster. In this article, we briefly discussed how to use conditional loading to improve your site's performance. Resources for Article: Further resources on this subject: A look into responsive design frameworks [article] Building Responsive Image Sliders [article] Creating a Responsive Project [article]

Understanding Self-tuning Thresholds

Packt
14 Apr 2015
5 min read
In this article by Chiyo Odika coauthor of the book Microsoft System Center 2012 R2 Operations Manager Cookbook, a self-tuning threshold monitor is a Windows Performance Counter monitor type that was introduced in System Center Operations Manager 2007. Unlike monitors that use a fixed threshold (static monitors), self-tuning Threshold (STT) monitors learn what is acceptable for a performance counter and, over time, update the threshold for the performance counter. (For more resources related to this topic, see here.) In contrast to STTs, static thresholds are simple monitor types and are based on predefined values and counters that are monitored for conformity within the predefined values. For instance, a static threshold could be used to monitor for specific thresholds, such as Available Megabytes of Memory. Static thresholds are very useful for various monitoring scenarios but have some drawbacks. Primarily, there's some acceptable variation in performance of servers even when they fulfil the same role, and as such, a performance value that may be appropriate for one server may not apply to another. STTs were therefore created as an option for monitoring in such instances. Baselines in SCOM 2012 R2 are used to collect the usual values for a performance counter, which then allows SCOM to adjust alert thresholds accordingly. STTs are very useful for collecting performance counter baselines on the basis of what it has learned over time. Getting ready To understand how STTs work, we will take a look at the basic components of an STT. To do so, we will create a self-tuning monitor using the wizard. The process for configuring an STT involves configuring the logic for the STT to learn. The configuration can be performed in the wizard for creating the performance counter monitor. To create a performance counter monitor in System Center Operations Manager, you will need to log on to a computer that has an Operations console, using an account that is a member of the Operations Manager Administrators user role, or Operations Manager Authors user role for your Operations Manager 2012 R2 management group. Create a management pack for your custom monitor if you don't already have one. How to do it... For illustration purposes, we will create a 2-state self-tuning threshold monitor. Creating a self-tuning threshold monitor To create a self-tuning threshold monitor, carry out the following steps: Log in to a computer with an account that is a member of the Operations Manager Administrators user role or Operations Manager Authors user role for the Operations Manager 2012 R2 management group. In the Operations console, click on the Authoring button. In the Authoring pane, expand Authoring, expand Management Pack Objects, click on Monitors, right-click on the Monitors, select Create a Monitor, select Unit Monitor, and then expand Windows Performance Counters. Select 2-state Baselining, select a Destination Management Pack, and then click on Next. Name the monitor, select Windows Server as your monitor target, and then select the Performance parent monitor from the drop-down option. In the Object field, enter processor, enter % Processor Time in the Counter field, enter _Total in the Instance field, and set the Interval to 1 minute. Click on Next to Configure business cycle, which is the unit of time you would like to monitor. The default is 1 week, which is fine in general, but for the purpose of illustration, select 1 Day(s). Under Alerting, leave the default value of 1 business cycle(s) of analysis. 
Move the Sensitivity slider to the left to select a low sensitivity value and then click on Next. Leave the default values on the Configure Health screen and click on Next. On the Configure Alerts screen, check the box to generate alerts for the monitor and click on Create. How it works... A self-tuning threshold consists of two rules and a monitor. The performance collection rule collects performance counter data, and the signature collection rule establishes a signature. The monitor compares the value of the performance counter data with the signature. The signature is a numeric data provider that learns the characteristics of a business cycle. SCOM then uses the signature to set and adjust the thresholds for alerts by evaluating performance counter results against the business cycle pattern. In this article, we effectively created a 2-state baselining self-tuning threshold monitor, as you can see in the following screenshot: You will find that this also created some rules such as performance collection and signature collection rules to collect performance and signature data, respectively. Data collection will occur at the frequency specified at the time the monitor was created, as you can see in the following screenshot: You will also notice that the collection frequency values can be changed, along with the sensitivity values for the monitor, as you can see in the following screenshot: There's more... Monitors that use self-tuning thresholds are based on Windows performance counters and the business cycle setting. The business cycle establishes a time period of activity that SCOM will use to create a signature. The business cycle can be configured in either days or weeks, and the default is 1 week. For example, the STT monitor for the % Processor Time counter that we created learns that processor activity for some database servers spikes between noon and 2 pm on Wednesdays. The threshold is adjusted to take that pattern into account. As a result, an alert would not be generated for a spike in processor activity at 12:30 pm on Wednesday. However, if similar processor activity spikes at the same time on Thursday, the monitor will generate an alert. See also For detailed information on activities listed in this article, refer to the Microsoft TechNet article Understanding Self-Tuning Threshold Monitors in the following link: http://TechNet.microsoft.com/en-us/library/dd789011.aspx. Summary We looked at the basic components of a self-tuning threshold to understand how STTs work. For that we created a self-tuning monitor using the wizard. Resources for Article: Further resources on this subject: Upgrading from Previous Versions [article] Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [article] Unpacking System Center 2012 Orchestrator [article]

Our First API in Go

Packt
14 Apr 2015
15 min read
This article is penned by Nathan Kozyra, the author of the book, Mastering Go Web Services. This quickly introduces—or reintroduces—some core concepts related to Go setup and usage as well as the http package. (For more resources related to this topic, see here.) If you spend any time developing applications on the Web (or off it, for that matter), it won't be long before you find yourself facing the prospect of interacting with a web service or an API. Whether it's a library that you need or another application's sandbox with which you have to interact, the world of development relies in no small part on the cooperation among dissonant applications, languages, and formats. That, after all, is why we have APIs to begin with—to allow standardized communication between any two given platforms. If you spend a long amount of time working on the Web, you'll encounter bad APIs. By bad we mean APIs that are not all-inclusive, do not adhere to best practices and standards, are confusing semantically, or lack consistency. You'll encounter APIs that haphazardly use OAuth or simple HTTP authentication in some places and the opposite in others, or more commonly, APIs that ignore the stated purposes of HTTP verbs. Google's Go language is particularly well suited to servers. With its built-in HTTP serving, a simple method for XML and JSON encoding of data, high availability, and concurrency, it is the ideal platform for your API. We will cover the following topics in this article: Understanding requirements and dependencies Introducing the HTTP package Understanding requirements and dependencies Before we get too deep into the weeds in this article, it would be a good idea for us to examine the things that you will need to have installed. Installing Go It should go without saying that we will need to have the Go language installed. However, there are a few associated items that you will also need to install in order to do everything we do in this book. Go is available for Mac OS X, Windows, and most common Linux variants. You can download the binaries at http://golang.org/doc/install. On Linux, you can generally grab Go through your distribution's package manager. For example, you can grab it on Ubuntu with a simple apt-get install golang command. Something similar exists for most distributions. In addition to the core language, we'll also work a bit with the Google App Engine, and the best way to test with the App Engine is to install the Software Development Kit (SDK). This will allow us to test our applications locally prior to deploying them and simulate a lot of the functionality that is provided only on the App Engine. The App Engine SDK can be downloaded from https://developers.google.com/appengine/downloads. While we're obviously most interested in the Go SDK, you should also grab the Python SDK as there are some minor dependencies that may not be available solely in the Go SDK. Installing and using MySQL We'll be using quite a few different databases and datastores to manage our test and real data, and MySQL will be one of the primary ones. We will use MySQL as a storage system for our users; their messages and their relationships will be stored in our larger application (we will discuss more about this in a bit). MySQL can be downloaded from http://dev.mysql.com/downloads/. 
You can also grab it easily from a package manager on Linux/OS X as follows: Ubuntu: sudo apt-get install mysql-server mysql-client OS X with Homebrew: brew install mysql Redis Redis is the first of the two NoSQL datastores that we'll be using for a couple of different demonstrations, including caching data from our databases as well as the API output. If you're unfamiliar with NoSQL, we'll do some pretty simple introductions to results gathering using both Redis and Couchbase in our examples. If you know MySQL, Redis will at least feel similar, and you won't need the full knowledge base to be able to use the application in the fashion in which we'll use it for our purposes. Redis can be downloaded from http://redis.io/download. Redis can be downloaded on Linux/OS X using the following: Ubuntu: sudo apt-get install redis-server OS X with Homebrew: brew install redis Couchbase As mentioned earlier, Couchbase will be our second NoSQL solution that we'll use in various products, primarily to set short-lived or ephemeral key store lookups to avoid bottlenecks and as an experiment with in-memory caching. Unlike Redis, Couchbase uses simple REST commands to set and receive data, and everything exists in the JSON format. Couchbase can be downloaded from http://www.couchbase.com/download. For Ubuntu (deb), use the following command to download Couchbase: dpkg -i couchbase-server version.deb For OS X with Homebrew use the following command to download Couchbase: brew install https://github.com/couchbase/homebrew/raw/    stable/Library/Formula/libcouchbase.rb Nginx Although Go comes with everything you need to run a highly concurrent, performant web server, we're going to experiment with wrapping a reverse proxy around our results. We'll do this primarily as a response to the real-world issues regarding availability and speed. Nginx is not available natively for Windows. For Ubuntu, use the following command to download Nginx: apt-get install nginx For OS X with Homebrew, use the following command to download Nginx: brew install nginx Apache JMeter We'll utilize JMeter for benchmarking and tuning our API for performance. You have a bit of a choice here, as there are several stress-testing applications for simulating traffic. The two we'll touch on are JMeter and Apache's built-in Apache Benchmark (AB) platform. The latter is a stalwart in benchmarking but is a bit limited in what you can throw at your API, so JMeter is preferred. One of the things that we'll need to consider when building an API is its ability to stand up to heavy traffic (and introduce some mitigating actions when it cannot), so we'll need to know what our limits are. Apache JMeter can be downloaded from http://jmeter.apache.org/download_jmeter.cgi. Using predefined datasets While it's not entirely necessary to have our dummy dataset, you can save a lot of time as we build our social network by bringing it in because it is full of users, posts, and images. By using this dataset, you can skip creating this data to test certain aspects of the API and API creation. Our dummy dataset can be downloaded at https://github.com/nkozyra/masteringwebservices. Choosing an IDE A choice of Integrated Development Environment (IDE) is one of the most personal choices a developer can make, and it's rare to find a developer who is not steadfastly passionate about their favorite. Nothing in this article will require one IDE over another; indeed, most of Go's strength in terms of compiling, formatting, and testing lies at the command-line level. 
That said, we'd like to at least explore some of the more popular choices for editors and IDEs that exist for Go. Eclipse As one of the most popular and expansive IDEs available for any language, Eclipse is an obvious first mention. Most languages get their support in the form of an Eclipse plugin and Go is no exception. There are some downsides to this monolithic piece of software; it is occasionally buggy on some languages, notoriously slow for some autocompletion functions, and is a bit heavier than most of the other available options. However, the pluses are myriad. Eclipse is very mature and has a gigantic community from which you can seek support when issues arise. Also, it's free to use. Eclipse can be downloaded from http://eclipse.org/ Get the Goclipse plugin at http://goclipse.github.io/ Sublime Text Sublime Text is our particular favorite, but it comes with a large caveat—it is the only one listed here that is not free. This one feels more like a complete code/text editor than a heavy IDE, but it includes code completion options and the ability to integrate the Go compilers (or other languages' compilers) directly into the interface. Although Sublime Text's license costs $70, many developers find its elegance and speed to be well worth it. You can try out the software indefinitely to see if it's right for you; it operates as nagware unless and until you purchase a license. Sublime Text can be downloaded from http://www.sublimetext.com/2. LiteIDE LiteIDE is a much younger IDE than the others mentioned here, but it is noteworthy because it has a focus on the Go language. It's cross-platform and does a lot of Go's command-line magic in the background, making it truly integrated. LiteIDE also handles code autocompletion, go fmt, build, run, and test directly in the IDE and a robust package browser. It's free and totally worth a shot if you want something lean and targeted directly for the Go language. LiteIDE can be downloaded from https://code.google.com/p/golangide/. IntelliJ IDEA Right up there with Eclipse is the JetBrains family of IDE, which has spanned approximately the same number of languages as Eclipse. Ultimately, both are primarily built with Java in mind, which means that sometimes other language support can feel secondary. The Go integration here, however, seems fairly robust and complete, so it's worth a shot if you have a license. If you do not have a license, you can try the Community Edition, which is free. You can download IntelliJ IDEA at http://www.jetbrains.com/idea/download/ The Go language support plugin is available at http://plugins.jetbrains.com/plugin/?idea&id=5047 Some client-side tools Although the vast majority of what we'll be covering will focus on Go and API services, we will be doing some visualization of client-side interactions with our API. In doing so, we'll primarily focus on straight HTML and JavaScript, but for our more interactive points, we'll also rope in jQuery and AngularJS. Most of what we do for client-side demonstrations will be available at this book's GitHub repository at https://github.com/nkozyra/goweb under client. Both jQuery and AngularJS can be loaded dynamically from Google's CDN, which will prevent you from having to download and store them locally. The examples hosted on GitHub call these dynamically. 
To load AngularJS dynamically, use the following code: <script src="//ajax.googleapis.com/ajax/libs/ angularjs/1.2.18/angular.min.js"></script> To load jQuery dynamically, use the following code: <script src="//ajax.googleapis.com/ajax/ libs/jquery/1.11.1/jquery.min.js"></script> Looking at our application Well in the book, we'll be building myriad small applications to demonstrate points, functions, libraries, and other techniques. However, we'll also focus on a larger project that mimics a social network wherein we create and return to users, statuses, and so on, via the API. For that you'll need to have a copy of it. Setting up our database As mentioned earlier, we'll be designing a social network that operates almost entirely at the API level (at least at first) as our master project in the book. Time and space wouldn't allow us to cover this here in the article. When we think of the major social networks (from the past and in the present), there are a few omnipresent concepts endemic among them, which are as follows: The ability to create a user and maintain a user profile The ability to share messages or statuses and have conversations based on them The ability to express pleasure or displeasure on the said statuses/messages to dictate the worthiness of any given message There are a few other features that we'll be building here, but let's start with the basics. Let's create our database in MySQL as follows: create database social_network; This will be the basis of our social network product in the book. For now, we'll just need a users table to store our individual users and their most basic information. We'll amend this to include more features as we go along: CREATE TABLE users ( user_id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, user_nickname VARCHAR(32) NOT NULL, user_first VARCHAR(32) NOT NULL, user_last VARCHAR(32) NOT NULL, user_email VARCHAR(128) NOT NULL, PRIMARY KEY (user_id), UNIQUE INDEX user_nickname (user_nickname) ) We won't need to do too much in this article, so this should suffice. We'll have a user's most basic information—name, nickname, and e-mail, and not much else. Introducing the HTTP package The vast majority of our API work will be handled through REST, so you should become pretty familiar with Go's http package. In addition to serving via HTTP, the http package comprises of a number of other very useful utilities that we'll look at in detail. These include cookie jars, setting up clients, reverse proxies, and more. The primary entity about which we're interested right now, though, is the http.Server struct, which provides the very basis of all of our server's actions and parameters. Within the server, we can set our TCP address, HTTP multiplexing for routing specific requests, timeouts, and header information. Go also provides some shortcuts for invoking a server without directly initializing the struct. For example, if you have a lot of default properties, you could use the following code: Server := Server { Addr: ":8080", Handler: urlHandler, ReadTimeout: 1000 * time.MicroSecond, WriteTimeout: 1000 * time.MicroSecond, MaxHeaderBytes: 0, TLSConfig: nil } You can simply execute using the following code: http.ListenAndServe(":8080", nil) This will invoke a server struct for you and set only the Addr and Handler  properties within. There will be times, of course, when we'll want more granular control over our server, but for the time being, this will do just fine. Let's take this concept and output some JSON data via HTTP for the first time. 
Quick hitter – saying Hello, World via API As mentioned earlier in this article, we'll go off course and do some work that we'll preface with quick hitter to denote that it's unrelated to our larger project. In this case, we just want to rev up our http package and deliver some JSON to the browser. Unsurprisingly, we'll be merely outputting the uninspiring Hello, world message to, well, the world. Let's set this up with our required package and imports: package main   import ( "net/http" "encoding/json" "fmt" ) This is the bare minimum that we need to output a simple string in JSON via HTTP. Marshalling JSON data can be a bit more complex than what we'll look at here, so if the struct for our message doesn't immediately make sense, don't worry. This is our response struct, which contains all of the data that we wish to send to the client after grabbing it from our API: type API struct { Message string "json:message" } There is not a lot here yet, obviously. All we're setting is a single message string in the obviously-named Message variable. Finally, we need to set up our main function (as follows) to respond to a route and deliver a marshaled JSON response: func main() {   http.HandleFunc("/api", func(w http.ResponseWriter, r    *http.Request) {      message := API{"Hello, world!"}      output, err := json.Marshal(message)      if err != nil {      fmt.Println("Something went wrong!")    }      fmt.Fprintf(w, string(output))   })   http.ListenAndServe(":8080", nil) } Upon entering main(), we set a route handling function to respond to requests at /api that initializes an API struct with Hello, world! We then marshal this to a JSON byte array, output, and after sending this message to our iowriter class (in this case, an http.ResponseWriter value), we cast that to a string. The last step is a kind of quick-and-dirty approach for sending our byte array through a function that expects a string, but there's not much that could go wrong in doing so. Go handles typecasting pretty simply by applying the type as a function that flanks the target variable. In other words, we can cast an int64 value to an integer by simply surrounding it with the int(OurInt64) function. There are some exceptions to this—types that cannot be directly cast and some other pitfalls, but that's the general idea. Among the possible exceptions, some types cannot be directly cast to others and some require a package like strconv to manage typecasting. If we head over to our browser and call localhost:8080/api (as shown in the following screenshot), you should get exactly what we expect, assuming everything went correctly: Summary We've touched on the very basics of developing a simple web service interface in Go. Admittedly, this particular version is extremely limited and vulnerable to attack, but it shows the basic mechanisms that we can employ to produce usable, formalized output that can be ingested by other services. At this point, you should have the basic tools at your disposal that are necessary to start refining this process and our application as a whole. Resources for Article: Further resources on this subject: Adding Authentication [article] C10K – A Non-blocking Web Server in Go [article] Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]

Appium Essentials

Packt
09 Apr 2015
9 min read
In this article by Manoj Hans, author of the book Appium Essentials, we will see how toautomate mobile apps on real devices. Appium provides the support for automating apps on real devices. We can test the apps in a physical device and experience the look and feel that an end user would. (For more resources related to this topic, see here.) Important initial points Before starting with Appium, make sure that you have all the necessary software installed. Let's take a look at the prerequisites for Android and iOS: Prerequisites for Android Prerequisites for iOS Java (Version 7 or later) Mac OS (Version 10.7 or later) The Android SDK API, Version 17 or higher Xcode (Version 4.6.3 or higher; 5.1 is recommended) A real Android device An iOS provisional profile Chrome browser on a real device An iOS real device Eclipse The SafariLauncher app TestNG ios-webkit-debug-proxy The Appium Server Java Version 7 The Appium client library (Java) Eclipse Selenium Server and WebDriver Java library TestNG The Apk Info app The Appium server   The Appium client library (Java)   Selenium Server and WebDriver Java library While working with the Android real device, we need to enable USB debugging under Developer options. To enable USB debugging, we have to perform the following steps: Navigate to Settings | About Phone and tap on Build number seven times (assuming that you have Android Version 4.2 or newer); then, return to the previous screen and find Developer options, as shown in the following screenshot: Tap on Developer options and then tap on the ON switch to switch on the developer settings (You will get an alert to allow developer settings; just click on the OK button.). Make sure that the USB debugging option is checked, as shown here: After performing the preceding steps, you have to use a USB cable to connect your Android device with the desktop. Make sure you have installed the appropriate USB driver for your device before doing this. After connecting, you will get an alert on your device asking you to allow USB debugging; just tap on OK. To ensure that we are ready to automate apps on the device, perform the following steps: Open Command Prompt or terminal (on Mac). Type adb devices and press the Enter button. You will get a list of Android devices; if not, then try to kill and start the adb server with the adb kill-server and adb start-server commands. Now, we've come to the coding part. First, we need to set the desired capabilities and initiate an Android/iOS driver to work with Appium on a real device. Let's discuss them one by one. Desired capabilities for Android and initiating the Android driver We have two ways to set the desired capabilities, one with the Appium GUI and the other by initiating the desired capabilities object. Using the desired capabilities object is preferable; otherwise, we have to change the desired capabilities in the GUI repeatedly whenever we are testing another mobile app. Let's take a look at the Appium GUI settings for native and hybrid apps: Perform the following steps to set the Android Settings: Click on the Android Settings icon. Select Application Path and provide the path of the app. Click on Package and choose the appropriate package from the drop-down menu. Click on Launch Activity and choose an activity from the drop-down menu. If the application is already installed on the real device, then we don't need to follow steps 2-4. 
In this case, we have to install the Apk Info app on the device to find the package and activities of the app, and set them using the desired capabilities object, which we will see in the next section. You can easily get the Apk Info app from the Google Play Store.

Select PlatformVersion from the dropdown.
Select Device Name and type in a device name (for example, Moto X).
Now, start the Appium Server.

Perform the following steps to set the Android Settings for web apps:

Click on the Android Settings icon.
Select PlatformVersion from the dropdown.
Select Use Browser and choose Chrome from the dropdown.
Select Device Name and type in a device name (for example, Moto X).
Now, start the Appium Server.

Let's discuss how to initiate the desired capabilities object and set the capabilities.

Desired capabilities for native and hybrid apps

Here we will dive directly into the code, with comments. First, we need to import the following packages:

import java.io.File;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.remote.MobileCapabilityType;

Now, let's set the desired capabilities for the native and hybrid apps:

DesiredCapabilities caps = new DesiredCapabilities(); // To create an object
File app = new File("path of the apk"); // To create a file object that specifies the app path
caps.setCapability(MobileCapabilityType.APP, app); // If the app is already installed on the device, there is no need to set this capability
caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, "4.4"); // To set the Android version
caps.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android"); // To set the OS name
caps.setCapability(MobileCapabilityType.DEVICE_NAME, "Moto X"); // Change the device name to match yours
caps.setCapability(MobileCapabilityType.APP_PACKAGE, "package name of your app (you can get it from the Apk Info app)"); // To specify the Android app package
caps.setCapability(MobileCapabilityType.APP_ACTIVITY, "launch activity of your app (you can get it from the Apk Info app)"); // To specify the activity that we want to launch

Desired capabilities for web apps

In Android mobile web apps, some of the capabilities that we used in native and hybrid apps, such as APP, APP_PACKAGE, and APP_ACTIVITY, are not needed because we launch a browser here. First, we need to import the following packages:

import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.remote.MobileCapabilityType;

Now, let's set the desired capabilities for the web apps:

DesiredCapabilities caps = new DesiredCapabilities(); // To create an object
caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, "4.4"); // To set the Android version
caps.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android"); // To set the OS name
caps.setCapability(MobileCapabilityType.DEVICE_NAME, "Moto X"); // Change the device name to match yours
caps.setCapability(MobileCapabilityType.BROWSER_NAME, "Chrome"); // To launch the Chrome browser

We are done with the desired capabilities part; now we have to initiate the Android driver to connect to the Appium Server. Import the following packages:

import io.appium.java_client.android.AndroidDriver;
import java.net.URL;

Then, initiate the Android driver as shown here:

AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps); // To pass the URL where the Appium server is running

This will launch the app on the Android real device using the configurations specified in the desired capabilities.
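Putting the pieces together, the following is a minimal sketch (not from the book) of a TestNG class that starts a session against a real device using the web app capabilities above. The device name, platform version, and target URL are placeholders to adjust for your own device, and the sketch assumes the Appium Java client version used throughout this article:

import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.remote.MobileCapabilityType;

public class ChromeOnRealDeviceTest {

    private AndroidDriver driver;

    @BeforeMethod
    public void setUp() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");
        caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, "4.4"); // your device's Android version
        caps.setCapability(MobileCapabilityType.DEVICE_NAME, "Moto X");   // your device's name
        caps.setCapability(MobileCapabilityType.BROWSER_NAME, "Chrome");
        // Connect to the locally running Appium server
        driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
    }

    @Test
    public void openHomePage() {
        // Navigate in mobile Chrome on the connected device
        driver.get("http://www.google.com");
    }

    @AfterMethod
    public void tearDown() {
        if (driver != null) {
            driver.quit(); // End the Appium session
        }
    }
}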
Installing provisional profile, SafariLauncher, and ios-webkit-debug-proxy Before moving on to the desired capabilities for iOS, we have to make sure that we have set up a provisional profile and installed the SafariLauncher app and ios-webkit-debug-proxy to work with the real device. First, let's discuss the provisional profile. Provisional profile This profile is used to deploy an app on a real iOS device. To do this, we need to join the iOS Developer Program (https://developer.apple.com/programs/ios/). For this, you will have to pay USD 99. Now, visit https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/MaintainingProfiles/MaintainingProfiles.html#//apple_ref/doc/uid/TP40012582-CH30-SW24 to generate the profile. After this, you will also need to install provisional profile on your real device; to do this, perform the following steps: Download the generated provisional profile. Connect the iOS device to a Mac using a USB cable. Open Xcode (version 6) and click on Devices under the Window menu, as shown here: Now, context click on the connected device and click on Show Provisional Profiles...: Click on +, select the downloaded profile, and then click on the Done button, as shown in the following screenshot: SafariLauncher app and ios-webkit-debug-proxy The SafariLauncher app is used to launch the Safari browser on a real device for web app testing. You will need to build and deploy the SafariLauncher app on a real iOS device to work with the Safari browser. To do this, you need to perform the following steps: Download the source code from https://github.com/snevesbarros/SafariLauncher/archive/master.zip. Open Xcode and then open the SafariLauncher project. Select the device to deploy the app on and click on the build button. After this, we need to change the SafariLauncher app in Appium.dmg; to do this, perform the following steps: Right-click on Appium.dmg. Click on Show Package Contents and navigate to Contents/Resources/node_modules/appium/build/SafariLauncher. Now, extract SafariLauncher.zip. Navigate to submodules/SafariLauncher/build/Release-iphoneos and replace the SafariLauncher app with your app. Zip submodules and rename it as SafariLauncher.zip. Now, we need to install the ios-webkit-debug-proxy on Mac to establish a connection in order to access the web view. To install the proxy, you can use brew and run the command brew install ios-webkit-debug-proxy in the terminal (this will install the latest tagged version), or you can clone it from Git and install it using the following steps: Open the terminal, type git clone https://github.com/google/ios-webkit-debug-proxy.git, and then click on the Enter button. Then, enter the following commands: cd ios-webkit-debug-proxy ./autogen.sh ./configure make sudo make install We are now all set to test web and hybrid apps. Summary In this article, we looked at how we can execute the test scripts of native, hybrid, and web mobile apps on iOS and Android real devices. Specifically, we learned how to perform actions on native mobile apps and also got to know about the desired capabilities for real devices. We ran a test with the Android Chrome browser and learned how to load the Chrome browser on an Android real device with the necessary capabilities. We then moved on to starting the Safari browser on a real device and setting up the desired capabilities to test web applications. Lastly, we looked at how easily we can automate hybrid apps and switch from native to web view. 
Resources for Article: Further resources on this subject: Cross-browser Tests using Selenium WebDriver [article] First Steps with Selenium RC [article] Selenium Testing Tools [article]

Creating a Responsive Project

Packt
08 Apr 2015
14 min read
In today's ultra connected world, a good portion of your students probably own multiple devices. Of course, they may want to take your eLearning course on all their devices. They might want to start the course on their desktop computer at work, continue it on their phone while commuting back home, and finish it at night on their tablet. In other situations, students might only have a mobile phone available to take the course, and sometimes the topic to teach only makes sense on a mobile device. To address these needs, you want to deliver your course on multiple screens. As of Captivate 6, you can publish your courses in HTML5, which makes them available on mobile devices that do not support the Flash technology. Now, Captivate 8 takes it one huge step further by introducing Responsive Projects. A Responsive Project is a project that you can optimize for the desktop, the tablet, and the mobile phone. It is like providing three different versions of the course in a single project. In this article, by Damien Bruyndonckx, author of the book Mastering Adobe Captivate 8, you will be introduced to the key concepts and techniques used to create a responsive project in Captivate 8. While reading, keep the following two things in mind. First, everything you have learned so far can be applied to a responsive project. Second, creating a responsive project requires more experience than what a book can offer. I hope that this article will give you a solid understanding of the core concepts in order to jump start your own discovery of Captivate 8 Responsive Projects. (For more resources related to this topic, see here.) About Responsive Projects A Responsive Project is meant to be used on multiple devices, including tablets and smartphones that do not support the Flash technology. Therefore, it can be published only in HTML5. This means that all the restrictions of a traditional HTML5 project also apply to a Responsive Project. For example, you will not be able to add Text Animations or Rollover Objects in a Responsive Project because these features are not supported in HTML5. Responsive design is not limited to eLearning projects made in Captivate. It is actually used by web designers and developers around the world to create websites that have the ability to automatically adapt themselves to the screen they are viewed on. To do so, they need to detect the screen width that is available to their content and adapt accordingly. Responsive Design by Ethan Marcotte If you want to know more about responsive design, I strongly recommend a book by Ethan Marcotte in the A Book Apart collection. This is the founding book of responsive design. If you have some knowledge of HTML and CSS, this is a must have resource in order to fully understand what responsive design is all about. More information on this book can be found at http://www.abookapart.com/products/responsive-web-design. Viewport size versus screen size At the heart of the responsive design approach is the width of the screen used by the student to consume the content. To be more exact, it is the width of the viewport that is detected—not the width of the screen. The viewport is the area that is actually available to the content. On a desktop or laptop computer, the difference between the screen width and the viewport width is very easy to understand. Let's do a simple experiment to grasp that concept hands-on: Open your default web browser and make sure it is in fullscreen mode. Browse to http://www.viewportsizes.com/mine. 
The main information provided by this page is the size of your viewport. Because your web browser is currently in fullscreen mode, the viewport size should be close (but not quite the same) to the resolution of your screen. Use your mouse to resize your browser window and see how the viewport size evolves. As shown in the following screenshot, the size of the viewport changes as you resize your browser window but the actual screen you use is always the same: This viewport concept is also valid on a mobile device, even though it may be a bit subtler to grasp. The following screenshot shows the http://www.viewportsizes.com/mine web page as viewed in the Safari mobile browser on an iPad mini held in landscape (left) and in portrait (right). As you can see, the viewport size changes but, once again, the actual screen used is always the same. Don't hesitate to perform these experiments on your own mobile devices and compare your results to mine. Another thing that might affect the viewport size on a mobile device is the browser used. The following screenshot shows the viewport size of the same iPad mini held in portrait mode in Safari mobile (left) and in Chrome mobile (right). Note that the viewport size is slightly different in Chrome than in Safari. This is due to the interface elements of the browser (such as the address bar and the tabs) that use a variable portion of the screen real estate in each browser. Understanding breakpoints Before setting up your own Responsive Project there is one more concept to explore. To discover this second concept, you will also perform a simple experiment with your desktop or laptop computer: Open the web browser of your desktop or laptop computer and maximize it to fullscreen size. Browse to http://courses.dbr-training.eu/8/goingmobile. This is the online version of the Responsive Project that you will build in this article. When viewed on a desktop or laptop computer in fullscreen mode, you should see a version of the course optimized for larger screens. Use your mouse to slowly scale your browser window down. Note how the size and the position of the elements are automatically recalculated as you resize the browser window. At some point, you should see that the height of the slide changes and that another layout is applied. The point at which the layout changes is situated at a width of exactly 768 px. In other words, if the width of the browser (actually the viewport) is above 768 px, one layout is applied, but if the width of the viewport falls under 768 px, another layout is applied. You just discovered a breakpoint. The layout that is applied after the breakpoint (in other words when the viewport width is lower than 768 px) is optimized for a tablet device held in portrait mode. Note that even though you are using a desktop or laptop computer, it is the tablet-optimized layout that is applied when the viewport width is at or under 768 px. Keep scaling the browser window down and see how the position and the size of the elements of the slide are recalculated in real time as you resize the browser window. This simple experiment should better explain what a breakpoint is and how these breakpoints work. Before moving on to the next section, let's take some time to summarize the important concepts uncovered in this section: The aim of responsive design is to provide an optimized viewing experience across a wide range of devices and form factors. 
To achieve this goal, responsive design uses fluid sizing and positioning techniques, responsive images, and breakpoints. Responsive design is not limited to eLearning courses made in Captivate, but is widely used in web and app design by thousands of designers around the world. A Captivate 8 Responsive Project can only be published in HTML5. The capabilities and restrictions of a standard HTML5 project also apply to a Responsive Project. A breakpoint defines the exact viewport width at which the layout breaks and another layout is applied. The breakpoints, and therefore the optimized layouts, are based on the width of the viewport and not on the detection of an actual device. This explains why the tablet-optimized layout is applied to the downsized browser window on a desktop computer. The viewport width and the screen width are two different things. In the next section, you will start the creation of your very first Responsive Project. To learn more about these concepts, there is a video course on Responsive eLearning with Captivate 8 available on Adobe KnowHow. The course itself is for a fee, but there is a free sample of 15 minutes that walks you through these concepts using another approach. I suggest you take some time to watch this 15-minute sample at https://www.adobeknowhow.com/courselanding/create-responsive-elearning-adobe-captivate-8. Setting up a Responsive Project It is now time to open Captivate and set up your first Responsive Project using the following steps: Open Captivate or close every open file. Switch to the New tab of the Welcome screen. Double-click on the Responsive Project thumbnail. Alternatively, you can also use the File | New Project | Responsive Project menu item. This action creates a new Responsive Project. Note that the choice to create a Responsive Project or a regular Captivate project must be done up front when creating the project. As of Captivate 8, it is not yet possible to take an existing non-responsive project and make it responsive after the fact. The workspace of Captivate should be very similar to what you are used to, with the exception of an extra ruler that spans across the top of the screen. This ruler contains three predefined breakpoints. As shown in the following screenshot, the first breakpoint is called the Primary breakpoint and is situated at 1024 pixels. Also, note that the breakpoint ruler is green when the Primary breakpoint is selected. You will now discover the other two breakpoints using the following steps. In the breakpoint ruler, click on the icon of a tablet to select the second breakpoint. The stage and all the elements it contains are resized. In the breakpoint ruler at the top of the stage, the second breakpoint is now selected. It is called the Tablet breakpoint and is situated at 768 pixels. Note the blue color associated with the Tablet breakpoint. In the breakpoint ruler, click on the icon of a smartphone to select the third and last breakpoint. Once again, the stage and the elements it contains are resized. The third breakpoint is called the Mobile breakpoint and is situated at 360 pixels. The orange color is associated with this third breakpoint. Adjusting the breakpoints In some situations, the default location of these three breakpoints works just fine But, in other situations, some adjustments are needed. 
In this project, you want to target the regular screen of a desktop or laptop computer in the Primary view, an iPad mini held in portrait in the Tablet view, and an iPhone 4 held in portrait in the Mobile view. You will now adjust the breakpoints to fit these particular specifications by using the following steps: Click on the Primary breakpoint in the breakpoints ruler to select it. Use your mouse to move the breakpoint all the way to the left. Captivate should stop at a width of 1280 pixels. It is not possible to have a stage wider than 1280 pixels in a Responsive Project. For this project, the default width of 1024 pixels is perfect, so you will now move this breakpoint back to its original location. Move the Primary breakpoint to the right until it is placed at 1024 pixels. Return to your web browser and browse to http://www.viewportsizes.com. Once on the website, type iPad in the Filter field at the top of the page. The portrait width of an iPad mini is 768 pixels. In Captivate, the Tablet breakpoint is placed at 768 pixels by default, which is perfectly fine for the needs of this project. Still on the http://www.viewportsizes.com website, type iPhone in the Filter field at the top of the page. The portrait width of an iPhone 4 is 320 pixels. In Captivate, the Mobile breakpoint is placed at 360 pixels by default. You will now move it to 320 pixels so that it matches the portrait width of an iPhone 4. Return to Captivate and select the Mobile breakpoint. Move the Mobile breakpoint to the right until it is placed at exactly 320 pixels. Note that the minimum width of the stage in the Mobile breakpoint is 320 pixels. In other words, the stage cannot be narrower than 320 pixels in a Responsive Project. The viewport size of your device Before moving on to the next section, take some time to inspect the http://viewportsizes.com site a little further. For example, type the name of the devices you own and compare their characteristics to the breakpoints of the current project. Will the project fit on your devices? How do you need to change the breakpoints so the project perfectly fits your devices? The breakpoints are now in place. But these breakpoints only take care of the width of the stage. In the next section, you will adjust the height of the stage in each breakpoint. Adjusting the slide height Captivate slides have a fixed height. This is the primary difference between a Captivate project and a regular responsive website whose page height is infinite. In this section, you will adjust the height of the stage in all three breakpoints. The steps are as follows: Still in Captivate, click on the desktop icon situated on the left side of the breakpoint switcher to return to the Primary view. On the far right of the breakpoint ruler, select the View Device Height checkbox. As shown in the following screenshot, a yellow border now surrounds the stage in the Primary view, and the slide height is displayed in the top left corner of the stage: For the Primary view, a slide height of 627 pixels is perfect. It matches the viewport size of an iPad held in landscape and provides a big enough area on a desktop or laptop computer. Click on the Tablet breakpoint to select it. Return to http://www.viewportsizes.com/ and type iPad in the filter field at the top of the page. According to the site, the height of an iPad is 1024 pixels. Use your mouse to drag the yellow rectangle situated at the bottom of the stage down until the stage height is around 950 pixels. 
It may be needed to reduce the zoom magnification to perform this action in good conditions. After this operation, the stage should look like the following screenshot (the zoom magnification has been reduced to 50 percent in the screenshot): With a height of 950 pixels, the Captivate slide can fit on an iPad screen and still account for the screen real estate consumed by the interface elements of the browser such as the address bar and the tabs. Still in the Tablet view, make sure the slide is the selected object and open the Properties panel. Note that, at the end of the Properties panel, the Slide Height property is currently unavailable. Click on the chain icon (Unlink from Device height) next to the Slide Height property. By default, the slide height is linked to the device height. By clicking on the chain icon you have broken the link between the slide height and the device (or viewport) height. This allows you to modify the height of the Captivate slide without modifying the height of the device. Use the Properties panel to change the Slide Height to 1024 pixels. On the stage, note that the slide is now a little bit higher than the yellow rectangle. This means that this particular slide will generate a vertical scrollbar on the tablet device held in portrait. Scrolling is something you want to avoid as much as possible, so you will now enable the link between the device height and the Slide Height. In the Properties panel, click on the chain icon next to the Slide Height property to enable the link. The slide height is automatically readjusted to the device height of 950 pixels. Use the breakpoint ruler to select the Mobile breakpoint. By default, the device height in the Mobile breakpoint is set to 415 pixels. According to the http://www.viewportsizes.com/ website, the screen of an iPhone 4 has a height of 480 pixels. A slide height of 415 pixels is perfect to accommodate the slide itself plus the interface elements of the mobile browser. Summary In this article, you learned the key concepts and techniques used to create a responsive project in Captivate 8. Resources for Article: Further resources on this subject: Publishing the project for mobile [article] Getting Started with Adobe Premiere Pro CS6 Hotshot [article] Creating Motion Through the Timeline [article]

Zoom and Turn

Packt
07 Apr 2015
14 min read
In this article by Charlotte Olsson and Christina Hoyer, the authors of the book Prezi Cookbook, we will cover the following recipes: Zoom in Zoom out Zooming with frames Zooming out with frames Zooming in with frames Turns Turning an element Turning a frame Anatomy of a turn Combining turns for elements and frames (For more resources related to this topic, see here.) Many things about Prezi are distinctive, but two of the really special and characteristic features are the zoom and the turn. A good way to understand zooming is to compare it with reading letters from your lawyer or your bank. When you switch from the general information to the small writing at the bottom of the page, you zoom in by moving the paper closer to your eyes. When you want to read the general information again, you zoom out by moving the paper further out. The zoom feature allows us to zoom in on the canvas to show even the smallest detail, and to zoom out to show larger elements or beautiful and informative overviews. Prezi's turning feature makes it possible for us to change the direction of our travel on the canvas as we move forward in the presentation. See also Zooming is easier to work with if you understand how to create and edit your prezi's path and steps. Zoom in Zooming occurs between two steps in a prezi. Your work with zooms will be easier (and better) if you understand how steps work. We invite you to follow along with this first recipe. It will quickly recap how steps work, and by following along, you will be able to create your first zoom. If you are unsure about steps but prefer skipping this recap, think of a step as either an element or a frame that you have decided to show when you are in the Present mode. Getting ready Because zooming happens between two steps, we begin by creating two steps. The content of these steps can be images, texts, frames, or any other element that has been added to the path. We will be working with images. Perform the following steps: Open a new blank prezi. Delete any existing frames. Insert an image, sizing the image as you please. We inserted a red car. Insert a second image. Make this image smaller than the first image. We inserted a green car. Take a look at the following screenshot, where our prezi has two cars on the canvas; the green car is smaller than the red car: How to do it… Now you are just about ready to zoom: Switch to the Edit Path mode. First, click on the bigger image to add it as a step to the path lane. Next, click on the smaller image to add it as a step to the path lane. Click on Present to see how Prezi first shows step 1 and then zooms in to show step 2. There! That was your first zoom. Pretty easy, eh? We did it too! You can see it right here: Both cars have been added as steps; steps are shown in the path lane as thumbnails. There's more… To understand the zoom, let's look at the preceding screenshot. Take a look at the two cars on the canvas and compare them to the thumbnails in the path lane. Remember that each thumbnail represents a step. On the canvas there is a difference in the sizes of the two cars. But what is going on in the path lane? Here, in the path lane, the thumbnails for the red and the green cars show the cars at identical sizes. Hmmm! Does this mean that when we switch to the Present mode, Prezi will show the two cars at the same size? Yes, that is exactly what it means! What a thumbnail shows is exactly how the step that it represents will be shown in the Present mode. 
For this prezi, it means that when we click on Present, the red car will fill the screen entirely. When we move forward to step 2, the green car will also fill the screen entirely. This is how steps function: a step always fills the screen. And that is the anatomy of the zoom! Zooming happens because a step always fills the screen in Present mode. Consequently, if two steps have different sizes on the canvas, Prezi needs to zoom in or out to allow whatever step is next to fill the screen. Remember the equation for steps: 1 step = 1 full screen Zoom out Zooming out means going from a smaller section of the canvas to a relatively bigger section of the canvas in the Present mode. Zooming out is a great tool that is typically used for visual illustrations on the canvas, when the content of the presentation shifts from a detailed level to some degree of overview of the canvas. The biggest zoom-outs in Prezi are path steps that are overviews of the entire canvas, which many presenters use to open or close their presentations. Getting ready Open a new blank prezi. Delete any existing frames. Insert an image, sizing it as you please. We inserted a green car. Insert a second image. Make this image larger than the first image. We inserted a red car. We will use the prezi shown in the following screenshot. We want our presentation to begin by showing the green car. Then we want it to zoom out so that the next step shows the red car. How to do it… Switch to the Edit Path mode. First, click on the smaller image to add it as a step to the path lane. Then, select the larger image to add it as a step to the path lane. Click on Present to see how Prezi first shows step 1 and then zooms out to show the bigger step 2. Take a look at the following screenshot, where both cars have been added as steps; steps are shown in the path lane as thumbnails: Zooming with frames When we work in Prezi, we often need to zoom in or out to show a section or specific area of the canvas, rather than a single element. Sometimes the section that we want to show is small; sometimes it is a larger section. This is easily done with frames. Frames are a fantastic tool because they put you in the driver's seat. By carefully using frames, it is possible to target the exact area of the canvas that we want to show. In the following text and recipes, we will be using a red bracket frame for most purposes. That is because we want to ensure that the frame is clearly visible to you. Normally in Prezi, the reality about zooms is that we typically use invisible frames. Invisible frames are our favorite for zooming purposes because they do not interfere with style, colors, or the general design of our prezi. Take a look at the following screenshot, where the invisible frames on the car to the right do not disturb the design: Zooming out with frames It is easy to zoom out to any section on the canvas using frames. All you have to do is frame the section that you want to show. Then set that frame as a step, and you are ready to go to that frame in the Present mode, where Prezi will show the chosen (framed) section of the canvas. Getting ready Perform the following steps on a new prezi: Open a new blank prezi. Delete any existing frames. Insert an image, sizing it as you please. How to do it… In the top-left corner, choose a frame type in the Frame drop-down menu (you may use any type). Click on the frame icon to insert the frame into the canvas. 
Insert and adjust the frame so that it encloses the image and leaves a nice amount of space around it. Switch to the Edit Path mode. First, select the image to add it as a step to the path lane. Then select the frame around the image to add it as step 2 to the path lane. Click on Present to see how Prezi shows step 1 (the image) and then zooms out to step 2 (the frame). The frame around the car, as shown in the following screenshot, enables us to zoom out: There's more… Overviews are used a lot in most Prezi presentations. The overview can be of a portion of the information on the canvas, such as a chapter overview, or it can include your entire Prezi. Overviews are great because they help your audience get just that—an overview! It is easy to create an overview. Just frame the section of the canvas that you want to show, add it to the path, and that's it! The following is a screenshot showing an overview of all the images we have used so far in this article: Zooming in with frames It is easy to zoom in or out to any section on the canvas using frames. Just frame the section that you want to show and set that frame as a step. This recipe demonstrates how to zoom in. We will be zooming into a detail of a car, but you can use these steps to perform any movement from a larger section to a smaller section on the canvas. Getting ready Perform the following steps on a new prezi: Open a new blank prezi. Insert an image. Delete any existing frames. How to do it… In the top-left corner, choose a frame type in the Frame drop-down menu (you may use any type). Click on the frame icon to insert the frame on the canvas. Drag the frame onto a detail of the image. Resize the frame to your liking by pulling any of the four corner markers of the frame. Switch to the Edit Path mode. Now select the image to add it as a step to the path lane. Then select your new frame to add it as step 2 to the path lane. Click on Present to see how Prezi shows step 1 (the full image) and then zooms in to show step 2 (the frame). Take a look at the following screenshot, where the full image of the car is step 1 and the frame is step 2: When you click on Present, the prezi will show the car (step 1) and then zoom in to show the frame (step 2), allowing you to see the wheel closely. Turns Prezi allows us to create turning effects. The turn feature makes it possible to change directions as we move forward in the presentation. Turns have the potential of adding great dynamics to your Prezi presentation. If used correctly, turns can be a powerful tool that actively support your message. The following recipes will show you how to easily create turns for elements and frames. Towards the end of the article, we will also show you how combining frames and turns create interesting effects. In Appendix B, Transitions, we will show you how to integrate zooms and turns with your overall design. How it works… When you place an element on the canvas, it is not turned. It is in its original or "right-way-up" position. Now suppose you switch to Present mode, and begin moving forward through the steps in your presentation. When a step is shown, the element that is that is this step will always be shown at its original (right-way-up) position, no matter how much you turned it on the canvas. So, if the elements actually do not turn, how does Prezi create this turning effect? Well, as we are about to see, it is actually the canvas that is being turned. Read on! 
There's more… When we refer to elements that can be turned, it is important to keep in mind that this can be any element that you can put on the canvas. Images, videos, PDF files, text elements, and frames are all elements that can be turned. Confused? Don't be! Try it out on your canvas. It's pretty easy, and you'll quickly get the hang of it. Turning an element Turns create a feel of action that is great for grabbing the attention of your audience. Fortunately, you can easily turn any selected element on the canvas. For this recipe, we will be using an image. Getting ready Perform the following steps on a new prezi: Open a new blank prezi. Delete any existing frames. Insert an image. Notice how the image looks on the canvas in its right-way-up position. How to do it… Click on the image to select it. Hover over any one of the square-shaped corner markers. This activates the turning tool (the circle handle). Grab the turning tool and drag it up or down to turn the image. Place the image in its final position by releasing your mouse key. The turning handle is shown in the following screenshot. Use it to drag up or down to turn the selected element: Turning a frame Any element on the canvas can be turned. This applies to frames as well. Practice by experimenting, and you will gradually develop a good sense of how turned frames work on the canvas. Getting ready Perform the following steps on a new prezi: Open a new blank prezi. Insert any frame (except the circle frame, which makes it difficult to notice the turn). How to do it… Click on the frame to select it. Hover over any one of the square-shaped corner markers. This activates the turning tool (the circle handle). Grab the turning tool and drag it up or down to turn the frame. Place it in its chosen final position by releasing your mouse button. In the following screenshot, use the turning handle to drag up or down to turn the frame: Anatomy of a turn The following image provides an overview of the zooms and turns we discussed in this article: Overview of elements and frames showing right-way-up and turned positions To study the anatomy of turns, let's take a look at the preceding screenshot: Step Element Canvas Path lane (and present mode) 1 Image Right-way-up image Image is shown right way up Image fills the screen 2 Image Turned image Image is shown right way up Canvas must turn Image fills the screen 3 Frame Right-way-up frame Frame is shown right way up Frame fills the screen 4 Frame Turned frame Frame is shown right way up Canvas must turn Frame fills the screen There's more… If you need to edit a frame without affecting the content, you can do so by selecting only the frame. This is done by holding down the Alt key while using you mouse to select the frame. Once the frame is selected, you can edit its size and position (including turning) as it pleases you. For more keyboard shortcuts, please refer to Appendix C, Keyboard Shortcuts. Did you notice that other elements sometimes get highlighted when you turn an element? This reflects that their angle is similar to the element you are editing. 
Combining turns for elements and frames Take a look at the following screenshot, where turned elements that are steps make the canvas turn when in Present mode: Step Element Canvas Path lane (and present mode) 1 Frame Frame is turned Frame is shown right way up Canvas must turn Frame fills the screen 2 Image Image is turned Image is shown right way up Canvas must turn Image fills the screen Zooms and turns can be combined in numerous ways, and the best way to get the feel of them is by experimenting on the canvas. Go for it! Summary In this article, we took a hands-on approach to the "how-to" of zooms and turns. When zooms and turns are used correctly, they become powerful tools that greatly enhance your prezi. We also showed you how to create them, how to combine them with each other, and how they are applied to all the elements we can insert onto the canvas. Resources for Article: Further resources on this subject: The Fastest Way to Go from an Idea to a Prezi [article] Using Prezi - The Online Presentation Software Tool [article] Getting Started with Impressive Presentations/a> [article]

Work Item Querying

Packt
07 Apr 2015
9 min read
In this article by Dipti Chhatrapati, author of Reporting in TFS, shows us that work items are the primary element project managers and team leaders focus on to track and identify the pending work to be completed. A team member uses work items to track their personal work queue. In order to achieve the current status of the project via work items, it's essential to query work items based on the requirements. This article will cover the following topics: Team project scenario Work item queries Search box queries Flat queries Direct link queries Tree queries (For more resources related to this topic, see here.) Team project scenario Here, we are considering a sports item website that helps user to buy sport items from an item gallery based on their category. The user has to register for membership in order to buy sport products such as footballs, tennis rackets, cricket bats, and so on. Moreover, a registered user can also view/add sport-related articles or news, which will be visible to everyone irrespective of whether they are anonymous or registered. This project is mapped with TFS and has a repository created in TFS Server with work items such as user stories, tasks, bugs, and test cases to plan and track the project's work. We have the following TFS configuration settings for the team project: Team Foundation Server: DIPSTFS Website project: SportsWeb Team project: SportsWebTeamProject Team Foundation Server URL: http://dipstfs:8080/tfs Team project collection URL: http://dipstfs:8080/tfs/DefaultCollection Team Project URL: http://dipstfs:8080/tfs/DefaultCollection/SportsWebTeamProject Team project administrators: DIPSTFSDipsAdministrator Team project members: DIPSTFSDipti Chhatrapati, DIPSTFSBjoern H Rapp, DIPSTFSEdric Taylor, DIPSTFSJohn Smith, DIPSTFSNelson Hall, DIPSTFSScott Harley The following figure shows the project with TFS configuration and setup: Work item queries Work item queries smoothen the process of identifying the status of the team project; this helps in creating a custom report in TFS. We can query work items by a search box or a query editor via Team Web Access. For more information on Work Item Queries, have a look at following links: http://msdn.microsoft.com/en-us/library/ms181308(v=vs.110).aspx http://msdn.microsoft.com/en-us/library/dd286638.aspx There are three types of queries: Flat queries Direct link queries Tree queries Search box queries We can find a work item using the search box available in the team project web portal, which is shown in the following screenshot: You can type in keywords in the search box located on top right of the team project web portal site; for example master, will result in the following work items: The search box content menu also has the ability to find work items based on assignment, status, created by, or work item type, as shown in the following screenshot: The search box finds items using shortcut filters or by specifying keywords or phrases, specific fields/field values, assignment or date modifications, or using the equals, contains, and not operators. For more information on search box filtering, have a look at http://msdn.microsoft.com/en-us/library/cc668120.aspx. 
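Behind both the search box and the query editor, TFS stores and runs queries as WIQL (Work Item Query Language). As a rough illustration only (this listing is not part of the team project setup), a flat query for the active tasks of the current project might look like the following; the fields used are standard system fields:

SELECT [System.Id], [System.WorkItemType], [System.Title], [System.AssignedTo], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.WorkItemType] = 'Task'
  AND [System.State] = 'Active'
ORDER BY [System.Id]

The query editor described in the following sections builds this kind of statement for you, clause by clause, so you rarely need to write WIQL by hand.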
Flat queries A flat query list of work items is used when you want to perform the following tasks: Finding a work item with an unknown ID Checking the status or other columns of work items Finding work items that you want to link to other work items Exporting work items to Microsoft Office, Microsoft Excel, and Office Project for bulk updates to column fields Generating a report about a set of work items As a general practice, to easily find work items, a team member can create Shared Queries, which are predefined queries shared across the team. They can be created, modified, and saved as a new query too. The following steps demonstrate how to open a flat query list and create a new query list: In the team project web portal, expand Shared Query List located on the left-hand side and click on the My Tasks query, as shown in the following screenshot: The resulting work items generated by the My Tasks query will be shown in the Work item pane, as shown in the following screenshot: As there are now three active tasks and two new tasks, we will create the My Active Tasks flat Query. To do so, click on Editor, as shown here: Add a clause to filter work items by Active State: Now click on the Save Query as… icon to save the query as My Active Task: Enter the query name and folder as appropriate. Here, we will save the query in the Shared Queries Folder and click on OK: Click on Results to view the work items for the My Active Tasks query and it will display the items, as shown in the following screenshot: Now let's have a look at how to create a query that represents all the work item details of different sprints/iterations. For example, you have a number of sprints in the Release 1 iteration and another release to test an application that's named Test Release 1 that you can find in Team Web Access site's settings page under the Iterations tab, as indicated in the following screenshot: In order to fetch the work item data of all the sprints to know which task is allocated to which team member in which sprint, go to the Backlogs tab and click on Create query: Specify the query name and folder location to store the query. Then click on OK: Then click on the link as indicated in the following screenshot, which will redirect you to the created query: Click on Flat list of work items and remove all the conditions except the iteration path, as shown in the following screenshot: Now save the query and run it. Add columns such as Work Item Type, State, Iteration Path, Title, and Assigned To as appropriate. As a result, this query will display the work items available under the team project for different sprints or releases, as indicated in the following screenshot: To filter work items based on the sprintreleaseiteration, change the iteration path condition for Value to Sprint 1, as indicated in the following screenshot: Finally, save and run the query, which will return the work items available under Sprint 1 of the Release 1 iteration: For more information on flat queries, have a look at http://msdn.microsoft.com/en-us/library/ms181308(v=vs.110).aspx. Direct link queries There are work items that are dependent on other work items such as tasks, bugs, and issues, and they can be tracked using direct links. They help determine risks and dependencies in order to collaborate among teams effectively. 
Direct link queries help perform the following tasks: Creating a custom view of linked work items Tracking dependencies across team projects and manage the commitments made to other project teams Assessing changes to work items that you do not own but that your work items depend on The following steps demonstrate how to generate a linked query list: Open My Tasks List from Shared Queries. Click on Editor. Click on Work items and direct links, as shown in the following screenshot: Specify the clause for the work item type: Task in Filters for linked work items: We can filter the first level work items by choosing the following option: The meanings of the filter options are described as follows: Only return work items that have the specified links: This option returns only the top-level work items that have links to work items. Return all top level work items: This option returns all the work items whether they have linked work items or not. This option also returns the second-level work items that are linked to the first-level work items. Only return work items that do not have the specified links: This option returns only the top-level work items those are not linked to any work items. Run the query, save it as My Linked Tasks and click on OK: Click on Results to view the linked tasks as configured previously. For more information on direct link queries, have a look at http://msdn.microsoft.com/en-us/library/dd286501(v=vs.110).aspx. Tree queries To view nested work items, tree queries are used by selecting the Tree of Work Items query type. Tree queries are used to execute following tasks: Viewing the hierarchy Finding parent or child work items Changing the tree hierarchy Exporting the tree view to Microsoft Excel for either bulk updates to column fields or to change the tree hierarchy The following steps demonstrate how to generate a tree query list: Open the My Tasks list from Shared Queries. Click on Editor. Click on Tree of work items, as shown in the following screenshot: Define the filter criteria for both parent and child work items. Specify the clause for work item type: Task in Filters for linked work items. Also, select Match top-level work items first. We can filter linked work items by choosing the following option: To find linked children, select Match top-level work items first and, to find linked parents, select Match linked work items first. Run the query, save it as My Tree Tasks, and click on OK. Click on Results to view the linked tasks as configured previously: For more information on Tree queries, have a look at: http://msdn.microsoft.com/en-us/library/dd286633(v=vs.110).aspx Summary In this article, we reviewed the team project scenario; and we also walked through the types of work item queries that produce work items we need in order to know the status of work progress. Resources for Article: Further resources on this subject: Creating a basic JavaScript plugin [article] Building Financial Functions into Excel 2010 [article] Team Foundation Server 2012 [article]

Giving Containers Data and Parameters

Packt
07 Apr 2015
12 min read
In this article by Oskar Hane, author of the book Build Your Own PaaS with Docker, we'll cover the following topics:

Data volumes
Creating a data volume image
Host on GitHub
Publishing on Docker Registry Hub
Running a data volume container
Passing parameters to containers
Creating a parameterized image

(For more resources related to this topic, see here.)

Data volumes

There are two ways in which we can mount external volumes on our containers. A data volume lets you share data between containers, and the data inside the data volume is untouched if you update, stop, or even delete your service container. A data volume is mounted with the -v option in the docker run statement:

docker run -v /host/dir:/container/dir

You can add as many data volumes as you want to a container, simply by adding multiple -v directives. A very good thing about data volumes is that the containers that get data volumes passed into them don't know about it, and don't need to know about it either. No changes are needed for the container; it works just as if it were writing to the local filesystem. You can override existing directories inside containers, which is a common thing to do. One use of this is to have the web root (usually at /var/www inside the container) in a directory on the Docker host.

Mounting a host directory as a data volume

You can mount a directory (or file) from your host on your container:

docker run -d --name some-wordpress -v /home/web/wp-one:/var/www wordpress

This will mount the host's local directory, /home/web/wp-one, as /var/www on the container. If you want to give the container only read access, you can change the directive to -v /home/web/wp-one:/var/www:ro, where :ro is the read-only flag. It's not very common to use a host directory as a data volume in production, since data in a directory isn't very portable. But it's very convenient when testing how your service container behaves when the source code changes. Any change you make in the host directory is directly reflected in the container's mounted data volume.

Mounting a data volume container

A more common way of handling data is to use a container whose only task is to hold data. The services running in the container should be as few as possible, thus keeping it as stable as possible. Data volume containers can have exposed volumes via the Dockerfile's VOLUME keyword, and these volumes are mounted on the service container when the data volume container is referenced with the --volumes-from directive. A very simple Dockerfile with a VOLUME directive can look like this:

FROM ubuntu:latest
VOLUME ["/var/www"]

A container using the preceding Dockerfile will mount /var/www as a data volume. To mount the volumes from a data container onto a service container, we create the data container and then mount it, as follows:

docker run -d --name data-container our-data-container
docker run -d --name some-wordpress --volumes-from data-container wordpress

Backup and restore data volumes

Since the data in a data volume is shared between containers, it's easy to access the data by mounting it onto a temporary container. Here's how you can create a .zip file (in your current host directory) from the data inside a data volume container that has VOLUME ["/var/www"] in its Dockerfile:

docker run --volumes-from data-container -v $(pwd):/host ubuntu zip -r /host/data-containers-www /var/www

This creates a .zip file named data-containers-www.zip in your current host directory, containing what was in the data container's /var/www directory.
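Restoring works the same way in the other direction: mount the volumes from the data container together with a host directory on a temporary container, and unpack the archive into the volume. The following is only a sketch and not part of the original article; it assumes the busybox image, whose unzip applet is sufficient here (the stock ubuntu image does not ship with zip tools):

docker run --volumes-from data-container -v $(pwd):/host busybox unzip -o /host/data-containers-www.zip -d /

The -o flag overwrites existing files, and -d / extracts the archive relative to the root of the filesystem, so the var/www paths stored in the .zip file end up back inside the mounted data volume.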
Creating a data volume image Since our data volume container will just hold our data, we should keep it as small as possible to start with so that it doesn't take lots of unnecessary space on the server. The data inside the container can, of course, grow to be as big as the space on the server's disk. We don't need anything fancy at all; we just need a working file storage system. For this article, we'll keep all our data (MySQL database files and WordPress files) in the same container. You can, of course, separate them into two data volume containers named something like dbdata and webdata. Data volume image Our data volume image does not need anything other than a working filesystem that we can read and write to. That's why our base image of choice will be BusyBox. This is how BusyBox describes itself: "BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete environment for any small or embedded system." That sounds great! We'll go ahead and add this to our Dockerfile: FROM busybox:latest Exposing mount points There is a VOLUME instruction for the Dockerfile, where you can define which directories to expose to other containers when this data volume container is added using --volumes-from attribute. In our data volume containers, we first need to add a directory for MySQL data. Let's take a look inside the MySQL image we will be using to see which directory is used for the data storage, and expose that directory to our data volume container so that we can own it: RUN mkdir –p /var/lib/mysql VOLUME ["/var/lib/mysql"] We also want our WordPress installation in this container, including all .php files and graphic images. Once again, we go to the image we will be using and find out which directory will be used. In this case, it's /var/www/html. When you add this to the Dockerfile, don't add new lines; just append the lines with the MySQL data directory: RUN mkdir -p /var/lib/mysql && mkdir -p /var/www/html VOLUME ["/var/lib/mysql", "/var/www/html"] The Dockerfile The following is a simple Dockerfile for the data image: FROM busybox:latest MAINTAINER Oskar Hane <oh@oskarhane.com> RUN mkdir -p /var/lib/mysql && mkdir -p /var/www/html VOLUME ["/var/lib/mysql", "/var/www/html"] And that's it! When publishing images to the Docker Registry Hub, it's good to include a MAINTAINER instruction in the Dockerfiles so that you can be contacted if someone wants, for some reason. Host on GitHub When we use our knowledge on how to host Docker image sources on GitHub and how to publish images on the Docker Registry Hub, it'll be no problem creating our data volume image. Let's create a branch and a Dockerfile and add the content for our data volume image: git checkout -b data vi Dockerfile git add Dockerfile On line number 2 in the preceding code, you can use the text editor of your choice. I just happen to find vi suits my needs. The content you should add to the Dockerfile is this: FROM busybox:latest MAINTAINER Oskar Hane <oh@oskarhane.com> RUN mkdir /var/lib/mysql && mkdir /var/www/html VOLUME ["/var/lib/mysql", "/var/www/html"] Replace the maintainer information with your name and e-mail. 
You can, and should, always ensure that it works before committing and pushing to GitHub. To do so, you need to build a Docker image from your Dockerfile:

    docker build -t data-test .

Make sure you notice the dot at the end of the line, which means that Docker should look for a Dockerfile in the current directory. Docker will try to build an image from the instructions in our Dockerfile. It should be pretty fast, since it's a small base image and there's nothing but a couple of VOLUME instructions on top of it.

When everything works as we want, it's time to commit the changes and push them to our GitHub repository:

    git commit -m "Dockerfile for data volume added."
    git push origin data

When you have pushed it to the repository, head over to GitHub to verify that your new branch is present there.

Publishing on the Docker Registry Hub

Now that we have our new branch on GitHub, we can go to the Docker Registry Hub and create a new automated build, named data, with our GitHub data branch as its source. Wait for the build to finish, and then try to pull the image with your Docker daemon to verify that it's there and that it works.

Amazing! Check out the size of the image: it's just less than 2.5 MB. This is perfect, since we just want to store data in it. A container on top of this image can, of course, be as big as your hard drive allows; this is just to show how big the image itself is. The image is read-only, remember?

Running a data volume container

Data volume containers are special; they can be stopped and still fulfill their purpose. Personally, I like to see all containers in use when executing the docker ps command, since I like to delete stopped containers once in a while. This is totally up to you. If you're okay with keeping the container stopped, you can start it using this command:

    docker run -d oskarhane/data true

The true argument is just there to provide a valid command, and the -d argument places the container in detached mode, running in the background. If you want to keep the container running, you need to place a service in the foreground, like this:

    docker run -d oskarhane/data tail -f /dev/null

The tail -f /dev/null command never ends, so the container will keep running until we stop it. Resource-wise, the tail command is pretty harmless.

Passing parameters to containers

We have seen how to give containers parameters or environment variables when starting the official MySQL container:

    docker run --name mysql-one -e MYSQL_ROOT_PASSWORD=pw -d mysql

The -e MYSQL_ROOT_PASSWORD=pw directive means that the MYSQL_ROOT_PASSWORD environment variable inside the container gets pw as its value. This is a very convenient way to build configurable containers, where a setup script used as ENTRYPOINT or a foreground script can configure passwords; hosts; test, staging, or production environments; and other settings that the container needs.

Creating a parameterized image

Just to get the hang of this very useful feature, let's create a small Docker image that converts a string to uppercase or lowercase, depending on the state of an environment variable. The Docker image will be based on the latest Debian distribution and will have only an ENTRYPOINT command.
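The build context for this image needs nothing more than the Dockerfile and the script it copies in. A minimal sketch of preparing it follows; the directory name is arbitrary, and both files get the content shown next:

    mkdir case-image && cd case-image
    # Create the two files described below in this directory:
    #   Dockerfile  - builds the image and sets the ENTRYPOINT
    #   case.sh     - the script that reads the environment variables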
This is the Dockerfile:

    FROM debian:latest
    ADD ./case.sh /root/case.sh
    RUN chmod +x /root/case.sh
    ENTRYPOINT /root/case.sh

This takes the case.sh file from our current directory, adds it to the container, makes it executable, and assigns it as ENTRYPOINT. The case.sh file may look something like this:

    #!/bin/bash

    if [ -z "$STR" ]; then
        echo "No STR string specified."
        exit 0
    fi

    if [ -z "$TO_CASE" ]; then
        echo "No TO_CASE specified."
        exit 0
    fi

    if [ "$TO_CASE" = "upper" ]; then
        echo "${STR^^*}"
        exit 0
    fi

    if [ "$TO_CASE" = "lower" ]; then
        echo "${STR,,*}"
        exit 0
    fi

    echo "TO_CASE was not upper or lower"

This script checks whether the $STR and $TO_CASE environment variables are set. If $TO_CASE is set to neither upper nor lower, an error message saying that we only handle upper and lower is displayed. If $TO_CASE is set to upper or lower, the content of the environment variable $STR is transformed to uppercase or lowercase respectively, and then printed to stdout.

Let's try this! After building the image (for example, with docker build -t case .), here are some commands we can try:

    docker run -i case
    docker run -i -e STR="My String" case
    docker run -i -e STR="My String" -e TO_CASE=camel case
    docker run -i -e STR="My String" -e TO_CASE=upper case
    docker run -i -e STR="My String" -e TO_CASE=lower case

This seems to be working as expected, at least for this purpose. Now we have created a container that takes parameters and acts upon them.

Summary

In this article, you learned that you can keep your data out of your service containers using data volumes. Data volumes can be directories or files from the host's filesystem, or data volume containers. We explored how we can pass parameters to containers and how to read them from inside ENTRYPOINT. Parameters are a great way to configure containers, making it easier to create more generalized Docker images. We created a data volume container and published it to the Docker Registry Hub.

Resources for Article:

Further resources on this subject:

- Supporting hypervisors by OpenNebula [article]
- Securing vCloud Using the vCloud Networking and Security App Firewall [article]
- Platform as a Service and CloudBees [article]

Security in Microsoft Azure

Packt
06 Apr 2015
9 min read
In this article, we highlight some security points of interest, according to the ones explained in the book Microsoft Azure Security, by Roberto Freato.

Microsoft Azure is a comprehensive set of services that enable Cloud computing solutions for enterprises and small businesses. It supports a variety of tools and languages, providing users with building blocks that can be composed as needed. Azure is actually one of the biggest players in the Cloud computing market, solving scalability issues, speeding up the entire management process, and integrating with the existing development tool ecosystem.

(For more resources related to this topic, see here.)

Standards and Azure

It is probably well known that the most widely accepted principles of IT security are confidentiality, integrity, and availability. Despite many security experts defining even more indicators/principles related to IT security, most security controls are focused on these principles, since vulnerabilities are often expressed as a breach of one (or many) of these three. These three principles are also known as the CIA triangle:

- Confidentiality: It is about disclosure. A breach of confidentiality means that somewhere, some critical and confidential information has been disclosed unexpectedly.
- Integrity: It is about the state of information. A breach of integrity means that information has been corrupted or, alternatively, that the meaning of the information has been altered unexpectedly.
- Availability: It is about interruption. A breach of availability means that information access is denied unexpectedly.

Ensuring confidentiality, integrity, and availability means that information flows are always monitored and the necessary controls are enforced. This is the purpose of a Security Management System, which, when talking about IT, becomes an Information Security Management System (ISMS).

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) often work together to build international standards for specific technical fields. They released the ISO/IEC 27000 series to provide a family of standards for ISMS, starting from definitions (ISO/IEC 27000), up to governance (ISO/IEC 27014), and even more. Two standards of particular interest are ISO/IEC 27001 and ISO/IEC 27002.

Microsoft manages the Azure infrastructure. At most, users can manage the operating system inside a Virtual Machine (VM), but they do not need to administer, edit, or influence the under-the-hood infrastructure, and they should not be able to do this at all. Therefore, Azure is a shared environment. This means that a customer's VM can run on the physical server of another customer and, for any given Azure service, two customers can even share the same VM (in some Platform as a Service (PaaS) and Software as a Service (SaaS) scenarios).

The Microsoft Azure Trust Center (http://azure.microsoft.com/en-us/support/trust-center/) highlights the attention given to the Cloud infrastructure, in terms of what Microsoft does to enforce security, privacy, and compliance.

Identity and Access Management

It is very common that different people within the same organization access and use the same Azure resources. In this case, a few scenarios arise: with the current portal, we can add several co-administrators; with the Preview portal, we can define fine-grained ACLs with the Role-Based Access Control (RBAC) features it implements.
By default, we can add external users into Azure Active Directory (AD) by inviting them through their e-mail address, which must be either a Microsoft account or an Azure AD account. In the Preview portal, the hierarchy is as follows:

- Subscription: This is a permission given at the subscription level, which is valid for each object within the subscription (that is, a Reader on the subscription can view everything within the subscription).
- Resource group: This is a fairly new concept in Azure. A resource group is (as the name suggests) a group of logically connected resources, such as the collection of resources used for the same web project (a web hosting plan, an SQL server, and so on). Permission given at this level is valid for each object within the resource group.
- Individual resource: This is a permission given to an individual resource, and it is valid only for that resource (that is, giving a client read-only access to view the Application Insights of a website).

Despite what its name suggests, Azure AD is just an Identity and Access Management (IAM) service, managed and hosted by Microsoft in Azure. We should not even try to compare it with on-premise Active Directory, because the two have different scopes and features. It is true that we can link Azure AD with an on-premise AD, but only for the purpose of extending its functionalities to work with Internet-based applications.

Azure AD can be considered a SaaS for IAM in its own right, before even considering its relationship with Azure services. A company that offers its SaaS solution to clients can also use Azure AD as the Identity Provider, relying on the many existing users of Office 365 (which relies on Azure AD for authentication) or of Azure AD itself.

Access Control Service (ACS) has been famous for a while for its capability to act as an identity bridge between applications and social identities. In the last few years, if developers wanted to integrate Facebook, Google, Yahoo, and Microsoft accounts (Live ID), they would probably have used ACS.

Using Platform as a Service

Although there are several ways to host custom code on Azure, the two most important building blocks are Websites and Cloud Services. The first is actually a PaaS built on top of the second (a PaaS too), and uses an open source engine named Project Kudu (https://github.com/projectkudu/kudu). Kudu is an open source engine that works with IIS and manages automatic or manual deployments of Azure Websites in a sandboxed environment. Kudu can also run outside Azure, but it is primarily supported to enable the Websites service.

An Azure Cloud Service is a container of roles: a role is the representation of a unit of execution, and it can be a worker role (an arbitrary application) or a web role (an IIS application). Each role within a Cloud Service can be deployed to several VMs (instances) at the same time, to provide scalability and load balancing. From the security perspective, we need to pay attention to these aspects of Cloud Services:

- Remote endpoints
- Remote desktops
- Startup tasks
- Microsoft Antimalware
- Network communication

Azure Websites are some of the most advanced PaaS offerings in the Cloud computing market, providing users with a lock-in-free solution to run applications built in various languages/platforms.
From the security perspective, we need to pay attention to these aspects of Websites:

- Credentials
- Connection modes
- Settings and connection strings
- Backups
- Extensions

Azure services have grown much faster (with regard to the number of services and the surface area) than in the past, at an amazingly increasing rate: consequently, we have several options to store any kind of data (relational, NoSQL, binary, JSON, and so on).

Azure Storage is the base service for almost everything on the platform. Storage security is implemented in two different ways:

- Account Keys
- Shared Access Signatures

While looking at the specifications of many Azure services, we often see a scalability targets section. For a given service, Azure provides users with a set of upper limits, in terms of capacity, bandwidth, and throughput, to let them design their Cloud solutions better.

Working with SQL Database is straightforward. However, a few security best practices must be implemented to improve security:

- Setting up firewall rules
- Setting up users and roles
- Connection settings

Modern software architectures often rely on an in-memory caching system to hold frequently accessed data that does not change too often. Some extreme scenarios require us to use an in-memory cache as the primary data store for sensitive data, pursuing design patterns oriented to eventual persistence and consistency. Azure Managed Cache is the evolution of the former AppFabric Cache for Windows Server, and it is a managed in-memory cache service. Redis is an open source, high-performance data store written in ANSI C: since its name stands for Remote Dictionary Server, it is a key-value data store with optional durability.

Azure Key Vault is a new and promising service that is used to store cryptographic keys and application secrets. There is an official library to operate against Key Vault from .NET, using Azure AD authentication to get secrets or use the keys. Before using it, it is necessary to set appropriate permissions on the Key Vault for external access, using the Set-AzureKeyVaultAccessPolicy command.

Using Infrastructure as a Service

Customers choosing Infrastructure as a Service (IaaS) usually have existing project constraints that do not fit PaaS. Think of a complex installation of an enterprise-level software suite, such as an ERP or a SharePoint farm; this is one of the cases where a service such as an Azure Website probably cannot fit. The two main services whose security requirements should be correctly understood and addressed are:

- Azure Virtual Machines
- Azure Virtual Networks

VMs are the most configurable execution environments for applications that Azure provides. With VMs, we can run arbitrary workloads and use custom tools and applications, but we need to manage and maintain them directly, including their security. From the security perspective, we need to pay attention to these aspects of VMs:

- VM creation
- Endpoints and ACLs
- Networking and isolation
- Microsoft Antimalware
- Operating system firewalls
- Auditing and best practices

Azure Backup helps protect servers or clients against data loss, providing a backup in a second location. While performing a backup, in fact, one of the primary requirements is the location of the backup: avoid backing up sensitive or critical data to a physical location that is strictly connected to the primary source of the data itself. In the case of a disaster involving the facility where the source is located, there is a higher probability of losing data (including the backup).
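As an illustration of how the Shared Access Signatures mentioned above are consumed, a signed URL can be handed to a client so that it can read a single blob without ever holding the account keys. The account, container, blob, and token values below are all placeholders:

    # Read one blob with a (hypothetical) SAS token appended as a query string;
    # the token carries its own permissions (sp=r, read-only) and expiry (se=...),
    # so the storage account key never leaves the place that generated the signature.
    curl "https://myaccount.blob.core.windows.net/backups/site.zip?sv=2014-02-14&sr=b&sp=r&se=2015-12-31T00%3A00%3A00Z&sig=PLACEHOLDER"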
Summary

In this article, we covered various security-related aspects of Microsoft Azure services, such as PaaS, IaaS, and IAM.

Resources for Article:

Further resources on this subject:

- Web API and Client Integration [article]
- Setting up of Software Infrastructure on the Cloud [article]
- Windows Phone 8 Applications [article]