

Arduino Development

Packt
22 Apr 2015
19 min read
Most systems using the Arduino have a similar architecture. They have a way of reading data from the environment—a sensor—they make decisions using the code running inside the Arduino, and then they output those decisions to the environment using various actuators, such as a simple motor. Using three recipes from the book Arduino Development Cookbook, by Cornel Amariei, we will build such a system, and quite a useful one—a fan controlled by the air temperature. Let's break the process into three key steps. The first and easiest is to connect an LED to the Arduino; a few of them will act as a thermometer, displaying the room temperature. The second step is to connect the temperature sensor and program it, and the third is to connect the motor. Here, we will learn these basic skills.

Connecting an external LED

Luckily, the Arduino boards come with an internal LED connected to pin 13. It is simple to use and always there. But most times we want our own LEDs in different places of our system. It is also possible that we connect something on top of the Arduino board and are unable to see the internal LED anymore. Here, we will explore how to connect an external LED.

Getting ready

For this step we need the following ingredients:

- An Arduino board connected to the computer via USB
- A breadboard and jumper wires
- A regular LED (the typical LED size is 3 mm)
- A resistor between 220 and 1,000 ohm

How to do it…

Follow these steps to connect an external LED to an Arduino board:

1. Mount the resistor on the breadboard.
2. Connect one end of the resistor to a digital pin on the Arduino board using a jumper wire.
3. Mount the LED on the breadboard.
4. Connect the anode (+) pin of the LED to the available pin on the resistor. We can determine the anode on the LED in two ways. Usually, the longer pin is the anode. Another way is to look for the flat edge on the outer casing of the LED; the pin next to the flat edge is the cathode (-).
5. Connect the LED cathode (-) to the Arduino GND using jumper wires.

Schematic

This is one possible implementation on the second digital pin; other digital pins can also be used.

Code

The following code will make the external LED blink:

// Declare the LED pin
int LED = 2;

void setup() {
  // Declare the pin for the LED as Output
  pinMode(LED, OUTPUT);
}

void loop() {
  // Here we will turn the LED ON and wait 200 milliseconds
  digitalWrite(LED, HIGH);
  delay(200);
  // Here we will turn the LED OFF and wait 200 milliseconds
  digitalWrite(LED, LOW);
  delay(200);
}

If the LED is connected to a different pin, simply change the LED value to the number of the pin that has been used.

How it works…

This is all semiconductor magic. When the second digital pin is set to HIGH, the Arduino provides 5 V, which travels through the resistor to the LED and on to GND. When enough voltage and current are present, the LED lights up. The resistor limits the amount of current passing through the LED; without it, the LED (or worse, the Arduino pin) may burn. Try to avoid using LEDs without resistors; this can easily destroy the LED or even your Arduino.

Code breakdown

The code simply turns the LED on, waits, and then turns it off again. In this one we will use a blocking approach by using the delay() function; a non-blocking alternative based on millis() is sketched below before we step through the blocking code.
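For reference, the same blink can be written without halting the loop, by timing it with millis(). This is only a sketch of that alternative, reusing the same pin assumption as above; it is not part of the original recipe:

// Non-blocking blink: toggle the LED every 200 ms using millis()
int LED = 2;
unsigned long lastToggle = 0;
bool ledState = false;

void setup() {
  pinMode(LED, OUTPUT);
}

void loop() {
  // Check how much time has passed instead of halting with delay()
  if (millis() - lastToggle >= 200) {
    lastToggle = millis();
    ledState = !ledState;
    digitalWrite(LED, ledState ? HIGH : LOW);
  }
  // Other work can run here without being blocked
}

The original recipe's blocking version is broken down next.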
Here we declare the LED pin, on digital pin 2:

int LED = 2;

In the setup() function we set the LED pin as an output:

void setup() {
  pinMode(LED, OUTPUT);
}

In the loop() function, we continuously turn the LED on, wait 200 milliseconds, and then turn it off. After turning it off we need to wait another 200 milliseconds; otherwise it will instantaneously turn on again and we will only see a permanently lit LED.

void loop() {
  // Here we will turn the LED ON and wait 200 milliseconds
  digitalWrite(LED, HIGH);
  delay(200);
  // Here we will turn the LED OFF and wait 200 milliseconds
  digitalWrite(LED, LOW);
  delay(200);
}

There's more…

There are a few more things we can do. For example, what if we want more LEDs? Do we really need to mount the resistor first and then the LED?

LED resistor

We do need the resistor connected to the LED; otherwise there is a chance that the LED or the Arduino pin will burn. However, we can also mount the LED first and then the resistor. This means we will connect the Arduino digital pin to the anode (+) and the resistor between the LED cathode (-) and GND. If we want a quick cheat, check the following See also section.

Multiple LEDs

Each LED will require its own resistor and digital pin. For example, we can mount one LED on pin 2 and one on pin 3 and control each individually. What if we want multiple LEDs on the same pin? Due to the low voltage of the Arduino, we cannot really mount more than three LEDs on a single pin. For this we require a small resistor, 220 ohm for example, and we need to mount the LEDs in series. This means that the cathode (-) of the first LED is connected to the anode (+) of the second LED, and the cathode (-) of the second LED is connected to GND. The resistor can be placed anywhere in the path from the digital pin to GND.

See also

For more information on external LEDs, take a look at the following recipes and links:

- For more details about LEDs in general, visit http://electronicsclub.info/leds.htm
- To connect multiple LEDs to a single pin, read the instructable at http://www.instructables.com/id/How-to-make-a-string-of-LEDs-in-parallel-for-ardu/
- Because we are always lazy and we don't want to compute the needed resistor values, use the calculator at http://www.evilmadscientist.com/2009/wallet-size-led-resistance-calculator/

Now that we know how to connect an LED, let's also learn how to work with a basic temperature sensor and build the thermometer we need.

Temperature sensor

Almost all analog sensors use the same interface, and here we explore a very useful and fun sensor that uses it. Temperature sensors are useful for obtaining data from the environment, and they come in a variety of shapes, sizes, and specifications. We can mount one at the end of a robotic hand and measure the temperature in dangerous liquids, or we can just build a thermometer. Here, we will build a small thermometer using the classic LM35 and a bunch of LEDs.

Getting ready

The following are the ingredients required:

- An LM35 temperature sensor
- A bunch of LEDs, in different colors for a better effect
- Some resistors between 220 and 1,000 ohm

How to do it…

The following are the steps to connect the LM35 and the LEDs:

1. Connect the LEDs next to each other on the breadboard.
2. Connect all the LED negative terminals—the cathodes—together and then connect them to the Arduino GND.
3. Connect a resistor to each positive terminal of the LEDs. Then, connect each of the remaining resistor terminals to a digital pin on the Arduino. Here, we used pins 2 to 6.
4. Plug the LM35 into the breadboard and connect its ground to the GND line. The GND pin is the one on the right when looking at the flat face of the package.
5. Connect the leftmost pin on the LM35 to 5V on the Arduino.
6. Lastly, use a jumper wire to connect the center LM35 pin to an analog input on the Arduino. Here we used the A0 analog pin.

Schematic

This is one possible implementation, using pin A0 for analog input and pins 2 to 6 for the LEDs.

Code

The following code will read the temperature from the LM35 sensor, write it to the serial port, and light up the LEDs to create a thermometer effect:

// Declare the LEDs in an array
int LED[5] = {2, 3, 4, 5, 6};
int sensorPin = A0; // Declare the used sensor pin

void setup() {
  // Start the Serial connection
  Serial.begin(9600);
  // Set all LEDs as OUTPUTs
  for (int i = 0; i < 5; i++) {
    pinMode(LED[i], OUTPUT);
  }
}

void loop() {
  // Read the value of the sensor
  int val = analogRead(sensorPin);
  Serial.println(val); // Print it to the Serial

  // On the LM35 each degree Celsius equals 10 mV
  // 20 C is represented by 200 mV, which means 0.2 V / 5 V * 1023 = 41
  // Each degree is represented by an analog value change of approximately 2

  // Set all LEDs off
  for (int i = 0; i < 5; i++) {
    digitalWrite(LED[i], LOW);
  }

  if (val > 40 && val < 45) { // 20 - 22 C
    digitalWrite(LED[0], HIGH);
  } else if (val >= 45 && val < 49) { // 22 - 24 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
  } else if (val >= 49 && val < 53) { // 24 - 26 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
  } else if (val >= 53 && val < 57) { // 26 - 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
  } else if (val >= 57) { // Over 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
    digitalWrite(LED[4], HIGH);
  }

  delay(100); // Small delay for the Serial to send
}

Blow into the temperature sensor to observe how the temperature goes up or down.

How it works…

The LM35 is a very simple and reliable sensor. It outputs an analog voltage on the center pin that is proportional to the temperature. More exactly, it outputs 10 mV for each degree Celsius. For a common value of 25 degrees, it will output 250 mV, or 0.25 V. We use the ADC inside the Arduino to read that voltage and light up the LEDs accordingly: if it's hot, we light up more of them; if not, fewer. If the LEDs are in order, we get a nice thermometer effect.

Code breakdown

First, we declare the LED pins used and the analog input to which we connected the sensor. We have five LEDs to declare, so rather than defining five variables, we can store all five pin numbers in an array with five elements:

int LED[5] = {2, 3, 4, 5, 6};
int sensorPin = A0;

We use the same array trick to simplify setting each pin as an output in the setup() function. Rather than calling the pinMode() function five times, we use a for loop that iterates through each value in the LED array and sets each pin as an output:

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 5; i++) {
    pinMode(LED[i], OUTPUT);
  }
}

In the loop() function, we continuously read the value of the sensor using the analogRead() function and then print it to the serial port:

int val = analogRead(sensorPin);
Serial.println(val);

At last, we create our thermometer effect. For each degree Celsius, the LM35 returns 10 mV more.
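If we would rather work directly in degrees Celsius than in raw ADC counts, the conversion can be wrapped in a small helper. This is only a sketch, assuming the default 5 V analog reference and the 10-bit (0 to 1023) ADC; it is not part of the original recipe:

// Convert a raw analogRead() value into degrees Celsius for the LM35
// Assumes the default 5 V reference and the 10-bit ADC
float rawToCelsius(int raw) {
  float volts = raw * (5.0 / 1023.0); // ADC counts -> volts
  return volts * 100.0;               // LM35: 10 mV per degree Celsius
}

// Example use inside loop():
//   float tempC = rawToCelsius(analogRead(sensorPin));
//   Serial.println(tempC);

The recipe itself, however, compares raw readings against precomputed thresholds, as explained next.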
We can convert this to our analogRead() value in this way: 5 V returns 1023, so a value of 0.20 V, corresponding to 20 degrees Celsius, will return 0.20 V / 5 V * 1023, which is roughly 41. We have five different temperature regions, and we use standard if and else clauses to determine which region we are in; then we light the required LEDs.

There's more…

Almost all analog sensors use this method to return a value: they output a voltage proportional to the quantity they measure, which we can read using the analogRead() function. Here are just a few of the sensor types we can use with this interface: temperature, humidity, pressure, altitude, depth, liquid level, distance, radiation, interference, current, voltage, inductance, resistance, capacitance, acceleration, orientation, angular velocity, magnetism, compass heading, infrared, flexing, weight, force, alcohol, methane and other gases, light, sound, pulse, unique IDs such as fingerprints, and even ghosts!

The last building block is the fan motor. Any DC fan motor will do for this; here we will learn how to connect and program it.

Controlling motors with transistors

We can control a motor by connecting it directly to an Arduino digital pin; however, any motor bigger than a coin would kill the digital pin and most probably burn the Arduino. The solution is to use a simple amplification device, the transistor, to aid in controlling motors of any size. Here, we will explore how to control larger motors using both NPN and PNP transistors.

Getting ready

To execute this recipe, you will require the following ingredients:

- A DC motor
- A resistor between 220 ohm and 10K ohm
- A standard NPN transistor (BC547, 2N3904, 2N2222A, TIP120)
- A standard diode (1N4148, 1N4001, 1N4007)

All these components can be found on websites such as Adafruit, Pololu, and SparkFun, or in any general electronics store.

How to do it…

The following are the steps to connect a motor using a transistor:

1. Connect the Arduino GND to the long strip on the breadboard.
2. Connect one of the motor terminals to VIN or 5V on the Arduino. We use 5V if we power the board from the USB port. If we want higher voltages, we can use an external power source, such as a battery, and connect it to the power jack on the Arduino. However, even the power jack has an input voltage range of 7 V–12 V; don't exceed these limits.
3. Connect the other terminal of the motor to the collector pin on the NPN transistor. Check the datasheet to identify which terminal on the transistor is the collector.
4. Connect the emitter pin of the NPN transistor to GND using the long strip or a long connection.
5. Mount a resistor between the base pin of the NPN transistor and one digital pin on the Arduino board.
6. Mount a protection diode in parallel with the motor. The diode should point to 5V if the motor is powered by 5V, or to VIN if we use an external power supply.

Schematic

This is one possible implementation, on the ninth digital pin. The Arduino has to be powered by an external supply; if not, we can connect the motor to 5V and it will be powered with 5 volts.

Code

For the coding part, nothing changes compared with a small motor mounted directly on the pin.
The code will start the motor for 1 second and then stop it for another second:

// Declare the pin for the motor
int motorPin = 2;

void setup() {
  // Define pin #2 as output
  pinMode(motorPin, OUTPUT);
}

void loop() {
  // Turn motor on
  digitalWrite(motorPin, HIGH);
  // Wait 1000 ms
  delay(1000);
  // Turn motor off
  digitalWrite(motorPin, LOW);
  // Wait another 1000 ms
  delay(1000);
}

If the motor is connected to a different pin, simply change the motorPin value to the number of the pin that has been used.

How it works…

Transistors are very neat components that are unfortunately hard to understand. We should think of a transistor as an electric valve: the more current we put into the valve, the more water it will allow to flow. The same happens with a transistor, only here it is current that flows. If we apply a current to the base of an NPN transistor, a proportional current will be allowed to pass from the collector to the emitter. The more current we put on the base, the more current flows between the other two terminals.

When we set the digital pin HIGH on the Arduino, current passes from the pin to the base of the NPN transistor, thus allowing current to pass through the other two terminals. When we set the pin LOW, no current goes to the base and so no current passes through the other two terminals. Another analogy is a digital switch that allows current to pass from the collector to the emitter only when we 'push' the base with current.

Transistors are very useful because, with a very small current on the base, we can control a very large current from the collector to the emitter. A typical amplification factor, called beta, for a transistor is 200. This means that, for a base current of 1 mA, the transistor will allow a maximum of 200 mA to pass from the collector to the emitter.

An important component is the diode, which should never be omitted. A motor is also an inductor, and whenever an inductor is cut off from power it may generate large voltage spikes, which could easily destroy a transistor. The diode makes sure that all current coming out of the motor goes back to the power supply and not through the transistor.

There's more…

Transistors are handy devices; here are a few more things that can be done with them.

Pull-down resistor

The base of a transistor is very sensitive. Even touching it with a finger might make the motor turn. A solution to avoid unwanted noise and accidental starting of the motor is to use a pull-down resistor on the base pin. A value of around 10K is recommended, and it will safeguard the transistor from starting accidentally.

PNP transistors

A PNP transistor is even harder to understand. It uses the same principle, but in reverse: current flows from the base to the digital pin on the Arduino, and if we allow that current to flow, the transistor will allow current to pass from its emitter to its collector (yes, the opposite of what happens with an NPN transistor). Another important point is that the PNP is mounted between the power source and the load we want to power up. The load, in this case a motor, is connected between the collector on the PNP and ground. A key point to remember while using PNP transistors with the Arduino is that the maximum voltage on the emitter is 5 V, so the motor will never receive more than 5 V. If we use an external power supply for the motor, the base will see a voltage higher than 5 V and will burn the Arduino.
One possible solution, which is quite complicated, is shown as a schematic in the original recipe.

MOSFETs

Let's face it: NPN and PNP transistors are old. These days there are better devices that provide much better performance. They are called metal-oxide-semiconductor field-effect transistors; normal people just call them MOSFETs, and they work mostly the same way. The three pins on a normal transistor are called the collector, base, and emitter; on a MOSFET, they are called the drain, gate, and source. Operation-wise, we can use them in exactly the same way as normal transistors: when a voltage is applied at the gate, current will pass from the drain to the source in the case of an N-channel MOSFET. A P-channel MOSFET is the equivalent of a PNP transistor.

However, there are some important differences in the way a MOSFET works compared with a normal transistor. Not all MOSFETs can be properly turned on by the Arduino; usually logic-level MOSFETs will work. Some famous N-channel MOSFETs are the FQP30N06, the IRF510, and the IRF520. The first can handle up to 30 A and 60 V, while the other two can handle 5.6 A and 10 A, respectively, at 100 V. The previous circuit can be implemented in the same way using an N-channel MOSFET in place of the NPN transistor.

Different loads

A motor is not the only thing we can control with a transistor: any kind of DC load can be controlled. An LED, a light or other tools, even another Arduino can be powered up by an Arduino and a PNP or NPN transistor. Arduinoception!

See also

- For general and easy-to-use motors, Solarbotics is quite nice. Visit the site at https://solarbotics.com/catalog/motors-servos/.
- For higher-end motors that pack quite some power, Pololu has made a name for itself. Visit the site at https://www.pololu.com/category/51/pololu-metal-gearmotors.

Putting it all together

Now that we have the three key building blocks, we need to assemble them.
For the code, we only need to modify the temperature sensor sketch slightly so that it also drives the motor:

// Declare the LEDs in an array
int LED[5] = {2, 3, 4, 5, 6};
int sensorPin = A0; // Declare the used sensor pin
int motorPin = 9;   // Declare the used motor pin

void setup() {
  // Start the Serial connection
  Serial.begin(9600);
  // Set all LEDs as OUTPUTs
  for (int i = 0; i < 5; i++) {
    pinMode(LED[i], OUTPUT);
  }
  // Define motorPin as output
  pinMode(motorPin, OUTPUT);
}

void loop() {
  // Read the value of the sensor
  int val = analogRead(sensorPin);
  Serial.println(val); // Print it to the Serial

  // On the LM35 each degree Celsius equals 10 mV
  // 20 C is represented by 200 mV, which means 0.2 V / 5 V * 1023 = 41
  // Each degree is represented by an analog value change of approximately 2

  // Set all LEDs off
  for (int i = 0; i < 5; i++) {
    digitalWrite(LED[i], LOW);
  }

  if (val > 40 && val < 45) { // 20 - 22 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val >= 45 && val < 49) { // 22 - 24 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val >= 49 && val < 53) { // 24 - 26 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val >= 53 && val < 57) { // 26 - 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val >= 57) { // Over 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
    digitalWrite(LED[4], HIGH);
    digitalWrite(motorPin, HIGH); // Fan ON
  }

  delay(100); // Small delay for the Serial to send
}

Summary

In this article, we learned the three basic skills: connecting an LED to the Arduino, connecting a sensor, and connecting a motor.

Resources for Article:

Further resources on this subject:

- Internet of Things with Xively [article]
- Avoiding Obstacles Using Sensors [article]
- Hardware configuration [article]

Designing Jasmine Tests with Spies

Packt
22 Apr 2015
17 min read
In this article by Munish Sethi, author of the book Jasmine Cookbook, we will see the implementation of Jasmine tests using spies.

Nowadays, JavaScript has become the de facto programming language for building and empowering frontend/web applications. We can use JavaScript to develop simple or complex applications. However, applications in production are often vulnerable to bugs caused by design inconsistencies, logical implementation errors, and similar issues. Because of this, it is usually difficult to predict how an application will behave in a real-time environment, which leads to unexpected behavior, unavailability of the application, or outages of shorter or longer duration. This generates a lack of confidence and dissatisfaction among application users, and a high cost is often associated with fixing production bugs. Therefore, there is a need to develop applications with high quality and high availability.

Jasmine is a Behavior-Driven Development (BDD) framework for testing JavaScript code, both in the browser and on the server side. It plays a vital role in establishing an effective development process by applying efficient testing practices, and it provides a rich set of libraries to design and develop Jasmine specs (unit tests) for JavaScript (or JavaScript-enabled) applications. In this article, we will see how to develop specs using Jasmine spies and matchers. We will also see how to write Jasmine specs with a Data-Driven approach, using JSON/HTML fixtures, from an end-to-end (E2E) perspective.

Let's understand the concept of mocks before we start developing Jasmine specs with spies. Generally, we write one unit test, corresponding to a Jasmine spec, to test a method, object, or component in isolation and see how it behaves in different circumstances. However, there are situations where a method or object has dependencies on other methods or objects. In this scenario, we need to design tests/specs across units, methods, or components to validate behavior or simulate a real-time scenario. However, due to the unavailability of the dependent methods/objects, or of a staging/production environment, it is quite challenging to write Jasmine tests for methods that have dependencies on other methods/objects. This is where mocks come into the picture.

A mock is a fake object that replaces the original object/method and imitates the behavior of the real object, without going into the nitty-gritty of creating the real object/method. Mocks work by implementing the proxy model: whenever we create a mock object, a proxy object is created that replaces the real object/method. We can then define, in our test method, which methods are called and what values they return. Mocks can then be utilized to retrieve some runtime statistics, as follows:

- How many times was the mocked function/object method called?
- What value did the function return to the caller?
- With how many arguments was the function called?

Developing Jasmine specs using spies

In Jasmine, mocks are referred to as spies. Spies are used to mock a function or object method. A spy can stub any function and track calls to it and all its arguments. Jasmine provides a rich set of functions and properties to enable mocking, and there are special matchers for interacting with spies: toHaveBeenCalled and toHaveBeenCalledWith.

Now, to understand the preceding concepts, let's assume that you are developing an application for a company providing solutions for the healthcare industry. (A minimal, stand-alone spy example follows before we build out this scenario.)
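To make the mechanics concrete before the healthcare example, here is a minimal, hedged illustration of spyOn() and its matchers. The calculator object and its add method are invented purely for illustration and do not appear in the book's example:

describe("A minimal spy example", function() {
  it("tracks calls and arguments to a mocked method", function() {
    // A throwaway object with a method we want to mock
    var calculator = {
      add: function(a, b) { return a + b; }
    };

    // Replace calculator.add with a spy for the duration of this spec
    spyOn(calculator, "add");

    calculator.add(2, 3);

    // The special matchers verify how the spy was used
    expect(calculator.add).toHaveBeenCalled();
    expect(calculator.add).toHaveBeenCalledWith(2, 3);
  });
});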
Currently, there is a need to design a component that gets a person's details (such as name, age, blood group, details of diseases, and so on) and processes it further for other usage. Now, assume that you are developing a component that verifies a person's details for blood or organ donation. There are also a few factors or biological rules that exist to donate or receive blood. For now, we can consider the following biological factors: The person's age should be greater than or equal to 18 years The person should not be infected with HIV+ Let's create the validate_person_eligibility.js file and consider the following code in the current context: var Person = function(name, DOB, bloodgroup, donor_receiver) {    this.myName = name; this.myDOB = DOB; this.myBloodGroup = bloodgroup; this.donor_receiver = donor_receiver; this.ValidateAge    = function(myDOB){     this.myDOB = myDOB || DOB;     return this.getAge(this.myDOB);    };    this.ValidateHIV   = function(personName,personDOB,personBloodGroup){     this.myName = personName || this.myName;     this.myDOB = personDOB || this.myDOB;     this.myBloodGroup = personBloodGroup || this.myBloodGroup;     return this.checkHIV(this.myName, this.myDOB, this.myBloodGroup);    }; }; Person.prototype.getAge = function(birth){ console.log("getAge() function is called"); var calculatedAge=0; // Logic to calculate person's age will be implemented later   if (calculatedAge<18) {    throw new ValidationError("Person must be 18 years or older"); }; return calculatedAge; }; Person.prototype.checkHIV = function(pName, pDOB, pBloodGroup){ console.log("checkHIV() function is called"); bolHIVResult=true; // Logic to verify HIV+ will be implemented later   if (bolHIVResult == true) {    throw new ValidationError("A person is infected with HIV+");   }; return bolHIVResult; };   // Define custom error for validation function ValidationError(message) { this.message = message; } ValidationError.prototype = Object.create(Error.prototype); In the preceding code snapshot, we created an object Person, which accepts four parameters, that is, name of the person, date of birth, the person's blood group, and the donor or receiver. Further, we defined the following functions within the person's object to validate biological factors: ValidateAge(): This function accepts an argument as the date of birth and returns the person's age by calling the getAge function. You can also notice that under the getAge function, the code is not developed to calculate the person's age. ValidateHIV(): This function accepts three arguments as name, date of birth, and the person's blood group. It verifies whether the person is infected with HIV or not by calling the checkHIV function. Under the function checkHIV, you can observe that code is not developed to check whether the person is infected with HIV+ or not. 
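The recipe deliberately leaves the age calculation inside getAge unimplemented. Purely as an illustration, and not as code from the book, one possible implementation might look like the following sketch; it reuses the ValidationError defined above:

// A hedged sketch of the missing age calculation
Person.prototype.getAge = function(birth) {
  console.log("getAge() function is called");
  var birthDate = new Date(birth);
  var now = new Date();
  var calculatedAge = now.getFullYear() - birthDate.getFullYear();
  // Adjust if the birthday has not yet occurred this year
  var monthDiff = now.getMonth() - birthDate.getMonth();
  if (monthDiff < 0 || (monthDiff === 0 && now.getDate() < birthDate.getDate())) {
    calculatedAge--;
  }
  if (calculatedAge < 18) {
    throw new ValidationError("Person must be 18 years or older");
  }
  return calculatedAge;
};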
Next, let's create the spec file (validate_person_eligibility_spec.js) and code the following lines to develop the Jasmine spec, which validates all the test conditions (biological rules) described in the previous sections: describe("<ABC> Company: Health Care Solution, ", function() { describe("When to donate or receive blood, ", function(){    it("Person's age should be greater than " +        "or equal to 18 years", function() {      var testPersonCriteria = new Person();      spyOn(testPersonCriteria, "getAge");      testPersonCriteria.ValidateAge("10/25/1990");      expect(testPersonCriteria.getAge).toHaveBeenCalled();      expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990");    });    it("A person should not be " +        "infected with HIV+", function() {      var testPersonCriteria = new Person();      spyOn(testPersonCriteria, "checkHIV");      testPersonCriteria.ValidateHIV();      expect(testPersonCriteria.checkHIV).toHaveBeenCalled();    }); }); }); In the preceding snapshot, we mocked the functions getAge and checkHIV using spyOn(). Also, we applied the toHaveBeenCalled matcher to verify whether the function getAge is called or not. Let's look at the following pointers before we jump to the next step: Jasmine provides the spyOn() function to mock any JavaScript function. A spy can stub any function and track calls to it and to all arguments. A spy only exists in the describe or it block; it is defined, and will be removed after each spec. Jasmine provides special matchers, toHaveBeenCalled and toHaveBeenCalledWith, to interact with spies. The matcher toHaveBeenCalled returns true if the spy was called. The matcher toHaveBeenCalledWith returns true if the argument list matches any of the recorded calls to the spy. Let's add the reference of the validate_person_eligibility.js file to the Jasmine runner (that is, SpecRunner.html) and run the spec file to execute both the specs. You will see that both the specs are passing, as shown in the following screenshot: While executing the Jasmine specs, you can notice that log messages, which we defined under the getAge() and checkHIV functions, are not printed in the browser console window. Whenever, we mock a function using Jasmine's spyOn() function, it replaces the original method of the object with a proxy method. Next, let's consider a situation where the function <B> is called under the function <A>, which is mocked in your test. Due to the mock behavior, it creates a proxy object that replaces the function <A>, and function <B> will never be called. However, in order to pass the test, it needs to be executed. In this situation, we chain the spyOn() function with .and.callThrough. Let's consider the following test code: it("Person's age should be greater than " +    "or equal to 18 years", function() { var testPersonCriteria = new Person(); spyOn(testPersonCriteria, "getAge").and.callThrough(); testPersonCriteria.ValidateAge("10/25/1990"); expect(testPersonCriteria.getAge).toHaveBeenCalled(); expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990"); }); Whenever the spyOn() function is chained with and.callThrough, the spy will still track all calls to it. However, in addition, it will delegate the control back to the actual implementation/function. To see the effect, let's run the spec file check_person_eligibility_spec.js with the Jasmine runner. 
You will see that the spec is failing, as shown in the following screenshot: This time, while executing the spec file, you can notice that a log message (that is, getAge() function is called) is also printed in the browser console window. On the other hand, you can also define your own logic or set values in your test code as per specific requirements by chaining the spyOn() function with and.callFake. For example, consider the following code: it("Person's age should be greater than " +    "or equal to 18 years", function() { var testPersonCriteria = new Person(); spyOn(testPersonCriteria, "getAge").and.callFake(function()      {    return 18; }); testPersonCriteria.ValidateAge("10/25/1990"); expect(testPersonCriteria.getAge).toHaveBeenCalled(); expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990"); expect(testPersonCriteria.getAge()).toEqual(18); }); Whenever the spyOn() function is chained with and.callFake, all calls to the spy will be delegated to the supplied function. You can also notice that we added one more expectation to validate the person's age. To see execution results, run the spec file with the Jasmine runner. You will see that both the specs are passing: Implementing Jasmine specs using custom spy method In the previous section, we looked at how we can spy on a function. Now, we will understand the need of custom spy method and how Jasmine specs can be designed using it. There are several cases when one would need to replace the original method. For example, original function/method takes a long time to execute or it depends on the other method/object (or third-party system) that is/are not available in the test environment. In this situation, it is beneficial to replace the original method with a fake/custom spy method for testing purpose. Jasmine provides a method called jasmine.createSpy to create your own custom spy method. As we described in the previous section, there are few factors or biological rules that exist to donate or receive bloods. Let's consider few more biological rules as follows: Person with O+ blood group can receive blood from a person with O+ blood group Person with O+ blood group can give the blood to a person with A+ blood group First, let's update the JavaScript file validate_person_eligibility.js and add a new method ValidateBloodGroup to the Person object. Consider the following code: this.ValidateBloodGroup   = function(callback){ var _this = this; var matchBloodGroup; this.MatchBloodGroupToGiveReceive(function (personBloodGroup) {    _this.personBloodGroup = personBloodGroup;    matchBloodGroup = personBloodGroup;    callback.call(_this, _this.personBloodGroup); }); return matchBloodGroup; };   Person.prototype.MatchBloodGroupToGiveReceive = function(callback){ // Network actions are required to match the values corresponding // to match blood group. Network actions are asynchronous hence the // need for a callback. // But, for now, let's use hard coded values. var matchBloodGroup; if (this.donor_receiver == null || this.donor_receiver == undefined){    throw new ValidationError("Argument (donor_receiver) is missing "); }; if (this.myBloodGroup == "O+" && this.donor_receiver.toUpperCase() == "RECEIVER"){    matchBloodGroup = ["O+"]; }else if (this.myBloodGroup == "O+" && this.donor_receiver.toUpperCase() == "DONOR"){    matchBloodGroup = ["A+"]; }; callback.call(this, matchBloodGroup); }; In the preceding code snapshot, you can notice that the ValidateBloodGroup() function accepts an argument as the callback function. 
The ValidateBloodGroup() function returns matching/eligible blood group(s) for receiver/donor by calling the MatchBloodGroupToGiveReceive function. Let's create the Jasmine tests with custom spy method using the following code: describe("Person With O+ Blood Group: ", function(){ it("can receive the blood of the " +      "person with O+ blood group", function() {    var testPersonCriteria = new Person("John Player", "10/30/1980", "O+", "Receiver");    spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();    var callback = jasmine.createSpy();    testPersonCriteria.ValidateBloodGroup(callback);    //Verify, callback method is called or not    expect(callback).toHaveBeenCalled();    //Verify, MatchBloodGroupToGiveReceive is    // called and check whether control goes back    // to the function or not    expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();    expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.any()).toEqual(true);        expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.count()).toEqual(1);    expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("O+"); }); it("can give the blood to the " +      "person with A+ blood group", function() {    var testPersonCriteria = new Person("John Player", "10/30/1980", "O+", "Donor");    spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();    var callback = jasmine.createSpy();    testPersonCriteria.ValidateBloodGroup(callback);    expect(callback).toHaveBeenCalled();    expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();    expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("A+"); }); }); You can notice that in the preceding snapshot, first we mocked the function MatchBloodGroupToGiveReceive using spyOn() and chained it with and.callThrough() to hand over the control back to the function. Thereafter, we created callback as the custom spy method using jasmine.createSpy. Furthermore, we are tracking calls/arguments to the callback and MatchBloodGroupToGiveReceive functions using tracking properties (that is, .calls.any() and .calls.count()). Whenever we create a custom spy method using jasmine.createSpy, it creates a bare spy. It is a good mechanism to test the callbacks. You can also track calls and arguments corresponding to custom spy method. However, there is no implementation behind it. To execute the tests, run the spec file with the Jasmine runner. You will see that all the specs are passing: Implementing Jasmine specs using Data-Driven approach In Data-Driven approach, Jasmine specs get input or expected values from the external data files (JSON, CSV, TXT files, and so on), which are required to run/execute tests. In other words, we isolate test data and Jasmine specs so that one can prepare the test data (input/expected values) separately as per the need of specs. For example, in the previous section, we provided all the input values (that is, name of person, date of birth, blood group, donor or receiver) to the person's object in the test code itself. However, for better management, it's always good to maintain test data and code/specs separately. To implement Jasmine tests with the data-driven approach, let's create a data file fixture_input_data.json. 
For now, you can use the following data in JSON format: [ { "Name": "John Player", "DOB": "10/30/1980", "Blood_Group": "O+", "Donor_Receiver": "Receiver" }, { "Name": "John Player", "DOB": "10/30/1980", "Blood_Group": "O+", "Donor_Receiver": "Donor" } ] Next, we will see how to provide all the required input values in our tests through a data file using the jasmine-jquery plugin. Before we move to the next step and implement the Jasmine tests with the Data-Driven approach, let's note the following points regarding the jasmine-jquery plugin: It provides two extensions to write the tests with HTML and JSON fixture: An API for handling HTML and JSON fixtures in your specs A set of custom matchers for jQuery framework The loadJSONFixtures method loads fixture(s) from one or more JSON files and makes it available at runtime To know more about the jasmine-jquery plugin, you can visit the following website: https://github.com/velesin/jasmine-jquery Let's implement both the specs created in the previous section using the Data-Driven approach. Consider the following code: describe("Person With O+ Blood Group: ", function(){    var fixturefile, fixtures, myResult; beforeEach(function() {        //Start - Load JSON Files to provide input data for all the test scenarios        fixturefile = "fixture_input_data.json";        fixtures = loadJSONFixtures(fixturefile);        myResult = fixtures[fixturefile];            //End - Load JSON Files to provide input data for all the test scenarios });   it("can receive the blood of the " +      "person with O+ blood group", function() {    //Start - Provide input values from the data file    var testPersonCriteria = new Person(        myResult[0].Name,        myResult[0].DOB,        myResult[0].Blood_Group,        myResult[0].Donor_Receiver    );    //End - Provide input values from the data file    spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();    var callback = jasmine.createSpy();    testPersonCriteria.ValidateBloodGroup(callback);    //Verify, callback method is called or not    expect(callback).toHaveBeenCalled();    //Verify, MatchBloodGroupToGiveReceive is    // called and check whether control goes back    // to the function or not    expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();    expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.any()).toEqual(true);        expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.count()).toEqual(1);    expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("O+"); }); it("can give the blood to the " +      "person with A+ blood group", function() {    //Start - Provide input values from the data file    var testPersonCriteria = new Person(        myResult[1].Name,        myResult[1].DOB,        myResult[1].Blood_Group,        myResult[1].Donor_Receiver    );    //End - Provide input values from the data file    spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();    var callback = jasmine.createSpy();    testPersonCriteria.ValidateBloodGroup(callback);    expect(callback).toHaveBeenCalled();    expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();    expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("A+"); }); }); In the preceding code snapshot, you can notice that first we provided the input data from an external JSON file (that is, fixture_input_data.json) using the loadJSONFixtures function and made it available at runtime. 
Thereafter, we provided the input values to both specs as required: we set the name, date of birth, blood group, and donor/receiver value for specs 1 and 2, respectively. Following the same methodology, we can also create a separate data file for the expected values that we need in our tests to compare with the actual values. If test data (input or expected values) is required during execution, it is advisable to provide it from an external file instead of hardcoding values in your tests. Now, execute the test suite with the Jasmine runner and you will see that all the specs pass.

Summary

In this article, we looked at the implementation of Jasmine tests using spies. We also demonstrated how to test a callback function using a custom spy method. Further, we saw the implementation of the Data-Driven approach, where you learned how to isolate test data from the code.

Resources for Article:

Further resources on this subject:

- Web Application Testing [article]
- Testing Backbone.js Application [article]
- The architecture of JavaScriptMVC [article]

Constructing Common UI Widgets

Packt
22 Apr 2015
21 min read
One of the biggest features that draws developers to Ext JS is the vast array of UI widgets available out of the box. The ease with which they can be integrated with each other and the attractive and consistent visuals each of them offers is also a big attraction. No other framework can compete on this front, and this is a huge reason Ext JS leads the field of large-scale web applications. In this article by Stuart Ashworth and Andrew Duncan by authors of the book, Ext JS Essentials, we will look at how UI widgets fit into the framework's structure, how they interact with each other, and how we can retrieve and reference them. We will then delve under the surface and investigate the lifecycle of a component and the stages it will go through during the lifetime of an application. (For more resources related to this topic, see here.) Anatomy of a UI widget Every UI element in Ext JS extends from the base component class Ext.Component. This class is responsible for rendering UI elements to the HTML document. They are generally sized and positioned by layouts used by their parent components and participate in the automatic component lifecycle process. You can imagine an instance of Ext.Component as a single section of the user interface in a similar way that you might think of a DOM element when building traditional web interfaces. Each subclass of Ext.Component builds upon this simple fact and is responsible for generating more complex HTML structures or combining multiple Ext.Components to create a more complex interface. Ext.Component classes, however, can't contain other Ext.Components. To combine components, one must use the Ext.container.Container class, which itself extends from Ext.Component. This class allows multiple components to be rendered inside it and have their size and positioning managed by the framework's layout classes. Components and HTML Creating and manipulating UIs using components requires a slightly different way of thinking than you may be used to when creating interactive websites with libraries such as jQuery. The Ext.Component class provides a layer of abstraction from the underlying HTML and allows us to encapsulate additional logic to build and manipulate this HTML. This concept is different from the way other libraries allow you to manipulate UI elements and provides a hurdle for new developers to get over. The Ext.Component class generates HTML for us, which we rarely need to interact with directly; instead, we manipulate the configuration and properties of the component. The following code and screenshot show the HTML generated by a simple Ext.Component instance: var simpleComponent = Ext.create('Ext.Component', { html   : 'Ext JS Essentials!', renderTo: Ext.getBody() }); As you can see, a simple <DIV> tag is created, which is given some CSS classes and an autogenerated ID, and has the HTML config displayed inside it. This generated HTML is created and managed by the Ext.dom.Element class, which wraps a DOM element and its children, offering us numerous helper methods to interrogate and manipulate it. After it is rendered, each Ext.Component instance has the element instance stored in its el property. You can then use this property to manipulate the underlying HTML that represents the component. As mentioned earlier, the el property won't be populated until the component has been rendered to the DOM. You should put logic dependent on altering the raw HTML of the component in an afterrender event listener or override the afterRender method. 
The following example shows how you can manipulate the underlying HTML once the component has been rendered. It will set the background color of the element to red: Ext.create('Ext.Component', { html     : 'Ext JS Essentials!', renderTo : Ext.getBody(), listeners: {    afterrender: function(comp) {      comp.el.setStyle('background-color', 'red');    } } }); It is important to understand that digging into and updating the HTML and CSS that Ext JS creates for you is a dangerous game to play and can result in unexpected results when the framework tries to update things itself. There is usually a framework way to achieve the manipulations you want to include, which we recommend you use first. We always advise new developers to try not to fight the framework too much when starting out. Instead, we encourage them to follow its conventions and patterns, rather than having to wrestle it to do things in the way they may have previously done when developing traditional websites and web apps. The component lifecycle When a component is created, it follows a lifecycle process that is important to understand, so as to have an awareness of the order in which things happen. By understanding this sequence of events, you will have a much better idea of where your logic will fit and ensure you have control over your components at the right points. The creation lifecycle The following process is followed when a new component is instantiated and rendered to the document by adding it to an existing container. When a component is shown explicitly (for example, without adding to a parent, such as a floating component) some additional steps are included. These have been denoted with a * in the following process. constructor First, the class' constructor function is executed, which triggers all of the other steps in turn. By overriding this function, we can add any setup code required for the component. Config options processed The next thing to be handled is the config options that are present in the class. This involves each option's apply and update methods being called, if they exist, meaning the values are available via the getter from now onwards. initComponent The initComponent method is now called and is generally used to apply configurations to the class and perform any initialization logic. render Once added to a container, or when the show method is called, the component is rendered to the document. boxready At this stage, the component is rendered and has been laid out by its parent's layout class, and is ready at its initial size. This event will only happen once on the component's first layout. activate (*) If the component is a floating item, then the activate event will fire, showing that the component is the active one on the screen. This will also fire when the component is brought back to focus, for example, in a Tab panel when a tab is selected. show (*) Similar to the previous step, the show event will fire when the component is finally visible on screen. The destruction process When we are removing a component from the Viewport and want to destroy it, it will follow a destruction sequence that we can use to ensure things are cleaned up sufficiently, so as to avoid memory leaks and so on. The framework takes care of the majority of this cleanup for us, but it is important that we tidy up any additional things we instantiate. hide (*) When a component is manually hidden (using the hide method), this event will fire and any additional hide logic can be included here. 
deactivate (*) Similar to the activate step, this is fired when the component becomes inactive. As with the activate step, this will happen when floating and nested components are hidden and are no longer the items under focus. destroy This is the final step in the teardown process and is implemented when the component and its internal properties and objects are cleaned up. At this stage, it is best to remove event handlers, destroy subclasses, and ensure any other references are released. Component Queries Ext JS boasts a powerful system to retrieve references to components called Component Queries. This is a CSS/XPath style query syntax that lets us target broad sets or specific components within our application. For example, within our controller, we may want to find a button with the text "Save" within a component of type MyForm. In this section, we will demonstrate the Component Query syntax and how it can be used to select components. We will also go into details about how it can be used within Ext.container.Container classes to scope selections. xtypes Before we dive in, it is important to understand the concept of xtypes in Ext JS. An xtype is a shorthand name for an Ext.Component that allows us to identify its declarative component configuration objects. For example, we can create a new Ext.Component as a child of an Ext.container.Container using an xtype with the following code: Ext.create('Ext.Container', { items: [    {      xtype: 'component',      html : 'My Component!'    } ] }); Using xtypes allows you to lazily instantiate components when required, rather than having them all created upfront. Common component xtypes include: Classes xtypes Ext.tab.Panel tabpanel Ext.container.Container container Ext.grid.Panel gridpanel Ext.Button button xtypes form the basis of our Component Query syntax in the same way that element types (for example, div, p, span, and so on) do for CSS selectors. We will use these heavily in the following examples. 
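As an aside, our own classes can register an xtype of their own through the alias config. The following minimal sketch (the class name is invented for illustration) shows the pattern that the NavigationTree class later in this article relies on:

// Registering a custom xtype via the 'widget.' alias prefix
Ext.define('MyApp.view.UserPanel', {
    extend: 'Ext.panel.Panel',
    alias : 'widget.userpanel', // makes the xtype 'userpanel' available
    title : 'Users'
});

// The custom xtype can now be used in declarative configs
Ext.create('Ext.container.Container', {
    renderTo: Ext.getBody(),
    items   : [
        { xtype: 'userpanel' }
    ]
});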
Sample component structure We will use the following sample component structure—a panel with a child tab panel, form, and buttons—to perform our example queries on: var panel = Ext.create('Ext.panel.Panel', { height : 500, width : 500, renderTo: Ext.getBody(), layout: {    type : 'vbox',    align: 'stretch' }, items : [    {      xtype : 'tabpanel',      itemId: 'mainTabPanel',      flex : 1,      items : [        {          xtype : 'panel',          title : 'Users',          itemId: 'usersPanel',          layout: {            type : 'vbox',            align: 'stretch'            },            tbar : [              {                xtype : 'button',                text : 'Edit',                itemId: 'editButton'                }              ],              items : [                {                  xtype : 'form',                  border : 0,                  items : [                  {                      xtype : 'textfield',                      fieldLabel: 'Name',                      allowBlank: false                    },                    {                      xtype : 'textfield',                      fieldLabel: 'Email',                      allowBlank: false                    }                  ],                  buttons: [                    {                      xtype : 'button',                      text : 'Save',                      action: 'saveUser'                    }                  ]                },                {                  xtype : 'grid',                  flex : 1,                  border : 0,                  columns: [                    {                     header : 'Name',                      dataIndex: 'Name',                      flex : 1                    },                    {                      header : 'Email',                      dataIndex: 'Email'                    }                   ],                  store : Ext.create('Ext.data.Store', {                    fields: [                      'Name',                      'Email'                    ],                    data : [                      {                        Name : 'Joe Bloggs',                        Email: 'joe@example.com'                      },                      {                        Name : 'Jane Doe',                        Email: 'jane@example.com'                      }                    ]                  })                }              ]            }          ]        },        {          xtype : 'component',          itemId : 'footerComponent',          html : 'Footer Information',          extraOptions: {            option1: 'test',            option2: 'test'          },          height : 40        }      ]    }); Queries with Ext.ComponentQuery The Ext.ComponentQuery class is used to perform Component Queries, with the query method primarily used. This method accepts two parameters: a query string and an optional Ext.container.Container instance to use as the root of the selection (that is, only components below this one in the hierarchy will be returned). The method will return an array of components or an empty array if none are found. We will work through a number of scenarios and use Component Queries to find a specific set of components. Finding components based on xtype As we have seen, we use xtypes like element types in CSS selectors. 
We can select all the Ext.panel.Panel instances using its xtype, panel:

var panels = Ext.ComponentQuery.query('panel');

We can also add the concept of hierarchy by including a second xtype separated by a space. The following code will select all Ext.Button instances that are descendants (at any level) of an Ext.panel.Panel class:

var buttons = Ext.ComponentQuery.query('panel button');

We could also use the > character to limit the selection to buttons that are direct descendants of a panel:

var directDescendantButtons = Ext.ComponentQuery.query('panel > button');

Finding components based on attributes

It is simple to select a component based on the value of a property. We use the XPath syntax to specify the attribute and the value. The following code will select buttons with an action attribute of saveUser:

var saveButtons = Ext.ComponentQuery.query('button[action="saveUser"]');

Finding components based on itemIds

ItemIds are commonly used to retrieve components, and they are specially optimized for performance within the ComponentQuery class. They only need to be unique within their parent container, not globally unique like the id config. To select a component based on itemId, we prefix the itemId with a # symbol:

var usersPanel = Ext.ComponentQuery.query('#usersPanel');

Finding components based on member functions

It is also possible to identify matching components based on the result of a function of that component. For example, we can select all text fields whose values are valid (that is, where a call to the isValid method returns true):

var validFields = Ext.ComponentQuery.query('form > textfield{isValid()}');

Scoped Component Queries

All of our previous examples search the entire component tree to find matches, but often we may want to keep our searches local to a specific container and its descendants. This can help reduce the complexity of the query and improve performance, as fewer components have to be processed. Ext.Containers have three handy methods to do this: up, down, and query. We will take each of these in turn and explain their features.

up

This method accepts a selector and will traverse up the hierarchy to find a single matching parent component. This can be useful to find the grid panel that a button belongs to, so that an action can be taken on it:

var grid = button.up('gridpanel');

down

This returns the first descendant component that matches the given selector:

var firstButton = grid.down('button');

query

The query method performs much like Ext.ComponentQuery.query, but is automatically scoped to the current container. This means that it will search all descendant components of the current container and return all matching ones as an array:

var allButtons = grid.query('button');

Hierarchical data with trees

Now that we know and understand components, their lifecycle, and how to retrieve references to them, we will move on to more specific UI widgets. The tree panel component allows us to display hierarchical data in a way that reflects the data's structure and relationships. In our application, we are going to use a tree panel to represent our navigation structure, allowing users to see how the different areas of the app are linked and structured.

Binding to a data source

Like all other data-bound components, tree panels must be bound to a data store—in this particular case it must be an Ext.data.TreeStore instance or subclass, as it takes advantage of the extra features added to this specialist store class.
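The BizDash.store.Navigation store used below is not shown in this extract. As a hedged sketch only, such a TreeStore definition might look like the following; the Label field matches the column configuration that follows, but the node data is invented for illustration:

// A possible shape for the Navigation TreeStore (illustrative only)
Ext.define('BizDash.store.Navigation', {
    extend : 'Ext.data.TreeStore',
    storeId: 'Navigation',
    fields : ['Label'],
    root   : {
        expanded: true,
        children: [
            { Label: 'Dashboard', leaf: true },
            {
                Label   : 'Administration',
                expanded: true,
                children: [
                    { Label: 'Users',    leaf: true },
                    { Label: 'Settings', leaf: true }
                ]
            }
        ]
    }
});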
We will make use of the BizDash.store.Navigation TreeStore to bind to our tree panel. Defining a tree panel The tree panel is defined in the Ext.tree.Panel class (which has an xtype of treepanel), which we will extend to create a custom class called BizDash.view.navigation.NavigationTree: Ext.define('BizDash.view.navigation.NavigationTree', { extend: 'Ext.tree.Panel', alias: 'widget.navigation-NavigationTree', store : 'Navigation', columns: [    {      xtype : 'treecolumn',      text : 'Navigation',      dataIndex: 'Label',      flex : 1    } ], rootVisible: false, useArrows : true }); We configure the tree to be bound to our TreeStore by using its storeId, in this case, Navigation. A tree panel is a subclass of the Ext.panel.Table class (similar to the Ext.grid.Panel class), which means it must have a columns configuration present. This tells the component what values to display as part of the tree. In a simple, traditional tree, we might only have one column showing the item and its children; however, we can define multiple columns and display additional fields in each row. This would be useful if we were displaying, for example, files and folders and wanted to have additional columns to display the file type and file size of each item. In our example, we are only going to have one column, displaying the Label field. We do this by using the treecolumn xtype, which is responsible for rendering the tree's navigation elements. Without defining treecolumn, the component won't display correctly. The treecolumn xtype's configuration allows us to define which of the attached data model's fields to use (dataIndex), the column's header text (text), and the fact that the column should fill the horizontal space. Additionally, we set the rootVisible to false, so the data's root is hidden, as it has no real meaning other than holding the rest of the data together. Finally, we set useArrows to true, so the items with children use an arrow instead of the +/- icon. Summary In this article, we have learnt how Ext JS' components fit together and the lifecycle that they follow when created and destroyed. We covered the component lifecycle and Component Queries. Resources for Article: Further resources on this subject: So, what is Ext JS? [article] Function passing [article] Static Data Management [article]

Third Party Libraries

Packt
21 Apr 2015
21 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, the author believes that our TypeScript development environment would not amount to much if we were not able to reuse the myriad of existing JavaScript libraries, frameworks and general goodness. However, in order to use a particular third party library with TypeScript, we will first need a matching definition file. Soon after TypeScript was released, Boris Yankov set up a github repository to house TypeScript definition files for third party JavaScript libraries. This repository, named DefinitelyTyped (https://github.com/borisyankov/DefinitelyTyped) quickly became very popular, and is currently the place to go for high-quality definition files. DefinitelyTyped currently has over 700 definition files, built up over time from hundreds of contributors from all over the world. If we were to measure the success of TypeScript within the JavaScript community, then the DefinitelyTyped repository would be a good indication of how well TypeScript has been adopted. Before you go ahead and try to write your own definition files, check the DefinitelyTyped repository to see if there is one already available. In this article, we will have a closer look at using these definition files, and cover the following topics: Choosing a JavaScript Framework Using TypeScript with Backbone Using TypeScript with Angular (For more resources related to this topic, see here.) Using third party libraries In this section of the article, we will begin to explore some of the more popular third party JavaScript libraries, their declaration files, and how to write compatible TypeScript for each of these frameworks. We will compare Backbone, and Angular which are all frameworks for building rich client-side JavaScript applications. During our discussion, we will see that some frameworks are highly compliant with the TypeScript language and its features, some are partially compliant, and some have very low compliance. Choosing a JavaScript framework Choosing a JavaScript framework or library to develop Single Page Applications is a difficult and sometimes daunting task. It seems that there is a new framework appearing every other month, promising more and more functionality for less and less code. To help developers compare these frameworks, and make an informed choice, Addy Osmani wrote an excellent article, named Journey Through the JavaScript MVC Jungle. (http://www.smashingmagazine.com/2012/07/27/journey-through-the-javascript-mvc-jungle/). In essence, his advice is simple – it's a personal choice – so try some frameworks out, and see what best fits your needs, your programming mindset, and your existing skill set. The TodoMVC project (http://todomvc.com), which Addy started, does an excellent job of implementing the same application in a number of MV* JavaScript frameworks. This really is a reference site for digging into a fully working application, and comparing for yourself the coding techniques and styles of different frameworks. Again, depending on the JavaScript library that you are using within TypeScript, you may need to write your TypeScript code in a specific way. Bear this in mind when choosing a framework - if it is difficult to use with TypeScript, then you may be better off looking at another framework with better integration. If it is easy and natural to work with the framework in TypeScript, then your productivity and overall development experience will be much better. 
We will look at some of the popular JavaScript libraries, along with their declaration files, and see how to write compatible TypeScript. The key thing to remember is that TypeScript generates JavaScript - so if you are battling to use a third party library, then crack open the generated JavaScript and see what the JavaScript code looks like that TypeScript is emitting. If the generated JavaScript matches the JavaScript code samples in the library's documentation, then you are on the right track. If not, then you may need to modify your TypeScript until the compiled JavaScript starts matching up with the samples. When trying to write TypeScript code for a third party JavaScript framework – particularly if you are working off the JavaScript documentation – your initial foray may just be one of trial and error. Along the way, you may find that you need to write your TypeScript in a specific way in order to match this particular third party library. The rest of this article shows how three different libraries require different ways of writing TypeScript. Backbone Backbone is a popular JavaScript library that gives structure to web applications by providing models, collections and views, amongst other things. Backbone has been around since 2010, and has gained a very large following, with a wealth of commercial websites using the framework. According to Infoworld.com, Backbone has over 1,600 Backbone related projects on GitHub that rate over 3 stars - meaning that it has a vast ecosystem of extensions and related libraries. Let's take a quick look at Backbone written in TypeScript. To follow along with the code in your own project, you will need to install the following NuGet packages: backbone.js ( currently at v1.1.2), and backbone.TypeScript.DefinitelyTyped (currently at version 1.2.3). Using inheritance with Backbone From the Backbone documentation, we find an example of creating a Backbone.Model in JavaScript as follows: var Note = Backbone.Model.extend(    {        initialize: function() {            alert("Note Model JavaScript initialize");        },        author: function () { },        coordinates: function () { },        allowedToEdit: function(account) {            return true;        }    } ); This code shows a typical usage of Backbone in JavaScript. We start by creating a variable named Note that extends (or derives from) Backbone.Model. This can be seen with the Backbone.Model.extend syntax. The Backbone extend function uses JavaScript object notation to define an object within the outer curly braces { … }. In the preceding code, this object has four functions: initialize, author, coordinates and allowedToEdit. According to the Backbone documentation, the initialize function will be called once a new instance of this class is created. The initialize function simply creates an alert to indicate that the function was called. The author and coordinates functions are blank at this stage, with only the allowedToEdit function actually doing something: return true. If we were to simply copy and paste the above JavaScript into a TypeScript file, we would generate the following compile error: Build: 'Backbone.Model.extend' is inaccessible. When working with a third party library, and a definition file from DefinitelyTyped, our first port of call should be to see if the definition file may be in error. After all, the JavaScript documentation says that we should be able to use the extend method as shown, so why is this definition file causing an error? 
If we open up the backbone.d.ts file, and then search to find the definition of the class Model, we will find the cause of the compilation error: class Model extends ModelBase {      /**    * Do not use, prefer TypeScript's extend functionality.    **/    private static extend(        properties: any, classProperties?: any): any; This declaration file snippet shows some of the definition of the Backbone Model class. Here, we can see that the extend function is defined as private static, and as such, it will not be available outside the Model class itself. This, however, seems contradictory to the JavaScript sample that we saw in the documentation. In the preceding comment on the extend function definition, we find the key to using Backbone in TypeScript: prefer TypeScript's extend functionality. This comment indicates that the declaration file for Backbone is built around TypeScript's extends keyword – thereby allowing us to use natural TypeScript inheritance syntax to create Backbone objects. The TypeScript equivalent to this code, therefore, must use the extends TypeScript keyword to derive a class from the base class Backbone.Model, as follows: class Note extends Backbone.Model {    initialize() {      alert("Note model Typescript initialize");    }    author() { }    coordinates() { }    allowedToEdit(account) {        return true;    } } We are now creating a class definition named Note that extends the Backbone.Model base class. This class then has the functions initialize, author, coordinates and allowedToEdit, similar to the previous JavaScript version. Our Backbone sample will now compile and run correctly. With either of these versions, we can create an instance of the Note object by including the following script within an HTML page: <script type="text/javascript">    $(document).ready( function () {        var note = new Note();    }); </script> This JavaScript sample simply waits for the jQuery document.ready event to be fired, and then creates an instance of the Note class. As documented earlier, the initialize function will be called when an instance of the class is constructed, so we would see an alert box appear when we run this in a browser. All of Backbone's core objects are designed with inheritance in mind. This means that creating new Backbone collections, views and routers will use the same extends syntax in TypeScript. Backbone, therefore, is a very good fit for TypeScript, because we can use natural TypeScript syntax for inheritance to create new Backbone objects. Using interfaces As Backbone allows us to use TypeScript inheritance to create objects, we can just as easily use TypeScript interfaces with any of our Backbone objects as well. Extracting an interface for the Note class above would be as follows: interface INoteInterface {    initialize();    author();    coordinates();    allowedToEdit(account: string); } We can now update our Note class definition to implement this interface as follows: class Note extends Backbone.Model implements INoteInterface {    // existing code } Our class definition now implements the INoteInterface TypeScript interface. This simple change protects our code from being modified inadvertently, and also opens up the ability to work with core Backbone objects in standard object-oriented design patterns. We could, if we needed to, apply the Factory Pattern to return a particular type of Backbone Model – or any other Backbone object for that matter. 
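As a brief illustration of that last point, a factory could hide the concrete Note class behind the interface. This is only a sketch—the NoteFactory name is not part of Backbone or the book's sample code, and a real factory would typically choose between several model types—but it shows how returning the interface keeps calling code decoupled from the concrete class:

class NoteFactory {
    static create(): INoteInterface {
        // Centralizes construction so that calling code depends only
        // on the interface, not on the concrete Note class
        return new Note();
    }
}

var note: INoteInterface = NoteFactory.create();
if (note.allowedToEdit('joe@example.com')) {
    // work with the strongly typed model here
}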
Using generic syntax The declaration file for Backbone has also added generic syntax to some class definitions. This brings with it further strong typing benefits when writing TypeScript code for Backbone. Backbone collections (surprise, surprise) house a collection of Backbone models, allowing us to define collections in TypeScript as follows: class NoteCollection extends Backbone.Collection<Note> {    model = Note;    //model: Note; // generates compile error    //model: { new (): Note }; // ok } Here, we have a NoteCollection that derives from, or extends a Backbone.Collection, but also uses generic syntax to constrain the collection to handle only objects of type Note. This means that any of the standard collection functions such as at() or pluck() will be strongly typed to return Note models, further enhancing our type safety and Intellisense. Note the syntax used to assign a type to the internal model property of the collection class on the second line. We cannot use the standard TypeScript syntax model: Note, as this causes a compile time error. We need to assign the model property to a the class definition, as seen with the model=Note syntax, or we can use the { new(): Note } syntax as seen on the last line. Using ECMAScript 5 Backbone also allows us to use ECMAScript 5 capabilities to define getters and setters for Backbone.Model classes, as follows: interface ISimpleModel {    Name: string;    Id: number; } class SimpleModel extends Backbone.Model implements ISimpleModel {    get Name() {        return this.get('Name');    }    set Name(value: string) {        this.set('Name', value);    }    get Id() {        return this.get('Id');    }    set Id(value: number) {        this.set('Id', value);    } } In this snippet, we have defined an interface with two properties, named ISimpleModel. We then define a SimpleModel class that derives from Backbone.Model, and also implements the ISimpleModel interface. We then have ES 5 getters and setters for our Name and Id properties. Backbone uses class attributes to store model values, so our getters and setters simply call the underlying get and set methods of Backbone.Model. Backbone TypeScript compatibility Backbone allows us to use all of TypeScript's language features within our code. We can use classes, interfaces, inheritance, generics and even ECMAScript 5 properties. All of our classes also derive from base Backbone objects. This makes Backbone a highly compatible library for building web applications with TypeScript. Angular AngularJs (or just Angular) is also a very popular JavaScript framework, and is maintained by Google. Angular takes a completely different approach to building JavaScript SPA's, introducing an HTML syntax that the running Angular application understands. This provides the application with two-way data binding capabilities, which automatically synchronizes models, views and the HTML page. Angular also provides a mechanism for Dependency Injection (DI), and uses services to provide data to your views and models. 
The example provided in the tutorial shows the following JavaScript: var phonecatApp = angular.module('phonecatApp', []); phonecatApp.controller('PhoneListCtrl', function ($scope) { $scope.phones = [    {'name': 'Nexus S',      'snippet': 'Fast just got faster with Nexus S.'},    {'name': 'Motorola XOOM™ with Wi-Fi',      'snippet': 'The Next, Next Generation tablet.'},    {'name': 'MOTOROLA XOOM™',      'snippet': 'The Next, Next Generation tablet.'} ]; }); This code snippet is typical of Angular JavaScript syntax. We start by creating a variable named phonecatApp, and register this as an Angular module by calling the module function on the angular global instance. The first argument to the module function is a global name for the Angular module, and the empty array is a place-holder for other modules that will be injected via Angular's Dependency Injection routines. We then call the controller function on the newly created phonecatApp variable with two arguments. The first argument is the global name of the controller, and the second argument is a function that accepts a specially named Angular variable named $scope. Within this function, the code sets the phones object of the $scope variable to be an array of JSON objects, each with a name and snippet property. If we continue reading through the tutorial, we find a unit test that shows how the PhoneListCtrl controller is used: describe('PhoneListCtrl', function(){    it('should create "phones" model with 3 phones', function() {      var scope = {},          ctrl = new PhoneListCtrl(scope);        expect(scope.phones.length).toBe(3); });   }); The first two lines of this code snippet use a global function called describe, and within this function another function called it. These two functions are part of a unit testing framework named Jasmine. We declare a variable named scope to be an empty JavaScript object, and then a variable named ctrl that uses the new keyword to create an instance of our PhoneListCtrl class. The new PhoneListCtrl(scope) syntax shows that Angular is using the definition of the controller just like we would use a normal class in TypeScript. Building the same object in TypeScript would allow us to use TypeScript classes, as follows: var phonecatApp = angular.module('phonecatApp', []);   class PhoneListCtrl {    constructor($scope) {        $scope.phones = [            { 'name': 'Nexus S',              'snippet': 'Fast just got faster' },            { 'name': 'Motorola',              'snippet': 'Next generation tablet' },            { 'name': 'Motorola Xoom',              'snippet': 'Next, next generation tablet' }        ];    } }; Our first line is the same as in our previous JavaScript sample. We then, however, use the TypeScript class syntax to create a class named PhoneListCtrl. By creating a TypeScript class, we can now use this class as shown in our Jasmine test code: ctrl = new PhoneListCtrl(scope). 
The constructor function of our PhoneListCtrl class now acts as the anonymous function seen in the original JavaScript sample: phonecatApp.controller('PhoneListCtrl', function ($scope) {    // this function is replaced by the constructor } Angular classes and $scope Let's expand our PhoneListCtrl class a little further, and have a look at what it would look like when completed: class PhoneListCtrl {    myScope: IScope;    constructor($scope, $http: ng.IHttpService, Phone) {        this.myScope = $scope;        this.myScope.phones = Phone.query();        $scope.orderProp = 'age';          _.bindAll(this, 'GetPhonesSuccess');    }    GetPhonesSuccess(data: any) {       this.myScope.phones = data;    } }; The first thing to note in this class, is that we are defining a variable named myScope, and storing the $scope argument that is passed in via the constructor, into this internal variable. This is again because of JavaScript's lexical scoping rules. Note the call to _.bindAll at the end of the constructor. This Underscore utility function will ensure that whenever the GetPhonesSuccess function is called, it will use the variable this in the context of the class instance, and not in the context of the calling code. The GetPhonesSuccess function uses the this.myScope variable within its implementation. This is why we needed to store the initial $scope argument in an internal variable. Another thing we notice from this code, is that the myScope variable is typed to an interface named IScope, which will need to be defined as follows: interface IScope {    phones: IPhone[]; } interface IPhone {    age: number;    id: string;    imageUrl: string;    name: string;    snippet: string; }; This IScope interface just contains an array of objects of type IPhone (pardon the unfortunate name of this interface – it can hold Android phones as well). What this means is that we don't have a standard interface or TypeScript type to use when dealing with $scope objects. By its nature, the $scope argument will change its type depending on when and where the Angular runtime calls it, hence our need to define an IScope interface, and strongly type the myScope variable to this interface. Another interesting thing to note on the constructor function of the PhoneListCtrl class is the type of the $http argument. It is set to be of type ng.IHttpService. This IHttpService interface is found in the declaration file for Angular. In order to use TypeScript with Angular variables such as $scope or $http, we need to find the matching interface within our declaration file, before we can use any of the Angular functions available on these variables. The last point to note in this constructor code is the final argument, named Phone. It does not have a TypeScript type assigned to it, and so automatically becomes of type any. Let's take a quick look at the implementation of this Phone service, which is as follows: var phonecatServices =     angular.module('phonecatServices', ['ngResource']);   phonecatServices.factory('Phone',    [        '$resource', ($resource) => {            return $resource('phones/:phoneId.json', {}, {                query: {                    method: 'GET',                    params: {                        phoneId: 'phones'                    },                    isArray: true                }            });        }    ] ); The first line of this code snippet again creates a global variable named phonecatServices, using the angular.module global function. 
We then call the factory function available on the phonecatServices variable, in order to define our Phone resource. This factory function uses a string named 'Phone' to define the Phone resource, and then uses Angular's dependency injection syntax to inject a $resource object. Looking through this code, we can see that we cannot easily create standard TypeScript classes for Angular to use here. Nor can we use standard TypeScript interfaces or inheritance on this Angular service. Angular TypeScript compatibility When writing Angular code with TypeScript, we are able to use classes in certain instances, but must rely on the underlying Angular functions such as module and factory to define our objects in other cases. Also, when using standard Angular services, such as $http or $resource, we will need to specify the matching declaration file interface in order to use these services. We can therefore describe the Angular library as having medium compatibility with TypeScript. Inheritance – Angular versus Backbone Inheritance is a very powerful feature of object-oriented programming, and is also a fundamental concept when using JavaScript frameworks. Using a Backbone controller or an Angular controller within each framework relies on certain characteristics, or functions being available. Each framework implements inheritance in a different way. As JavaScript does not have the concept of inheritance, each framework needs to find a way to implement it, so that the framework can allow us to extend base classes and their functionality. In Backbone, this inheritance implementation is via the extend function of each Backbone object. The TypeScript extends keyword follows a similar implementation to Backbone, allowing the framework and language to dovetail each other. Angular, on the other hand, uses its own implementation of inheritance, and defines functions on the angular global namespace to create classes (that is angular.module). We can also sometimes use the instance of an application (that is <appName>.controller) to create modules or controllers. We have found, though, that Angular uses controllers in a very similar way to TypeScript classes, and we can therefore simply create standard TypeScript classes that will work within an Angular application. So far, we have only skimmed the surface of both the Angular TypeScript syntax and the Backbone TypeScript syntax. The point of this exercise was to try and understand how TypeScript can be used within each of these two third party frameworks. Be sure to visit http://todomvc.com, and have a look at the full source-code for the Todo application written in TypeScript for both Angular and Backbone. They can be found on the Compile-to-JS tab in the example section. These running code samples, combined with the documentation on each of these sites, will prove to be an invaluable resource when trying to write TypeScript syntax with an external third party library such as Angular or Backbone. Angular 2.0 The Microsoft TypeScript team and the Google Angular team have just completed a months long partnership, and have announced that the upcoming release of Angular, named Angular 2.0, will be built using TypeScript. Originally, Angular 2.0 was going to use a new language named AtScript for Angular development. During the collaboration work between the Microsoft and Google teams, however, the features of AtScript that were needed for Angular 2.0 development have now been implemented within TypeScript. 
This means that the Angular 2.0 library will be classed as highly compatible with TypeScript, once the Angular 2.0 library and the 1.5 release of the TypeScript compiler are available.

Summary
In this article, we looked at two popular third party libraries and discussed how to integrate them with TypeScript. We explored Backbone, which can be categorized as a highly compliant third party library, and Angular, which is a partially compliant library.

Resources for Article: Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps [article] Introduction to TypeScript [article] Getting Ready with CoffeeScript [article]

Inserting GIS Objects

Packt
21 Apr 2015
15 min read
In this article by Angel Marquez, author of the book PostGIS Essentials, we will see how to insert GIS objects. Now is the time to fill our tables with data. It's very important to understand some of the theoretical concepts about spatial data before we can properly work with it. We will cover these concepts through the real estate company example used previously. Basically, we will insert two kinds of data: firstly, all the data that belongs to our own scope of interest. By this, I mean the spatial data that was generated by us (the positions of properties in the case of the real estate company example) for our specific problem, stored in a way that can be easily exploited. Secondly, we will import data of a more general use, provided by a third party. Another important feature that we will cover in this article is the spatial data file, which we can use to share, import, and export spatial data in a standardized and popular format called shp or Shape files. In this article, we will cover the following topics:

Developing insertion queries that include GIS objects
Obtaining useful spatial data from a public third party
Filling our spatial tables with the help of spatial data files using a command line tool
Filling our spatial tables with the help of spatial data files using a GUI tool provided by PostGIS

(For more resources related to this topic, see here.)

Developing insertion queries with GIS objects
Developing an insertion query is a very common task for someone who works with databases. Basically, we follow the SQL syntax of an insertion, by first listing all the fields involved and then listing all the data that will be saved in each one:

INSERT INTO tbl_properties(id, town, postal_code, street, "number") VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32);

If the field is of a numerical value, we simply write the number; if it's a string-like data type, we have to enclose the text in two single quotes. Now, if we wish to include a spatial value in the insertion query, we must first find a way to represent this value. This is where the Well-Known Text (WKT) notation enters. WKT is a notation that represents a geometry object that can be easily read by humans; the following is an example of this:

POINT(-0.116190 51.556173)

Here, we defined a geographic point using a list of two real values: the longitude (x-axis) followed by the latitude (y-axis). Additionally, if we need to specify the elevation of some point, we will have to specify a third value for the z-axis; this value will be defined in meters by default, as shown in the following code snippet:

POINT(-0.116190 51.556173 100)

Some of the other basic geometry types defined by the WKT notation are:

MULTILINESTRING: This is used to define one or more lines
POLYGON: This is used to define only one polygon
MULTIPOLYGON: This is used to define several polygons in the same row

So, as an example, an SQL insertion query to add the first row to the table, tbl_properties, of our real estate database using the WKT notation should be as follows:

INSERT INTO tbl_properties (id, town, postal_code, street, "number", the_geom) VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32, ST_GeomFromText('POINT(-0.116190 51.556173)'));

The special function provided by PostGIS, ST_GeomFromText, parses the text given as a parameter and converts it into a GIS object that can be inserted in the_geom field. Now, we could think this is everything and, therefore, start to develop all the insertion queries that we need.
It could be true if we just want to work with the data generated by us and there isn't a need to share this information with other entities. However, if we want to have a better understanding of GIS (believe me, it could help you a lot and prevent a lot of unnecessary headache when working with data from several sources), it would be better to specify another piece of information as part of our GIS object representation to establish its Spatial Reference System (SRS). In the next section, we will explain this concept. What is a spatial reference system? We could think about Earth as a perfect sphere that will float forever in space and never change its shape, but it is not. Earth is alive and in a state of constant change, and it's certainly not a perfect circle; it is more like an ellipse (though not a perfect ellipse) with a lot of small variations, which have taken place over the years. If we want to represent a specific position inside this irregular shape called Earth, we must first make some abstractions: First we have to choose a method to represent Earth's surface into a regular form (such as a sphere, ellipsoid, and so on). After this, we must take this abstract three-dimensional form and represent it into a two-dimensional plane. This process is commonly called map projection, also known as projection. There are a lot of ways to make a projection; some of them are more precise than others. This depends on the usefulness that we want to give to the data, and the kind of projection that we choose. The SRS defines which projection will be used and the transformation that will be used to translate a position from a given projection to another. This leads us to another important point. Maybe it has occurred to you that a geographic position was unique, but it is not. By this, I mean that there could be two different positions with the same latitude and longitude values but be in different physical places on Earth. For a position to be unique, it is necessary to specify the SRS that was used to obtain this position. To explain this concept, let's consider Earth as a perfect sphere; how can you represent it as a two-dimensional square? Well, to do this, you will have to make a projection, as shown in the following figure:   A projection implies that you will have to make a spherical 3D image fit into a 2D figure, as shown in the preceding image; there are several ways to achieve this. We applied an azimuthal projection, which is a result of projecting a spherical surface onto a plane. However, as I told you earlier, there are several other ways to do this, as we can see in the following image:   These are examples of cylindrical and conical projections. Each one produces a different kind of 2D image of the terrain. Each has its own peculiarities and is used for several distinct purposes. If we put all the resultant images of these projections one above the other, we must get an image similar to the following figure:   As you can see, the terrain positions, which are not necessary, are the same between two projections, so you must clearly specify which projection you are using in your project in order to avoid possible mistakes and errors when you establish a position. There are a lot of SRS defined around the world. They could be grouped by their reach, that is, they could be local (state or province), national (an entire country), regional (several countries from the same area), or global (worldwide). 
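PostGIS keeps its catalogue of known reference systems in a table called spatial_ref_sys, so a quick way to get a feel for how many systems exist—and to inspect their definitions—is to query it directly. The LIMIT is only there to keep the output short:

SELECT srid, auth_name, srtext
FROM spatial_ref_sys
LIMIT 10;

Each row describes one reference system: srtext holds its full definition in well-known text form, and srid is the numeric identifier that we will come back to in a moment.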
The International Association of Oil and Gas Producers has defined a collection of Coordinate Reference System (CRS) known as the European Petroleum Survey Group (EPSG) dataset and has assigned a unique ID to each of these SRSs; this ID is called SRID. For uniquely defining a position, you must establish the SRS that it belongs to, using its particular ID; this is the SRID. There are literally hundreds of SRSs defined; to avoid any possible error, we must standardize which SRS we will use. A very common SRS, widely used around the globe is the WGS84 SRS with the SRID 4326. It is very important that you store the spatial data on your database, using EPSG: 4326 as much as possible, or almost use one equal projection on your database; this way you will avoid problems when you analyze your data. The WKT notation doesn't support the SRID specification as part of the text, since this was developed at the EWKT notation that allows us to include this information as part of our input string, as we will see in the following example: 'SRID=4326;POINT(51.556173 -0.116190)' When you create a spatial field, you must specify the SRID that will be used. Including SRS information in our spatial tables The matter that was discussed in the previous section is very important to develop our spatial tables. Taking into account the SRS that they will use from the beginning, we will follow a procedure to recreate our tables by adding this feature. This procedure must be applied to all the tables that we have created on both databases. Perform the following steps: Open a command session on pgSQL in your command line tool or by using the graphical GUI, PGAdmin III. We will open the Real_Estate database. Drop the spatial fields of your tables using the following instruction: SELECT DropGeometryColumn('tbl_properties', 'the_geom') Add the spatial field using this command: SELECT AddGeometryColumn('tbl_properties', 'the_geom', 4326, 'POINT', 2); Repeat these steps for the rest of the spatial tables. Now that we can specify the SRS that was used to obtain this position, we will develop an insertion query using the Extended WKT (EWKT) notation: INSERT INTO tbl_properties ( id, town, postal_code, street, "number", the_geom)VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32, ST_GeomFromEWKT('SRID=4326;POINT(51.556173 -0.116190)')); The ST_GeomFromEWKT function works exactly as ST_GeomFromText, but it implements the extended functionality of the WKT notation. Now that you know how to represent a GIS object as text, it is up to you to choose the most convenient way to generate a SQL script that inserts existing data into the spatial data tables. As an example, you could develop a macro in Excel, a desktop application in C#, a PHP script on your server, and so on. Getting data from external sources In this section, we will learn how to obtain data from third-party sources. Most often, this data interchange is achieved through a spatial data file. There are many data formats for this file (such as KML, geoJSON, and so on). We will choose to work with the *.shp files, because they are widely used and supported in practically all the GIS tools available in the market. There are dozens of sites where you could get useful spatial data from practically any city, state, or country in the world. Much of this data is public and freely available. In this case, we will use data from a fabulous website called http://www.openstreetmap.org/. 
The following is a series of steps that you could follow if you want to obtain spatial data from this particular provider. I'm pretty sure you can easily adapt this procedure to obtain data from another provider on the Internet. Using the example of the real estate company, we will get data from the English county of Buckinghamshire. The idea is that you, as a member of the IT department, import data from the cities where the company has activities: Open your favorite Internet browser and go to http://www.openstreetmap.org/, as shown in the following screenshot: Click on the Export tab. Click on the Geofabrik Downloads link; you will be taken to http://download.geofabrik.de/, as shown in the following screenshot: There, you will find a list of sub regions of the world; select Europe: Next is a list of all countries in Europe; notice a new column called .shp.zip. This is the file format that we need to download. Select Great Britain: In the next list, select England, you can see your selection on the map located at the right-hand side of the web page, as shown in the following screenshot: Now, you will see a list of all the counties. Select the .shp.zip column from the county of Buckinghamshire: A download will start. When it finishes, you will get a file called buckinghamshire-latest.shp.zip. Unzip it. At this point, we have just obtained the data (several shp files). The next procedure will show us how to convert this file into SQL insertion scripts. Extracting spatial data from an shp file In the unzipped folder are shp files; each of them stores a particular feature of the geography of this county. We will focus on the shp named buildings.shp. Now, we will extract this data stored in the shp file. We will convert this data to a sql script so that we can insert its data into the tbl_buildings table. For this, we will use a Postgis tool called shp2pgSQL. Perform the following steps for extracting spatial data from an shp file: Open a command window with the cmd command. Go to the unzipped folder. Type the following command: shp2pgsql -g the_geom buildings.shp tbl_buildings > buildings.sql Open the script with Notepad. Delete the following lines from the script: CREATE TABLE "tbl_buildings"(gid serial, "osm_id" varchar(20), "name" varchar(50), "type" varchar(20), "timestamp" varchar (30) ); ALTER TABLE "tbl_buildings" ADD PRIMARY KEY (gid); SELECT AddGeometryColumn('','tbl_buildings','geom','0','MULTIPOLYGON',2); Save the script. Open and run it with the pgAdmin query editor. Open the table; you must have at least 13363 new registers. Keep in mind that this number can change when new updates come. Importing shp files with a graphical tool There is another way to import an shp file into our table; we could use a graphical tool called postgisgui for this. To use this tool, perform the following steps: In the file explorer, open the folder: C:Program FilesPostgreSQL9.3binpostgisgui. Execute the shp2pgsql-gui application. Once this is done, we will see the following window: Configure the connection with the server. Click on the View Connections Details... button. Set the data to connect to the server, as shown in the following screenshot: Click the Add File button. Select the points.shp file. Once selected, type the following parameters in the Import List section:     Mode: In this field, type Append     SRID: In this field, type 4326     Geo column: In this field, type the_geom     Table: In this field, type tbl_landmarks   Click on the Import button. 
The import process will fail and show you the following message: This is because the structure is not the same as shown in the shp and in our table. There is no way to indicate to the tool which field we don't want to import. So, the only way for us to solve this problem is let the tool create a new table and after this, change the structure. This can be done by following these steps: Go to pgAdmin and drop the tbl_landmarks table. Change the mode to Create in the Import list. Click on the Import button. Now, the import process is successful, but the table structure has changed. Go to the PGAdmin again, refresh the data, and edit the table structure to be the same as it was before:     Change the name of the geom field to the_geom.     Change the name of the osm_id field to id.     Drop the Timestamp field.     Drop the primary key constraint and add a new one attached to the id field. For that, right-click on Constraints in the left panel.     Navigate to New Object | New Primary Key and type pk_landmarks_id. In the Columns tab, add the id field.   Now, we have two spatial tables, one with data that contains positions represented as the PostGIS type, POINT (tbl_landmarks), and the other with polygons, represented by PostGIS with the type, MULTIPOLYGON(tbl_buildings). Now, I would like you to import the data contained in the roads.shp file, using one of the two previously viewed methods. The following table has data that represents the path of different highways, streets, roads, and so on, which belong to this area in the form of lines, represented by PostGIS with the MULTILINESTRING type. When it's imported, change its name to tbl_roads and adjust the columns to the structure used for the other tables in this article. Here's an example of how the imported data must look like, as you can see the spatial data is show in its binary form in the following table: Summary In this article, you learned some basic concepts of GIS (such as WKT, EWKT, and SRS), which are fundamental for working with the GIS data. Now, you are able to craft your own spatial insertion queries or import this data into your own data tables. Resources for Article: Further resources on this subject: Improving proximity filtering with KNN [article] Installing PostgreSQL [article] Securing the WAL Stream [article]

Structure of Applications

Packt
21 Apr 2015
21 min read
In this article by Colin Ramsay, author of the book Ext JS Application Development Blueprints, we will learn that one of the great things about imposing structure is that it automatically gives predictability (a kind of filing system in which we immediately know where a particular piece of code should live). The same applies to the files that make up your application. Certainly, we could put all of our files in the root of the website, mixing CSS, JavaScript, configuration and HTML files in a long alphabetical list, but we'd be losing out on a number of opportunities to keep our application organized. In this article, we'll look at: Ideas to structure your code The layout of a typical Ext JS application Use of singletons, mixins, and inheritance Why global state is a bad thing Structuring your application is like keeping your house in order. You'll know where to find your car keys, and you'll be prepared for unexpected guests. (For more resources related to this topic, see here.) Ideas for structure One of the ways in which code is structured in large applications involves namespacing (the practice of dividing code up by naming identifiers). One namespace could contain everything relating to Ajax, whereas another could contain classes related to mathematics. Programming languages (such as C# and Java) even incorporate namespaces as a first-class language construct to help with code organization. Separating code from directories based on namespace becomes a sensible extension of this: From left: Java's Platform API, Ext JS 5, and .NET Framework A namespace identifier is made up of one or more name tokens, such as "Java" or "Ext", "Ajax" or "Math", separated by a symbol, in most cases a full stop/period. The top level name will be an overarching identifier for the whole package (such as "Ext") and will become less specific as names are added and you drill down into the code base. The Ext JS source code makes heavy use of this practice to partition UI components, utility classes, and all the other parts of the framework, so let's look at a real example. The GridPanel component is perhaps one of the most complicated in the framework; a large collection of classes contribute to features (such as columns, cell editing, selection, and grouping). These work together to create a highly powerful UI widget. Take a look at the following files that make up GridPanel: The Ext JS grid component's directory structure The grid directory reflects the Ext.grid namespace. Likewise, the subdirectories are child namespaces with the deepest namespace being Ext.grid.filters.filter. The main Panel and View classes: Ext.grid.Grid and Ext.grid.View respectively are there in the main director. Then, additional pieces of functionality, for example, the Column class and the various column subclasses are further grouped together in their own subdirectories. We can also see a plugins directory, which contains a number of grid-specific plugins. Ext JS actually already has an Ext.plugins namespace. It contains classes to support the plugin infrastructure as well as plugins that are generic enough to apply across the entire framework. In the event of uncertainty regarding the best place in the code base for a plugin, we might mistakenly have put it in Ext.plugins. Instead, Ext JS follows best practice and creates a new, more specific namespace underneath Ext.grid. Going back to the root of the Ext JS framework, we can see that there are only a few files at the top level. 
In general, these will be classes that are either responsible for orchestrating other parts of the framework (such as EventManager or StoreManager) or classes that are widely reused across the framework (such as Action or Component). Any more specific functionality should be namespaced in a suitably specific way. As a rule of thumb, you can take your inspiration from the organization of the Ext JS framework, though as a framework rather than a full-blown application. It's lacking some of the structural aspects we'll talk about shortly. Getting to know your application When generating an Ext JS application using Sencha Cmd, we end up with a code base that adheres to the concept of namespacing in class names and in the directory structure, as shown here: The structure created with Sencha Cmd's "generate app" feature We should be familiar with all of this, as it was already covered when we discussed MVVM in Ext JS. Having said that, there are some parts of this that are worth examining further to see whether they're being used to the full. /overrides This is a handy one to help us fall into a positive and predictable pattern. There are some cases where you need to override Ext JS functionality on a global level. Maybe, you want to change the implementation of a low-level class (such as Ext.data.proxy.Proxy) to provide custom batching behavior for your application. Sometimes, you might even find a bug in Ext JS itself and use an override to hotfix until the next point release. The overrides directory provides a logical place to put these changes (just mirror the directory structure and namespacing of the code you're overriding). This also provides us with a helpful rule, that is, subclasses go in /app and overrides go in /overrides. /.sencha This contains configuration information and build files used by Sencha Cmd. In general, I'd say try and avoid fiddling around in here too much until you know Sencha Cmd inside out because there's a chance you'll end up with nasty conflicts if you try and upgrade to a newer version of Sencha Cmd. bootstrap.js, bootstrap.json, and bootstrap.css The Ext JS class system has powerful dependency management through the requires feature, which gives us the means to create a build that contains only the code that's in use. The bootstrap files contain information about the minimum CSS and JavaScript needed to run your application as provided by the dependency system. /packages In a similar way to something like Ruby has RubyGems and Node.js has npm, Sencha Cmd has the concept of packages (a bundle which can be pulled into your application from a local or remote source). This allows you to reuse and publish bundles of functionality (including CSS, images, and other resources) to reduce copy and paste of code and share your work with the Sencha community. This directory is empty until you configure packages to be used in your app. /resources and SASS SASS is a technology that aids in the creation of complex CSS by promoting reuse and bringing powerful features (such as mixins and functions) to your style sheets. Ext JS uses SASS for its theme files and encourages you to use it as well. index.html We know that index.html is the root HTML page of our application. It can be customized as you see fit (although, it's rare you'll need to). 
There's one caveat and it's written in a comment in the file already: <!-- The line below must be kept intact for Sencha Cmd to build your application --><script id="microloader" type="text/javascript" src="bootstrap.js"></script> We know what bootstrap.js refers to (loading up our application and starting to fulfill its dependencies according to the current build). So, heed the comment and leave this script tag, well, alone! /build and build.xml The /build directory contains build artifacts (the files created when the build process is run). If you run a production build, then you'll get a directory inside /build called production and you should use only these files when deploying. The build.xml file allows you to avoid tweaking some of the files in /.sencha when you want to add some extra functionality to a build process. If you want to do something before, during, or after the build, this is the place to do it. app.js This is the main JavaScript entry point to your application. The comments in this file advise avoiding editing it in order to allow Sencha Cmd to upgrade it in the future. The Application.js file at /app/Application.js can be edited without fear of conflicts and will enable you to do the majority of things you might need to do. app.json This contains configuration options related to Sencha Cmd and to boot your application. When we refer to the subject of this article as a JavaScript application, we need to remember that it's just a website composed of HTML, CSS, and JavaScript as well. However, when dealing with a large application that needs to target different environments, it's incredibly useful to augment this simplicity with tools that assist in the development process. At first, it may seem that the default application template contains a lot of cruft, but they are the key to supporting the tools that will help you craft a solid product. Cultivating your code As you build your application, there will come a point at which you create a new class and yet it doesn't logically fit into the directory structure Sencha Cmd created for you. Let's look at a few examples. I'm a lumberjack – let's go log in Many applications have a centralized SessionManager to take care of the currently logged in user, perform authentication operations, and set up persistent storage for session credentials. There's only one SessionManager in an application. A truncated version might look like this: /** * @class CultivateCode.SessionManager * @extends extendsClass * Description */ Ext.define('CultivateCode.SessionManager', {    singleton: true,    isLoggedIn: false,      login: function(username, password) {        // login impl    },        logout: function() {        // logout impl    },        isLoggedIn() {        return isLoggedIn;    } }); We create a singleton class. This class doesn't have to be instantiated using the new keyword. As per its class name, CultivateCode.SessionManager, it's a top-level class and so it goes in the top-level directory. In a more complicated application, there could be a dedicated Session class too and some other ancillary code, so maybe, we'd create the following structure: The directory structure for our session namespace What about user interface elements? There's an informal practice in the Ext JS community that helps here. We want to create an extension that shows the coordinates of the currently selected cell (similar to cell references in Excel). 
In this case, we'd create an ux directory—user experience or user extensions—and then go with the naming conventions of the Ext JS framework: Ext.define('CultivateCode.ux.grid.plugins.CoordViewer', {    extend: 'Ext.plugin.Abstract',    alias: 'plugin.coordviewer',      mixins: {        observable: 'Ext.util.Observable'    },      init: function(grid) {        this.mon(grid.view, 'cellclick', this.onCellClick, this);    },      onCellClick: function(view, cell, colIdx, record, row, rowIdx, e) {        var coords = Ext.String.format('Cell is at {0}, {1}', colIdx, rowIdx)          Ext.Msg.alert('Coordinates', coords);    } }); It looks a little like this, triggering when you click on a grid cell: Also, the corresponding directory structure follows directly from the namespace: You can probably see a pattern emerging already. We've mentioned before that organizing an application is often about setting things up to fall into a position of success. A positive pattern like this is a good sign that you're doing things right. We've got a predictable system that should enable us to create new classes without having to think too hard about where they're going to sit in our application. Let's take a look at one more example of a mathematics helper class (one that is a little less obvious). Again, we can look at the Ext JS framework itself for inspiration. There's an Ext.util namespace containing over 20 general classes that just don't fit anywhere else. So, in this case, let's create CultivateCode.util.Mathematics that contains our specialized methods for numerical work: Ext.define('CultivateCode.util.Mathematics', {    singleton: true,      square: function(num) {        return Math.pow(num, 2);    },      circumference: function(radius) {        return 2 * Math.PI * radius;    } }); There is one caveat here and it's an important one. There's a real danger that rather than thinking about the namespace for your code and its place in your application, a lot of stuff ends up under the utils namespace, thereby defeating the whole purpose. Take time to carefully check whether there's a more suitable location for your code before putting it in the utils bucket. This is particularly applicable if you're considering adding lots of code to a single class in the utils namespace. Looking again at Ext JS, there are lots of specialized namespaces (such as Ext.state or Ext.draw. If you were working with an application with lots of mathematics, perhaps you'd be better off with the following namespace and directory structure: Ext.define('CultivateCode.math.Combinatorics', {    // implementation here! }); Ext.define('CultivateCode.math.Geometry', {    // implementation here! }); The directory structure for the math namespace is shown in the following screenshot: This is another situation where there is no definitive right answer. It will come to you with experience and will depend entirely on the application you're building. Over time, putting together these high-level applications, building blocks will become second nature. Money can't buy class Now that we're learning where our classes belong, we need to make sure that we're actually using the right type of class. Here's the standard way of instantiating an Ext JS class: var geometry = Ext.create('MyApp.math.Geometry'); However, think about your code. Think how rare it's in Ext JS to actually manually invoke Ext.create. So, how else are the class instances created? 
Singletons A singleton is simply a class that only has one instance across the lifetime of your application. There are quite a number of singleton classes in the Ext JS framework. While the use of singletons in general is a contentious point in software architecture, they tend to be used fairly well in Ext JS. It could be that you prefer to implement the mathematical functions (we discussed earlier) as a singleton. For example, the following command could work: var area = CultivateCode.math.areaOfCircle(radius); However, most developers would implement a circle class: var circle = Ext.create('CultivateCode.math.Circle', { radius: radius }); var area = circle.getArea(); This keeps the circle-related functionality partitioned off into the circle class. It also enables us to pass the circle variable round to other functions and classes for additional processing. On the other hand, look at Ext.Msg. Each of the methods here are fired and forget, there's never going to be anything to do further actions on. The same is true of Ext.Ajax. So, once more we find ourselves with a question that does not have a definitive answer. It depends entirely on the context. This is going to happen a lot, but it's a good thing! This article isn't going to teach you a list of facts and figures; it's going to teach you to think for yourself. Read other people's code and learn from experience. This isn't coding by numbers! The other place you might find yourself reaching for the power of the singleton is when you're creating an overarching manager class (such as the inbuilt StoreManager or our previous SessionManager example). One of the objections about singletons is that they tend to be abused to store lots of global state and break down the separation of concerns we've set up in our code as follows: Ext.define('CultivateCode.ux.grid.GridManager', {       singleton: true,    currentGrid: null,    grids: [],      add: function(grid) {        this.grids.push(grid);    },      setCurrentGrid: function(grid) {        this.focusedGrid = grid;    } }); No one wants to see this sort of thing in a code base. It brings behavior and state to a high level in the application. In theory, any part of the code base could call this manager with unexpected results. Instead, we'd do something like this: Ext.define('CultivateCode.view.main.Main', {    extend: 'CultivateCode.ux.GridContainer',      currentGrid: null,    grids: [],      add: function(grid) {        this.grids.push(grid);    },      setCurrentGrid: function(grid) {        this.currentGrid = grid;    } }); We still have the same behavior (a way of collecting together grids), but now, it's limited to a more contextually appropriate part of the grid. Also, we're working with the MVVM system. We avoid global state and organize our code in a more correct manner. A win all round. As a general rule, if you can avoid using a singleton, do so. Otherwise, think very carefully to make sure that it's the right choice for your application and that a standard class wouldn't better fit your requirements. In the previous example, we could have taken the easy way out and used a manager singleton, but it would have been a poor choice that would compromise the structure of our code. Mixins We're used to the concept of inheriting from a subclass in Ext JS—a grid extends a panel to take on all of its functionality. Mixins provide a similar opportunity to reuse functionality to augment an existing class with a thin slice of behavior. 
An Ext.Panel "is an" Ext.Component, but it also "has a" pinnable feature that provides a pin tool via the Ext.panel.Pinnable mixin. In your code, you should be looking at mixins to provide a feature, particularly in cases where this feature can be reused. In the next example, we'll create a UI mixin called shakeable, which provides a UI component with a shake method that draws the user's attention by rocking it from side to side: Ext.define('CultivateCode.util.Shakeable', {    mixinId: 'shakeable',      shake: function() {        var el = this.el,            box = el.getBox(),            left = box.x - (box.width / 3),            right = box.x + (box.width / 3),            end = box.x;          el.animate({            duration: 400,            keyframes: {                33: {                      x: left                },                66: {                    x: right                },                 100: {                    x: end                }            }        });    } }); We use the animate method (which itself is actually mixed in Ext.Element) to set up some animation keyframes to move the component's element first left, then right, then back to its original position. Here's a class that implements it: Ext.define('CultivateCode.ux.button.ShakingButton', {    extend: 'Ext.Button',    mixins: ['CultivateCode.util.Shakeable'],    xtype: 'shakingbutton' }); Also it's used like this: var btn = Ext.create('CultivateCode.ux.button.ShakingButton', {    text: 'Shake It!' }); btn.on('click', function(btn) {    btn.shake(); }); The button has taken on the new shake method provided by the mixin. Now, if we'd like a class to have the shakeable feature, we can reuse this mixin where necessary. In addition, mixins can simply be used to pull out the functionality of a class into logical chunks, rather than having a single file of many thousands of lines. Ext.Component is an example of this. In fact, most of its core functionality is found in classes that are mixed in Ext.Component. This is also helpful when navigating a code base. Methods that work together to build a feature can be grouped and set aside in a tidy little package. Let's take a look at a practical example of how an existing class could be refactored using a mixin. Here's the skeleton of the original: Ext.define('CultivateCode.ux.form.MetaPanel', {    extend: 'Ext.form.Panel',      initialize: function() {        this.callParent(arguments);        this.addPersistenceEvents();    },      loadRecord: function(model) {        this.buildItemsFromRecord(model);        this.callParent(arguments);    },      buildItemsFromRecord: function(model) {        // Implementation    },      buildFieldsetsFromRecord: function(model){        // Implementation    },      buildItemForField: function(field){        // Implementation    },      isStateAvailable: function(){        // Implementation    },      addPersistenceEvents: function(){      // Implementation    },      persistFieldOnChange: function(){        // Implementation    },      restorePersistedForm: function(){        // Implementation    },      clearPersistence: function(){        // Implementation    } }); This MetaPanel does two things that the normal FormPanel does not: It reads the Ext.data.Fields from an Ext.data.Model and automatically generates a form layout based on these fields. It can also generate field sets if the fields have the same group configuration value. 
When the values of the form change, it persists them to localStorage so that the user can navigate away and resume completing the form later. This is useful for long forms. In reality, implementing these features would probably require additional methods to the ones shown in the previous code skeleton. As the two extra features are clearly defined, it's easy enough to refactor this code to better describe our intent: Ext.define('CultivateCode.ux.form.MetaPanel', {    extend: 'Ext.form.Panel',      mixins: [        // Contains methods:        // - buildItemsFromRecord        // - buildFieldsetsFromRecord        // - buildItemForField        'CultivateCode.ux.form.Builder',          // - isStateAvailable        // - addPersistenceEvents        // - persistFieldOnChange        // - restorePersistedForm        // - clearPersistence        'CultivateCode.ux.form.Persistence'    ],      initialize: function() {        this.callParent(arguments);        this.addPersistenceEvents();    },      loadRecord: function(model) {        this.buildItemsFromRecord(model);        this.callParent(arguments);    } }); We have a much shorter file and the behavior we're including in this class is described a lot more concisely. Rather than seven or more method bodies that may span a couple of hundred lines of code, we have two mixin lines and the relevant methods extracted to a well-named mixin class. Summary This article showed how the various parts of an Ext JS application can be organized into a form that eases the development process. Resources for Article: Further resources on this subject: CreateJS – Performing Animation and Transforming Function [article] Good time management in CasperJS tests [article] The Login Page using Ext JS [article]
Combining the Blend Micro with the Bluetooth Low-Energy Module

Michael Ang
17 Apr 2015
7 min read
Have you ever wanted an easy way to connect your Arduino to your phone? The Blend Micro from RedBearLab is an Arduino-compatible development board that includes a Bluetooth Low-Energy (BLE) module for connecting with phones and computers. With BLE you can make a quick connection between your phone and Arduino to exchange simple messages like sensor readings or commands for the Arduino to execute. Blend Micro top and bottom showing Bluetooth module on the top side. ATMega32u4 (Arduino) microcontroller on the bottom and on-board antenna. The Blend Micro is a rather small development board - much smaller than a normal-sized Arduino. This makes it great for portable devices - just the kind we might like to connect to our phone! The larger Blend is available in the full-size Arduino format, if you have shields you’d like to connect. Bluetooth Low-Energy offers a lot of advantages over older versions of Bluetooth, particularly for battery-powered devices that aren’t always transmitting data. Recent iOS / Android devices and laptops with Bluetooth 4.0 should work (there’s a list of compatible devices on the Blend Micro page). I’ve been using the Blend Micro with my iPhone 5. Even on a breadboard it’s small enough to be portable. Coin cell battery pack on the right. Getting set up for development is unfortunately a bit complicated. To use the normal Arduino IDE you have to download an older version of Arduino (I’m using 1.0.6), install a few libraries, and patch a file inside the Arduino application itself (details here). Luckily that only has to be done once. A potentially easier way to get started is to use the online programming environment Codebender (quickstart instructions). One hitch with Codebender is you may need to manually press the reset button on the Blend Micro while programming (this isn’t required when programming using the normal Arduino IDE). If the Blend Micro is actively connected via Bluetooth, closing the connection on your phone or other device before programming seems to help. Once you’re set up for development programming the board is relatively straightforward. Blinking an LED from your Arduino is cool. How about blinking an LED on an Arduino, controlled by your phone? You can load the SimpleControls example onto your Blend Micro and the BLE Controller app onto your phone (iOS, Android). Connecting to the Blend Micro is simple - with the app open just tap "Scan" and your Blend Micro should be shown in the list of discovered devices. There’s no pairing step (required by previous Bluetooth versions) so connecting is easy. The BLE Controller app lets you control a few pins on the Arduino and receive data back, all without needing any more hardware on your phone. Pretty slick! Having the user interface to our device on our phone allows us to show a lot of information since there’s a lot of screen real estate. Since we already have our phone with us, why carry another screen for our portable device? I’m currently working on a wearable light logger that will record the intensity and color of the ambient light that people experience throughout their day. The wearable device is part of my Light Catchers project that collects our "light histories" into a public light sculpture. The wearable will have an Arduino-compatible micro-controller, RGB light sensor and data storage. For prototyping the wearable I’ve been using the Blend Micro to get a real-time view of the light sensor data on my phone so I can see how the sensor reacts to different lighting conditions. 
Sending live color readings under blue light. I started with the SimpleControls example and adapted it to send the RGB data from the light sensor. You can see the full code that runs on the Blend in my RGBLE.ino sketch. Sending the light sensor data was fairly straight forward. Let’s have a quick look at the code that’s needed to send data over BLE. Color display on the iPhone. RSSI is Bluetooth signal strength. Inside our setup function we can set the name of our BLE device. This name will show up when we scan for the device on our phone. Then we start the BLE library. void setup() { // ... // Set your BLE device name here, max. length 10 ble_set_name("RGB Sensor"); // Init. and start BLE library. ble_begin(); // ... } Inside our loop function, we can check if data is available, and read the bytes that were sent from the phone. void loop() { // ... // If data is ready while(ble_available()) { // read out command and data byte data0 = ble_read(); The RGB sensor that I’m using reads each color channel as a 10-bit value. Since the data won’t fit in an 8-bit byte the value is stored as two bytes. Sending a byte over the BLE connection is as simple as calling ble_write. I send each byte of the two-byte value separately using a little math with the shift operator (>>). I only take a reading and send the data if there is an active BLE connection. // Check if we’re connected if (ble_connected()) { // Take a light reading // ... // Send reading as bytes. ble_write(r >> 8); ble_write(r); At the end of our loop function the library needs to do some work to handle the Bluetooth data. // Allow BLE library to send/receive data ble_do_events(); } // end loop The app I run on my iPhone is a customized version of the Simple Controls sample app. My app shows the received color values on-screen. RedBearLab has sample code for various platforms available on the RedBearLab github page. For prototyping my device having an on-screen display with debugging controls is great. The small size of the Blend Micro makes it well suited for prototyping my wearable device. Range seems to be fairly limited (think inside a room rather than between rooms) but I haven’t done anything to optimize the antenna placement, so your mileage may vary. Color sensor prototype "in the field" on a sunny day at Tempelhofer Feld in Berlin. Battery life seems quite promising. I’m running my prototype off two 3V lithium coin cells and get several hours of life even before doing power optimization. Some Arduino boards have a power LED that’s always on while the board is powered. That LED might draw 20mA of current, which is a lot when you consider that good coin cells might provide 240mAh of current in the best case (typical datasheet). With the Blend Micro it’s easy to turn off all the onboard LEDs (see the RGBLE sketch for details). I measured the current consumption of my prototype around 14-16mA, with peaks around 20mA when starting the Bluetooth connection to my phone. It’s impressive to be sending data over the air using less power than you might use to light an LED! Accurately measuring the power consumption can be tricky since the radio transmissions can happen in short bursts. Probably the topic of another post! Other than some initial difficulty setting up the development environment programming with the Blend Micro is pretty smooth. 
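For reference, here is how the fragments above fit together into a single sketch. This is only an illustration, not the author's actual RGBLE.ino: it assumes RedBearLab's BLE library, which supplies the ble_* calls quoted above, leaves the exact library #include lines to your setup, and uses a hypothetical readColorChannel() placeholder instead of the real RGB sensor code.

// Sketch of the BLE send/receive loop described above.
// Assumptions: the RedBearLab BLE library provides ble_set_name(), ble_begin(),
// ble_connected(), ble_available(), ble_read(), ble_write() and ble_do_events();
// add the #include lines that library requires for your board.

// Hypothetical stand-in for the real RGB sensor read; it simply reads an
// analog pin so the sketch compiles and returns a 10-bit value like one channel.
uint16_t readColorChannel() {
  return analogRead(A0);
}

void setup() {
  // Set your BLE device name here, max. length 10
  ble_set_name("RGB Sensor");
  // Init. and start the BLE library
  ble_begin();
}

void loop() {
  // Only take and send a reading while a phone is connected
  if (ble_connected()) {
    uint16_t r = readColorChannel();
    // A 10-bit value does not fit into one byte, so send it as two bytes
    ble_write(r >> 8);   // high byte
    ble_write(r);        // low byte
  }

  // React to any data sent from the phone (for example, from the BLE Controller app)
  while (ble_available()) {
    byte data0 = ble_read();
    // ... interpret the incoming command byte here ...
  }

  // Let the BLE library send/receive its data
  ble_do_events();
}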
Connecting your Arduino to your phone over a low power radio link opens up a lot of possibilities when you consider that your phone probably has a large touchscreen, cellular Internet connection, GPS and more. Once you try an Arduino that can wirelessly talk to your phone and computer, you always want it to do that! Resources RedBearLab Blend Micro RGBLE Arduino sketch (We Are) Light Catchers - wearable light logger About the Author Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering and human experience. His works use technology to enhance our understanding of natural phenomena, modulate social interaction, and bridge the divide between the virtual and physical. His Light Catchers workshops and public installations will take place in Germany for the International Year of Light 2015.
Using networking for distributed computing with openFrameworks

Packt
16 Apr 2015
16 min read
In this article by Denis Perevalov and Igor (Sodazot) Tatarnikov, authors of the book openFrameworks Essentials, we will investigate how to create a distributed project consisting of several programs working together and communicating with each other via networking. (For more resources related to this topic, see here.) Distributed computing with networking Networking is a way of sending and receiving data between programs, which work on a single or different computers and mobile devices. Using networking, it is possible to split a complex project into several programs working together. There are at least three reasons to create distributed projects: The first reason is splitting to obtain better performance. For example, when creating a big interactive wall with cameras and projectors, it is possible to use two computers. The first computer (tracker) will process data from cameras and send the result to the second computer (render), which will render the picture and output it to projectors. The second reason is creating a heterogeneous project using different development languages. For example, consider a project that generates a real-time visualization of data captured from the Web. It is easy to capture and analyze the data from the Web using a programming language like Python, but it is hard to create a rich, real-time visualization with it.On the opposite side, openFrameworks is good for real-time visualization but is not very elegant when dealing with data from the Web. So, it is a good idea to build a project consisting of two programs. The first Python program will capture data from the Web, and the second openFrameworks program will perform rendering. The third reason is synchronization with, and external control of, one program with other programs/devices. For example, a video synthesizer can be controlled from other computers and mobiles via networking. Networking in openFrameworks openFrameworks' networking capabilities are implemented in two core addons: ofxNetwork and ofxOsc. To use an addon in your project, you need to include it in the new project when creating a project using Project Generator, or by including the addon's headers and libraries into the existing project manually. If you need to use only one particular addon, you can use an existing addon's example as a sketch for your project. The ofxNetwork addon The ofxNetwork addon contains classes for sending and receiving data using the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). The difference between these protocols is that TCP guarantees receiving data without losses and errors but requires the establishment of a preliminary connection (known as handshake) between a sender and a receiver. UDP doesn't require the establishment of any preliminary connection but also doesn't guarantee delivery and correctness of the received data. Typically, TCP is used in tasks where data needs to be received without errors, such as downloading a JPEG file from a web server. UDP is used in tasks where data should be received in real time at a fast rate, such as receiving a game state 60 times per second in a networking game. The ofxNetwork addon's classes are quite generic and allow the implementation of a wide range of low-level networking tasks. In this article, we don't explore it in detail. The ofxOsc addon The ofxOsc addon is intended for sending and receiving messages using the Open Sound Control (OSC) protocol. 
Messages of this protocol (OSC messages) are intended to store control commands and parameter values. This protocol is very popular today and is implemented in many VJ and multimedia programs and software for live electronic sound performance. All the popular programming tools support OSC too. An OSC protocol can use the UDP or TCP protocols for data transmission. Most often, as in the openFrameworks implementation, the UDP protocol is used. See details of the OSC protocol at opensoundcontrol.org/spec-1_0. The main classes of ofxOsc are the following:

ofxOscSender: This sends OSC messages
ofxOscReceiver: This receives OSC messages
ofxOscMessage: This class is for storing a single OSC message
ofxOscBundle: This class is for storing several OSC messages, which can be sent and received as a bundle

Let's add the OSC receiver to our VideoSynth project and then create a simple OSC sender, which will send messages to the VideoSynth project.

Implementing the OSC messages receiver

To implement the receiving of OSC messages in the VideoSynth project, perform the following steps:

Include the ofxOsc addon's header in the ofApp.h file by inserting the following line after the #include "ofxGui.h" line:

#include "ofxOsc.h"

Add a declaration of the OSC receiver object to the ofApp class:

ofxOscReceiver oscReceiver;

Set up the OSC receiver in setup():

oscReceiver.setup( 12345 );

The argument of the setup() method is the networking port number. After executing this command, oscReceiver begins listening on this port for incoming OSC messages. Each received message is added to a special message queue for further processing. A networking port is a number from 0 to 65535. Ports from 10000 to 65535 normally are not used by existing operating systems, so you can use them as port numbers for OSC messages. Note that two programs receiving networking data and working on the same computer must have different port numbers.

Add the processing of incoming OSC messages to update():

while ( oscReceiver.hasWaitingMessages() ) {
    ofxOscMessage m;
    oscReceiver.getNextMessage( &m );
    if ( m.getAddress() == "/pinchY" ) {
        pinchY = m.getArgAsFloat( 0 );
    }
}

The first line is a while loop, which checks whether there are unprocessed messages in the message queue of oscReceiver. The second line declares an empty OSC message m. The third line pops the latest message from the message queue and copies it to m. Now, we can process this message. Any OSC message consists of two parts: an address and (optionally) one or several arguments. An address is a string beginning with the / character. An address denotes the name of a control command or the name of a parameter that should be adjusted. Arguments can be float, integer, or string values, which specify some parameters of the command. In our example, we want to adjust the pinchY slider with OSC commands, so we expect to have an OSC message with the address /pinchY and the first argument with its float value. Hence, in the fourth line, we check whether the address of the m message is equal to /pinchY. If this is true, in the fifth line, we get the first message's argument (an argument with the index value 0) and set the pinchY slider to this value. Of course, we could use any other address instead of /pinchY (for example, /val), but normally, it is convenient to have the address similar to the parameter's name. It is easy to control other sliders with OSC.
For example, to add control of the extrude slider, just add the following code:

if ( m.getAddress() == "/extrude" ) {
    extrude = m.getArgAsFloat( 0 );
}

After running the project, nothing new happens; it works as always. But now, the project is listening for incoming OSC messages on port 12345. To check this, let's create a tiny openFrameworks project that sends OSC messages.

Creating an OSC sender with openFrameworks

Let's create a new project OscOF, one that contains a GUI panel with one slider, and send the slider's value via OSC to the VideoSynth project. Here, we assume that the OSC sender and receiver run on the same computer. See the details on running the sender on a separate computer in the upcoming Sending OSC messages between two separate computers section. Now perform the following steps:

Create a new project using Project Generator. Namely, start Project Generator, set the project's name to OscOF (that means OSC with openFrameworks), and include the ofxGui and ofxOsc addons in the newly created project. The ofxGui addon is needed to create the GUI slider, and the ofxOsc addon is needed to send OSC messages. Open this project in your IDE.

Include both addons' headers in the ofApp.h file by inserting the following lines (after the #include "ofMain.h" line):

#include "ofxGui.h"
#include "ofxOsc.h"

Add the declarations of the OSC sender object, the GUI panel, and the GUI slider to the ofApp class declaration:

ofxOscSender oscSender;
ofxPanel gui;
ofxFloatSlider slider;
void sliderChanged( float &value );

The last line declares a new function, which will be called by openFrameworks when the slider's value is changed. This function will send the corresponding OSC message. The symbol & before value means that the value argument is passed to the function as a reference. Using a reference here is not important for us, but is required by ofxGui; please see the information on the notion of a reference in the C++ documentation.

Set up the OSC sender, the GUI panel with the slider, and the project's window title and size by adding the following code to setup():

oscSender.setup( "localhost", 12345 );
slider.addListener( this, &ofApp::sliderChanged );
gui.setup( "Parameters" );
gui.add( slider.setup("pinchY", 0, 0, 1) );
ofSetWindowTitle( "OscOF" );
ofSetWindowShape( 300, 150 );

The first line starts the OSC sender. Here, the first argument specifies the IP address to which the OSC sender will send its messages. In our case, it is "localhost". This means the sender will send data to the same computer on which the sender runs. The second argument specifies the networking port, 12345. The difference between setting up the OSC sender and receiver is that we need to specify the address and port for the sender, and not only the port. Also, after starting, the sender does nothing until we give it the explicit command to send an OSC message. The second line starts listening to the slider's value changes. The first and second arguments of the addListener() command specify the object (this) and its member function (sliderChanged), which should be called when the slider is changed. The remaining lines set up the GUI panel, the GUI slider, and the project's window title and shape.

Now, add the sliderChanged() function definition to ofApp.cpp:

void ofApp::sliderChanged( float &value ) {
    ofxOscMessage m;
    m.setAddress( "/pinchY" );
    m.addFloatArg( value );
    oscSender.sendMessage( m );
}

This function is called when the slider value is changed, and the value parameter is its new value.
The first three lines of the function create an OSC message m, set its address to /pinchY, and add a float argument equal to value. The last line sends this OSC message. As you may see, the m message's address (/pinchY) coincides with the address implemented in the previous section, which is expected by the receiver. Also, the receiver expects that this message has a float argument—and it is true too! So, the receiver will properly interpret our messages and set its pinchY slider to the desired value. Finally, add the command to draw GUI to draw(): gui.draw(); On running the project, you will see its window, consisting of a GUI panel with a slider, as shown in the following screenshot: This is the OSC sender made with openFrameworks Don't stop this project for a while. Run the VideoSynth project and change the pinchY slider's value in the OscOF window using the mouse. The pinchY slider in VideoSynth should change accordingly. This means that the OSC transmission between the two openFrameworks programs works. If you are not interested in sending data between two separate computers, feel free to skip the following section. Sending OSC messages between two separate computers We have checked passing OSC messages between two programs that run on the same computer. Now let's consider a situation when an OSC sender and an OSC receiver run on two separate computers connected to the same Local Area Network (LAN) using Ethernet or Wi-Fi. If you have two laptops, most probably they are already connected to the same networking router and hence are in the same LAN. To make an OSC connection work in this case, we need to change the "localhost" value in the sender's setup command by the local IP address of the receiver's computer. Typically, this address has a form like "192.168.0.2", or it could be a name, for example, "LAPTOP3". You can get the receiver's computer IP address by opening the properties of your network adapter or by executing the ifconfig command in the terminal window (for OS X or Linux) or ipconfig in the command prompt window (for Windows). Connection troubleshooting If you set the IP address in the sender's setup, but OSC messages from the OSC sender don't come to the OSC receiver, then it could be caused by the network firewall or antivirus software, which blocks transmitting data over our 12345 port. So please check the firewall and antivirus settings. To make sure that the connection between the two computers exists, use the ping command in the terminal (or the command prompt) window. Creating OSC senders with TouchOSC and Python At this point, we create the OSC sender using openFrameworks and send its data out to the VideoSynth project. But, it's easy to create the OSC sender using other programming tools. Such an opportunity can be useful for you in creating complex projects. So, let's show how to create an OSC sender on a mobile device using the TouchOSC app and also create simple senders using the Python and Max/MSP languages. If you are not interested in sending OSC from mobile devices or in Python or Max/MSP, feel free to skip the corresponding sections. Creating an OSC sender for a mobile device using the TouchOSC app It is very handy to control your openFrameworks project by a mobile device (or devices) using the OSC protocol. You can create a custom OSC sender by yourself, or you can use special apps made for this purpose. One such application is TouchOSC. 
It's a paid application available for iOS (see hexler.net/software/touchosc) and Android (see hexler.net/software/touchosc-android). Working with TouchOSC consists of four steps: creating the GUI panel (called layout) on the laptop, uploading it to a mobile device, setting up the OSC receiver's address and port, and working with the layout. Let's consider them in detail:

To create the layout, download, unzip, and run a special program, TouchOSC Editor, on a laptop (it's available for OS X, Windows, and Linux). Add the desired GUI elements on the layout by right-clicking on the layout.

When the layout is ready, upload it to a mobile device by running the TouchOSC app on the mobile and pressing the Sync button in TouchOSC Editor.

In the TouchOSC app, go to the settings and set up the OSC receiver's IP address and port number.

Next, open the created layout by choosing it from the list of all the existing layouts. Now, you can use the layout's GUI elements to send the OSC messages to your openFrameworks project (and, of course, to any other OSC-supporting software).

Creating an OSC sender with Python

In this section, we will create a project that sends OSC messages using the Python language. Here, we assume that the OSC sender and receiver run on the same computer. See the details on running the sender on a separate computer in the previous Sending OSC messages between two separate computers section. Python is a free, interpreted language available for all operating systems. It is extremely popular nowadays in various fields, including teaching programming, developing web projects, and performing computations in natural sciences. Using Python, you can easily capture information from the Web and social networks (using their API) and send it to openFrameworks for further processing, such as visualization or sonification, that is, converting data to a picture or sound. Using Python, it is quite easy to create GUI applications, but here we consider creating a project without a GUI. Perform the following steps to install Python, create an OSC sender, and run it:

Install Python from www.python.org/downloads (the current version is 3.4).

Download the python-osc library from pypi.python.org/pypi/python-osc and unzip it. This library implements the OSC protocol support in Python.

Install this library: open the terminal (or command prompt) window, go to the folder where you unzipped python-osc and type the following:

python setup.py install

If this doesn't work, type the following:

python3 setup.py install

Python is ready to send OSC messages. Now let's create the sender program. Using your preferred code or text editor, create the OscPython.py file and fill it with the following code:

from pythonosc import udp_client
from pythonosc import osc_message_builder
import time

if __name__ == "__main__":
    oscSender = udp_client.UDPClient("localhost", 12345)
    for i in range(10):
        m = osc_message_builder.OscMessageBuilder(address="/pinchY")
        m.add_arg(i*0.1)
        oscSender.send(m.build())
        print(i)
        time.sleep(1)

The first three lines import the udp_client, osc_message_builder, and time modules for sending the UDP data (we will send OSC messages using UDP), creating OSC messages, and working with time respectively. The if __name__ == "__main__": line is generic for Python programs and denotes the part of the code that will be executed when the program runs from the command line. The first line of the executed code creates the oscSender object, which will send the UDP data to the localhost IP address and the 12345 port.
The second line starts a for cycle, where i runs the values 0, 1, 2, …, 9. The body of the cycle consists of commands for creating an OSC message m with address /pinchY and argument i*0.1, and sending it by OSC. The last two lines print the value i to the console and delay the execution for one second. Open the terminal (or command prompt) window, go to the folder with the OscPython.py file, and execute it by the python OscPython.py command. If this doesn't work, use the python3 OscPython.py command. The program starts and will send 10 OSC messages with the /pinchY address and the 0.0, 0.1, 0.2, …, 0.9 argument values, with 1 second of pause between the sent messages. Additionally, the program prints values from 0 to 9, as shown in the following screenshot: This is the output of an OSC sender made with Python Run the VideoSynth project and start our Python sender again. You will see how its pinchY slider gradually changes from 0.0 to 0.9. This means that OSC transmission from a Python program to an openFrameworks program works. Summary In this article, we learned how to create distributed projects using the OSC networking protocol. At first, we implemented receiving OSC in our openFrameworks project. Next, we created a simple OSC sender project with openFrameworks. Then, we considered how to create an OSC sender on mobile devices using TouchOSC and also how to build senders using the Python language. Now, we can control the video synthesizer from other computers or mobile devices via networking. Resources for Article: Further resources on this subject: Kinect in Motion – An Overview [Article] Learn Cinder Basics – Now [Article] Getting Started with Kinect [Article]
Evenly Spaced Views with Auto Layout in iOS

Joe Masilotti
16 Apr 2015
5 min read
When the iPhone first came out there was only one screen size to worry about 320, 480. Then the Retina screen was introduced doubling the screen's resolution. Apple quickly introduced the iPhone 5 and added an extra 88 points to the bottom. With the most recent line of iPhones two more sizes were added to the mix. Before even mentioning the iPad line that is already five different combinations of heights and widths to account for. To help remedy this growing number of sizes and resolutions Apple introduced Auto Layout with iOS 6. Auto Layout is a dynamic way of laying out views with constraints and rules to let the content fit on multiple screen sizes; think "responsive" for mobile. Lots of layouts are possible with Auto Layout but some require an extra bit of work. One of the more common, albeit tricky, arrangements is to have evenly spaced elements. Having the view scale up to different resolutions and look great on all devices isn't hard and can be done in both Interface Builder or manually in code. Let's walk through how to evenly space views with Auto Layout using Xcode's Interface Builder. Using Interface Builder The easiest way to play around and test layout in IB is to create a new Single View Application iOS project.   Open Main.storyboard and select ViewController on the left. Don't worry that it is showing a square view since we will be laying everything out dynamically. The first addition to the view will be the three `UIView`s we will be evenly spacing. Add them along the view from left to right and assign different colors to each. This will make it easier to distinguish them later. Don't worry about where they are we will fix the layout soon enough.   Spacer View Layout Ideally we would be able to add constraints that evenly space these out directly. Unfortunately, you can not set *equals* restrictions on constraints, only on views. What that means is we have to create spacer views in between our content and then set equal constraints on those. Add four more views, one between the edges and the content views.   Before we add our first constraint let's name each view so we can have a little more context when adding their attributes. One of the most frustrating things when working with Auto Layout is seeing the little red arrow telling you something is wrong. Let's try and incrementally add constraints and get back to a working state as quickly as possible. The first item we want to add will constrain the Left Content view using the spacer. Select the Left Spacer and add left 20, top 20, and bottom 20 constraints.   To fix this first error we need to assign a width to the spacer. While we will be removing it later it makes sense to always have a clean slate when moving on to another view. Add in a width (50) constraint and let IB automatically and update its frame.   Now do the same thing to the Right Spacer.   Content View Layout We will remove the width constraints when everything else is working correctly. Consider them temporary placeholders for now. Next lets lay out the Left Content view. Add a left 0, top 20, bottom 20, width 20 constraint to it.   Follow the same method on the Right Content view.   Twice more follow the same procedure for the Middle Spacer Views giving them left/right 0, top 20, bottom 20, width 50 constraints.   Finally, let's constrain the Middle Content view. Add left 0, top 20, right 0, bottom 20 constraints to it and lay it out.   Final Constraints Remember when I said it was tricky? Maybe a better way to describe this process is long and tedious. 
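Before we add the final equal-width constraints, a brief aside: as mentioned earlier, this layout can also be built manually in code rather than in Interface Builder. The sketch below is not part of the original walkthrough; it assumes the newer iOS 9 layout-anchor API, and the view names, 20-point margins, and 75-point content width are illustrative.

import UIKit

// Code-only sketch of the evenly spaced layout: three fixed-width content views
// separated by four spacer views that all share the same (flexible) width.
class EvenSpacingViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let contents = [UIView(), UIView(), UIView()]
        let spacers = [UIView(), UIView(), UIView(), UIView()]

        // Color the content views so they are easy to tell apart; spacers stay clear.
        for (v, color) in zip(contents, [UIColor.red, .green, .blue]) {
            v.backgroundColor = color
        }

        for v in contents + spacers {
            v.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(v)
        }

        var constraints: [NSLayoutConstraint] = []

        // Pin everything to the top and bottom, 20 points in, like the IB version.
        for v in contents + spacers {
            constraints.append(v.topAnchor.constraint(equalTo: view.topAnchor, constant: 20))
            constraints.append(v.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -20))
        }

        // Content views get a fixed width; the spacers all share one flexible width.
        for v in contents {
            constraints.append(v.widthAnchor.constraint(equalToConstant: 75))
        }
        for s in spacers.dropFirst() {
            constraints.append(s.widthAnchor.constraint(equalTo: spacers[0].widthAnchor))
        }

        // Chain them horizontally: spacer, content, spacer, content, spacer, content, spacer.
        let chain = [spacers[0], contents[0], spacers[1], contents[1],
                     spacers[2], contents[2], spacers[3]]
        constraints.append(chain.first!.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 20))
        constraints.append(chain.last!.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -20))
        for (left, right) in zip(chain, chain.dropFirst()) {
            constraints.append(right.leadingAnchor.constraint(equalTo: left.trailingAnchor))
        }

        NSLayoutConstraint.activate(constraints)
    }
}

Back in Interface Builder, we can now finish the layout.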
All of the setup we have done so far was to put us in a good position to add the constraints we actually want. If you look at the view it doesn't look very special and it won't even resize the right way yet. To fix this we bring in the magic constraint of this entire example: Equal Widths on the spacer views. Go ahead and delete the four explicit Width constraints on the spacer views and add an Equal Width constraint to each. Select them all at the same time then add the constraint so they work off of each other. Finally, set explicit widths on the three content views. This is where you can start customizing the layout to have it look the way you want. For my view I want the three views to be 75 points wide, so I removed all of the Width constraints and added them back in for each. Now set the background color of the four spacer views to clear and hide them. Running the app on different-sized simulators will produce the same result: the three content views remain the same width and stay evenly spaced out along the screen. Even when you rotate the device the views remain spaced out correctly. Try playing around with different explicit widths of the content views. The same technique can be used to create very dynamic layouts for a variety of applications. For example, this procedure can be used to create a table cell with an image on the left, text in the middle, and a button on the right. Or it can make one row in a calculator that sizes to fit the screen of the device. What are you using it for?

About the author

Joe Masilotti is a test-driven iOS developer living in Brooklyn, NY. He contributes to open-source testing tools on GitHub and talks about development, cooking, and craft beer on Twitter.
How to create a simple First Person Puzzle Game

Travis and Denny
16 Apr 2015
6 min read
So you want to make a first person puzzle game, but you're not sure where to start. Well, this post can hopefully give you a heads-up on how to start and create a game. When creating this post we wanted to make a complex system of colored keys and locked doors, but we decided on a simple trigger system moving an elevator. That may seem really simplistic, but it encompasses really everything you need to make the harder key color key scene described, while keeping this lesson short. Let's begin. First create a project in Unity and name it something simple, like FirstPersonPuzzle. Include the Character Controller package, as this package contains the FirstPersonController that we are going to use in this project! If this is your first time using Unity, there are some great scripts packaged with Unity. Two examples are SmoothFollow.js and SmoothLookAt.js. The first of which has the camera follow a specific game object you designate and follow it without giving a choppy look that can come from just having the camera follow the object. SmoothLookAt will have the camera look at a designated game object, but it does not have a quick cut feeling when the script is run.  There are also C# versions you can find of almost all of these scripts online through the Unity community. We don't have enough time to get into them, but we encourage you to try them for yourself! Next we are going to make a simple plane to walk on, and give it the following information inside the transform component. Click the plane in the hierarchy view, and rename it to Ground. Hmm, it's a little dark in here, so lets quickly throw in a directional light, just to spice the place up a little bit, and put it in a nice place above us to keep it out of way while building the scene. First we will make a material, and then drag and drop that material on to the plane in the hierarchy. Delete the Main Camera found in the hierarchy. You hierarchy should now look like this. Now drag the First Person Controller from the Standard Assets folder into the hierarchy. Put the controller at the following transform position and you should be ready to go walking around the scene by hitting the play button! Just remember to switch the tag of the Controller to the Player tag, as seen in the screenshot. Next, we're going to create a little elevator script. We know, why an elevator in a puzzle game. Well, I want to show a little bit of how moving objects look, as well as triggering an action, and I wanted the single most ridiculous, jaw dropping, out of this world way to do so. Unfortunately it didn't work... so we put in an elevator. Create a cube quickly straight in the hierarchy and give it the following transform information. Now let's make another material, make it red, and place it onto the cube we just made. Rename that cube to "Elevator". Your scene should look like this: Create another cube in the hierarchy and call it Platform, and give it the following transform attributes. Okay, lastly for objects, create another cube, and name it ElevatorTrigger. For ease, in the hierarchy, drag the ElevatorTrigger cube we created into the Elevator object, making Elevator now the parent of ElevatorTrigger as shown. Now go to the inspector, and right click the Mesh Renderer and remove the component. This will essentially leave our object invisible. Also check the box in the Box Collider called Is Trigger so that this collider will watch for trigger enters. We're going to be using this for our coding. Make sure all transform attributes are as given. 
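The next step is to write the Elevator script itself. Before walking through where it lives in the project, here is a rough sketch of what such a script could contain. It is not the authors' exact code: the public field names are illustrative, and you would assign the elevator transform, end position, and speed in the Inspector. It follows the Vector3.Lerp pattern from the Unity documentation and uses the OnTriggerEnter, StartLerp, lerpToPosition, and CheckLerpComplete pieces described next.

using UnityEngine;

// Sketch of the Elevator behaviour (illustrative names). Attach it to the
// ElevatorTrigger object; "elevator" is the parent Elevator cube to move.
public class Elevator : MonoBehaviour
{
    public Transform elevator;      // the Elevator cube, assigned in the Inspector
    public Vector3 endPosition;     // where the elevator should travel to
    public float speed = 1.0f;      // movement speed in units per second

    private Vector3 startPosition;
    private float startTime;
    private float journeyLength;
    private bool lerping = false;

    // Called once when a collider enters our trigger volume.
    void OnTriggerEnter(Collider other)
    {
        // Only start moving if it was the player who stepped in.
        if (other.CompareTag("Player") && !lerping)
        {
            StartLerp();
        }
    }

    void StartLerp()
    {
        startPosition = elevator.position;
        startTime = Time.time;
        journeyLength = Vector3.Distance(startPosition, endPosition);
        lerping = true;
    }

    void Update()
    {
        if (lerping)
        {
            lerpToPosition();
            CheckLerpComplete();
        }
    }

    // Adapted from the Vector3.Lerp example in the Unity documentation.
    void lerpToPosition()
    {
        float distCovered = (Time.time - startTime) * speed;
        float fraction = distCovered / journeyLength;
        elevator.position = Vector3.Lerp(startPosition, endPosition, fraction);
    }

    void CheckLerpComplete()
    {
        // Once the elevator reaches the end position, stop lerping so Unity
        // no longer has to do this work every frame.
        if (elevator.position == endPosition)
        {
            lerping = false;
        }
    }
}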
Now click Create and select Create --> C# Script, and name it Elevator. This script is going to be our first and only script. Explaining code is always very hard, so we're going to try and do our best without being boring. First, the lerpToPosition and StartLerp functions are taken almost word for word from the Unity documentation for Vector3.Lerp. We did this so as to not have to explain Lerping heavily, as it's actually a fairly complex function; all you have to know is that we are going to take a startPosition (where the elevator is currently) and an endPosition (where the elevator is going to go) and then have it travel there over a certain amount of time (which will be calculated using the speed we want to go). The magic really happens in the OnTriggerEnter method though. When a collider enters our trigger, this method is instantly called once. In it we check to see if what collided with us is the player. If so, we allow the lerp to begin. Lastly, in CheckLerpComplete, we do a little cleanup: once the position of the elevator is at the endPosition, we stop the Lerp. This removes a little overhead for Unity. Drag this script onto the ElevatorTrigger object, give the attributes the following values, and your scene should be ready to go! Just remember, learning is all about failing, over and over, so don't become discouraged when things fail or code you have written just doesn't seem to work. That is part of the building process, and it would be a lie to tell you that when we wrote this code for the article it worked the first time. It didn't, but you iterate, change the idea to something better, and come out with a working product at the end.

About the Authors

Denny is a Mobile Application Developer at Canadian Tire Development Operations. While working, Denny regularly uses Unity to create in-store experiences, but also works on other technologies like Famous, Phaser.IO, LibGDX, and CreateJS when creating game-like apps. In his own words: "I also enjoy making non-game mobile apps, but who cares about that, am I right?" Travis is a Software Engineer, living in the bitter region of Winnipeg, Canada. His work and hobbies include Game Development with Unity or Phaser.IO, as well as Mobile App Development. He can enjoy a good video game or two, but only if he knows he'll win!
Text Mining with R: Part 2

Robi Sen
16 Apr 2015
4 min read
In Part 1, we covered the basics of doing text mining in R by selecting data, preparing it, cleaning it, and then performing various operations on it to visualize that data. In this post we look at a simple use case showing how we can derive real meaning and value from a visualization, and how a simple word cloud can help you understand the impact of an advertisement.

Building the document matrix

A common technique in text mining is using a matrix of document terms called a document term matrix. A document term matrix is simply a matrix where columns are terms and rows are documents that contain the occurrence of specific terms within the document. Or, if you reverse the order and have terms as rows and documents as columns, it's called a term document matrix. For example, let's say we have two documents, D1 and D2:

D1 = "I like cats"
D2 = "I hate cats"

Then the document term matrix would look like:

       I    like    hate    cats
D1     1     1       0       1
D2     1     0       1       1

For our project, to make a document term matrix in R all you need to do is use DocumentTermMatrix() like this:

tdm <- DocumentTermMatrix(mycorpus)

You can see information on your document term matrix by using print like:

print(tdm)
<<DocumentTermMatrix (documents: 4688, terms: 18363)>>
Non-/sparse entries: 44400/86041344
Sparsity           : 100%
Maximal term length: 65
Weighting          : term frequency (tf)

Next, we need to sum up all the values in each term column so that we can derive the frequency of each term occurrence. We also want to sort those values from highest to lowest. You can use this code:

m <- as.matrix(tdm)
v <- sort(colSums(m), decreasing=TRUE)

Next we will use names() to pull each term object's name, which in our case is a word. Then we want to build a data frame from our words associated with their frequency of occurrences. Finally we want to create our word cloud, but remove any terms that have an occurrence of fewer than 45 times to reduce clutter in our word cloud. You could also use max.words to limit the total number of words in your word cloud. So your final code should look like this:

words <- names(v)
d <- data.frame(word=words, freq=v)
wordcloud(d$word, d$freq, min.freq=45)

If you run this in RStudio you should see something like the figure, which shows the words with the highest occurrence in our corpus. The wordcloud object automatically scales the drawn words by the size of their frequency value. From here you can do a lot with your word cloud, including changing the scale, associating color with various values, and much more. You can read more about wordcloud here. While word clouds are often used on the web for things like blogs, news sites, and other similar use cases, they have real value for data analysis beyond just being visual indicators for users to find terms of interest. For example, if you look at the word cloud we generated you will notice that one of the most popular terms mentioned in tweets is chocolate. Doing a short inspection of our CSV document for the term chocolate, we find a lot of people mentioning the word in a variety of contexts, but one of the most common is in relation to a specific Super Bowl ad. For example, here is a tweet:

Alexalabesky 41673.39 Chocolate chips and peanut butter 0 0 0 Unknown Unknown Unknown Unknown Unknown

This appeared after the airing of this advertisement from Butterfinger. So even with this simple R code we can generate real meaning from social media, in this case the measurable impact of an advertisement during the Super Bowl.
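As a short follow-up to the styling options mentioned above (scale and color), here is one way the final wordcloud() call could be dressed up. This is a sketch only: it assumes the RColorBrewer package is installed and reuses the d data frame built in the code above.

# Optional styling for the word cloud built above (assumes RColorBrewer is installed)
library(wordcloud)
library(RColorBrewer)

wordcloud(d$word, d$freq,
          min.freq = 45,                    # same cutoff as before
          max.words = 150,                  # cap the total number of words drawn
          scale = c(4, 0.5),                # font size range, largest to smallest
          random.order = FALSE,             # draw the most frequent words first
          colors = brewer.pal(8, "Dark2"))  # shade words by frequency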
Summary

In this post we looked at a simple use case showing how we can derive real meaning and value from a visualization, and how a simple word cloud can help you understand the impact of an advertisement.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.
Visualization

Packt
15 Apr 2015
29 min read
Humans are visual creatures and have evolved to be able to quickly notice the meaning when information is presented in certain ways that cause the wiring in our brains to have the light bulb of insight turn on. This "aha" can often be performed very quickly, given the correct tools, instead of through tedious numerical analysis. Tools for data analysis, such as pandas, take advantage of being able to quickly and iteratively provide the user to take data, process it, and quickly visualize the meaning. Often, much of what you will do with pandas is massaging your data to be able to visualize it in one or more visual patterns, in an attempt to get to "aha" by simply glancing at the visual representation of the information. In this article by Michael Heydt, author of the book Learning pandas we will cover common patterns in visualizing data with pandas. It is not meant to be exhaustive in coverage. The goal is to give you the required knowledge to create beautiful data visualizations on pandas data quickly and with very few lines of code. (For more resources related to this topic, see here.) This article is presented in three sections. The first introduces you to the general concepts of programming visualizations with pandas, emphasizing the process of creating time-series charts. We will also dive into techniques to label axes and create legends, colors, line styles, and markets. The second part of the article will then focus on the many types of data visualizations commonly used in pandas programs and data sciences, including: Bar plots Histograms Box and whisker charts Area plots Scatter plots Density plots Scatter plot matrixes Heatmaps The final section will briefly look at creating composite plots by dividing plots into subparts and drawing multiple plots within a single graphical canvas. Setting up the IPython notebook The first step to plot with pandas data, is to first include the appropriate libraries, primarily, matplotlib. The examples in this article will all be based on the following imports, where the plotting capabilities are from matplotlib, which will be aliased with plt: In [1]:# import pandas, numpy and datetimeimport numpy as npimport pandas as pd# needed for representing dates and timesimport datetimefrom datetime import datetime# Set some pandas options for controlling outputpd.set_option('display.notebook_repr_html', False)pd.set_option('display.max_columns', 10)pd.set_option('display.max_rows', 10)# used for seeding random number sequencesseedval = 111111# matplotlibimport matplotlib as mpl# matplotlib plotting functionsimport matplotlib.pyplot as plt# we want our plots inline%matplotlib inline The %matplotlib inline line is the statement that tells matplotlib to produce inline graphics. This will make the resulting graphs appear either inside your IPython notebook or IPython session. All examples will seed the random number generator with 111111, so that the graphs remain the same every time they run. Plotting basics with pandas The pandas library itself performs data manipulation. It does not provide data visualization capabilities itself. The visualization of data in pandas data structures is handed off by pandas to other robust visualization libraries that are part of the Python ecosystem, most commonly, matplotlib, which is what we will use in this article. All of the visualizations and techniques covered in this article can be performed without pandas. These techniques are all available independently in matplotlib. 
pandas tightly integrates with matplotlib, and by doing this, it is very simple to go directly from pandas data to a matplotlib visualization without having to work with intermediate forms of data. pandas does not draw the graphs, but it will tell matplotlib how to draw graphs using pandas data, taking care of many details on your behalf, such as automatically selecting Series for plots, labeling axes, creating legends, and defaulting color. Therefore, you often have to write very little code to create stunning visualizations. Creating time-series charts with .plot() One of the most common data visualizations created, is of the time-series data. Visualizing a time series in pandas is as simple as calling .plot() on a DataFrame or Series object. To demonstrate, the following creates a time series representing a random walk of values over time, akin to the movements in the price of a stock: In [2]:# generate a random walk time-seriesnp.random.seed(seedval)s = pd.Series(np.random.randn(1096),index=pd.date_range('2012-01-01','2014-12-31'))walk_ts = s.cumsum()# this plots the walk - just that easy :)walk_ts.plot(); The ; character at the end suppresses the generation of an IPython out tag, as well as the trace information. It is a common practice to execute the following statement to produce plots that have a richer visual style. This sets a pandas option that makes resulting plots have a shaded background and what is considered a slightly more pleasing style: In [3]:# tells pandas plots to use a default style# which has a background fillpd.options.display.mpl_style = 'default'walk_ts.plot(); The .plot() method on pandas objects is a wrapper function around the matplotlib libraries' plot() function. It makes plots of pandas data very easy to create. It is coded to know how to use the data in the pandas objects to create the appropriate plots for the data, handling many of the details of plot generation, such as selecting series, labeling, and axes generation. In this situation, the .plot() method determines that as Series contains dates for its index that the x axis should be formatted as dates and it selects a default color for the data. This example used a single series and the result would be the same using DataFrame with a single column. As an example, the following produces the same graph with one small difference. It has added a legend to the graph, which charts by default, generated from a DataFrame object, will have a legend even if there is only one series of data: In [4]:# a DataFrame with a single column will produce# the same plot as plotting the Series it is created fromwalk_df = pd.DataFrame(walk_ts)walk_df.plot(); The .plot() function is smart enough to know whether DataFrame has multiple columns, and it should create multiple lines/series in the plot and include a key for each, and also select a distinct color for each line. 
This is demonstrated with the following example: In [5]:# generate two random walks, one in each of# two columns in a DataFramenp.random.seed(seedval)df = pd.DataFrame(np.random.randn(1096, 2),index=walk_ts.index, columns=list('AB'))walk_df = df.cumsum()walk_df.head()Out [5]:A B2012-01-01 -1.878324 1.3623672012-01-02 -2.804186 1.4272612012-01-03 -3.241758 3.1653682012-01-04 -2.750550 3.3326852012-01-05 -1.620667 2.930017In [6]:# plot the DataFrame, which will plot a line# for each column, with a legendwalk_df.plot(); If you want to use one column of DataFrame as the labels on the x axis of the plot instead of the index labels, you can use the x and y parameters to the .plot() method, giving the x parameter the name of the column to use as the x axis and y parameter the names of the columns to be used as data in the plot. The following recreates the random walks as columns 'A' and 'B', creates a column 'C' with sequential values starting with 0, and uses these values as the x axis labels and the 'A' and 'B' columns values as the two plotted lines: In [7]:# copy the walkdf2 = walk_df.copy()# add a column C which is 0 .. 1096df2['C'] = pd.Series(np.arange(0, len(df2)), index=df2.index)# instead of dates on the x axis, use the 'C' column,# which will label the axis with 0..1000df2.plot(x='C', y=['A', 'B']); The .plot() functions, provided by pandas for the Series and DataFrame objects, take care of most of the details of generating plots. However, if you want to modify characteristics of the generated plots beyond their capabilities, you can directly use the matplotlib functions or one of more of the many optional parameters of the .plot() method. Adorning and styling your time-series plot The built-in .plot() method has many options that you can use to change the content in the plot. We will cover several of the common options used in most plots. Adding a title and changing axes labels The title of the chart can be set using the title parameter of the .plot() method. Axes labels are not set with .plot(), but by directly using the plt.ylabel() and plt.xlabel() functions after calling .plot(): In [8]:# create a time-series chart with a title and specific# x and y axes labels# the title is set in the .plot() method as a parameterwalk_df.plot(title='Title of the Chart')# explicitly set the x and y axes labels after the .plot()plt.xlabel('Time')plt.ylabel('Money'); The labels in this plot were added after the call to .plot(). A question that may be asked, is that if the plot is generated in the call to .plot(), then how are they changed on the plot? The answer, is that plots in matplotlib are not displayed until either .show() is called on the plot or the code reaches the end of the execution and returns to the interactive prompt. At either of these points, any plot generated by plot commands will be flushed out to the display. In this example, although .plot() is called, the plot is not generated until the IPython notebook code section finishes completion, so the changes for labels and title are added to the plot. Specifying the legend content and position To change the text used in the legend (the default is the column name from DataFrame), you can use the ax object returned from the .plot() method to modify the text using its .legend() method. 
The ax object is an AxesSubplot object, which is a representation of the elements of the plot that can be used to change various aspects of the plot before it is generated:

In [9]:
# change the legend items to be different
# from the names of the columns in the DataFrame
ax = walk_df.plot(title='Title of the Chart')
# this sets the legend labels
ax.legend(['1', '2']);

The location of the legend can be set using the loc parameter of the .legend() method. By default, pandas sets the location to 'best', which tells matplotlib to examine the data and determine the best place to put the legend. However, you can also specify any of the following to position the legend more specifically (you can use either the string or the numeric code):

Text             Code
'best'           0
'upper right'    1
'upper left'     2
'lower left'     3
'lower right'    4
'right'          5
'center left'    6
'center right'   7
'lower center'   8
'upper center'   9
'center'         10

In our last chart, the 'best' option actually had the legend overlap the line from one of the series. We can reposition the legend in the upper center of the chart, which will prevent this and create a better chart of this data:

In [10]:
# change the position of the legend
ax = walk_df.plot(title='Title of the Chart')
# put the legend in the upper center of the chart
ax.legend(['1', '2'], loc='upper center');

Legends can also be turned off with the legend parameter:

In [11]:
# omit the legend by using legend=False
walk_df.plot(title='Title of the Chart', legend=False);

There are more possibilities for locating and actually controlling the content of the legend, but we leave that for you to do some more experimentation.

Specifying line colors, styles, thickness, and markers
pandas automatically sets the colors of each series on any chart. If you would like to specify your own color, you can do so by supplying style codes to the style parameter of the plot function. pandas has a number of built-in single character codes for colors, several of which are listed here:

b: Blue
g: Green
r: Red
c: Cyan
m: Magenta
y: Yellow
k: Black
w: White

It is also possible to specify the color using a hexadecimal RGB code of the #RRGGBB format. To demonstrate both options, the following example sets the color of the first series to green using a single character code and the second series to red using the hexadecimal code:

In [12]:
# change the line colors on the plot
# use character code for the first line,
# hex RGB for the second
walk_df.plot(style=['g', '#FF0000']);

Line styles can be specified using a line style code. These can be used in combination with the color style codes, following the color code. The following are examples of several useful line style codes:

'-' = solid
'--' = dashed
':' = dotted
'-.' = dot-dashed
'.' = points

The following plot demonstrates these five line styles by drawing five data series, each with one of these styles. Notice how each style item now consists of a color symbol and a line style code:

In [13]:
# show off different line styles
t = np.arange(0., 5., 0.2)
legend_labels = ['Solid', 'Dashed', 'Dotted',
                 'Dot-dashed', 'Points']
line_style = pd.DataFrame({0 : t,
                           1 : t**1.5,
                           2 : t**2.0,
                           3 : t**2.5,
                           4 : t**3.0})
# generate the plot, specifying color and line style for each line
ax = line_style.plot(style=['r-', 'g--', 'b:', 'm-.', 'k:'])
# set the legend
ax.legend(legend_labels, loc='upper left');

The thickness of lines can be specified using the lw parameter of .plot(). This can be passed a thickness for multiple lines, by passing a list of widths, or a single width that is applied to all lines.
The following redraws the graph with a line width of 3, making the lines a little more pronounced: In [14]:# regenerate the plot, specifying color and line style# for each line and a line width of 3 for all linesax = line_style.plot(style=['r-', 'g--', 'b:', 'm-.', 'k:'], lw=3)ax.legend(legend_labels, loc='upper left'); Markers on a line can also be specified using abbreviations in the style code. There are quite a few marker types provided and you can see them all at http://matplotlib.org/api/markers_api.html. We will examine five of them in the following chart by having each series use a different marker from the following: circles, stars, triangles, diamonds, and points. The type of marker is also specified using a code at the end of the style: In [15]:# redraw, adding markers to the linesax = line_style.plot(style=['r-o', 'g--^', 'b:*','m-.D', 'k:o'], lw=3)ax.legend(legend_labels, loc='upper left'); Specifying tick mark locations and tick labels Every plot we have seen to this point, has used the default tick marks and labels on the ticks that pandas decides are appropriate for the plot. These can also be customized using various matplotlib functions. We will demonstrate how ticks are handled by first examining a simple DataFrame. We can retrieve the locations of the ticks that were generated on the x axis using the plt.xticks() method. This method returns two values, the location, and the actual labels: In [16]:# a simple plot to use to examine ticksticks_data = pd.DataFrame(np.arange(0,5))ticks_data.plot()ticks, labels = plt.xticks()ticksOut [16]:array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. ]) This array contains the locations of the ticks in units of the values along the x axis. pandas has decided that a range of 0 through 4 (the min and max) and an interval of 0.5 is appropriate. If we want to use other locations, we can provide these by passing them to plt.xticks() as a list. The following demonstrates these using even integers from -1 to 5, which will both change the extents of the axis, as well as remove non integral labels: In [17]:# resize x axis to (-1, 5), and draw ticks# only at integer valuesticks_data = pd.DataFrame(np.arange(0,5))ticks_data.plot()plt.xticks(np.arange(-1, 6)); Also, we can specify new labels at these locations by passing them as the second parameter. Just as an example, we can change the y axis ticks and labels to integral values and consecutive alpha characters using the following: In [18]:# rename y axis tick labels to A, B, C, D, and Eticks_data = pd.DataFrame(np.arange(0,5))ticks_data.plot()plt.yticks(np.arange(0, 5), list("ABCDE")); Formatting axes tick date labels using formatters The formatting of axes labels whose underlying data types is datetime is performed using locators and formatters. Locators control the position of the ticks, and the formatters control the formatting of the labels. To facilitate locating ticks and formatting labels based on dates, matplotlib provides several classes in maptplotlib.dates to help facilitate the process: MinuteLocator, HourLocator, DayLocator, WeekdayLocator, MonthLocator, and YearLocator: These are specific locators coded to determine where ticks for each type of date field will be found on the axis DateFormatter: This is a class that can be used to format date objects into labels on the axis By default, the default locator and formatter are AutoDateLocator and AutoDateFormatter, respectively. 
You can change these by providing different objects to the appropriate methods on the specific axis object. To demonstrate, we will use a subset of the random walk data from earlier, which represents just the data from January through February of 2014. Plotting this gives us the following output:

In [19]:
# plot January-February 2014 from the random walk
walk_df.loc['2014-01':'2014-02'].plot();

The labels on the x axis of this plot have two series of labels, the minor and the major. The minor labels in this plot contain the day of the month, and the major contains the year and month (the year only for the first month). We can set locators and formatters for each of the minor and major levels. This will be demonstrated by changing the minor labels to be located at the Monday of each week and to contain the date and day of the week (right now, the chart uses weekly labels and only Friday's date, without the day name). On the major labels, we will use the monthly location and always include both the month name and the year:

In [20]:
# this import style helps us type less
from matplotlib.dates import WeekdayLocator, DateFormatter, MonthLocator
# plot Jan-Feb 2014
ax = walk_df.loc['2014-01':'2014-02'].plot()
# do the minor labels
weekday_locator = WeekdayLocator(byweekday=(0), interval=1)
ax.xaxis.set_minor_locator(weekday_locator)
ax.xaxis.set_minor_formatter(DateFormatter("%d\n%a"))
# do the major labels
ax.xaxis.set_major_locator(MonthLocator())
ax.xaxis.set_major_formatter(DateFormatter('\n\n\n%b\n%Y'));

This is almost what we wanted. However, note that the year is being reported as 45. This, unfortunately, seems to be an issue between pandas and the matplotlib representation of values for the year. The best reference I have on this is the following link from Stack Overflow (http://stackoverflow.com/questions/12945971/pandas-timeseries-plot-setting-x-axis-major-and-minor-ticks-and-labels). So, it appears that to create a plot with custom date-based labels, we need to avoid the pandas .plot() and kick all the way down to using matplotlib. Fortunately, this is not too hard. The following changes the code slightly and renders what we wanted:

In [21]:
# this gets around the pandas / matplotlib year issue
# need to reference the subset twice, so let's make a variable
walk_subset = walk_df['2014-01':'2014-02']
# this gets the plot so we can use it, we can ignore fig
fig, ax = plt.subplots()
# inform matplotlib that we will use the following as dates
# note we need to convert the index to a pydatetime series
ax.plot_date(walk_subset.index.to_pydatetime(), walk_subset, '-')
# do the minor labels
weekday_locator = WeekdayLocator(byweekday=(0), interval=1)
ax.xaxis.set_minor_locator(weekday_locator)
ax.xaxis.set_minor_formatter(DateFormatter('%d\n%a'))
# do the major labels
ax.xaxis.set_major_locator(MonthLocator())
ax.xaxis.set_major_formatter(DateFormatter('\n\n\n%b\n%Y'));

To add grid lines for the minor axes ticks, you can use the .grid() method of the x axis object of the plot, the first parameter specifying the lines to use and the second parameter specifying the minor or major set of ticks.
The following replots this graph without the major grid lines and with the minor grid lines:

In [22]:
# this gets the plot so we can use it, we can ignore fig
fig, ax = plt.subplots()
# inform matplotlib that we will use the following as dates
# note we need to convert the index to a pydatetime series
ax.plot_date(walk_subset.index.to_pydatetime(), walk_subset, '-')
# do the minor labels
weekday_locator = WeekdayLocator(byweekday=(0), interval=1)
ax.xaxis.set_minor_locator(weekday_locator)
ax.xaxis.set_minor_formatter(DateFormatter('%d\n%a'))
ax.xaxis.grid(True, "minor") # turn on minor tick grid lines
ax.xaxis.grid(False, "major") # turn off major tick grid lines
# do the major labels
ax.xaxis.set_major_locator(MonthLocator())
ax.xaxis.set_major_formatter(DateFormatter('\n\n\n%b\n%Y'));

The last demonstration of formatting will use only the major labels but on a weekly basis and using a YYYY-MM-DD format. However, because these would overlap, we will specify that they should be rotated to prevent the overlap. This is done using the fig.autofmt_xdate() function:

In [23]:
# this gets the plot so we can use it, we can ignore fig
fig, ax = plt.subplots()
# inform matplotlib that we will use the following as dates
# note we need to convert the index to a pydatetime series
ax.plot_date(walk_subset.index.to_pydatetime(), walk_subset, '-')
ax.xaxis.grid(True, "major") # turn on major tick grid lines
# do the major labels
ax.xaxis.set_major_locator(weekday_locator)
ax.xaxis.set_major_formatter(DateFormatter('%Y-%m-%d'));
# informs to rotate date labels
fig.autofmt_xdate();

Common plots used in statistical analyses
Having seen how to create, lay out, and annotate time-series charts, we will now look at creating a number of charts, other than time series, that are commonplace in presenting statistical information.

Bar plots
Bar plots are useful for visualizing the relative differences in values of non time-series data. Bar plots can be created using the kind='bar' parameter of the .plot() method:

In [24]:
# make a bar plot
# create a small series of 10 random values centered at 0.0
np.random.seed(seedval)
s = pd.Series(np.random.rand(10) - 0.5)
# plot the bar chart
s.plot(kind='bar');

If the data being plotted consists of multiple columns, a multiple series bar plot will be created:

In [25]:
# draw a multiple series bar chart
# generate 4 columns of 10 random values
np.random.seed(seedval)
df2 = pd.DataFrame(np.random.rand(10, 4),
                   columns=['a', 'b', 'c', 'd'])
# draw the multi-series bar chart
df2.plot(kind='bar');

If you would prefer stacked bars, you can use the stacked parameter, setting it to True:

In [26]:
# stacked bar chart
df2.plot(kind='bar', stacked=True);

If you want the bars to be horizontally aligned, you can use kind='barh':

In [27]:
# horizontal stacked bar chart
df2.plot(kind='barh', stacked=True);

Histograms
Histograms are useful for visualizing distributions of data. The following shows you a histogram of 1000 values generated from the normal distribution:

In [28]:
# create a histogram
np.random.seed(seedval)
# 1000 random numbers
dfh = pd.DataFrame(np.random.randn(1000))
# draw the histogram
dfh.hist();

The resolution of a histogram can be controlled by specifying the number of bins to allocate to the graph. The default is 10, and increasing the number of bins gives finer detail to the histogram.
The following increases the number of bins to 100:

In [29]:
# histogram again, but with more bins
dfh.hist(bins = 100);

If the data has multiple series, the histogram function will automatically generate multiple histograms, one for each series:

In [30]:
# generate a multiple histogram plot
# create DataFrame with 4 columns of 1000 random values
np.random.seed(seedval)
dfh = pd.DataFrame(np.random.randn(1000, 4),
                   columns=['a', 'b', 'c', 'd'])
# draw the chart. There are four columns so pandas draws
# four histograms
dfh.hist();

If you want to overlay multiple histograms on the same graph (to give a quick visual comparison of the distributions), you can call the pyplot.hist() function multiple times before .show() is called to render the chart:

In [31]:
# directly use pyplot to overlay multiple histograms
# generate two distributions, each with a different
# mean and standard deviation
np.random.seed(seedval)
x = [np.random.normal(3,1) for _ in range(400)]
y = [np.random.normal(4,2) for _ in range(400)]
# specify the bins (-10 to 10 with 100 bins)
bins = np.linspace(-10, 10, 100)
# generate plot x using plt.hist, 50% transparent
plt.hist(x, bins, alpha=0.5, label='x')
# generate plot y using plt.hist, 50% transparent
plt.hist(y, bins, alpha=0.5, label='y')
plt.legend(loc='upper right');

Box and whisker charts
Box plots come from descriptive statistics and are a useful way of graphically depicting the distributions of categorical data using quartiles. Each box represents the values between the first and third quartiles of the data with a line across the box at the median. Each whisker reaches out to demonstrate the extent of values 1.5 interquartile ranges below and above the first and third quartiles:

In [32]:
# create a box plot
# generate the series
np.random.seed(seedval)
dfb = pd.DataFrame(np.random.randn(10,5))
# generate the plot
dfb.boxplot(return_type='axes');

There are ways to overlay dots and show outliers, but for brevity, they will not be covered in this text.

Area plots
Area plots are used to represent cumulative totals over time, to demonstrate the change in trends over time among related attributes. They can also be "stacked" to demonstrate representative totals across all variables. Area plots are generated by specifying kind='area'. A stacked area chart is the default:

In [33]:
# create a stacked area plot
# generate a 4-column data frame of random data
np.random.seed(seedval)
dfa = pd.DataFrame(np.random.rand(10, 4),
                   columns=['a', 'b', 'c', 'd'])
# create the area plot
dfa.plot(kind='area');

To produce an unstacked plot, specify stacked=False:

In [34]:
# do not stack the area plot
dfa.plot(kind='area', stacked=False);

By default, unstacked plots have an alpha value of 0.5, so that it is possible to see how the data series overlap.

Scatter plots
A scatter plot displays the correlation between a pair of variables. A scatter plot can be created from a DataFrame using .plot() and specifying kind='scatter', as well as specifying the x and y columns from the DataFrame source:

In [35]:
# generate a scatter plot of two series of normally
# distributed random values
# we would expect this to cluster around 0,0
np.random.seed(111111)
sp_df = pd.DataFrame(np.random.randn(10000, 2),
                     columns=['a', 'b'])
sp_df.plot(kind='scatter', x='a', y='b')

We can easily create more elaborate scatter plots by dropping down a little lower into matplotlib.
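Before dropping down to matplotlib, it is worth noting that a few matplotlib keyword arguments can already be passed straight through the pandas .plot() call. The following is a minimal sketch reusing the sp_df frame from the previous example; the s (marker size) and alpha values here are arbitrary choices for illustration and are not part of the original example:

# smaller, semi-transparent markers make the dense cluster easier to read
# (s and alpha are passed through to matplotlib's scatter function)
sp_df.plot(kind='scatter', x='a', y='b', s=5, alpha=0.3);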
The following code gets Google stock data for the year of 2011, calculates the daily delta in the closing price, and renders close versus volume as bubbles of different sizes, derived from the size of the values in the data:

In [36]:
# get Google stock data from 1/1/2011 to 12/31/2011
from pandas.io.data import DataReader
stock_data = DataReader("GOOGL", "yahoo",
                        datetime(2011, 1, 1),
                        datetime(2011, 12, 31))
# % change per day
delta = np.diff(stock_data["Adj Close"])/stock_data["Adj Close"][:-1]
# this calculates size of markers
volume = (15 * stock_data.Volume[:-2] / stock_data.Volume[0])**2
close = 0.003 * stock_data.Close[:-2] / 0.003 * stock_data.Open[:-2]
# generate scatter plot
fig, ax = plt.subplots()
ax.scatter(delta[:-1], delta[1:], c=close, s=volume, alpha=0.5)
# add some labels and style
ax.set_xlabel(r'$\Delta_i$', fontsize=20)
ax.set_ylabel(r'$\Delta_{i+1}$', fontsize=20)
ax.set_title('Volume and percent change')
ax.grid(True);

Note the nomenclature for the x and y axes labels, which creates a nice mathematical style for the labels.

Density plot
You can create kernel density estimation plots using the .plot() method and setting the kind='kde' parameter. A kernel density estimate plot, instead of being a pure empirical representation of the data, makes an attempt to estimate the true distribution of the data, and hence smoothes it into a continuous plot. The following generates a normally distributed set of numbers, displays it as a histogram, and overlays the kde plot:

In [37]:
# create a kde density plot
# generate a series of 1000 random numbers
np.random.seed(seedval)
s = pd.Series(np.random.randn(1000))
# generate the plot
s.hist(normed=True) # shows the bars
s.plot(kind='kde');

The scatter plot matrix
The final composite graph we'll look at in this article is one that is provided by pandas in its plotting tools subcomponent: the scatter plot matrix. A scatter plot matrix is a popular way of determining whether there is a linear correlation between multiple variables. The following creates a scatter plot matrix with random values, which then shows a scatter plot for each combination, as well as a kde graph for each variable:

In [38]:
# create a scatter plot matrix
# import this class
from pandas.tools.plotting import scatter_matrix
# generate DataFrame with 4 columns of 1000 random numbers
np.random.seed(111111)
df_spm = pd.DataFrame(np.random.randn(1000, 4),
                      columns=['a', 'b', 'c', 'd'])
# create the scatter matrix
scatter_matrix(df_spm, alpha=0.2, figsize=(6, 6), diagonal='kde');

Heatmaps
A heatmap is a graphical representation of data, where values within a matrix are represented by colors. This is an effective means to show relationships of values that are measured at the intersection of two variables, at each intersection of the rows and the columns of the matrix. A common scenario is to have the values in the matrix normalized to 0.0 through 1.0 and have the intersections between a row and column represent the correlation between the two variables. Values with less correlation (0.0) are the darkest, and those with the highest correlation (1.0) are white.
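The heatmap example that follows uses a small hand-built matrix so that the values are easy to verify by eye. In practice, such a matrix often comes from a correlation calculation; the following is a rough sketch of how one might be produced with .corr() (the column names and the derived column are invented for illustration):

np.random.seed(111111)
# four columns of random data; 'd' is deliberately related to 'a'
corr_df = pd.DataFrame(np.random.randn(1000, 3), columns=['a', 'b', 'c'])
corr_df['d'] = corr_df['a'] * 0.5 + np.random.randn(1000)
# .corr() returns a square DataFrame of pairwise correlations,
# which could be fed to the same plotting code shown below
corr_df.corr()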
Heatmaps are easily created with pandas and matplotlib using the .imshow() function: In [39]:# create a heatmap# start with data for the heatmaps = pd.Series([0.0, 0.1, 0.2, 0.3, 0.4],['V', 'W', 'X', 'Y', 'Z'])heatmap_data = pd.DataFrame({'A' : s + 0.0,'B' : s + 0.1,'C' : s + 0.2,'D' : s + 0.3,'E' : s + 0.4,'F' : s + 0.5,'G' : s + 0.6})heatmap_dataOut [39]:A B C D E F GV 0.0 0.1 0.2 0.3 0.4 0.5 0.6W 0.1 0.2 0.3 0.4 0.5 0.6 0.7X 0.2 0.3 0.4 0.5 0.6 0.7 0.8Y 0.3 0.4 0.5 0.6 0.7 0.8 0.9Z 0.4 0.5 0.6 0.7 0.8 0.9 1.0In [40]:# generate the heatmapplt.imshow(heatmap_data, cmap='hot', interpolation='none')plt.colorbar() # add the scale of colors bar# set the labelsplt.xticks(range(len(heatmap_data.columns)), heatmap_data.columns)plt.yticks(range(len(heatmap_data)), heatmap_data.index); Multiple plots in a single chart It is often useful to contrast data by displaying multiple plots next to each other. This is actually quite easy to when using matplotlib. To draw multiple subplots on a grid, we can make multiple calls to plt.subplot2grid(), each time passing the size of the grid the subplot is to be located on (shape=(height, width)) and the location on the grid of the upper-left section of the subplot (loc=(row, column)). Each call to plt.subplot2grid() returns a different AxesSubplot object that can be used to reference the specific subplot and direct the rendering into. The following demonstrates this, by creating a plot with two subplots based on a two row by one column grid (shape=(2,1)). The first subplot, referred to by ax1, is located in the first row (loc=(0,0)), and the second, referred to as ax2, is in the second row (loc=(1,0)): In [41]:# create two sub plots on the new plot using a 2x1 grid# ax1 is the upper rowax1 = plt.subplot2grid(shape=(2,1), loc=(0,0))# and ax2 is in the lower rowax2 = plt.subplot2grid(shape=(2,1), loc=(1,0)) The subplots have been created, but we have not drawn into either yet. The size of any subplot can be specified using the rowspan and colspan parameters in each call to plt.subplot2grid(). This actually feels a lot like placing content in HTML tables. The following demonstrates a more complicated layout of five plots, specifying different row and column spans for each: In [42]:# layout sub plots on a 4x4 grid# ax1 on top row, 4 columns wideax1 = plt.subplot2grid((4,4), (0,0), colspan=4)# ax2 is row 2, leftmost and 2 columns wideax2 = plt.subplot2grid((4,4), (1,0), colspan=2)# ax3 is 2 cols wide and 2 rows high, starting# on second row and the third columnax3 = plt.subplot2grid((4,4), (1,2), colspan=2, rowspan=2)# ax4 1 high 1 wide, in row 4 column 0ax4 = plt.subplot2grid((4,4), (2,0))# ax4 1 high 1 wide, in row 4 column 1ax5 = plt.subplot2grid((4,4), (2,1)); To draw into a specific subplot using the pandas .plot() method, you can pass the specific axes into the plot function via the ax parameter. The following demonstrates this by extracting each series from the random walk we created at the beginning of this article, and drawing each into different subplots: In [43]:# demonstrating drawing into specific sub-plots# generate a layout of 2 rows 1 column# create the subplots, one on each rowax5 = plt.subplot2grid((2,1), (0,0))ax6 = plt.subplot2grid((2,1), (1,0))# plot column 0 of walk_df into top row of the gridwalk_df[[0]].plot(ax = ax5)# and column 1 of walk_df into bottom rowwalk_df[[1]].plot(ax = ax6); Using this technique, we can perform combinations of different series of data, such as a stock close versus volume graph. 
Given the data we read during a previous example for Google, the following will plot the volume versus the closing price: In [44]:# draw the close on the top charttop = plt.subplot2grid((4,4), (0, 0), rowspan=3, colspan=4)top.plot(stock_data.index, stock_data['Close'], label='Close')plt.title('Google Opening Stock Price 2001')# draw the volume chart on the bottombottom = plt.subplot2grid((4,4), (3,0), rowspan=1, colspan=4)bottom.bar(stock_data.index, stock_data['Volume'])plt.title('Google Trading Volume')# set the size of the plotplt.gcf().set_size_inches(15,8) Summary Visualizing your data is one of the best ways to quickly understand the story that is being told with the data. Python, pandas, and matplotlib (and a few other libraries) provide a means of very quickly, and with a few lines of code, getting the gist of what you are trying to discover, as well as the underlying message (and displaying it beautifully too). In this article, we examined many of the most common means of visualizing data from pandas. There are also a lot of interesting visualizations that were not covered, and indeed, the concept of data visualization with pandas and/or Python is the subject of entire texts, but I believe this article provides a much-needed reference to get up and going with the visualizations that provide most of what is needed. Resources for Article: Further resources on this subject: Prototyping Arduino Projects using Python [Article] Classifying with Real-world Examples [Article] Python functions – Avoid repeating code [Article]

Our First API in Go

Packt
14 Apr 2015
15 min read
This article is penned by Nathan Kozyra, the author of the book, Mastering Go Web Services. This quickly introduces—or reintroduces—some core concepts related to Go setup and usage as well as the http package. (For more resources related to this topic, see here.) If you spend any time developing applications on the Web (or off it, for that matter), it won't be long before you find yourself facing the prospect of interacting with a web service or an API. Whether it's a library that you need or another application's sandbox with which you have to interact, the world of development relies in no small part on the cooperation among dissonant applications, languages, and formats. That, after all, is why we have APIs to begin with—to allow standardized communication between any two given platforms. If you spend a long amount of time working on the Web, you'll encounter bad APIs. By bad we mean APIs that are not all-inclusive, do not adhere to best practices and standards, are confusing semantically, or lack consistency. You'll encounter APIs that haphazardly use OAuth or simple HTTP authentication in some places and the opposite in others, or more commonly, APIs that ignore the stated purposes of HTTP verbs. Google's Go language is particularly well suited to servers. With its built-in HTTP serving, a simple method for XML and JSON encoding of data, high availability, and concurrency, it is the ideal platform for your API. We will cover the following topics in this article: Understanding requirements and dependencies Introducing the HTTP package Understanding requirements and dependencies Before we get too deep into the weeds in this article, it would be a good idea for us to examine the things that you will need to have installed. Installing Go It should go without saying that we will need to have the Go language installed. However, there are a few associated items that you will also need to install in order to do everything we do in this book. Go is available for Mac OS X, Windows, and most common Linux variants. You can download the binaries at http://golang.org/doc/install. On Linux, you can generally grab Go through your distribution's package manager. For example, you can grab it on Ubuntu with a simple apt-get install golang command. Something similar exists for most distributions. In addition to the core language, we'll also work a bit with the Google App Engine, and the best way to test with the App Engine is to install the Software Development Kit (SDK). This will allow us to test our applications locally prior to deploying them and simulate a lot of the functionality that is provided only on the App Engine. The App Engine SDK can be downloaded from https://developers.google.com/appengine/downloads. While we're obviously most interested in the Go SDK, you should also grab the Python SDK as there are some minor dependencies that may not be available solely in the Go SDK. Installing and using MySQL We'll be using quite a few different databases and datastores to manage our test and real data, and MySQL will be one of the primary ones. We will use MySQL as a storage system for our users; their messages and their relationships will be stored in our larger application (we will discuss more about this in a bit). MySQL can be downloaded from http://dev.mysql.com/downloads/. 
You can also grab it easily from a package manager on Linux/OS X as follows: Ubuntu: sudo apt-get install mysql-server mysql-client OS X with Homebrew: brew install mysql Redis Redis is the first of the two NoSQL datastores that we'll be using for a couple of different demonstrations, including caching data from our databases as well as the API output. If you're unfamiliar with NoSQL, we'll do some pretty simple introductions to results gathering using both Redis and Couchbase in our examples. If you know MySQL, Redis will at least feel similar, and you won't need the full knowledge base to be able to use the application in the fashion in which we'll use it for our purposes. Redis can be downloaded from http://redis.io/download. Redis can be downloaded on Linux/OS X using the following: Ubuntu: sudo apt-get install redis-server OS X with Homebrew: brew install redis Couchbase As mentioned earlier, Couchbase will be our second NoSQL solution that we'll use in various products, primarily to set short-lived or ephemeral key store lookups to avoid bottlenecks and as an experiment with in-memory caching. Unlike Redis, Couchbase uses simple REST commands to set and receive data, and everything exists in the JSON format. Couchbase can be downloaded from http://www.couchbase.com/download. For Ubuntu (deb), use the following command to download Couchbase: dpkg -i couchbase-server version.deb For OS X with Homebrew use the following command to download Couchbase: brew install https://github.com/couchbase/homebrew/raw/    stable/Library/Formula/libcouchbase.rb Nginx Although Go comes with everything you need to run a highly concurrent, performant web server, we're going to experiment with wrapping a reverse proxy around our results. We'll do this primarily as a response to the real-world issues regarding availability and speed. Nginx is not available natively for Windows. For Ubuntu, use the following command to download Nginx: apt-get install nginx For OS X with Homebrew, use the following command to download Nginx: brew install nginx Apache JMeter We'll utilize JMeter for benchmarking and tuning our API for performance. You have a bit of a choice here, as there are several stress-testing applications for simulating traffic. The two we'll touch on are JMeter and Apache's built-in Apache Benchmark (AB) platform. The latter is a stalwart in benchmarking but is a bit limited in what you can throw at your API, so JMeter is preferred. One of the things that we'll need to consider when building an API is its ability to stand up to heavy traffic (and introduce some mitigating actions when it cannot), so we'll need to know what our limits are. Apache JMeter can be downloaded from http://jmeter.apache.org/download_jmeter.cgi. Using predefined datasets While it's not entirely necessary to have our dummy dataset, you can save a lot of time as we build our social network by bringing it in because it is full of users, posts, and images. By using this dataset, you can skip creating this data to test certain aspects of the API and API creation. Our dummy dataset can be downloaded at https://github.com/nkozyra/masteringwebservices. Choosing an IDE A choice of Integrated Development Environment (IDE) is one of the most personal choices a developer can make, and it's rare to find a developer who is not steadfastly passionate about their favorite. Nothing in this article will require one IDE over another; indeed, most of Go's strength in terms of compiling, formatting, and testing lies at the command-line level. 
That said, we'd like to at least explore some of the more popular choices for editors and IDEs that exist for Go. Eclipse As one of the most popular and expansive IDEs available for any language, Eclipse is an obvious first mention. Most languages get their support in the form of an Eclipse plugin and Go is no exception. There are some downsides to this monolithic piece of software; it is occasionally buggy on some languages, notoriously slow for some autocompletion functions, and is a bit heavier than most of the other available options. However, the pluses are myriad. Eclipse is very mature and has a gigantic community from which you can seek support when issues arise. Also, it's free to use. Eclipse can be downloaded from http://eclipse.org/ Get the Goclipse plugin at http://goclipse.github.io/ Sublime Text Sublime Text is our particular favorite, but it comes with a large caveat—it is the only one listed here that is not free. This one feels more like a complete code/text editor than a heavy IDE, but it includes code completion options and the ability to integrate the Go compilers (or other languages' compilers) directly into the interface. Although Sublime Text's license costs $70, many developers find its elegance and speed to be well worth it. You can try out the software indefinitely to see if it's right for you; it operates as nagware unless and until you purchase a license. Sublime Text can be downloaded from http://www.sublimetext.com/2. LiteIDE LiteIDE is a much younger IDE than the others mentioned here, but it is noteworthy because it has a focus on the Go language. It's cross-platform and does a lot of Go's command-line magic in the background, making it truly integrated. LiteIDE also handles code autocompletion, go fmt, build, run, and test directly in the IDE and a robust package browser. It's free and totally worth a shot if you want something lean and targeted directly for the Go language. LiteIDE can be downloaded from https://code.google.com/p/golangide/. IntelliJ IDEA Right up there with Eclipse is the JetBrains family of IDE, which has spanned approximately the same number of languages as Eclipse. Ultimately, both are primarily built with Java in mind, which means that sometimes other language support can feel secondary. The Go integration here, however, seems fairly robust and complete, so it's worth a shot if you have a license. If you do not have a license, you can try the Community Edition, which is free. You can download IntelliJ IDEA at http://www.jetbrains.com/idea/download/ The Go language support plugin is available at http://plugins.jetbrains.com/plugin/?idea&id=5047 Some client-side tools Although the vast majority of what we'll be covering will focus on Go and API services, we will be doing some visualization of client-side interactions with our API. In doing so, we'll primarily focus on straight HTML and JavaScript, but for our more interactive points, we'll also rope in jQuery and AngularJS. Most of what we do for client-side demonstrations will be available at this book's GitHub repository at https://github.com/nkozyra/goweb under client. Both jQuery and AngularJS can be loaded dynamically from Google's CDN, which will prevent you from having to download and store them locally. The examples hosted on GitHub call these dynamically. 
To load AngularJS dynamically, use the following code: <script src="//ajax.googleapis.com/ajax/libs/ angularjs/1.2.18/angular.min.js"></script> To load jQuery dynamically, use the following code: <script src="//ajax.googleapis.com/ajax/ libs/jquery/1.11.1/jquery.min.js"></script> Looking at our application Well in the book, we'll be building myriad small applications to demonstrate points, functions, libraries, and other techniques. However, we'll also focus on a larger project that mimics a social network wherein we create and return to users, statuses, and so on, via the API. For that you'll need to have a copy of it. Setting up our database As mentioned earlier, we'll be designing a social network that operates almost entirely at the API level (at least at first) as our master project in the book. Time and space wouldn't allow us to cover this here in the article. When we think of the major social networks (from the past and in the present), there are a few omnipresent concepts endemic among them, which are as follows: The ability to create a user and maintain a user profile The ability to share messages or statuses and have conversations based on them The ability to express pleasure or displeasure on the said statuses/messages to dictate the worthiness of any given message There are a few other features that we'll be building here, but let's start with the basics. Let's create our database in MySQL as follows: create database social_network; This will be the basis of our social network product in the book. For now, we'll just need a users table to store our individual users and their most basic information. We'll amend this to include more features as we go along: CREATE TABLE users ( user_id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, user_nickname VARCHAR(32) NOT NULL, user_first VARCHAR(32) NOT NULL, user_last VARCHAR(32) NOT NULL, user_email VARCHAR(128) NOT NULL, PRIMARY KEY (user_id), UNIQUE INDEX user_nickname (user_nickname) ) We won't need to do too much in this article, so this should suffice. We'll have a user's most basic information—name, nickname, and e-mail, and not much else. Introducing the HTTP package The vast majority of our API work will be handled through REST, so you should become pretty familiar with Go's http package. In addition to serving via HTTP, the http package comprises of a number of other very useful utilities that we'll look at in detail. These include cookie jars, setting up clients, reverse proxies, and more. The primary entity about which we're interested right now, though, is the http.Server struct, which provides the very basis of all of our server's actions and parameters. Within the server, we can set our TCP address, HTTP multiplexing for routing specific requests, timeouts, and header information. Go also provides some shortcuts for invoking a server without directly initializing the struct. For example, if you have a lot of default properties, you could use the following code: Server := Server { Addr: ":8080", Handler: urlHandler, ReadTimeout: 1000 * time.MicroSecond, WriteTimeout: 1000 * time.MicroSecond, MaxHeaderBytes: 0, TLSConfig: nil } You can simply execute using the following code: http.ListenAndServe(":8080", nil) This will invoke a server struct for you and set only the Addr and Handler  properties within. There will be times, of course, when we'll want more granular control over our server, but for the time being, this will do just fine. Let's take this concept and output some JSON data via HTTP for the first time. 
Quick hitter – saying Hello, World via API As mentioned earlier in this article, we'll go off course and do some work that we'll preface with quick hitter to denote that it's unrelated to our larger project. In this case, we just want to rev up our http package and deliver some JSON to the browser. Unsurprisingly, we'll be merely outputting the uninspiring Hello, world message to, well, the world. Let's set this up with our required package and imports: package main   import ( "net/http" "encoding/json" "fmt" ) This is the bare minimum that we need to output a simple string in JSON via HTTP. Marshalling JSON data can be a bit more complex than what we'll look at here, so if the struct for our message doesn't immediately make sense, don't worry. This is our response struct, which contains all of the data that we wish to send to the client after grabbing it from our API: type API struct { Message string "json:message" } There is not a lot here yet, obviously. All we're setting is a single message string in the obviously-named Message variable. Finally, we need to set up our main function (as follows) to respond to a route and deliver a marshaled JSON response: func main() {   http.HandleFunc("/api", func(w http.ResponseWriter, r    *http.Request) {      message := API{"Hello, world!"}      output, err := json.Marshal(message)      if err != nil {      fmt.Println("Something went wrong!")    }      fmt.Fprintf(w, string(output))   })   http.ListenAndServe(":8080", nil) } Upon entering main(), we set a route handling function to respond to requests at /api that initializes an API struct with Hello, world! We then marshal this to a JSON byte array, output, and after sending this message to our iowriter class (in this case, an http.ResponseWriter value), we cast that to a string. The last step is a kind of quick-and-dirty approach for sending our byte array through a function that expects a string, but there's not much that could go wrong in doing so. Go handles typecasting pretty simply by applying the type as a function that flanks the target variable. In other words, we can cast an int64 value to an integer by simply surrounding it with the int(OurInt64) function. There are some exceptions to this—types that cannot be directly cast and some other pitfalls, but that's the general idea. Among the possible exceptions, some types cannot be directly cast to others and some require a package like strconv to manage typecasting. If we head over to our browser and call localhost:8080/api (as shown in the following screenshot), you should get exactly what we expect, assuming everything went correctly: Summary We've touched on the very basics of developing a simple web service interface in Go. Admittedly, this particular version is extremely limited and vulnerable to attack, but it shows the basic mechanisms that we can employ to produce usable, formalized output that can be ingested by other services. At this point, you should have the basic tools at your disposal that are necessary to start refining this process and our application as a whole. Resources for Article: Further resources on this subject: Adding Authentication [article] C10K – A Non-blocking Web Server in Go [article] Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]

Understanding Self-tuning Thresholds

Packt
14 Apr 2015
5 min read
In this article by Chiyo Odika, coauthor of the book Microsoft System Center 2012 R2 Operations Manager Cookbook, we will look at self-tuning thresholds. A self-tuning threshold monitor is a Windows Performance Counter monitor type that was introduced in System Center Operations Manager 2007. Unlike monitors that use a fixed threshold (static monitors), self-tuning threshold (STT) monitors learn what is acceptable for a performance counter and, over time, update the threshold for the performance counter. (For more resources related to this topic, see here.) In contrast to STTs, static thresholds are simple monitor types and are based on predefined values and counters that are monitored for conformity within the predefined values. For instance, a static threshold could be used to monitor for specific thresholds, such as Available Megabytes of Memory. Static thresholds are very useful for various monitoring scenarios but have some drawbacks. Primarily, there's some acceptable variation in performance of servers even when they fulfil the same role, and as such, a performance value that may be appropriate for one server may not apply to another. STTs were therefore created as an option for monitoring in such instances. Baselines in SCOM 2012 R2 are used to collect the usual values for a performance counter, which then allows SCOM to adjust alert thresholds accordingly. STTs are very useful for collecting performance counter baselines on the basis of what they have learned over time. Getting ready To understand how STTs work, we will take a look at the basic components of an STT. To do so, we will create a self-tuning monitor using the wizard. The process for configuring an STT involves configuring the logic for the STT to learn. The configuration can be performed in the wizard for creating the performance counter monitor. To create a performance counter monitor in System Center Operations Manager, you will need to log on to a computer that has an Operations console, using an account that is a member of the Operations Manager Administrators user role, or Operations Manager Authors user role for your Operations Manager 2012 R2 management group. Create a management pack for your custom monitor if you don't already have one. How to do it... For illustration purposes, we will create a 2-state self-tuning threshold monitor. Creating a self-tuning threshold monitor To create a self-tuning threshold monitor, carry out the following steps: Log in to a computer with an account that is a member of the Operations Manager Administrators user role or Operations Manager Authors user role for the Operations Manager 2012 R2 management group. In the Operations console, click on the Authoring button. In the Authoring pane, expand Authoring, expand Management Pack Objects, click on Monitors, right-click on Monitors, select Create a Monitor, select Unit Monitor, and then expand Windows Performance Counters. Select 2-state Baselining, select a Destination Management Pack, and then click on Next. Name the monitor, select Windows Server as your monitor target, and then select the Performance parent monitor from the drop-down option. In the Object field, enter processor, enter % Processor Time in the Counter field, enter _Total in the Instance field, and set the Interval to 1 minute. Click on Next to Configure business cycle, which is the unit of time you would like to monitor. The default is 1 week, which is fine in general, but for the purpose of illustration, select 1 Day(s). Under Alerting, leave the default value of 1 business cycle(s) of analysis.
Move the Sensitivity slider to the left to select a low sensitivity value and then click on Next. Leave the default values on the Configure Health screen and click on Next. On the Configure Alerts screen, check the box to generate alerts for the monitor and click on Create. How it works... A self-tuning threshold consists of two rules and a monitor. The performance collection rule collects performance counter data, and the signature collection rule establishes a signature. The monitor compares the value of the performance counter data with the signature. The signature is a numeric data provider that learns the characteristics of a business cycle. SCOM then uses the signature to set and adjust the thresholds for alerts by evaluating performance counter results against the business cycle pattern. In this article, we effectively created a 2-state baselining self-tuning threshold monitor, as you can see in the following screenshot: You will find that this also created some rules such as performance collection and signature collection rules to collect performance and signature data, respectively. Data collection will occur at the frequency specified at the time the monitor was created, as you can see in the following screenshot: You will also notice that the collection frequency values can be changed, along with the sensitivity values for the monitor, as you can see in the following screenshot: There's more... Monitors that use self-tuning thresholds are based on Windows performance counters and the business cycle setting. The business cycle establishes a time period of activity that SCOM will use to create a signature. The business cycle can be configured in either days or weeks, and the default is 1 week. For example, the STT monitor for the % Processor Time counter that we created learns that processor activity for some database servers spikes between noon and 2 pm on Wednesdays. The threshold is adjusted to take that pattern into account. As a result, an alert would not be generated for a spike in processor activity at 12:30 pm on Wednesday. However, if similar processor activity spikes at the same time on Thursday, the monitor will generate an alert. See also For detailed information on activities listed in this article, refer to the Microsoft TechNet article Understanding Self-Tuning Threshold Monitors in the following link: http://TechNet.microsoft.com/en-us/library/dd789011.aspx. Summary We looked at the basic components of a self-tuning threshold to understand how STTs work. For that we created a self-tuning monitor using the wizard. Resources for Article: Further resources on this subject: Upgrading from Previous Versions [article] Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [article] Unpacking System Center 2012 Orchestrator [article]

Managing Images

Packt
14 Apr 2015
11 min read
Cats, dogs, and all sorts of memes: the Internet as we know it today is dominated by images. You can open almost any web page and you'll surely find images on the page. The more interactive our web browsing experience becomes, the more images we tend to use. So, it is tremendously important to ensure that the images we use are optimized and loaded as fast as possible. We should also make sure that we choose the correct image type. In this article by Dewald Els, author of the book Responsive Design High Performance, we will talk about why image formats are important, conditional loading, visibility for DOM elements, specifying sizes, media queries, introducing sprite sheets, and caching. Let's talk basics. (For more resources related to this topic, see here.)

Choosing the correct image format
Deciding what image format to use is usually the first step you take when you start your website. Take a look at this table for an overview and comparison of the available image formats:

Format     Features
GIF        256 colors, support for animation, transparency
PNG        256 colors, true colors, transparency
JPEG/JPG   256 colors, true colors

From the preceding formats, you can conclude that, if you had a complex image that was 1000 x 1000 pixels, the image in the JPEG format would be the smallest in file size. This also means that it would load the fastest. The smallest image is not always the best choice though. If you need to have images with transparent parts, you'll have to use the PNG or GIF formats, and if you need an animation, you are stuck with using the GIF format or the lesser-known APNG format.

Optimizing images
Optimizing your images can have a huge impact on your overall website performance. There are some great applications to help you with image optimization and compression. TinyPNG is a great example of a site that helps you to compress your PNG images online for free. They also have a Photoshop plugin that is available for download at https://tinypng.com/. Another great application to help you with JPG compression is JPEGMini. Head over to http://www.jpegmini.com/ to get a copy for either Windows or Mac OS X. Another application that is worth considering is Radical Image Optimization Tool (RIOT). It is a free program and can be found at http://luci.criosweb.ro/riot/. RIOT is a Windows application. Seeing as JPEG is not the only image format that we use on the Web, you can also look at a Mac OS X application called ImageOptim (http://www.imageoptim.com). It is also a free application and compresses both JPEG and PNG images. If you are not on Mac OS X, you can head over to https://tinypng.com/. This handy little site allows you to upload your image to the site, where it is then compressed. The optimized images are then linked to the site as downloadable files. As JPEG images make up the majority of images on most web pages, with some exceptions, let's take a look at how to make your images load faster.

Progressive images
Most advanced image editors, such as Photoshop and GIMP, give you the option to encode your JPEG images using either baseline or progressive encoding. If you use Save For Web in Photoshop, you will see the encoding option at the top of the dialog box. In most cases, for use on web pages, I would advise you to use the Progressive encoding type. When you save an image using baseline, the full image data of every pixel block is written to the file one after the other. Baseline images load gradually from the top-left corner.
If you save an image using the Progressive option, then it saves only a part of each of these blocks to the file and then another part and so on, until the entire image's information is captured in the file. When you render a progressive image, you will see a very grainy image display and this will gradually become sharper as it loads. Progressive images are also smaller than baseline images for various technical reasons. This means that they load faster. In addition, they appear to load faster when something is displayed on the screen. Here is a typical example of the visual difference between loading a progressive and a baseline JPEG image: Here, you can clearly see how the two encodings load in a browser. On the left, the progressive image is already displayed whereas the baseline image is still loading from the top. Alright, that was some really basic stuff, but it was extremely important nonetheless. Let's move on to conditional loading. Adaptive images Adaptive images are an adaptation of Filament Group's context-aware image sizing experiment. What does it do? Well, this is what the guys say about themselves: "Adaptive images detects your visitor's screen size and automatically creates, caches, and delivers device appropriate re-scaled versions of your web page's embedded HTML images. No mark-up changes needed. It is intended for use with Responsive Designs and to be combined with Fluid Images techniques." It certainly trumps the experiment in the simplicity of implementation. So, how does it work? It's quite simple. There is no need to change any of your current code. Head over to http://adaptive-images.com/download.htm and get the latest version of adaptive images. You can place the adaptive-images.php file in the root of your site. Make sure to add the content of the .htaccess file to your own as well. Head over to the index file of your site and add this in the <head> tags: <script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script> Note that it is has to be in the <head> tag of your site. Open the adaptive-images.php file and add you media query values into the $resolutions variable. Here is a snippet of code that is pretty self-explanatory: $resolutions   = array(1382, 992, 768, 480);$cache_path   = "ai-cache";$jpg_quality   = 80;$sharpen       = TRUE;$watch_cache   = TRUE;$browser_cache = 60*60*24*7; The $resolution variable accepts the break-points that you use for your website. You can simply add the value of the screen width in pixels. So, in the the preceding example, it would read 1382 pixels as the first break-point, 992 pixels as the second one, and so on. The cache path tells adaptive images where to store the generated resized images. It's a relative path from your document root. So, in this case, your folder structure would read as document_root/a-cache/{images stored here}. The next variable, $jpg_quality, sets the quality of any generated JPGs images on a scale of 0 to 100. Shrinking images could cause blurred details. Set $sharpen to TRUE to perform a sharpening process on rescaled images. When you set $watch_cache to TRUE, you force adaptive images to check that the adapted image isn't stale; that is, it ensures that the updated source images are recached. Lastly, $browser_cache sets how long the browser cache should last for. The values are seconds, minutes, hours, days (7 days by default). You can change the last digit to modify the days. So, if you want images to be cached for two days, simply change the last value to 2. 
Conditional loading

Responsive designs combine three main techniques, which are as follows:

Fluid grids
Flexible images
Media queries

The technique that I want to focus on in this section is media queries. In most cases, developers use media queries to change the layout, width, height, padding, font size, and so on, depending on conditions related to the viewport. Let's see how we can achieve conditional image loading using CSS3's image-set function:

.my-background-img {
   background-image: image-set(
       url(icon1x.jpg) 1x,
       url(icon2x.jpg) 2x);
}

You can see in the preceding piece of CSS3 code that the image is loaded conditionally, based on the density of the display. The second statement, url(icon2x.jpg) 2x, loads the high-resolution (retina) image on high-density screens. This reduces the number of CSS rules we have to create; maintaining a site with a lot of background images can become quite a chore if a separate rule exists for each one.

Here is a simple media query example:

@media screen and (max-width: 480px) {
   .container {
       width: 320px;
   }
}

As I'm sure you already know, this snippet tells the browser that, for any device with a viewport of 480 pixels or narrower, any element with the class container has to be 320 pixels wide. When you use media queries, always make sure to include the viewport <meta> tag in the head of your HTML document, as follows:

<meta name="viewport" content="width=device-width, initial-scale=1">

I've included this template here as I'd like to start with it; it really makes it very easy to get started with new responsive projects:

/* MOBILE */
@media screen and (max-width: 480px) {
   .container {
       width: 320px;
   }
}

/* TABLETS */
@media screen and (min-width: 481px) and (max-width: 720px) {
   .container {
       width: 480px;
   }
}

/* SMALL DESKTOP OR LARGE TABLETS */
@media screen and (min-width: 721px) and (max-width: 960px) {
   .container {
       width: 720px;
   }
}

/* STANDARD DESKTOP */
@media screen and (min-width: 961px) and (max-width: 1200px) {
   .container {
       width: 960px;
   }
}

/* LARGE DESKTOP */
@media screen and (min-width: 1201px) and (max-width: 1600px) {
   .container {
       width: 1200px;
   }
}

/* EXTRA LARGE DESKTOP */
@media screen and (min-width: 1601px) {
   .container {
       width: 1600px;
   }
}

When you view a website on a desktop, it's quite common to have a left and a right column. Generally, the left column contains information that requires more focus and the right column contains content of slightly less importance. In some cases, you might even have three columns. Take the social website Facebook as an example: at the time of writing this article, Facebook used a three-column layout, as follows:

When you view a web page on a mobile device, you won't be able to fit all three columns into the smaller viewport. So, you'd probably want to hide some of the columns and avoid requesting the data that is usually displayed in the columns that are hidden.

Alright, we've done some talking. Well, you've done some reading. Now, let's get into our code! Our goal in this section is to learn about conditional development, with the focus on images. I've constructed a little website with a two-column layout: the left column houses the content and the right column is used to populate a little news feed. I made a simple PHP script that returns a JSON object with the news items. Here is a preview of the different screens that we will work on:

These two views are a result of the queries that are shown in the following style sheet code:

/* MOBILE */
@media screen and (max-width: 480px) {
}

/* TABLETS */
@media screen and (min-width: 481px) and (max-width: 720px) {
}
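The markup and PHP of the demo site are not reproduced here, but the conditional part of the idea can be sketched in a few lines of JavaScript: only request the news feed when the viewport is wide enough for the right-hand column to be visible. Note that the endpoint name news.php, the news-feed element ID, and the title field are placeholders for this sketch, not names taken from the demo project:

// Only fetch the news feed when the viewport matches the tablet breakpoint or wider.
var newsQuery = window.matchMedia('(min-width: 481px)');

function loadNewsFeed() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'news.php', true); // placeholder URL for your own JSON script
  xhr.onload = function () {
    if (xhr.status === 200) {
      var items = JSON.parse(xhr.responseText);
      var list = document.getElementById('news-feed'); // placeholder element ID
      items.forEach(function (item) {
        var li = document.createElement('li');
        li.textContent = item.title; // placeholder field name
        list.appendChild(li);
      });
    }
  };
  xhr.send();
}

if (newsQuery.matches) {
  loadNewsFeed();
}

In a real project you would probably also listen for changes with newsQuery.addListener, so that the feed is fetched if the user rotates the device or resizes the window past the breakpoint. The important part is that small screens never pay for data they will never display.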
Summary

Managing images is no small feat for a website; almost all modern websites rely heavily on images to present content to their users. In this article, we looked at which image formats to use and when, and at how to optimize your images for the Web. We discussed the difference between progressive and baseline images as well. Finally, conditional loading can greatly help you load your site faster, and we briefly discussed how to use it to improve your site's performance.

Resources for Article:

Further resources on this subject:

A look into responsive design frameworks [article]
Building Responsive Image Sliders [article]
Creating a Responsive Project [article]