

Overview of Chips

Packt
16 Dec 2014
7 min read
In this article by Olliver M. Schinagl, author of Getting Started with Cubieboard, we will take an overview of various development boards and compare a few popular ones to help you choose a board tailored to your requirements.

In the last few years, ARM-based Systems on Chips (SoCs) have become immensely popular. Compared to regular x86 Intel-based or AMD-based CPUs, they are much more energy efficient and still perform adequately. They also incorporate a lot of peripherals, such as a Graphics Processing Unit (GPU), a video accelerator (VPU), an audio controller, various storage controllers, and various buses (I2C and SPI), to name a few. This greatly reduces the number of components required on a board, which brings a few obvious advantages: lower cost and, consequently, much simpler board design. Thus, many companies with electronic engineers are able to design and manufacture these boards cheaply.

So, there are many boards; does that mean there are also many SoCs? Quite a few actually, but to keep the following list short, only the most popular ones are listed:

- Allwinner's A-series
- Broadcom's BCM-series
- Freescale's i.MX-series
- MediaTek's MT-series
- Rockchip's RK-series
- Samsung's Exynos-series
- NVIDIA's Tegra-series
- Texas Instruments' AM-series and OMAP-series
- Qualcomm's APQ-series and MSM-series

While many of these chips are interesting, Allwinner's A-series of SoCs will be the focus of this book. Due to their low price and decent availability, quite a few companies design development boards around these chips and sell them at a low cost. Additionally, the A-series is, at present, the most open source friendly series of chips available: there is a fully open source bootloader, and nearly all the hardware is supported by open source drivers. Among the A-series of chips, there are a few choices.
The following is a list of the most common and most interesting devices:

- A10: This is the first chip of the A-series and the best supported one, as it has been around the longest. It is able to communicate with the outside world over I2C, SPI, MMC, NAND, digital and analog video out, analog audio out, SPDIF, I2S, Ethernet MAC, USB, SATA, and HDMI. This chip initially targeted everything, such as phones, tablets, set-top boxes, and mini PC sticks. For its GPU, it features the Mali-400.
- A10S: This chip followed the A10; it focused mainly on the PC stick market and left out several parts, such as SATA and analog video in/out, and it has no LCD interface. These parts were left out to reduce the cost of the chip, making it interesting for cheap TV sticks.
- A13: This chip was introduced more or less simultaneously with the A10S, primarily for use in tablets. It lacks SATA, Ethernet MAC, and also HDMI, which reduces the chip's cost even more.
- A20: This chip was introduced well after the others and is pin-compatible with the A10, which it is intended to replace. As the name hints, the A20 is a dual-core variant of the A10. The ARM cores are slightly different: the A20 uses Cortex-A7 cores instead of the A10's Cortex-A8.
- A23: This chip was introduced after the A31 and A31S and is reasonably similar to the A31 in its design. It features a dual-core Cortex-A7 design and is intended to replace the A13. It is mainly intended to be used in tablets.
- A31: This chip features four Cortex-A7 cores and generally has all the connections that the A10 has. It is, however, not popular within the community because it features a PowerVR GPU that, until now, has seen no community support at all. Additionally, there are no development boards commonly available for this chip.
- A31S: This chip was released slightly after the A31 to solve some issues with the A31. There are no common development boards available.
Choosing the right development board

Allwinner's A-series of SoCs was produced and sold so cheaply that many companies used these chips in their products, such as tablets, set-top boxes, and, eventually, development boards. Before development boards became available, people worked on and with tablets and set-top boxes. The most common and popular boards are from Cubietech and Olimex, in part because both companies handed out development boards to community developers for free.

Olimex

Olimex has released a fair number of different development boards and peripherals. A lot of its boards are open source hardware, with schematics and layout files available, and Olimex is also very open source friendly.

Olimex offers the A10-OLinuXino-LIME, an A10-based micro board that is marketed to compete with the famous Raspberry Pi on price. Due to its small size, it uses less standard, 1.27 mm pitch headers for the pins, but it has nearly all of these pins exposed for use.

The Olimex OLinuXino series of boards is available in A10, A13, and A20 flavors and has more standard, 2.54 mm pitch headers that are compatible with the old IDE and serial connectors. Olimex offers various sensors, displays, and other peripherals that are also compatible with these headers.

Cubietech

Cubietech was formed by former Allwinner employees and offered one of the first development boards based on an Allwinner SoC. While its boards are not open source hardware, it does offer the schematics for download. Cubietech released three boards: the Cubieboard1, the Cubieboard2, and the Cubieboard3, also known as the Cubietruck. Interfacing with these boards can be quite tricky, as they use 2 mm pitch headers that might be hard to find in Europe or America.
The Cubieboard1 and the Cubieboard2 use identical boards; the only difference is that the Cubieboard2 uses an A20 where the Cubieboard1 uses an A10. These boards only have a subset of the pins exposed.

The Cubietruck is quite different, but it is a well-designed A20 board. It features everything that the previous boards offer, along with Gigabit Ethernet, VGA, Bluetooth, Wi-Fi, and optical audio out. This comes at the cost of fewer exposed pins, to keep the size reasonably small; even so, compared to the Raspberry Pi or the LIME, it is almost double the size.

LeMaker

LeMaker made a smart design choice when releasing its Banana Pi board. It is an Allwinner A20-based board, but it uses the same board size and connector placement as the Raspberry Pi, hence the name Banana Pi. Because of this, many Raspberry Pi cases fit the Banana Pi, and even shields will fit. Software-wise, it is quite different and does not work with Raspberry Pi image files. Nevertheless, it features composite video out, stereo audio out, HDMI out, Gigabit Ethernet, two USB ports, one USB OTG port, CSI out, LVDS out, and a handful of pins. Also available are a LiPo battery connector, a SATA connector, and two buttons, though those might not be accessible in a lot of standard cases.

Itead and Olimex

Itead and Olimex both offer an interesting board that is worth mentioning separately. The Iteaduino Plus and the Olimex A20-SoM are quite interesting concepts: a computing module (a plugin board carrying the SoC, memory, and flash) paired with a separate baseboard. Both of them sell a very complete baseboard as open source hardware, but anybody can design their own baseboard and buy the computing module.
Additional hardware

While a development board is a key ingredient, several other items are also required. A power supply, for example, is not always supplied and involves some considerations of its own. Additional hardware is also required for initial communication and debugging.

Summary

In this article, we looked at a range of Allwinner-based development boards, along with the additional hardware and extra peripherals you will require for your projects.

How to Make a Game Using Only Free Tools

Ellison Leao
16 Dec 2014
3 min read
Being an independent game developer is never easy. Many developers do not have enough money or sponsorship to buy great game development tools to keep creating great games. If you are starting a career in game development, or if you are already a game developer who doesn't want to spend money on licenses, in this post we will examine how to create great games using only free tools.

There are some game engines with free licenses available, which are limited but very powerful. Here are some of them:

GameMaker

GameMaker, by YoYo Games, is a very powerful tool that doesn't require a lot of programming skill, since it uses its own scripting language, GML, which has a very small learning curve. The free version only exports to Windows. To learn more about it, go to their website: https://www.yoyogames.com/studio.

Unity

Unity3D is the most famous game development tool these days. With Unity you can create games from 2D platformers to 3D FPSes with a few drag-and-drop elements and some scripting. And speaking of scripting, you can create Unity scripts using C#, UnityScript (a Unity version of JavaScript), or Boo. You can use most of the engine features, but you can only use the free version commercially if your game's annual income does not exceed $100,000. Above that, you will need to buy a Pro license.

Construct 2

Similar to GameMaker, Construct 2 by Scirra provides some abstraction layers for non-programmers and makes it easy and quick to deliver a game. The free version can export to the Windows Store, the Chrome Web Store, and Facebook.

But if you are a good programmer who likes a bit of a challenge, there are several tools for you. Here are some frameworks to get you started:

Phaser

Phaser is an open source HTML5 framework that provides some nice features, such as WebGL and canvas renderers; physics, particle, animation, and camera systems; and more. It also supports TypeScript for development.
MonoGame

MonoGame is an open source implementation of the XNA 4 framework. Its main goal is to let developers build their games for many platforms, such as iOS, Android, Mac OS X, Linux, and Windows 8.

FlashPunk

FlashPunk is a framework written in ActionScript 3 that helps you build 2D Flash games. It has everything a game needs, from collision support and audio and graphics systems to a live debugger that helps you fix bugs.

These tools are just a sample of the many others that exist out there. We've created a GitHub repo gathering information from many places and listing a lot of game development resources that you can use to enhance your game development experience. It covers many types of tools, from 3D tools to render terrains, to audio tools to make sound effects and soundtracks, to free asset websites, and more. You can find the list by clicking here.

About the Author

Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects. He is a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.

Role of AngularJS

Packt
16 Dec 2014
7 min read
In this article by Sandeep Kumar Patel, author of Responsive Web Design with AngularJS, we will explore the role of AngularJS in responsive web development. Before going into AngularJS, you will learn about responsive web development in general. Responsive web development can be performed in two ways:

- Using the browser sniffing approach
- Using the CSS3 media queries approach

Using the browser sniffing approach

When we view web pages through our browser, the browser sends a user agent string to the server. This string provides information such as browser and device details. By reading these details, the server can redirect the browser to the appropriate view. This method of reading client details is known as browser sniffing. The browser string carries a lot of information about the source from which the request was generated. The details present in the user agent string are as follows:

- Browser name: This represents the actual name of the browser from which the request originated, for example, Mozilla or Opera.
- Browser version: This represents the browser release version from the vendor; for example, Firefox's latest version is 31.
- Browser platform: This represents the underlying engine on which the browser is running, for example, Trident or WebKit.
- Device OS: This represents the operating system running on the device from which the request originated, for example, Linux or Windows.
- Device processor: This represents the processor type on which the operating system is running, for example, 32 or 64 bit.

A different browser string is generated based on the combination of the device and the type of browser used to access a web page.
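The sniffing logic itself is easy to sketch outside the browser. The following Python snippet is only an illustration of the idea (the keyword list and function name are my own, not from the book; real deployments rely on maintained user agent pattern databases):

```python
import re

# Illustrative keywords only -- production user agent detection uses a
# maintained database of patterns, not a hand-rolled list like this.
MOBILE_HINTS = re.compile(r"Mobile|Android|iPhone|iPad|Opera Mini", re.IGNORECASE)

def classify_user_agent(ua):
    """Roughly classify a user agent string as 'mobile' or 'desktop'."""
    return "mobile" if MOBILE_HINTS.search(ua) else "desktop"

# A server (or a client-side routing provider) could use the result to pick a view:
print(classify_user_agent(
    "Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0"))
# prints "desktop"
```

An AngularJS provider wrapping window.navigator.userAgent could apply the same kind of test on the client side before resolving a route.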
The following table shows examples of browser strings:

- Firefox, Windows desktop: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0
- Chrome, OS X 10 desktop: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.66 Safari/537.36
- Opera, Windows desktop: Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14
- Safari, OS X 10 desktop: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.13+ (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2
- Internet Explorer, Windows desktop: Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0

AngularJS features such as providers and services can be very useful for this user-agent sniffing and redirection approach. An AngularJS provider can be created and used in the configuration of the routing module. This provider can have reusable properties and methods that identify the device and route each request to the appropriate template view. To discover more about user agent strings on various browser and device combinations, visit http://www.useragentstring.com/pages/Browserlist/.

CSS3 media queries approach

CSS3 brings a new horizon to web application development. One of its key features is media queries, used to develop responsive web applications. Media queries use media types and features as deciding parameters to apply styles to the current web page.

Media type

CSS3 media queries provide rules for media types so that different styles can be applied to a web page. The media queries specification lists the media types that an implementing browser should support.
These media types are as follows:

- all: This is used for all media type devices.
- aural: This is used for speech and sound synthesizers.
- braille: This is used for braille tactile feedback devices.
- embossed: This is used for paged braille printers.
- handheld: This is used for small or handheld devices, for example, mobiles.
- print: This is used for printers, for example, an A4 paper document.
- projection: This is used for projection-based devices, such as a projector with a slide screen.
- screen: This is used for computer screens, for example, desktop and laptop screens.
- tty: This is used for media using a fixed-pitch character grid, such as teletypes and terminals.
- tv: This is used for television-type devices, for example, webOS or Android-based televisions.

A media rule can be declared using the @media keyword with the specific type for the targeted media. The following code shows an example of media rule usage, where the body background color is black and the text is white for the screen media type, and the body background color is white and the text is black for the print media type:

@media screen {
  body {
    background: black;
    color: white;
  }
}

@media print {
  body {
    background: white;
    color: black;
  }
}

An external style sheet can be downloaded and applied to the current page based on the media type using the HTML link tag. The following code uses the link tag in conjunction with a media type:

<link rel='stylesheet' media='screen' href='<fileName.css>' />

To learn more about different media types, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_types.

Media feature

Conditional styles can be applied to a page based on different features of a device. The features supported by CSS3 media queries for applying styles are as follows:

- color: Styles can be applied based on the number of bits used for a color component by the device.
- color-index: Styles can be applied based on the size of the device's color lookup table.
- aspect-ratio: Styles can be applied based on the aspect ratio of the display area.
- device-aspect-ratio: Styles can be applied based on the aspect ratio of the device.
- device-height: Styles can be applied based on the device height. This includes the entire screen.
- device-width: Styles can be applied based on the device width. This includes the entire screen.
- grid: Styles can be applied based on the device type: bitmap or grid.
- height: Styles can be applied based on the height of the device's rendering area.
- monochrome: Styles can be applied based on the monochrome type. This represents the number of bits per pixel used by the device in grayscale.
- orientation: Styles can be applied based on the viewport mode: landscape or portrait.
- resolution: Styles can be applied based on the pixel density.
- scan: Styles can be applied based on the scanning type used by the device for rendering.
- width: Styles can be applied based on the device screen width.

The following code shows some examples of CSS3 media queries using different device features for conditional styling:

/* for screen devices with a minimum aspect ratio of 0.5 */
@media screen and (min-aspect-ratio: 1/2) {
  img {
    height: 70px;
    width: 70px;
  }
}

/* for all devices in portrait viewport */
@media all and (orientation: portrait) {
  img {
    height: 100px;
    width: 200px;
  }
}

/* for print devices with a minimum resolution of 300dpi */
@media print and (min-resolution: 300dpi) {
  img {
    height: 600px;
    width: 400px;
  }
}

To learn more about different media features, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_features.

Summary

In this chapter, you learned about responsive design and the SPA architecture. You now understand the role of the AngularJS library when developing a responsive application. We quickly went through all the important features of AngularJS with coded syntax.
In the next chapter, you will set up your AngularJS application and learn to create dynamic routing based on the device.

Building an Information Radiator - Part 1

Andrew Fisher
16 Dec 2014
9 min read
Download the code files for this project here.

I love lights; specifically, I love LEDs, which have been described to me as "catnip for geeks". LEDs are low powered but bright, which means they can be embedded into all sorts of interesting places and, when coupled with a network, can be used for all sorts of ambient display purposes. In this post, I'll show you how to build an "information radiator" with a bit of Python and some LEDs, which you can then adapt to your own needs.

// An information radiator light showing the forecast temperature in Melbourne.

An information radiator is so called because it radiates information outwards from (often) a fixed point so that it can be interpreted by an observer. Color, brightness, or frequency of lighting can be used to encode more complex information. I'm going to show you how to build an ambient display that scrapes some data from a weather service and then uses colored light to indicate the forecast temperature. This is quite a simple example, but by the end of this two-part post series, you will be able to change your information radiator to consider rain or multiple elements, or even point it at something else that is important to you.

Bill of materials

- Ethernet Arduino: The Freetronics EtherTen is excellent, but an Arduino Uno with an Ethernet shield works too. ($60)
- RGB LED: The light discs from DFRobot are great, as they produce a lot of light. ($10)
- Computer: This is needed to run the Python script that checks the weather.
- Wire: Red, green, blue, and white is ideal, but anything you have available is fine. ($2)
- Light fitting: Anything that diffuses light will be interesting. ($1+)

Tools required

These common tools will come in handy:

- Soldering iron
- Wire strippers

Design

You don't want the light attached to the computer all the time; what's the point of a light if you can just look up the weather on Google?
The device will connect to the network and live somewhere visible, while the processing runs on a mini server somewhere (such as a Raspberry Pi) that sends the device messages when needed. So, the system design looks like this: the microcontroller looks after the LED and exposes a network interface, and a Python script runs periodically on the server to check the weather forecast, get the data, and send a message to the Arduino.

Building the light

The build of the light is quite straightforward. Cut four pieces of wire about 6 inches long (personal preference) and solder them to the four connections on the light disc.

// Light disc with wires soldered on.

Strip 5 mm of insulation from the other end of each wire and wire the light disc to the Arduino in the following way:

- R to pin 5
- G to pin 6
- B to pin 9

Depending on the version of the light disc you have, wire GND to GND or 5V to 5V. The specifics are labelled on the disc itself; the newer discs are GND.

// Light disc wired into the Arduino.

That's it! You're all done electronics-wise. Plug in an Ethernet cable and ensure you supply 7-20V to the Arduino from a power pack.

Programming the Arduino

If you have never programmed an Arduino before, I suggest this tutorial as an excellent starting point. I'm going to assume you have the Arduino IDE installed on your computer and can upload sketches. First, you need to test your wiring. The following Arduino code will cycle through combinations of colors, holding each for about 1 second.
It will print the color to the serial console as well, so you can observe it with the serial monitor:

#define RED 5
#define GREEN 6
#define BLUE 9
#define MAX_COLOURS 8
#define GND true // change this to false if 5V type

char* colours[] = {"Off", "red", "green", "yellow", "blue", "magenta", "cyan", "white"};
uint8_t current_colour = 0;

void setup() {
  Serial.begin(9600);
  Serial.println("Testing lights");

  pinMode(RED, OUTPUT);
  pinMode(GREEN, OUTPUT);
  pinMode(BLUE, OUTPUT);

  if (GND) {
    digitalWrite(RED, LOW);
    digitalWrite(GREEN, LOW);
    digitalWrite(BLUE, LOW);
  } else {
    digitalWrite(RED, HIGH);
    digitalWrite(GREEN, HIGH);
    digitalWrite(BLUE, HIGH);
  }
}

void loop() {
  Serial.print("Current colour: ");
  Serial.println(colours[current_colour]);

  // each bit of current_colour drives one channel (any non-zero value counts as HIGH)
  if (GND) {
    digitalWrite(RED, current_colour & 1);
    digitalWrite(GREEN, current_colour & 2);
    digitalWrite(BLUE, current_colour & 4);
  } else {
    digitalWrite(RED, !(bool)(current_colour & 1));
    digitalWrite(GREEN, !(bool)(current_colour & 2));
    digitalWrite(BLUE, !(bool)(current_colour & 4));
  }

  if ((++current_colour) >= MAX_COLOURS) current_colour = 0;
  delay(1000);
}

Notably, there is a flag to flip (#define GND true | false) depending on whether your light disc uses GND or 5V. All this does is invert the pin logic: on the GND disc, the light goes on when a pin goes HIGH, but on the 5V disc, the light goes on when a pin goes LOW. If the colors are muddled, you have probably connected a wire to the wrong pin; just swap them over and it should be fine. If you aren't seeing any light, check your connections and ensure you are getting power to the light disc. The next thing to do is write the sketch that will take messages from the network and update the light. To do this, we need to establish a protocol. There are many ways to define one, but for simplicity, a text protocol like JSON works sufficiently well.
Each message will look like this:

{r:val, g:val, b:val}

In each case, val is an unsigned byte, so it will be in the range 0-255:

// Adapted from the generic web server example that ships with the IDE,
// created by David Mellis and Tom Igoe.
#include "Arduino.h"
#include <Ethernet.h>
#include <SPI.h>
#include <string.h>
#include <stdlib.h>

#define DEBUG false // <1>

// Enter a MAC address and IP address for your controller below.
// The IP address will be dependent on your local network:
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xAE };
byte ip[] = { <PUT YOUR IP HERE AS COMMA BYTES> }; // eg 192,168,0,100
byte gateway[] = { <PUT YOUR GW HERE AS COMMA BYTES> }; // eg 192,168,0,1
byte subnet[] = { <PUT YOUR SUBNET HERE> }; // eg 255,255,255,0

// Initialize the Ethernet server library
// with the IP address and port you want to use (in this case telnet)
EthernetServer server(23);

#define BUFFERLENGTH 255

// these are the pins you wire each LED to.
#define RED 5
#define GREEN 6
#define BLUE 9
#define GND true // change this to false if 5V type, true if GND type light disc

void setup() {
  Ethernet.begin(mac, ip, gateway, subnet);
  server.begin();
#if DEBUG
  Serial.begin(9600);
  Serial.println("Awaiting connection");
#endif
}

void loop() {
  char buffer[BUFFERLENGTH];
  int index = 0;

  // listen for incoming clients
  EthernetClient client = server.available();
  if (client) {
#if DEBUG
    Serial.println("Got a client");
#endif
    // reset the input buffer
    index = 0;

    while (client.connected()) {
      if (client.available()) {
        char c = client.read();

        // if it's not a new line, then add it to the buffer <2>
        if (c != '\n' && c != '\r') {
          buffer[index] = c;
          index++;
          if (index >= BUFFERLENGTH) index = BUFFERLENGTH - 1;
          continue;
        } else {
          buffer[index] = '\0';
        }

        // get the message string for processing
        String msgstr = String(buffer);
        // get just the bits we want between the {}
        msgstr = msgstr.substring(msgstr.lastIndexOf('{') + 1,
                                  msgstr.indexOf('}', msgstr.lastIndexOf('{')));
        msgstr.replace(" ", "");
        msgstr.replace("'", "");
#if DEBUG
        Serial.println("Message:");
        Serial.println(msgstr);
#endif
        // rebuild the buffer with just the message
        msgstr.toCharArray(buffer, BUFFERLENGTH);

        // iterate over the tokens of the message - assumed flat. <3>
        char *p = buffer;
        char *str;
        while ((str = strtok_r(p, ",", &p)) != NULL) {
#if DEBUG
          Serial.println(str);
#endif
          char *tp = str;
          char *key;
          char *val;

          // split each token into its key and value
          key = strtok_r(tp, ":", &tp);
          val = strtok_r(NULL, ":", &tp);
#if DEBUG
          Serial.print("Key: ");
          Serial.println(key);
          Serial.print("val: ");
          Serial.println(val);
#endif
          // <4>
          if (GND) {
            if (*key == 'r') analogWrite(RED, atoi(val));
            if (*key == 'g') analogWrite(GREEN, atoi(val));
            if (*key == 'b') analogWrite(BLUE, atoi(val));
          } else {
            if (*key == 'r') analogWrite(RED, 255 - atoi(val));
            if (*key == 'g') analogWrite(GREEN, 255 - atoi(val));
            if (*key == 'b') analogWrite(BLUE, 255 - atoi(val));
          }
        }
        break;
      }
    }
    delay(10); // give the client time to send any data back
    client.stop();
  }
}

The most notable parts of the code are as follows:

1. Add your own network settings here.
2. The text parser just adds characters to the buffer until a newline (\n) arrives.
3. As the protocol is simple, a string tokenizer breaks the message into its constituent key-value pairs.
4. The RGB values set the appropriate level on the PWM pins (noting the polarity reversal for GND versus 5V light discs).

To test the code, upload the sketch, ensure your Ethernet cable is plugged in, and attempt to connect to the device:

telnet <ip> 23

This should return something like the following:

Trying 10.0.1.91...
Connected to 10.0.1.91.
Escape character is '^]'.

Now, enter:

{r:200,g:0, b:0} <enter>

If the light changes to red, then everything is working; time to get some data. If not, check your code and make sure the messages are being interpreted properly (plug in your computer and use the serial debugger to watch the messages). Play around with changing the colors of your light by sending different values to the device.
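The telnet test can also be scripted. Here is a minimal Python client for the protocol above; it is a sketch rather than part of the project code, so the function names are my own, and the IP address is whichever one you configured in the sketch:

```python
import socket

def format_message(r, g, b):
    """Build the {r:val,g:val,b:val} message the Arduino sketch parses."""
    for v in (r, g, b):
        if not 0 <= v <= 255:
            raise ValueError("channel values must be unsigned bytes (0-255)")
    return "{r:%d,g:%d,b:%d}\n" % (r, g, b)

def send_colour(host, r, g, b, port=23):
    """Open a TCP connection to the board and send one colour message."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(format_message(r, g, b).encode("ascii"))

# send_colour("10.0.1.91", 200, 0, 0)  # should turn the light red
```

A server-side script only needs to call something like send_colour whenever the data it is watching changes.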
In the Part 2 post, I’ll explain how to scrape the weather data we want and use that to update the light periodically. About the Author Andrew Fisher is a creator (and destroyer) of things that combine mobile web, ubicomp, and lots of data. He is a programmer, interaction researcher, and CTO at JBA, a data consultancy in Melbourne, Australia. He can be found on Twitter at @ajfisher. 

Adding Graded Activities

Packt
16 Dec 2014
9 min read
This article by Rebecca Barrington, author of Moodle Gradebook Second Edition, teaches you how to add assignments and set up how they will be graded, including how to use our custom scales and add outcomes for grading.

As with all content within Moodle, we need to select Turn editing on within the course in order to be able to add resources and activities. All graded activities are added through the Add an activity or resource text available within each section of a Moodle course. This text can be found at the bottom right of each section after editing has been turned on. There are a number of items that can be graded and will appear within the Gradebook.

Assignments are the most feature-rich of all the graded activities and have many options available to customize how assessments are graded. They can be used to provide assessment information for students, store grades, and provide feedback. When setting up an assignment, we can choose for students to submit their work electronically, either through file submission or online text, or we can review the assessment offline and use only the grade and feedback features of the assignment.

Adding assignments

There are many options within assignments, and throughout this article we will set up a number of different assignments so that you learn about some of their most useful features and options. Let's have a go at creating a range of assignments that are ready for grading.

Creating an assignment with a scale

The first assignment that we will add will make use of the PMD scale:

1. Click on the Turn editing on button.
2. Click on Add an activity or resource.
3. Click on Assignment and then click on Add.
4. In the Assignment name box, type in the name of the assignment (such as Task 1).
5. In the Description box, provide some assignment details.
6. In the Availability section, we need to disable the date options.
We will not make use of these options here, but they can be very useful. To disable them, click on the tick next to the Enable text. Details of these options are provided for future reference:

The Allow submissions from section is mostly relevant when the assignment will be submitted electronically, as students won't be able to submit their work until the date and time indicated here.

The Due date section can be used to indicate when the assignment needs to be submitted by. If students electronically submit their assignment after the date and time indicated here, the submission date and time will be shown in red in order to notify the teacher that it was submitted past the due date.

The Cut off date section enables teachers to set an extension period after the due date during which late submissions will continue to be accepted.

In the Submission types section, ensure that the File submissions checkbox is enabled by adding a tick there. This will enable students to submit their assignment electronically. There are additional options that we can choose as well. With Maximum number of uploaded files, we can indicate how many files a student can upload. Keep this as 1. We can also determine the Maximum submission size option for each file using the drop-down list shown in the following screenshot:

Within the Feedback types section, ensure that all the options under Feedback types are selected. Feedback comments enables us to provide written feedback along with the grade. Feedback files enables us to upload a file in order to provide feedback to a student. Offline grading worksheet provides us with the option to download a .csv file that contains core information about the assignment, and this can be used to add grades and feedback while working offline. The completed .csv file can be uploaded and the grades will be added to the assignments within the Gradebook.
In the Submission settings section, we have options related to how students will submit their assignment and how they will reattempt submission if required.

If Require students click submit button is left as No, students will upload their assignment and it will be available to the teacher for grading. If this option is changed to Yes, students can upload their assignment, but the teacher will see that it is in draft form. Students will click on Submit to indicate that it is ready to be graded.

Require that students accept the submission statement will provide students with a statement that they need to agree to when they submit their assignment. The default statement is "This assignment is my own work, except where I have acknowledged the use of works of other people." The submission statement can be changed by a site administrator by navigating to Site administration | Plugins | Activity modules | Assignment settings.

The Attempts reopened drop-down list provides options for the status of the assignment after it has been graded. Students will only be able to resubmit their work when it is open; therefore, this setting controls when and whether students are able to submit another version of their assignment. The options available to us are:

Never: This option should be selected if students will not be able to submit another piece of work.

Manually: This enables anyone who has the role of a teacher to reopen a submission, enabling a student to submit their work again.

Automatically until pass: This option works when a pass grade is set within the Gradebook. After grading, if the student is awarded the minimum pass grade or higher, the submission will remain closed in order to prevent any changes to the submission.
However, if the assignment is graded lower than the assigned pass grade, the submission will automatically reopen in order to enable the student to submit the assignment again.

Maximum attempts limits the number of times an assignment can be reopened. For example, if this option is set to 3, a student will only be able to submit their assignment three times. After they have submitted their assignment for the third time, they will not be allowed to submit it again. The default is unlimited, but it can be changed by clicking on the drop-down list.

In the Submission settings section, ensure that the options for Require students click submit button and Require that students accept the submission statement are set to Yes. Also, change Attempts reopened to Automatically until passed.

Within the Grade section, navigate to Grade | Type | Scale and choose the PMD scale. Select Use marking workflow by changing the drop-down list to Yes. Use marking workflow is a new feature of Moodle 2.6 that enables the grading process to go through a range of stages in order to indicate that the marking is in progress or is complete, is being reviewed, or is ready for release to students.

Click on Save and return to course.

Creating an online assignment with a number grade

The next assignment that we will create will have an online text option and a maximum grade of 20. The following steps show you how to create an online assignment with a number grade:

1. Enable editing by clicking on Turn editing on.
2. Click on Add an activity or resource.
3. Click on Assignment and then click on Add.
4. In the Assignment name box, type in the name of the assignment (such as Task 2).
5. In the Description box, provide the assignment details.
6. In the Submission types section, ensure that Online text has a tick next to it. This will enable students to type directly into Moodle.
When choosing this option, we can also set a maximum word limit by clicking on the tick box next to the Enable text. After enabling this option, we can add a number to the textbox. For this assignment, enable a word limit of 200 words.

When using online text submission, we have an additional feedback option within the Feedback types section. Switch Comment inline from No to Yes to enable yourself to add written feedback for students directly within the text submitted by students.

In the Submission settings section, ensure that the options for Require students click submit button and Require that students accept the submission statement are set to Yes. Also, change Attempts reopened to Automatically until passed.

Within the Grades section, navigate to Grade | Type | Point and ensure that Maximum points is set to 20.

Click on Save and return to course.

Creating an assignment including outcomes

The next assignment that we will create will add some of the outcomes:

1. Enable editing by clicking on Turn editing on.
2. Click on Add an activity or resource.
3. Click on Assignment and then click on Add.
4. In the Assignment name box, type in the name of the assignment (such as Task 3).
5. In the Description box, provide the assignment details.
6. In the Submission types box, ensure that Online text and File submissions are selected. Set Maximum number of uploaded files to 2.
7. In the Submission settings section, ensure that the options for Require students click submit button and Require that students accept the submission statement are set to Yes. Change Attempts reopened to Manually.
8. Within the Grades section, navigate to Grade | Type | Point and ensure that Maximum points is set to 100.
9. In the Outcomes section, choose the outcomes Evidence provided and Criteria 1 met.
10. Scroll to the bottom of the screen and click on Save and return to course.
Summary

In this article, we added a range of assignments that made use of number and scale grades, as well as added outcomes to an assignment.

Resources for Article:

Further resources on this subject: Moodle for Online Communities [article] What's New in Moodle 2.0 [article] Moodle 2.0: What's New in Add a Resource [article]

Ridge Regression

Packt
16 Dec 2014
9 min read
In this article by Patrick R. Nicolas, the author of the book Scala for Machine Learning, we will cover the basics of ridge regression. The purpose of regression is to minimize a loss function, the residual sum of squares (RSS) being the one most commonly used. The problem of overfitting can be addressed by adding a penalty term to the loss function. The penalty term is an element of the larger concept of regularization. (For more resources related to this topic, see here.)

Ln roughness penalty

Regularization consists of adding a penalty function J(w) to the loss function (or RSS in the case of a regressive classifier) in order to prevent the model parameters (or weights) from reaching high values. A model that fits a training set very well tends to have many feature variables with relatively large weights; regularization reduces these weights toward zero, a process known as shrinkage. Practically, shrinkage consists of adding a function with the model parameters as an argument to the loss function:

The penalty function is completely independent of the training set {x,y}. The penalty term is usually expressed as a power of the norm of the model parameters (or weights) wd. For a model of D dimensions, the generic Lp-norm is defined as follows:

Notation: Regularization applies to the parameters or weights associated with an observation. In order to be consistent with our notation, w0 being the intercept value, the regularization applies to the parameters w1 …wd.

The two most commonly used penalty functions for regularization are L1 and L2.

Regularization in machine learning

The regularization technique is not specific to linear or logistic regression. Any algorithm that minimizes the residual sum of squares, such as a support vector machine or a feed-forward neural network, can be regularized by adding a roughness penalty function to the RSS. The L1 regularization applied to linear regression is known as the Lasso regularization.
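To make the generic Lp-norm concrete, here is a minimal standalone sketch. The book's own code is Scala; this snippet is plain JavaScript, written only for illustration:

```javascript
// Generic Lp-norm of a weight vector w: (sum over d of |w_d|^p)^(1/p)
const lpNorm = (w, p) =>
  Math.pow(w.reduce((sum, wd) => sum + Math.pow(Math.abs(wd), p), 0), 1 / p);

const w = [3, -4];
console.log(lpNorm(w, 1)); // 7 - the L1 norm, the sum of absolute values
console.log(lpNorm(w, 2)); // 5 - the L2 (Euclidean) norm
```

Setting p = 1 and p = 2 yields the two penalty functions discussed next.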
The Ridge regression is a linear regression that uses the L2 regularization penalty. You may wonder which regularization makes sense for a given training set. In a nutshell, L2 and L1 regularization differ in terms of computation efficiency, estimation, and feature selection (refer to the 13.3 L1 regularization: basics section in the book Machine Learning: A Probabilistic Perspective, and the Feature selection, L1 vs. L2 regularization, and rotational invariance paper available at http://www.machinelearning.org/proceedings/icml2004/papers/354.pdf). The various differences between the two regularizations are as follows:

Model estimation: L1 generates a sparser estimation of the regression parameters than L2. For a large non-sparse dataset, L2 has a smaller estimation error than L1.

Feature selection: L1 is more effective in reducing the regression weights for features with high values than L2. Therefore, L1 is a reliable feature selection tool.

Overfitting: Both L1 and L2 reduce the impact of overfitting. However, L1 has a significant advantage in overcoming overfitting (or excessive complexity of a model), for the same reason that it is more appropriate for selecting features.

Computation: L2 is conducive to a more efficient computation model. The summation of the loss function and the L2 penalty is a continuous and differentiable function for which the first and second derivatives can be computed (convex minimization). The L1 term is the summation of |wi| and is, therefore, not differentiable.

Terminology: The ridge regression is sometimes called the penalized least squares regression. The L2 regularization is also known as weight decay.

Let's implement the ridge regression, and then evaluate the impact of the L2-norm penalty factor.
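The sparsity difference between the two penalties can be illustrated with a toy example before moving on to the implementation. This is a plain JavaScript sketch, not from the book: two weight vectors can carry the same L2 penalty while having very different L1 penalties, which is why minimizing an L1 term favors the sparse solution.

```javascript
const l1 = w => w.reduce((s, x) => s + Math.abs(x), 0); // Lasso penalty term
const l2 = w => w.reduce((s, x) => s + x * x, 0);       // ridge penalty term

const dense  = [0.5, 0.5, 0.5, 0.5]; // many small weights
const sparse = [1.0, 0.0, 0.0, 0.0]; // one weight carries all the signal

console.log(l2(dense), l2(sparse)); // 1 1 - identical L2 penalty
console.log(l1(dense), l1(sparse)); // 2 1 - L1 penalizes the dense vector more
```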
Ridge regression

The ridge regression is a multivariate linear regression with an L2-norm penalty term, and can be calculated as follows:

The computation of the ridge regression parameters requires the resolution of a system of linear equations similar to that of the linear regression. The matrix representation of the ridge regression closed form is as follows:

I is the identity matrix, and the resolution uses the QR decomposition, as shown here:

Implementation

The implementation of the ridge regression adds the L2 regularization term to the multiple linear regression computation of the Apache Commons Math library. The methods of RidgeRegression have the same signatures as their ordinary least squares counterparts. However, the class has to inherit the abstract base class AbstractMultipleLinearRegression in Apache Commons Math and override the generation of the QR decomposition to include the penalty term, as shown in the following code:

class RidgeRegression[T <% Double](val xt: XTSeries[Array[T]],
                                   val y: DblVector,
                                   val lambda: Double)
    extends AbstractMultipleLinearRegression
    with PipeOperator[Array[T], Double] {

  private var qr: QRDecomposition = null
  private[this] val model: Option[RegressionModel] = …
  …
}

Besides the input time series xt and the labels y, the ridge regression requires the lambda factor of the L2 penalty term. The instantiation of the class trains the model.
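Before stepping through the Scala code, the shrinkage effect of the lambda factor in the closed form can be checked with a one-dimensional sketch. This is plain JavaScript for illustration only (not the book's implementation); in one dimension with no intercept, the closed form reduces to w = Σxy / (Σx² + λ):

```javascript
// One-dimensional ridge closed form: w = sum(x*y) / (sum(x*x) + lambda)
function ridge1D(x, y, lambda) {
  let xy = 0, xx = 0;
  for (let i = 0; i < x.length; i++) {
    xy += x[i] * y[i];
    xx += x[i] * x[i];
  }
  return xy / (xx + lambda);
}

const x = [1, 2, 3, 4];
const y = [2, 4, 6, 8]; // y = 2x exactly
console.log(ridge1D(x, y, 0));  // 2 - lambda = 0 is ordinary least squares
console.log(ridge1D(x, y, 30)); // 1 - the penalty shrinks the weight toward 0
```

Increasing λ in the denominator can only pull the weight toward zero, which is the shrinkage behavior evaluated in the test case below with the full multivariate model.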
The steps to create the ridge regression model are as follows:

1. Extract the Q and R matrices for the input values with newXSampleData (line 1).
2. Compute the weights using calculateBeta defined in the base class (line 2).
3. Return the tuple of the regression weights, calculateBeta, and the residuals, calculateResiduals.

private val model: Option[(DblVector, Double)] = {
  this.newXSampleData(xt.toDblMatrix) //1
  newYSampleData(y)
  val _rss = calculateResiduals.toArray.map(x => x*x).sum
  val wRss = (calculateBeta.toArray, _rss) //2
  Some(RegressionModel(wRss._1, wRss._2))
}

The QR decomposition in the AbstractMultipleLinearRegression base class does not include the penalty term (line 3); the identity matrix with the lambda factor in the diagonal has to be added to the matrix to be decomposed (line 4).

override protected def newXSampleData(x: DblMatrix): Unit = {
  super.newXSampleData(x) //3
  val xtx: RealMatrix = getX
  val nFeatures = xt(0).size
  Range(0, nFeatures).foreach(i =>
    xtx.setEntry(i, i, xtx.getEntry(i, i) + lambda)) //4
  qr = new QRDecomposition(xtx)
}

The regression weights are computed by resolving the system of linear equations using substitution on the QR matrices. The class overrides the calculateBeta function from the base class:

override protected def calculateBeta: RealVector = qr.getSolver().solve(getY())

Test case

The objective of the test case is to identify the impact of the L2 penalization on the RSS value, and then compare the predicted values with the original values. Let's consider the first test case related to the regression on the daily price variation of the Copper ETF (symbol: CU) using the stock's daily volatility and volume as features.
The implementation of the extraction of observations is identical to that of the least squares regression:

val src = DataSource(path, true, true, 1)
val price = src |> YahooFinancials.adjClose
val volatility = src |> YahooFinancials.volatility
val volume = src |> YahooFinancials.volume //1

val _price = price.get.toArray
val deltaPrice = XTSeries[Double](_price
                     .drop(1)
                     .zip(_price.take(_price.size - 1))
                     .map(z => z._1 - z._2)) //2
val data = volatility.get
               .zip(volume.get)
               .map(z => Array[Double](z._1, z._2)) //3
val features = XTSeries[DblVector](data.take(data.size - 1))
val regression = new RidgeRegression[Double](features, deltaPrice, lambda) //4

regression.rss match {
  case Some(rss) => Display.show(rss, logger) //5
….

The observed data, the ETF daily price, and the features (volatility and volume) are extracted from the source src (line 1). The daily price change, deltaPrice, is computed using a combination of the Scala take and drop methods (line 2). The features vector is created by zipping volatility and volume (line 3). The model is created by instantiating the RidgeRegression class (line 4). The RSS value, rss, is finally displayed (line 5).

The RSS value, rss, is plotted for different values of lambda <= 1.0 in the following graph:

Graph of RSS versus Lambda for Copper ETF

The residual sum of squares decreases as λ increases. The curve seems to be reaching a minimum around λ=1. The case of λ = 0 corresponds to the least squares regression. Next, let's plot the RSS value for λ varying between 1 and 100:

Graph of RSS versus large values of Lambda for Copper ETF

This time around, RSS increases with λ before reaching a maximum for λ > 60. This behavior is consistent with other findings (refer to Lecture 5: Model selection and assessment, a lecture by H. Bravo and R.
Irizarry from the Department of Computer Science, University of Maryland, in 2010, available at http://www.cbcb.umd.edu/~hcorrada/PracticalML/pdf/lectures/selection.pdf). As λ increases, overfitting gets more expensive, and therefore, the RSS value increases.

The regression weights can be simply output as follows:

regression.weights.get

Let's plot the predicted price variation of the Copper ETF using the ridge regression with different values of lambda (λ):

Graph of ridge regression on Copper ETF price variation with variable Lambda

The original price variation of the Copper ETF, Δ = price(t+1) - price(t), is plotted as λ = 0. The predicted values for λ = 0.8 are very similar to the original data, following its pattern with a reduction of the large variations (peaks and troughs). The predicted values for λ = 5 correspond to a smoothed dataset; the pattern of the original data is preserved, but the magnitude of the price variation is significantly reduced. The reader is invited to apply the more elaborate K-fold validation routine and compute precision, recall, and the F1 measure to confirm the findings.

Summary

The ridge regression is a powerful alternative to the more common least squares regression because it reduces the risk of overfitting. Contrary to the Naïve Bayes classifiers, it does not require conditional independence of the model features.

Resources for Article:

Further resources on this subject: Differences in style between Java and Scala code [Article] Dependency Management in SBT [Article] Introduction to MapReduce [Article]

How to Run Hadoop on Google Cloud – Part 1

Robi Sen
15 Dec 2014
4 min read
Setting up and working with Hadoop can sometimes be difficult. Furthermore, most people with limited resources develop on Hadoop instances on virtual machines locally or on minimal hardware. The problem with this is that Hadoop is really designed to run on many machines in order to realize its full capabilities. In this two part series of posts, we will show you how you can get started with Hadoop in the cloud with Google services quickly and relatively easily.

Getting Started

The first thing you need in order to follow along is a Google account. If you don't have a Google account, you can sign up here: https://accounts.google.com/SignUp. Next, you need to create a Google Compute and Google Cloud Storage enabled project via the Google Developers Console. Let's walk through that right now. First go to the Developer Console and log in using your Google account. You will need your credit card as part of this process; however, to complete this two part post series, you will not need to spend any money. Once you have logged in, you should see something like what is shown in Figure 1.

Figure 1: Example view of Google Developers Console

Now select Create Project. This will pop up the Create New Project window, as shown in Figure 2. In the project name field, go ahead and name your project HadoopTutorial. For the Project ID, Google will assign you a random project ID or you can try to select your own. Whatever your project ID is, just make note of it since we will be using it later. If, however, you forget your project ID, you can just come back to the Google console to look it up. You do not need to select the first checkbox shown in Figure 2, but go ahead and check the second checkbox, which is the terms of service. Now select Create.

Figure 2: New Project window

When you select Create, be prepared for a small delay as Google builds your project. When it is done, you should see a screen like that shown in Figure 3.
Figure 3: Project Dashboard

Now click on Enable an API. You should now see the APIs screen. Make sure you check whether the Google Cloud Storage and Google Cloud Storage JSON API options are enabled, that is, showing a green ON button. Now scroll down, find the Google Compute Engine, and select the OFF button to enable it like the one shown in Figure 4. If you don't have a payment account set up on Google, you will be asked to do that now and put in a valid credit card. Once that is done, you can go back and enable the Google Compute Engine.

Figure 4: Setting up your Google APIs

You should now have your Google developer account up and running. In the next post, I will walk you through the installation of the Google Cloud SDK and setting up Hadoop via Windows and Cygwin. Read part 2 here.

Want more Hadoop content? Check out our dynamic Hadoop page, updated with our latest titles and most popular content.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, CISCO, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

How to Run Hadoop on Google Cloud – Part 2

Robi Sen
15 Dec 2014
7 min read
Setting up and working with Hadoop can sometimes be difficult. Furthermore, most people with limited resources develop on Hadoop instances on virtual machines locally or on minimal hardware. The problem with this is that Hadoop is really designed to run on many machines in order to realize its full capabilities. In this two part series of posts (read part 1 here), we will show you how you can quickly get started with Hadoop in the cloud with Google services. In the last part in this series, we set up our Google developer account. Now it is time to install the Google Cloud SDK.

Installing the Google Cloud SDK

To work with the Google Cloud SDK, we need the Cygwin 32-bit version. Get it here, even if you have a 64-bit processor. The reason for this is that the Python 64-bit version for Windows has issues that make it incompatible with many common Python tools, so you should stick with the 32-bit version. Next, when you install Cygwin, you need to make sure you select Python (note that if you do not install the Cygwin version of Python, your installation will fail), openssh, and curl. You can do this when you get to the package screen by typing openssh or curl in the search bar at the top and selecting the package under "net", then by selecting the check box under "Bin" for openssh. Do the same for curl. You should see something like what is shown in Figures 1 and 2, respectively.

Figure 1: Adding openssh

Figure 2: Adding curl to Cygwin

Now go ahead and start Cygwin by going to Start -> All Programs -> Cygwin -> Cygwin Terminal. Now use curl to install the Google Cloud SDK by typing the command "$ curl https://sdk.cloud.google.com | bash", which will install the Google Cloud SDK from the Internet. Follow the prompts to complete the setup. When prompted whether you would like to update your system path, select "y", and when complete, restart Cygwin. After you restart Cygwin, you need to authenticate with the Google Cloud SDK.
To do this, type "gcloud auth login --no-launch-browser" like in Figure 3.

Figure 3: Authenticating with Google Cloud SDK tools

Cloud SDK will then give you a URL that you should copy and paste in your browser. You will then be asked to log in with your Google account and accept the permissions requested by the SDK as in Figure 4.

Figure 4: Google Cloud authorization via OAuth

Google will provide you with a verification code that you can cut and paste into the command line and, if everything works, you should be logged in. Next, set your project ID for this session by using the command "$ gcloud config set project YOURPROJECTID" as in Figure 5.

Figure 5: Setting your project ID

Now you need to download the set of scripts that will help you set up Hadoop in Google Cloud Storage. Make sure you do not close this command-line window because we are going to use it again. Download the Big Data utilities scripts to set up Hadoop in the Cloud here. Once you have downloaded the zip, unpack it and place it in whatever directory you want.

Now, in the command line, type "gsutil mb -p YOURPROJECTID gs://SOMEBUCKETNAME". If everything goes well, you should see something like Figure 6.

Figure 6: Creating your Google Cloud Storage bucket

YOURPROJECTID is the project ID you created or were assigned earlier, and SOMEBUCKETNAME is whatever you want your bucket to be called. Unfortunately, bucket names must be unique (read more here), so using something like your company domain name and some other unique identifier might be a good idea. If you do not pick a unique name, you will get an error.

Now go to the directory where you stored the Big Data utility scripts and open bdutil_env.sh in a text editor as in Figure 7.

Figure 7: Editing the bdutil_env.sh file

Now add your bucket name for the CONFIGBUCKET value in the file and your project ID for the PROJECT value like in Figure 8. Now save the file.
Figure 8: Editing the bdutil_env.sh file

Once you have edited the bdutil_env.sh file, you need to test that you can reach your compute instances via gcutil and ssh. Let's walk through that now. In Cygwin, create a test instance to play with and set up gcutil by typing the command "gcutil addinstance mytest", then hit Enter. You will be asked to select a time zone (I selected 6), a number of processors, and the like. Go ahead and select the items you want since, after we create this instance and connect to it, we will delete it. After you walk through the setup steps, Google will create your instance. During the creation, you will be asked for a passphrase. Make sure you use a passphrase you can remember.

Now, in the command line, type "gcutil ssh mytest". This will try to connect to your "mytest" instance via SSH, and if it's the first time you have done this, you will be asked to type in a passphrase. Do not type a passphrase; just leave it blank and select Enter. This will then create a public and private ssh key. If everything works, you should now connect to the instance and you will know gcutil ssh is working correctly. Go ahead and type "exit" and then "gcutil deleteinstance mytest" and select "y" for all questions. This will trigger the Google Cloud to destroy your test instance.

Now in Cygwin, navigate to where you placed the dbutils download. If you are not familiar with Cygwin, you can navigate to any directory on the C drive by using "cygdrive/c" and then setting the Unix-style path to your directory. So, for example, on my computer it would look like Figure 9.

Figure 9: Navigating to the dbutils folder in Cygwin

Now we can attempt a deployment of Hadoop by typing "./bdutil deploy" like in Figure 10.

Figure 10: Deploying Hadoop

The system will now try to deploy your Hadoop instance to the Cloud. You might be prompted to create a staging directory as well while the script is running. Go ahead and type "y" to accept.
You should now see a message saying "Deployment complete." It might take several minutes for your job to complete, so be patient. When it is finished, check to see whether your cluster is up by typing in "gcutil listinstances", where you will see something like what is shown in Figure 11.

Figure 11: A list of Hadoop instances running

From here, you need to test your deployment, which you do via the command "gcutil ssh --project=YOURPROJECTID hs-ghfs-nn < Hadoop-validate-setup.sh" like in Figure 12.

Figure 12: Validating the Hadoop deployment

If the script runs successfully, you should see an output like "teragen, terasort, teravalidate passed." From there, go ahead and delete the project by typing "./bdutil delete". This will delete the deployed virtual machines (VMs) and associated artifacts. When it's done, you should see the message "Done deleting VMs!"

Summary

In this two part blog post series, you learned how to use the Google Cloud SDK to set up Hadoop via Windows and Cygwin. Now you have Cygwin set up and configured to build, connect to the Google Cloud, set up instances, and deploy Hadoop. If you want even more Hadoop content, visit our Hadoop page. Featuring our latest releases and our top free Hadoop content, it's the centre of Packt's Big Data coverage.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, CISCO, IBM, and many others to help build out new products and services.
Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

How to Build a Koa Web Application - Part 1

Christoffer Hallas
15 Dec 2014
8 min read
You may be a seasoned or novice web developer, but no matter your level of experience, you must always be able to set up a basic MVC application. This two part series will briefly show you how to use Koa, a bleeding edge Node.js web application framework, to create a web application using MongoDB as its database. Koa has a low footprint and tries to be as unbiased as possible. For this series, we will also use Jade and Mongel, two Node.js libraries that provide HTML template rendering and MongoDB model interfacing, respectively. Note that this series requires you to use Node.js version 0.11+. At the end of the series, we will have a small and basic app where you can create pages with a title and content, list your pages, and view them. Let's get going!

Using NPM and Node.js

If you do not already have Node.js installed, you can download installation packages at the official Node.js website, http://nodejs.org. I strongly suggest that you install Node.js in order to code along with the article. Once installed, Node.js will add two new programs to your computer that you can access from your terminal; they're node and npm. The first program is the main Node.js program and is used to run Node.js applications, and the second program is the Node Package Manager and is used to install Node.js packages.

For this application, we start out in an empty folder by using npm to install four libraries:

$ npm install koa jade mongel co-body

Once this is done, open your favorite text editor and create an index.js file in the folder, in which we will now start creating our application. We start by using the require function to load the four libraries we just installed:

var koa = require('koa');
var jade = require('jade');
var mongel = require('mongel');
var parse = require('co-body');

This simply loads the functionality of the libraries into the respective variables.
This lets us create our Page model and our Koa app variables:

var Page = mongel('pages', 'mongodb://localhost/app');
var app = koa();

As you can see, we now use the variables mongel and koa that we previously loaded into our program using require. To create a model with mongel, all we have to do is give the name of our MongoDB collection and a MongoDB connection URI that represents the network location of the database; in this case, we're using a local installation of MongoDB and a database called app. It's simple to create a basic Koa application, and as seen in the code above, all we do is create a new variable called app that holds the result of calling the Koa library function.

Middleware, generators, and JavaScript

Koa uses a new feature in JavaScript called generators. Generators are not yet widely available in browsers, except for some versions of Google Chrome, but since Node.js is built on the same JavaScript engine as Google Chrome, it can use generators. A generator function is much like a regular JavaScript function, but it has the special ability to yield several values over its lifetime, along with the normal ability to return a single value. Some expert JavaScript programmers used this to create a new and improved way of writing asynchronous code in JavaScript, which is required when building a networked application such as a web application. Generators are a complex subject and we won't cover them in detail; we'll just show you how to use them in our small and basic app. In Koa, generators are used as something called middleware, a concept that may be familiar to you from other languages such as Ruby and Python. Think of middleware as a stack of functions through which an HTTP request must travel in order to create an appropriate response. Middleware should be created so that related functionality is encapsulated together. In our case, this means we'll be creating two pieces of middleware: one to create pages and one to list pages or show a page.
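Since generators may be unfamiliar, it helps to see one on its own before wiring it into Koa. The following snippet is independent of Koa and the rest of our app; it just demonstrates the asterisk syntax and how yield hands values back to the caller one at a time:

```javascript
// A minimal, standalone demonstration of a generator function.
// The asterisk marks the function as a generator; each call to
// next() runs the body until the next yield (or return).
function* numbers() {
  yield 1;
  yield 2;
  return 3; // a generator can still return a final value
}

var it = numbers();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: 3, done: true }
```

Note that on Node.js 0.11, generators are behind a flag, so you would run this with node --harmony; later Node.js versions support them out of the box.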
Let's create our first middleware:

app.use(function* (next) {
  ...
});

As you can see, we start by calling the app.use function, which takes a generator as its argument; this effectively pushes the generator onto the stack. To create a generator, we use a special function syntax in which an asterisk is added, as seen in the previous code snippet. We let our generator take a single argument called next, which represents the next middleware in the stack, if any. From here on, it is simply a matter of checking and responding to the parameters of the HTTP request, which are accessible to us in the Koa context. This is also the function context, which in JavaScript is the keyword this, similar to other languages and the keyword self:

if (this.path != '/create') {
  yield next;
  return;
}

Since we're creating middleware that helps us create pages, we make sure that this request is for the right path, in our case /create; if not, we use the yield keyword and the next argument to pass control of the program to the next middleware. Please note the return keyword that we also use; this is very important here, as the middleware would otherwise continue running while also passing control to the next middleware. This is not something you want to happen unless your middleware will not modify the Koa context or HTTP response, because subsequent middleware will always expect that they are now in control. Now that we have checked that the path is correct, we still have to check the method to see if we're just showing the form to create a page, or if we should actually create a page in the database:

if (this.method == 'POST') {
  var body = yield parse.form(this);
  var page = yield Page.createOne({
    title: body.title,
    contents: body.contents
  });
  this.redirect('/' + page._id);
  return;
} else if (this.method != 'GET') {
  this.status = 405;
  this.body = 'Method Not Allowed';
  return;
}

To check the method, we use the Koa context again and its method attribute.
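To see the shape of these path and method checks at a glance, the middleware's branching can be modeled as a plain function. This is an illustrative sketch only, not Koa code; the function name and the returned strings are made up for demonstration:

```javascript
// A simplified model of the middleware's dispatch decisions:
// given a path and method, decide what the middleware would do.
function dispatch(path, method) {
  if (path != '/create') return 'pass to next middleware';
  if (method == 'POST') return 'create page and redirect';
  if (method != 'GET') return '405 Method Not Allowed';
  return 'render the create form';
}

console.log(dispatch('/about', 'GET'));   // pass to next middleware
console.log(dispatch('/create', 'POST')); // create page and redirect
console.log(dispatch('/create', 'PUT'));  // 405 Method Not Allowed
console.log(dispatch('/create', 'GET'));  // render the create form
```

The real middleware follows exactly this order: path first, then method, with the GET case falling through to render the form.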
If we're handling a POST request, we now know that we should create a page, but this also means that we must extract extra information from the request. Koa does not process the body of a request, only the headers, so we use the co-body library that we downloaded earlier and loaded in as the parse variable. Notice how we yield on the parse.form function; this is because it is an asynchronous function, and we have to wait until it is done before we continue the program. Then we proceed to use our mongel model Page to create a page using the data we found in the body of the request; again, this is an asynchronous function, and we use yield to wait before we finally redirect the request using the page's database id. If it turns out the method was not POST, we still want to use this middleware to show the form that is actually used to issue the request. That means we have to make sure that the method is GET, so we added an else if statement to the original check, and if the request is neither POST nor GET, we respond with HTTP status 405 and the message Method Not Allowed, which is the appropriate response for this case. Notice how we don't yield next; this is because the middleware was able to determine a satisfying response for the request, and it requires no further processing. Finally, if the method was actually GET, we use the Jade library that we also installed using npm to render a create.jade template to HTML:

var html = jade.renderFile('create.jade');
this.body = html;

Notice how we set the Koa context's body attribute to the rendered HTML from Jade; all this does is tell Koa that we want to send it back to the browser that sent the request.

Wrapping up

You are well on your way to creating your Koa app. In Part 2, we will implement Jade templates and list and view pages. Ready for the next step? Read Part 2 here. Explore all of our top Node.js content in one place - visit our Node.js page today!
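As a closing aside, the way yield appears to pause for asynchronous calls such as parse.form deserves a quick look. A runner drives the generator and resumes it when each yielded asynchronous result settles; Koa relies on the co library for this internally. Below is a stripped-down sketch of that idea, with a made-up run helper and a fake asynchronous call standing in for parse.form:

```javascript
// A stripped-down generator runner: resume the generator each time
// a yielded promise settles. (Illustrative sketch only; Koa uses
// the co library for this internally.)
function run(generatorFn) {
  var it = generatorFn();
  function step(value) {
    var result = it.next(value);
    if (result.done) return;
    // Assume every yielded value is a promise; resume when it settles.
    result.value.then(step);
  }
  step();
}

function fakeParseForm() {
  // Stand-in for an async call such as parse.form(this).
  return Promise.resolve({ title: 'Hello', contents: 'World' });
}

run(function* () {
  var body = yield fakeParseForm();
  console.log('Parsed body title: ' + body.title); // Parsed body title: Hello
});
```

The generator body reads top to bottom like synchronous code, yet the work between yields happens asynchronously; that is exactly the effect Koa middleware exploits.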
About the author

Christoffer Hallas is a software developer and entrepreneur from Copenhagen, Denmark. He is a computer polyglot and contributes to and maintains a number of open source projects. When not contemplating his next grand idea (which remains an idea) he enjoys music, sports, and design of all kinds. Christoffer can be found on GitHub as hallas and on Twitter as @hamderhallas.
How to Auto-Scale Your Cloud with SaltStack

Nicole Thomas
15 Dec 2014
10 min read
What is SaltStack?

SaltStack is an extremely fast, scalable, and powerful remote execution engine and configuration management tool created to control distributed infrastructure, code, and data efficiently. At the heart of SaltStack, or "Salt", is its remote execution engine, a bi-directional, secure communication system administered through the use of a Salt Master daemon. This daemon is used to control Salt Minion daemons, which receive commands from the remote Salt Master. A major component of Salt's approach to configuration management is Salt Cloud, which was made to manage Salt Minions in cloud environments. The main purpose of Salt Cloud is to spin up instances on cloud providers, install a Salt Minion on the new instance using Salt's Bootstrap Script, and configure the new minion so it can immediately get to work. Salt Cloud makes it easy to get an infrastructure up and running quickly, and it supports an array of cloud providers such as OpenStack, Digital Ocean, Joyent, Linode, Rackspace, Amazon EC2, and Google Compute Engine, to name a few. Here is a full list of cloud providers supported by SaltStack and the automation features supported for each.

What is cloud auto scaling?

One of the most formidable benefits of cloud application hosting and data storage is the cloud infrastructure's capacity to scale as demand fluctuates. Many cloud providers offer auto scaling features that automatically increase or decrease the number of instances that are up and running in a user's cloud at any given time. These features generate new instances as needed to ensure optimal performance as activity escalates, while during idle periods, instances are destroyed to reduce costs. To harness the power of cloud auto scaling technologies, SaltStack provides two reactor formulas that integrate Salt's configuration management and remote execution capabilities with either Amazon EC2 Auto Scaling or Rackspace Auto Scale.
The Salt Cloud Reactor

Salt Formulas can be very helpful in the rapid build-out of management frameworks for cloud infrastructures. Formulas are pre-written Salt States that can be used to configure services, install packages, or perform any other common configuration management task. The Salt Cloud Reactor is a formula that allows Salt to interact with supported Salt Cloud providers who provide cloud auto scaling features. (Note: at the time this article was written, the only Salt Cloud providers with supported cloud auto scaling capabilities were Rackspace Auto Scale and Amazon EC2 Auto Scaling. The Salt Cloud Reactor can also be used directly with EC2 Auto Scaling, but it is recommended that the EC2 Autoscale Reactor be used instead, as discussed in the following section.)

The Salt Cloud Reactor allows SaltStack to know when instances are spawned or destroyed by the cloud provider. When a new instance comes online, a Salt Minion is automatically installed and the minion's key is accepted by the Salt Master. If the configuration for the minion contains the appropriate startup state, it will configure itself and start working on its tasks. Accordingly, when an instance is deleted by the cloud provider, the minion's key is removed from the Salt Master. In order to use the Salt Cloud Reactor, the Salt Master must be configured appropriately. In addition to applying all necessary settings on the Salt Master, a Salt Cloud query must be executed on a regular basis. The query polls data from the cloud provider to collect changes in the auto scaling sequence, as cloud providers using the Salt Cloud Reactor do not directly trigger notifications to Salt upon instance creation and deletion. The cloud query must be issued via a scheduling system such as cron or the Salt Scheduler.
Once the Salt Master has been configured and query scheduling has been implemented, the reactor will manage itself and allow the Salt Master to interact with any Salt Minions created or destroyed by the auto scaling system.

The EC2 Autoscale Reactor

Salt's EC2 Autoscale Reactor enables Salt to collaborate with Amazon EC2 Auto Scaling. Similarly to the Salt Cloud Reactor, the EC2 Autoscale Reactor will bootstrap a Salt Minion on any newly created instances, and the Salt Master will automatically accept the new minion's key. Additionally, when an EC2 instance is destroyed, the Salt Minion's key will be automatically removed from the Salt Master. However, the EC2 Autoscale Reactor formula differs from the Salt Cloud Reactor formula in one major way: Amazon EC2 provides notifications directly to the reactor when the EC2 cloud is scaled up or down, making it easy for Salt to immediately bootstrap new instances with a Salt Minion, or to delete old Salt Minion keys from the master. This behavior, therefore, does not require any kind of scheduled query to poll EC2 for changes in scale like the Salt Cloud Reactor demands. Changes to the EC2 cloud can be acted upon by the Salt Master immediately, whereas clouds using the Salt Cloud Reactor may experience a delay between an instance being created and the Salt Master bootstrapping it with a new minion.

Configuring the EC2 Autoscale Reactor

Both of the cloud auto scaling reactors were only recently added to the SaltStack arsenal, and as such, the Salt develop branch is required to set up any auto scaling capabilities. To get started, clone the Salt repository from GitHub onto the machine serving as the Salt Master:

git clone https://github.com/saltstack/salt

Depending on the operating system you are using, there are a few dependencies that also need to be installed to run SaltStack from the develop branch. Check out the Installing Salt for Development documentation for OS-specific instructions.
Once Salt has been installed for development, the Salt Master needs to be configured. First, create the default salt directory in /etc:

mkdir /etc/salt

The default Salt Master configuration file resides in salt/conf/master. Copy this file into the new salt directory:

cp path/to/salt/conf/master /etc/salt/master

The Salt Master configuration file is completely commented out, as the default configuration for the master will work on most systems. However, some additional settings must be configured to enable the EC2 Autoscale Reactor to work with the Salt Master. Under the external_auth section of the master configuration file, replace the commented out lines with the following:

external_auth:
  pam:
    myuser:
      - .*
      - '@runner'
      - '@wheel'

rest_cherrypy:
  port: 8080
  host: 0.0.0.0
  webhook_url: /hook
  webhook_disable_auth: True

reactor:
  - 'salt/netapi/hook/ec2/autoscale':
    - '/srv/reactor/ec2-autoscale.sls'

ec2.autoscale:
  provider: my-ec2-config
  ssh_username: ec2-user

These settings allow the Salt API web hook system to interact with EC2. When a web request is received from EC2, the Salt API will execute an event for the reactor system to respond to. The final ec2.autoscale setting points the reactor to the corresponding Salt Cloud provider configuration file. If authenticity problems with the reactor's web hook occur, an email notification from Amazon will be sent to the user. To configure the Salt Master to connect to a mail server, see the example SMTP settings in the EC2 Autoscale Reactor documentation. Next, the Salt Cloud provider configuration file must be created.
First, create the cloud provider configuration directory:

mkdir /etc/salt/cloud.providers.d

In /etc/salt/cloud.providers.d, create a file named ec2.conf, and set the following configurations according to your Amazon EC2 account:

my-ec2-config:
  id: <my aws id>
  key: <my aws key>
  keyname: <my aws key name>
  securitygroup: <my aws security group>
  private_key: </path/to/my/private_key.pem>
  location: us-east-1
  provider: ec2
  minion:
    master: saltmaster.example.com

The last line, master: saltmaster.example.com, represents the location of the Salt Master so each Salt Minion knows where to connect once it's up and running. To set up the actual reactor, create a new reactor directory, download the ec2-autoscale-reactor formula, and copy the reactor formula into the new directory, like so:

mkdir /srv/reactor
cp path/to/downloaded/package/ec2-autoscale.sls /srv/reactor/ec2-autoscale.sls

The last major configuration step is to configure all of the appropriate settings on your EC2 account. First, log in to your AWS account and set up SNS HTTP(S) notifications by selecting SNS (Push Notification Service) from the AWS Console. Click Create New Topic, enter a topic name and a display name, and click the Create Topic button. Then, inside the Topic Details area, click Create Subscription. Choose HTTP or HTTPS as needed and enter the web hook for the Salt API. Assuming your Salt Master is set up at https://saltmaster.example.com, the final web hook endpoint will be: https://saltmaster.example.com/hook/ec2/autoscale. Finally, click Subscribe. Next, set up the launch configurations by choosing EC2 (Virtual Servers in the Cloud) from the AWS Console. Then, select Launch Configurations on the left-hand side. Click Create Launch Configuration and follow the prompts to define the appropriate settings for your cloud. Finally, on the review screen, click Create Launch Configuration to save your settings.
Once the launch configuration is set up, click Auto Scaling Groups from the left-hand navigation menu to create auto scaling variables such as the minimum and maximum number of instances your cloud should contain. Click Create Auto Scaling Group, choose Create an Auto Scaling group from an existing launch configuration, select the appropriate configuration, and then click Next Step. From there, follow the prompts until you reach the Configure Notifications screen. Click Add Notification and choose the notification setting that was configured during the SNS configuration step. Finally, complete the rest of the prompts. Congratulations! At this point, you should have successfully configured SaltStack to work with EC2 Auto Scaling!

Salt Scheduler

As mentioned in the Salt Cloud Reactor section, some type of scheduling system must be implemented when using the Salt Cloud Reactor formula. SaltStack provides its own scheduler, which can be used by adding the following state to the Salt Master's configuration file:

schedule:
  job1:
    function: cloud.full_query
    seconds: 300

Here, the seconds setting ensures that the Salt Master will perform a salt-cloud --full-query command every 5 minutes. A minimum value of 300 seconds or greater is recommended; however, the value can be changed as necessary.

Salting instances from the web interface

Another exciting quality of Salt's auto scale reactor formulas is that once a reactor is configured, the respective cloud provider's web interface can be used to spin up new instances that are automatically "Salted". Since the reactor integrates with the web interface to automatically install a Salt Minion on any new instances, it will perform the same operations when instances are created manually via the web interface. The same functionality holds for manually deleting instances: if an instance is manually destroyed via the web interface, the corresponding minion's key will be removed from the Salt Master.
More resources

For troubleshooting, more configuration options, or SaltStack specifics, SaltStack has many helpful resources such as the SaltStack, Salt Cloud, Salt Cloud Reactor, and EC2 Autoscale Reactor documentation. SaltStack also has a thriving, active, and friendly open source community.

About the author

Nicole Thomas is a QA Engineer at SaltStack, Inc. Before coming to SaltStack, she wore many hats, from web and Android developer to contributing editor to working in environmental education. Nicole recently graduated summa cum laude from Westminster College with a degree in Computer Science. She also has a degree in Environmental Studies from the University of Utah.
Orchestrate Multiple Docker Containers Simply Using Fig

Felix Rabe
15 Dec 2014
7 min read
When you start learning how to use Docker, you play around running a single container in a single project. Soon after, you want to start multiple Docker containers in multiple projects. A nifty little tool called Fig helps you do just that.

Sneak preview

At the end of this blog post, you will have the following small sample app running on Docker, using a MongoDB database and a Node.js web server in two separate containers. But first, some background.

Bash vs Fig

Before this article, I had written my own shell script for the setup of named-data.education. But as I was exploring the realm of Docker orchestration, I came to the conclusion that throwing my script out and going with an existing solution would be the better practice. I ended up choosing Fig because it supports (and simplifies) the workflow I had implemented with my custom shell script. Also, it was recently acquired by Docker, and will soon be integrated with Docker.

What does Fig do for you?

It builds, runs, and removes multiple containers together in a single command.
It keeps docker command-line arguments out of sight and inside the fig.yml file. This reduces long docker run … commands to a simple fig up command, similar to vagrant up.
It avoids naming conflicts by giving image and container names project-specific prefixes, derived from the name of the directory that contains the fig.yml file.
It knows the state the application environment is in, so docker ps ; docker stop ; docker rm just becomes fig up, which restarts running containers transparently.
The fig.yml file is much more readable and maintainable than an equivalent shell script would be.
Life cycle of a single Docker container (without Fig)

This state diagram illustrates the life cycle of a typical Docker container:

States

source - There is a Dockerfile, but nothing was done with it
built - A Docker image was built from the Dockerfile
running - A Docker container was started from the Docker image
stopped - The Docker container has been stopped or it has stopped on its own

Transitions

These actually correspond to Docker commands, the most prominent ones being build and run:

build - Takes a Dockerfile and creates a Docker image
run - Takes a Docker image and runs a command in a container
stop - Stops a running Docker container, or the container dies
rm - Removes a stopped Docker container
rmi - Removes a Docker image

Life cycle of multiple Docker containers with Fig

This diagram illustrates how Fig orchestrates multiple Docker containers. These transitions also correspond to Fig commands, with fig up being the champion here. (There is also a fig run command, but it has a marginal role in comparison.) Okay, let's get practical.

A web app and a database

As an example, let's say your app is really simple and consists of just a database and a web frontend. In Docker, this architecture is implemented by running each part in a separate container. The external connection, between NodeJS and the Internet, is realized by exposing a port, whereas the internal connection, between NodeJS and MongoDB, is realized using a link.

Application source code

For this sample application, create a directory with the following layout with the listings shown below, or get the source code from GitHub:

fig-nodejs-mongodb-example/
  fig.yml
  web/
    Dockerfile
    liststorage.coffee
    package.json
    server.coffee

fig.yml:

web:
  build: ./web
  ports:
    - "8080:8080"
  links:
    - db

db:
  image: mongo:2.6

Dockerfile:

FROM node:0.10
ADD package.json /code/
WORKDIR /code
RUN npm install
ADD . /code
CMD ["./node_modules/.bin/coffee", "./server.coffee"]

liststorage.coffee:

{MongoClient, ObjectID} = require 'mongodb'

class module.exports.ListStorage
  constructor: ->
    @ready = false
    @collection = null
    MongoClient.connect 'mongodb://db_1:27017/list', (err, db) =>
      throw err if err
      db.createCollection 'list', (err, collection) =>
        throw err if err
        @ready = true
        @collection = collection

  toArray: (callback) ->
    return callback new Error 'not ready' unless @ready
    @collection.find().toArray (err, list) ->
      return callback err if err
      callback null, list

  push: (item, callback) ->
    doc = item: item
    @collection.insert doc, {w: 1}, (err, result) ->
      return callback err if err
      callback null

  remove: (_id, callback) ->
    @collection.remove {_id: ObjectID(_id)}, {w: 1}, (err, result) ->
      return callback err if err
      callback null

package.json: (The bare minimum is to make npm install happy, but normally npm init should be used to create this file.)

{
  "dependencies": {
    "body-parser": "^1.6.6",
    "coffee-script": "^1.8.0",
    "express": "^4.8.5",
    "handlebars": "^2.0.0-beta.1",
    "mongodb": "^1.4.9"
  }
}

server.coffee:

#!/usr/bin/env coffee

require 'coffee-script/register'

ListStorage = require('./liststorage').ListStorage
listStorage = new ListStorage

handlebars = require 'handlebars'
indexTemplate = handlebars.compile '''
  <title>List</title>
  <h1>List</h1>
  <ul>
  {{#each items}}
    <li><a href="/delete/{{_id}}">[&times;]</a> {{item}}</li>
  {{/each}}
  </ul>
  <form method="POST">
    <label>Add something:</label>
    <input name="item" autofocus="autofocus" />
    <input type="submit" value="Submit" />
  </form>
'''

express = require 'express'
bodyParser = require 'body-parser'

app = express()
app.use bodyParser.urlencoded extended: true

app.get '/', (req, res) ->
  listStorage.toArray (err, items) ->
    throw err if err
    res.send indexTemplate items: items

app.post '/', (req, res) ->
  listStorage.push req.body.item, (err) ->
    throw err if err
    res.redirect '/'

app.get '/delete/:_id', (req, res) ->
  listStorage.remove req.params._id, (err) ->
    throw err if err
    res.redirect '/'

app.listen 8080

Start it up

First, make sure you have Docker and Fig installed. (This example has been tested with Fig 0.5.2 and Docker 1.2.0. On OS X, brew install fig works fairly well, together with docker-osx or boot2docker.) Then, open the terminal and type the following:

cd fig-nodejs-mongodb-example
fig up -d

Then (assuming you run docker-osx), open http://localdocker:8080/ and play around - knowing that you did not have to manually set up two virtual machines!

Other commands you might wish to use

fig up (Ctrl-C will stop this from running)
fig logs - for logging
fig stop - for stopping
fig rm - for removing
fig ps - for status

Remarks

The sections in fig.yml are called "services." As Fig allows scaling by running multiple instances of services, link aliases get an additional suffix inside containers compared to plain Docker: db_1 for Fig versus just db for Docker. Also, as Docker (and Fig) manage the container's /etc/hosts file, you get a db_1 host for free. Now, go have fun and keep containin'!

About the author

Felix Rabe has been programming and working with different technologies and companies at different levels since 1993. Currently, he is researching and promoting Named Data Networking (http://named-data.net/), an evolution of the Internet architecture that currently relies on the host-bound Internet Protocol.
QGIS Feature Selection Tools

Packt
05 Dec 2014
4 min read
In this article by Anita Graser, the author of Learning QGIS, Third Edition, we will cover the following topics:

Selecting features with the mouse
Selecting features using expressions
Selecting features using spatial queries

(For more resources related to this topic, see here.)

Selecting features with the mouse

The first group of tools in the Attributes toolbar allows us to select features on the map using the mouse. The following screenshot shows the Select Feature(s) tool. We can select a single feature by clicking on it or select multiple features by drawing a rectangle. The other tools can be used to select features by drawing different shapes: polygons, freehand areas, or circles around the features. All features that intersect with the drawn shape are selected. Holding down the Ctrl key will add the new selection to an existing one. Similarly, holding down Ctrl + Shift will remove the new selection from the existing selection.

Selecting features by expression

The second type of select tool is called Select by Expression, and it is also available in the Attributes toolbar. It selects features based on expressions that can contain references and functions using feature attributes and/or geometry. The list of available functions is pretty long, but we can use the search box to filter the list by name to find the function we are looking for faster. On the right-hand side of the window, we will find Selected Function Help, which explains the functionality and how to use the function in an expression. The Function List option also shows the layer attribute fields, and by clicking on Load all unique values or Load 10 sample values, we can easily access their content. As with the mouse tools, we can choose between creating a new selection or adding to or deleting from an existing selection. Additionally, we can choose to only select features from within an existing selection.
Let's have a look at some example expressions that you can build on and use in your own work:

Using the lakes.shp file in our sample data, we can, for example, select big lakes with an area bigger than 1,000 square miles using a simple attribute query, "AREA_MI" > 1000.0, or using geometry functions such as $area > (1000.0 * 27878400). Note that the lakes.shp CRS uses feet, and we therefore have to multiply by 27,878,400 to convert from square feet to square miles. The dialog will look like the one shown in the following screenshot.

We can also work with string functions, for example, to find lakes with long names, such as length("NAMES") > 12, or lakes with names that contain the s or S character, such as lower("NAMES") LIKE '%s%', which first converts the names to lowercase and then looks for any appearance of s.

Selecting features using spatial queries

The third type of tool is called Spatial Query and allows us to select features in one layer based on their location relative to the features in a second layer. These tools can be accessed by going to Vector | Research Tools | Select by location and by going to Vector | Spatial Query | Spatial Query. Enable it in Plugin Manager if you cannot find it in the Vector menu. In general, we want to use the Spatial Query plugin, as it supports a variety of spatial operations such as crosses, equals, intersects, is disjoint, overlaps, touches, and contains, depending on the layer's geometry type. Let's test the Spatial Query plugin using railroads.shp and pipelines.shp from the sample data. For example, we might want to find all the railroad features that cross a pipeline; we will, therefore, select the railroads layer, the Crosses operation, and the pipelines layer. After clicking on Apply, the plugin presents us with the query results. There is a list of IDs of the result features on the right-hand side of the window, as you can see in the following screenshot.
Below this list, we can select the Zoom to item checkbox, and QGIS will zoom to the feature that belongs to the selected ID. Additionally, the plugin offers buttons to directly save all the resulting features to a new layer.

Summary

This article introduced you to three ways to select features in QGIS: selecting features with the mouse, selecting features using expressions, and selecting features using spatial queries.

Resources for Article:

Further resources on this subject:
Editing attributes [article]
Server Logs [article]
Improving proximity filtering with KNN [article]
Solving Problems With Spring Boot

Greg Turnquist
05 Dec 2014
4 min read
I first became aware of Spring Boot early in 2013. At the time, we were rebuilding Spring's website and decided to write a collection of guides that could be consumed during a single lunch break (at spring.io/guides). Realizing how much code Spring Boot saved us from writing (and explaining to readers) thanks to its auto-configuration feature, we embraced it fully. In fact, it led me to write several patches for Spring Boot to help with several of the guides I was writing. Discovering that boilerplate Spring code was unnecessary was incredibly exciting and very effective. Another of Spring Boot's amazing features was its support for properties. This was something I learned more about when I attended Dave Syer and Phil Webb's presentation at the 2013 SpringOne conference. The room was packed with attendees; the keynote presentation of Spring Boot from the night before had whetted many appetites. I learned that not only did Spring Boot provide the means to inject critical values into auto-configured beans, it also had strong support for configuration on any platform through different naming conventions. You could also override embedded property settings at any stage, even after deployment into production. At my previous company, I had built something similar by hand, but never as sophisticated. Given Java's frankly ineffective property APIs, it doesn't surprise me how much people like Spring Boot's solution to this. Another golden feature is Spring Boot's library of built-in actuators. These include metrics, controls, and reports. In a production environment, this type of material is critical. Not having to build it up for the nth time, as I have done in the past, makes it a killer feature to me. And Spring Boot's support for adding your own metrics and management endpoints is really cool. Every feature Spring Boot provides makes it incredibly appealing to move quickly into coding features and deploying them into real apps.
Things are simpler and more aligned with what is needed when you deploy and maintain an app. You don't change gears when production time comes. Spring Boot doesn't load me down with complex XML configuration files. Instead, I can configure things with code and simple properties. But when I need to customize something special, like a view resolver, Spring Boot gets out of the way by withdrawing its auto-configured one.

I was speaking with a colleague a couple of months ago and discovered he was helping a local school to set up a computer science workshop. Thanks to Spring Boot, he gave them advice on setting up some exercises where the students would be able to immediately start writing the code that displays "Hello World" on a web page. They wouldn't have to start by installing a build system or standing up an application server. The thought of going through all those extras sounds really boring; I can only imagine how that would dampen the spirits of kids just getting started. Being able to move right into application development instead, and seeing results within minutes, sounds more exciting than ever.

So, taking all this into account, I started writing my proposal about Spring Boot earlier this year. Within two weeks of that, Packt reached out to me about writing a Spring Framework oriented book. I immediately polished up my proposal and responded with it. It didn't take Packt long (24 hours, perhaps?) to leap at the idea. We hammered out the specifics in less than a week and got moving quickly. I have never been more excited about writing.

Given that I have been reading blog articles about Spring Boot for almost two years, I have seen so many examples of how people are solving problems, not just building toy apps. I decided to tilt my book towards solving those problems and show how Spring Boot really is the innovative answer to modern application development. I hope everyone is able to enjoy it.
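To make the property support concrete, here is a minimal, hedged sketch. The `greeting.message` key and its values are hypothetical, invented purely for illustration; only the override mechanism itself is Spring Boot's. A default is baked into application.properties inside the jar:

```properties
# src/main/resources/application.properties -- hypothetical defaults
server.port=8080
greeting.message=Hello from the dev defaults
```

At deployment time, the packaged app can override the same key without a rebuild, for example with a command-line argument (`java -jar app.jar --greeting.message="Hello from production"`) or an environment variable such as `GREETING_MESSAGE`; Spring Boot's relaxed binding maps these different naming conventions onto the same property.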
About the author

Greg Turnquist is a test-bitten script junkie. He is a member of the Spring team at Pivotal. He works on Spring Data REST, Spring Boot, and other Spring projects, while also working as an editor-at-large of Spring's Getting Started guides. He launched the Nashville JUG in 2010. He also created Spring Python and wrote Spring Python 1.1 and Python Testing Cookbook for Packt. He has been a Spring fan for years.
Building a Remote-controlled TV with Node-Webkit

Roberto González
04 Dec 2014
14 min read
Node-webkit is one of the most promising technologies to come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux just using HTML, CSS, and some JavaScript. These are the exact same languages you use to build any web app. You basically get your very own Frameless Webkit to build your app, which is then supercharged with NodeJS, giving you access to some powerful libraries that are not available in a typical browser. As a demo, we are going to build a remote-controlled Youtube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch. You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app. Getting started First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (Node.JS Package Manager), which lets you install everything you need for this project. Since we are going to be building two apps (a desktop app and a mobile app), it’s better if we get the boring HTML+CSS part out of the way, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project’s folder youtube-tv  or whatever you want. 
The folder should look like this:

    - index.html   // This is the starting point for our desktop app
    - css          // Our desktop app styles
    - js           // This is where the magic happens
    - remote       // This is where the magic happens (Part 2)
    - libraries    // FFMPEG libraries, which give you H.264 video support in Node-Webkit
    - player       // Our youtube player
    - Gruntfile.js // Build scripts
    - run.bat      // run.bat runs the app on Windows
    - run.sh       // sh run.sh runs the app on Mac

Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. We'll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install:

On Mac or Linux:

    sudo npm install node-gyp -g
    sudo npm install grunt-cli -g

On Windows:

    npm install node-gyp -g
    npm install grunt-cli -g

Leave the Terminal open. We'll be using it again in a bit.

All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format:

    {
      "//": "The // keys in package.json are comments.",

      "//": "Your project's name. Go ahead and change it!",
      "name": "Remote",
      "//": "A simple description of what the app does.",
      "description": "An example of node-webkit",
      "//": "This is the first html the app will load. Just leave this this way",
      "main": "app://host/index.html",
      "//": "The version number. 0.0.1 is a good start :D",
      "version": "0.0.1",

      "//": "This is used by Node-Webkit to set up your app.",
      "window": {
        "//": "The Window Title for the app",
        "title": "Remote",
        "//": "The Icon for the app",
        "icon": "css/images/icon.png",
        "//": "Do you want the File/Edit/Whatever toolbar?",
        "toolbar": false,
        "//": "Do you want a standard window around your app (a title bar and some borders)?",
        "frame": true,
        "//": "Can you resize the window?",
        "resizable": true
      },
      "webkit": {
        "plugin": false,
        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36"
      },

      "//": "These are the libraries we'll be using:",
      "//": "Express is a web server, which will handle the files for the remote",
      "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.",
      "dependencies": {
        "express": "^4.9.5",
        "socket.io": "^1.1.0"
      },

      "//": "And these are just task handlers to make things easier",
      "devDependencies": {
        "grunt": "^0.4.5",
        "grunt-contrib-copy": "^0.6.0",
        "grunt-node-webkit-builder": "^0.1.21"
      }
    }

You'll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look into it, but it's mostly boilerplate code.

Once you've set everything up, go back to the Terminal and install everything you need by typing:

    npm install
    grunt nodewebkitbuild

You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild.

npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-node-webkit-builder, which downloads the Windows and Mac versions of node-webkit, sets them up so they can play videos, and builds the app. Wait a bit for everything to install properly and we're ready to get started.
Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it.

Building the desktop app

All web apps (or websites for that matter) start with an index.html file. We are going to be creating just that to get our app to run:

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8" />
      <title>Youtube TV</title>

      <link href='http://fonts.googleapis.com/css?family=Roboto:500,400' rel='stylesheet' type='text/css' />
      <link href="css/normalize.css" rel="stylesheet" type="text/css" />
      <link href="css/styles.css" rel="stylesheet" type="text/css" />
    </head>
    <body>

      <div id="serverInfo">
        <h1>Youtube TV</h1>
      </div>

      <div id="videoPlayer">
      </div>

      <script src="js/jquery-1.11.1.min.js"></script>
      <script src="js/youtube.js"></script>
      <script src="js/app.js"></script>

    </body>
    </html>

As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a Youtube video player, and finally app.js, which contains our app's logic. Let's dive into that!

First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search Youtube, select a video, and have some play/pause controls so we don't have any good reasons to get up from the couch. Open js/app.js and type the following:

    // Show the Developer Tools. And yes, Node-Webkit has developer tools built in!
    // Uncomment it to open it automatically
    //require('nw.gui').Window.get().showDevTools();

    // Express is a web server, which will allow us to create a small web app with which to control the player
    var express = require('express');
    var app = express();
    var server = require('http').Server(app);
    var io = require('socket.io')(server);

    // We'll be opening up our web server on Port 8080 (which doesn't require root privileges)
    // You can access this server at http://127.0.0.1:8080
    var serverPort = 8080;
    server.listen(serverPort);

    // All the static files (css, js, html) for the remote will be served using Express.
    // These assets are in the /remote folder
    app.use('/', express.static('remote'));

With those 7 lines of code (not counting comments), we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let's set that up next in app.js:

    // Socket.io handles the communication between the remote and our app in real time,
    // so we can instantly send commands from a computer to our remote and back
    io.on('connection', function (socket) {

      // When a remote connects to the app, let it know immediately the current status of the video (play/pause)
      socket.emit('statusChange', Youtube.status);

      // This is what happens when we receive the watchVideo command (picking a video from the list)
      socket.on('watchVideo', function (video) {
        // video contains a bit of info about our video (id, title, thumbnail)
        // Order our Youtube Player to watch that video
        Youtube.watchVideo(video);
      });

      // These are playback controls. They receive the "play" and "pause" events from the remote
      socket.on('play', function () {
        Youtube.playVideo();
      });

      socket.on('pause', function () {
        Youtube.pauseVideo();
      });

    });

    // Notify all the remotes when the playback status changes (play/pause)
    // This is done with io.emit, which sends the same message to all the remotes
    Youtube.onStatusChange = function(status) {
      io.emit('statusChange', status);
    };

That's the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handle some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect, so they can update their UI with the correct buttons (if it's playing, show the pause button and vice versa). Now we just need to build the remote.

Building the remote control

The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it's able to communicate with our app.
In remote/index.html, add the following HTML:

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8" />
      <title>TV Remote</title>

      <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />

      <link rel="stylesheet" href="/css/normalize.css" />
      <link rel="stylesheet" href="/css/styles.css" />
    </head>
    <body>

      <div class="controls">
        <div class="search">
          <input id="searchQuery" type="search" value="" placeholder="Search on Youtube..." />
        </div>
        <div class="playback">
          <button class="play">&gt;</button>
          <button class="pause">||</button>
        </div>
      </div>

      <div id="results" class="video-list">
      </div>

      <div class="__templates" style="display:none;">
        <article class="video">
          <figure><img src="" alt="" /></figure>
          <div class="info">
            <h2></h2>
          </div>
        </article>
      </div>

      <script src="/socket.io/socket.io.js"></script>
      <script src="/js/jquery-1.11.1.min.js"></script>

      <script src="/js/search.js"></script>
      <script src="/js/remote.js"></script>

    </body>
    </html>

Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the Youtube API (you can take a look if you want), and remote.js handles the logic for the remote.

The remote itself is pretty simple. It can look for videos on Youtube, and when we click on a video it connects with the app, telling it to play the video with socket.emit. Let's dive into remote/js/remote.js to make this thing work:

    // First of all, connect to the server (our desktop app)
    var socket = io.connect();

    // Search youtube when the user stops typing. This gives us an automatic search.
    var searchTimeout = null;
    $('#searchQuery').on('keyup', function(event){
      clearTimeout(searchTimeout);
      searchTimeout = setTimeout(function(){
        searchYoutube($('#searchQuery').val());
      }, 500);
    });

    // When we click on a video, watch it on the App
    $('#results').on('click', '.video', function(event){
      // Send an event to notify the server we want to watch this video
      socket.emit('watchVideo', $(this).data());
    });

    // When the server tells us that the player changed status (play/pause), alter the playback controls
    socket.on('statusChange', function(status){
      if( status === 'play' ) {
        $('.playback .pause').show();
        $('.playback .play').hide();
      } else if( status === 'pause' || status === 'stop' ) {
        $('.playback .pause').hide();
        $('.playback .play').show();
      }
    });

    // Notify the app when we hit the play button
    $('.playback .play').on('click', function(event){
      socket.emit('play');
    });

    // Notify the app when we hit the pause button
    $('.playback .pause').on('click', function(event){
      socket.emit('pause');
    });

This is very similar to our server, except we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handling our basic play/pause controls. The only thing left to do is make the app run. Ready? Go to the terminal again and type:

If you are on a Mac:

    sh run.sh

If you are on Windows:

    run.bat

If everything worked properly, you should see the app, and if you open a web browser at http://127.0.0.1:8080, the remote client will open up. Search for a video, pick anything you like, and it'll play in the app. This also works if you point any other device on the same network to your computer's IP, which brings me to the next (and last) point.

Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone).
On js/app.js, add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

    // Find the local IP
    function getLocalIP(callback) {
      require('dns').lookup( require('os').hostname(),
        function (err, add, fam) {
          typeof callback == 'function' ? callback(add) : null;
        });
    }

    // To make things easier, find out the machine's ip and communicate it
    getLocalIP(function(ip){
      $('#serverInfo h1').html('Go to<br/><strong>http://'+ip+':'+serverPort+'</strong><br/>to open the remote');
    });

The next time you run the app, the first thing you'll see is the IP for your computer, so you just need to type that URL into your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are on the same Wi-Fi network).

That's it! You can start expanding on this to improve the app. Why not open the app fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.

Summary

While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products". He can be reached at @robertcode.

test456456

Packt
02 Dec 2014
28 min read
Advanced Techniques and Reflection

In this chapter, we will discuss the flexibility and reusability of your code with the help of advanced techniques in Dart. Generic programming is widely useful and is about making your code type-unaware. Using types and generics makes your code safer and allows you to detect bugs early. The debate over errors versus exceptions splits developers into two sides. Which side to choose? It doesn't matter, if you know the secret of using both. Annotation is another advanced technique used to decorate existing classes at runtime to change their behavior. Annotations can help reduce the amount of boilerplate code needed to write your applications. And last but not least, we will open Pandora's box through the mirrors of reflection. In this chapter, we will cover the following topics:

- Generics
- Errors versus exceptions
- Annotations
- Reflection

Generics

Dart originally came with generics—a facility of generic programming. We have to tell the static analyzer the permitted type of a collection so it can inform us at compile time if we insert a wrong type of object. As a result, programs become clearer and safer to use. We will discuss how to effectively use generics and minimize the complications associated with them.

Raw types

Dart supports arrays in the form of the List class. Let's say you use a list to store data. The data that you put in the list depends on the context of your code. The list may contain different types of data at the same time, as shown in the following code:

    // List of data
    List raw = [1, "Letter", {'test':'wrong'}];
    // Ordinary item
    double item = 1.23;

    void main() {
      // Add the item to array
      raw.add(item);
      print(raw);
    }

In the preceding code, we assigned data of different types to the raw list. When the code executes, we get the following result:

    [1, Letter, {test: wrong}, 1.23]

So what's the problem with this code? There is no problem.
In our code, we intentionally used the default raw list class in order to store items of different types. But such situations are very rare. Usually, we keep data of a specific type in a list. How can we prevent inserting the wrong data type into the list? One way is to check the data type each time we read or write data to the list, as shown in the following code:

    // Array of String data
    List parts = ['wheel', 'bumper', 'engine'];
    // Ordinary item
    double item = 1.23;

    void main() {
      if (item is String) {
        // Add the item to array
        parts.add(item);
      }
      print(parts);
    }

Now, from the following result, we can see that the code is safer and works as expected:

    [wheel, bumper, engine]

The code becomes more complicated with those extra conditional statements. What should you do when you add the wrong type to the list and it throws exceptions? What if you forget to insert an extra conditional statement? This is where generics come to the fore.

Instead of writing a lot of type checks and class casts when manipulating a collection, we tell the static analyzer what type of object the list is allowed to contain. Here is the modified code, where we specify that parts can only contain strings:

    // Array of String data
    List<String> parts = ['wheel', 'bumper', 'engine'];
    // Ordinary item
    double item = 1.23;

    void main() {
      // Add the item to array
      parts.add(item);
      print(parts);
    }

Now, List<String> is a generic class with the String parameter. Dart Editor invokes the static analyzer to check the types in the code for potential problems at compile time and alert us if we try to insert a wrong type of object in our collection, as shown in the following screenshot:

This helps us make the code clearer and safer because the static analyzer checks the type of the collection at compile time. The important point is that you shouldn't use raw types.
As a bonus, we can use a whole bunch of shorthand methods to organize iteration through the list of items and cast more safely. Bear in mind that the static analyzer only warns about potential problems and doesn't generate any errors. Dart checks the types of generic classes only in checked mode. Execution in production mode, or code compiled to JavaScript, loses all the type information.

Using generics

Let's discuss how to make the transition to using generics in our code with some real-world examples. Assume that we have the following AssemblyLine class:

    part of assembly.room;

    // AssemblyLine.
    class AssemblyLine {
      // List of items on line.
      List _items = [];

      // Add [item] to line.
      add(item) {
        _items.add(item);
      }

      // Make operation on all items in line.
      make(operation) {
        _items.forEach((item) {
          operation(item);
        });
      }
    }

Also, we have a set of different kinds of cars, as shown in the following code:

    part of assembly.room;

    // Car
    abstract class Car {
      // Color
      String color;
    }

    // Passenger car
    class PassengerCar extends Car {
      String toString() => "Passenger Car";
    }

    // Truck
    class Truck extends Car {
      String toString() => "Truck";
    }

Finally, we have the following assembly.room library with a main method:

    library assembly.room;

    part 'assembly_line.dart';
    part 'car.dart';

    operation(car) {
      print('Operate ${car}');
    }

    main() {
      // Create passenger assembly line
      AssemblyLine passengerCarAssembly = new AssemblyLine();
      // We can add passenger car
      passengerCarAssembly.add(new PassengerCar());
      // We can occasionally add Truck as well
      passengerCarAssembly.add(new Truck());
      // Operate
      passengerCarAssembly.make(operation);
    }

In the preceding example, we were able to add the occasional truck to the assembly line for passenger cars without any problem, and we get the following result:

    Operate Passenger Car
    Operate Truck

This seems a bit farfetched since, in real life, we can't assemble passenger cars and trucks on the same assembly line. So to make your solution safer, you need to make the AssemblyLine type generic.

Generic types

In general, it's not difficult to make a type generic. Consider the following example of the AssemblyLine class:

    part of assembly.room;

    // AssemblyLine.
    class AssemblyLine<E extends Car> {
      // List of items on line.
      List<E> _items = [];

      // Add [item] to line.
      add(E item) {
        _items.insert(0, item);
      }

      // Make operation on all items in line.
      make(operation) {
        _items.forEach((E item) {
          operation(item);
        });
      }
    }

In the preceding code, we added one type parameter, E, in the declaration of the AssemblyLine class. In this case, the type parameter requires the original one to be a subtype of Car. This allows the AssemblyLine implementation to take advantage of Car without the need for casting a class. The type parameter E is known as a bounded type parameter. Any changes to the assembly.room library will look like this:

    library assembly.room;

    part 'assembly_line.dart';
    part 'car.dart';

    operation(car) {
      print('Operate ${car}');
    }

    main() {
      // Create passenger assembly line
      AssemblyLine<PassengerCar> passengerCarAssembly = new AssemblyLine<PassengerCar>();
      // We can add passenger car
      passengerCarAssembly.add(new PassengerCar());
      // We can occasionally add truck as well
      passengerCarAssembly.add(new Truck());
      // Operate
      passengerCarAssembly.make(operation);
    }

The static analyzer alerts us at compile time if we try to insert the Truck argument in the assembly line for passenger cars, as shown in the following screenshot:

After we fix the code in line 17, all looks good. Our assembly line is now safe. But if you look at the operation function, it is totally different for passenger cars than it is for trucks; this means that we must make the operation generic as well.
The static analyzer doesn't show any warnings and, even worse, we cannot make the operation generic directly, because Dart doesn't support generics for functions. But there is a solution.

Generic functions

Functions, like all other data types in Dart, are objects, and they have the data type Function. In the following code, we will create an Operation class as an implementation of Function and then apply generics to it as usual:

    part of assembly.room;

    // Operation for specific type of car
    class Operation<E extends Car> implements Function {
      // Operation name
      final String name;

      // Create new operation with [name]
      Operation(this.name);

      // We call our function here
      call(E car) {
        print('Make ${name} on ${car}');
      }
    }

The gem in our class is the call method. As Operation implements Function and has a call method, we can pass an instance of our class as a function in the make method of the assembly line, as shown in the following code:

    library assembly.room;

    part 'assembly.dart';
    part 'car.dart';
    part 'operation.dart';

    main() {
      // Paint operation for passenger car
      Operation<PassengerCar> paint = new Operation<PassengerCar>("paint");
      // Paint operation for Trucks
      Operation<Truck> paintTruck = new Operation<Truck>("paint");
      // Create passenger assembly line
      Assembly<PassengerCar> passengerCarAssembly = new Assembly<PassengerCar>();
      // We can add passenger car
      passengerCarAssembly.add(new PassengerCar());
      // Operate only with passenger car
      passengerCarAssembly.make(paint);
      // Operate with mistake
      passengerCarAssembly.make(paintTruck);
    }

In the preceding code, we created the paint operation to paint the passenger cars and the paintTruck operation to paint trucks. Later, we created the passengerCarAssembly line and added a new passenger car to the line via the add method.
We can run the paint operation on the passenger car by calling the make method of the passengerCarAssembly line. Next, we intentionally made a mistake and tried to paint the truck on the assembly line for passenger cars, which resulted in the following runtime exception:

    Make paint on Passenger Car
    Unhandled exception:
    type 'PassengerCar' is not a subtype of type 'Truck' of 'car'.
    #0 Operation.call (…/generics_operation.dart:10:10)
    #1 Assembly.make.<anonymous closure> (…/generics_assembly.dart:16:15)
    #2 List.forEach (dart:core-patch/growable_array.dart:240)
    #3 Assembly.make (…/generics_assembly.dart:15:18)
    #4 main (…/generics_assembly_and_operation_room.dart:20:28)
    …

This trick with the call method of the Function type helps you make all the aspects of your assembly line generic. We've seen how to make a class and a function generic to make the code of our application safer and cleaner.

The documentation generator automatically adds information about generics to the generated documentation pages.

To understand the differences between errors and exceptions, let's move on to the next topic.

Errors versus exceptions

Runtime faults can and do occur during the execution of a Dart program. We can split all faults into two types:

- Errors
- Exceptions

There is always some confusion when deciding which kind of fault to use, but you will be given several general rules to make your life a bit easier. All your decisions will be based on the simple principle of recoverability. If your code generates a fault that can reasonably be recovered from, use exceptions. Conversely, if the code generates a fault that cannot be recovered from, or where continuing the execution would do more harm, use errors.

Let's take a look at each of them in detail.

Errors

An error occurs if your code has programming errors that should be fixed by the programmer.
Let's take a look at the following main function:

    main() {
      // Fixed length list
      List list = new List(5);
      // Fill list with values
      for (int i = 0; i < 10; i++) {
        list[i] = i;
      }
      print('Result is ${list}');
    }

We created an instance of the List class with a fixed length and then tried to fill it with values in a loop with more items than the fixed size of the List class. Executing the preceding code generates RangeError, as shown in the following screenshot:

This error occurred because we performed a precondition violation in our code when we tried to insert a value in the list at an index outside the valid range. Mostly, these types of failures occur when the contract between the code and the calling API is broken. In our case, RangeError indicates that the precondition was violated. There is a whole bunch of errors in the Dart SDK, such as CastError, RangeError, NoSuchMethodError, UnsupportedError, OutOfMemoryError, and StackOverflowError. Also, there are many others that you will find in the errors.dart file as a part of the dart.core library. All error classes inherit from the Error class and can return stack trace information to help find the bug quickly. In the preceding example, the error happened in line 6 of the main method in the range_error.dart file.

We can catch errors in our code, but because the code was badly implemented, we should rather fix it. Errors are not designed to be caught, but we can throw them if a critical situation occurs. A Dart program should usually terminate when an error occurs.

Exceptions

Exceptions, unlike errors, are meant to be caught and usually carry information about the failure, but they don't include the stack trace information. Exceptions happen in recoverable situations and don't stop the execution of a program.
You can throw any non-null object as an exception, but it is better to create a new exception class that implements the marker interface Exception and overrides the toString method of the Object class in order to deliver additional information. An exception should be handled in a catch clause or made to propagate outwards. The following is an example of code without the use of exceptions:

    import 'dart:io';

    main() {
      // File URI
      Uri uri = new Uri.file("test.json");
      // Check uri
      if (uri != null) {
        // Create the file
        File file = new File.fromUri(uri);
        // Check whether file exists
        if (file.existsSync()) {
          // Open file
          RandomAccessFile random = file.openSync();
          // Check random
          if (random != null) {
            // Read file
            List<int> notReadyContent = random.readSync(random.lengthSync());
            // Check not ready content
            if (notReadyContent != null) {
              // Convert to String
              String content = new String.fromCharCodes(notReadyContent);
              // Print results
              print('File content: ${content}');
            }
            // Close file
            random.closeSync();
          }
        } else {
          print("File doesn't exist");
        }
      }
    }

Here is the result of this code execution:

    File content: [{ name: Test, length: 100 }]

As you can see, the error detection and handling leads to confusing spaghetti code. Worse yet, the logical flow of the code has been lost, making it difficult to read and understand.
So, we transform our code to use exceptions as follows:

import 'dart:io';

main() {
  RandomAccessFile random;
  try {
    // File URI
    Uri uri = new Uri.file("test.json");
    // Create the file
    File file = new File.fromUri(uri);
    // Open file
    random = file.openSync();
    // Read file
    List<int> notReadyContent = random.readSync(random.lengthSync());
    // Convert to String
    String content = new String.fromCharCodes(notReadyContent);
    // Print results
    print('File content: ${content}');
  } on ArgumentError catch(ex) {
    print('Argument error exception');
  } on UnsupportedError catch(ex) {
    print('URI cannot reference a file');
  } on FileSystemException catch(ex) {
    print("File doesn't exist or isn't accessible");
  } finally {
    try {
      random.closeSync();
    } on FileSystemException catch(ex) {
      print("File can't be closed");
    }
  }
}

The code in the finally statement will always be executed, independent of whether an exception happened, to close the random file. Finally, we have a clear separation of exception handling from the working code, and we can now propagate uncaught exceptions outwards in the call stack.

Suggestions based on recoverability after exceptions are fragile. In our example, we caught ArgumentError and UnsupportedError alongside FileSystemException. This was only done to show that errors and exceptions have the same nature and can be caught at any time. So, what is the truth? While developing my own framework, I used the following principle:

If I believe the code cannot recover, I use an error, and if I think it can recover, I use an exception.

Let's discuss another advanced technique that has become very popular and that helps you change the behavior of the code without making any changes to it.

Annotations

An annotation is metadata: data about data. An annotation is a way to keep additional information about the code in the code itself.
An annotation can have parameter values to pass specific information about an annotated member. An annotation without parameters is called a marker annotation; its purpose is just to mark the annotated member.

Dart annotations are constant expressions beginning with the @ character. We can apply annotations to all the members of the Dart language, excluding comments and annotations themselves. Annotations can be:

Interpreted statically by parsing the program and evaluating the constants via a suitable interpreter

Retrieved via reflection at runtime by a framework

The documentation generator does not add annotations to the generated documentation pages automatically, so the information about annotations must be specified separately in comments.

Built-in annotations

There are several built-in annotations defined in the Dart SDK that are interpreted by the static analyzer. Let's take a look at them.

Deprecated

The first built-in annotation is deprecated, which is very useful when you need to mark a function, a variable, a method of a class, or even a whole class as deprecated, meaning it should no longer be used. The static analyzer generates a warning whenever a marked statement is used in code, as shown in the following screenshot:

Override

Another built-in annotation is override. This annotation informs the static analyzer that any instance member, such as a method, getter, or setter, is meant to override the member of a superclass with the same name. Class instance variables as well as static members never override each other. If an instance member marked with override fails to correctly override a member in one of its superclasses, the static analyzer generates the following warning:

Proxy

The last annotation is proxy.
Proxy is a well-known pattern used when we need to call a real class's methods through the instance of another class. Let's assume that we have the following Car class:

part of cars;

// Class Car
class Car {
  int _speed = 0; // The car speed
  int get speed => _speed;
  // Accelerate car
  accelerate(acc) {
    _speed += acc;
  }
}

To drive the car instance, we must accelerate it as follows:

library cars;

part 'car.dart';

main() {
  Car car = new Car();
  car.accelerate(10);
  print('Car speed is ${car.speed}');
}

We now run our example to get the following result:

Car speed is 10

In practice, we may have a lot of different car types and would want to test all of them. To help us with this, we created the CarProxy class, passing an instance of Car in the proxy's constructor. From now on, we can invoke the car's methods through the proxy and save the results in a log as follows:

part of cars;

// Proxy to [Car]
class CarProxy {
  final Car _car;

  // Create new proxy to [car]
  CarProxy(this._car);

  @override
  noSuchMethod(Invocation invocation) {
    if (invocation.isMethod &&
        invocation.memberName == const Symbol('accelerate')) {
      // Get acceleration value
      var acc = invocation.positionalArguments[0];
      // Log info
      print("LOG: Accelerate car with ${acc}");
      // Call original method
      _car.accelerate(acc);
    } else if (invocation.isGetter &&
        invocation.memberName == const Symbol('speed')) {
      var speed = _car.speed;
      // Log info
      print("LOG: The car speed ${speed}");
      return speed;
    }
    return super.noSuchMethod(invocation);
  }
}

As you can see, CarProxy does not implement the Car interface. All the magic happens inside noSuchMethod, which is overridden from the Object class. In this method, we compare the invoked member name with accelerate and speed.
If the comparison results match one of our conditions, we log the information and then call the original method on the real object. Now let's make changes to the main method, as shown in the following screenshot:

Here, the static analyzer alerts you with a warning because the CarProxy class doesn't have the accelerate method and the speed getter. You must add the proxy annotation to the definition of the CarProxy class to suppress the static analyzer warning, as shown in the following screenshot:

Now with all the warnings gone, we can run our example to get the following successful result:

Car speed is 10
LOG: Accelerate car with 10
LOG: The car speed 20
Car speed through proxy is 20

Custom annotations

Let's say we want to create a test framework. For this, we will need several custom annotations to mark methods in a testable class to be included in a test case. The following code has two custom annotations. In the case where we need only a marker annotation, we use a constant string, test. In the event that we need to pass parameters to an annotation, we will use a Test class with a constant constructor, as shown in the following code:

library test;

// Marker annotation test
const String test = "test";

// Test annotation
class Test {
  // Should test be ignored?
  final bool include;
  // Default constant constructor
  const Test({this.include: true});

  String toString() => 'test';
}

The Test class has the final include variable initialized with a default value of true.
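Because annotations are constant expressions, only constant values can appear after the @ character, which is exactly why Test declares a const constructor. A brief sketch of what is and is not allowed (the Test class is repeated from above to keep the sketch self-contained; smoke is a hypothetical constant):

```dart
// Annotations must be compile-time constants, so the annotation class
// needs a const constructor (repeated from above for a runnable sketch).
class Test {
  final bool include;
  const Test({this.include: true});
}

// A reusable constant can also serve as an annotation.
const Test smoke = const Test();

class TestCase {
  @Test()  // allowed: const constructor invocation
  testStart() {}

  @smoke   // allowed: reference to a compile-time constant
  testStop() {}

  // Not allowed: `@new Test()` or any other non-constant expression;
  // the analyzer rejects annotations it cannot evaluate at compile time.
}

main() {
  print('TestCase compiles with constant annotations');
}
```

This constant-only rule is what lets the static analyzer and, later, the mirror system read annotation values without running any program code.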
To exclude a method from tests, we should pass false as a parameter for the annotation, as shown in the following code:

library test.case;

import 'test.dart';
import 'engine.dart';

// Test case of Engine
class TestCase {
  Engine engine = new Engine();

  // Start engine
  @test
  testStart() {
    engine.start();
    if (!engine.started) throw new Exception("Engine must start");
  }

  // Stop engine
  @Test()
  testStop() {
    engine.stop();
    if (engine.started) throw new Exception("Engine must stop");
  }

  // Warm up engine
  @Test(include: false)
  testWarmUp() {
    // ...
  }
}

In this scenario, we test the Engine class via the invocation of the testStart and testStop methods of TestCase, while avoiding the invocation of the testWarmUp method.

So what's next? How can we really use annotations? Annotations are useful with reflection at runtime, so now it's time to discuss how to make annotations available through reflection.

Reflection

Introspection is the ability of a program to discover and use its own structure. Reflection is the ability of a program to use introspection to examine and modify its own structure and behavior at runtime. You can use reflection to dynamically create an instance of a type, or get the type of an existing object and invoke its methods or access its fields and properties. This makes your code more dynamic, and it can be written against known interfaces so that the actual classes can be instantiated using reflection. Another purpose of reflection is to create development and debugging tools, and it is also used for meta-programming.

There are two different approaches to implementing reflection:

The first approach is that the information about reflection is tightly integrated with the language and exists as part of the program's structure. Access to program-based reflection is available by a property or method.

The second approach is based on the separation of reflection information and program structure.
Reflection information is separated inside a distinct mirror object that binds to the real program member.

Dart reflection follows the second approach with Mirrors. You can find more information about the concept of Mirrors in the original paper written by Gilad Bracha at http://bracha.org/mirrors.pdf. Let's discuss the advantages of mirrors:

Mirrors are separate from the main code and cannot be exploited for malicious purposes

As reflection is not part of the code, the resulting code is smaller

There are no method-naming conflicts between the reflection API and inspected classes

It is possible to implement many different mirrors with different levels of reflection privileges

It is possible to use mirrors in command-line and web applications

Let's try Mirrors and see what we can do with them. We will continue to create a library to run our tests.

Introspection in action

We will demonstrate the use of Mirrors with something simple such as introspection. We will need universal code that can retrieve the information about any object or class in our program to discover its structure and possibly manipulate its properties and call its methods. For this, we've prepared the TypeInspector class. Let's take a look at the code. We've imported the dart:mirrors library here to add the introspection ability to our code:

library inspector;

import 'dart:mirrors';
import 'test.dart';

class TypeInspector {
  ClassMirror _classMirror;
  // Create type inspector for [type].
  TypeInspector(Type type) {
    _classMirror = reflectClass(type);
  }

The ClassMirror class contains all the information about the observed type. We perform the actual introspection with the reflectClass function of Mirrors and return a distinct mirror object as the result. Then, we call the getAnnotatedMethods method and specify the name of the annotation that we are interested in.
This will return a list of MethodMirror containing the methods annotated with the specified parameters. One by one, we step through all the instance members and call the private _isMethodAnnotated method. If the result of the execution of the _isMethodAnnotated method is successful, then we add the discovered method to the result list of found MethodMirrors, as shown in the following code:

  // Return list of method mirrors assigned by [annotation].
  List<MethodMirror> getAnnotatedMethods(String annotation) {
    List<MethodMirror> result = [];
    // Get all methods
    _classMirror.instanceMembers.forEach(
        (Symbol name, MethodMirror method) {
      if (_isMethodAnnotated(method, annotation)) {
        result.add(method);
      }
    });
    return result;
  }

The first argument of _isMethodAnnotated has the metadata property that keeps a list of annotations. The second argument of this method is the annotation name that we would like to find. The inst variable holds a reference to the original object in the reflectee property. We pass through all the method's metadata to exclude those annotated with the Test class and marked with include equal to false. All other method annotations are compared to the annotation name, as follows:

  // Check is [method] annotated with [annotation].
  bool _isMethodAnnotated(MethodMirror method, String annotation) {
    return method.metadata.any(
        (InstanceMirror inst) {
      // For [Test] class we check include condition
      if (inst.reflectee is Test &&
          !(inst.reflectee as Test).include) {
        // Test must be excluded
        return false;
      }
      // Literal compare of reflectee and annotation
      return inst.reflectee.toString() == annotation;
    });
  }
}

Dart mirrors have the following three main functions for introspection:

reflect: This function is used to introspect an instance that is passed as a parameter and saves the result in InstanceMirror or ClosureMirror.
For the first one, we can call methods or functions, or get and set fields of the reflectee property. For the second one, we can execute the closure.

reflectClass: This function reflects the class declaration and returns ClassMirror. It holds full information about the type passed as a parameter.

reflectType: This function returns TypeMirror and reflects a class, typedef, function type, or type variable.

Let's take a look at the main code:

library test.framework;

import 'type_inspector.dart';
import 'test_case.dart';

main() {
  TypeInspector inspector = new TypeInspector(TestCase);
  List methods = inspector.getAnnotatedMethods('test');
  print(methods);
}

Firstly, we created an instance of our TypeInspector class and passed it the testable class, in our case, TestCase. Then, we called getAnnotatedMethods from inspector with the name of the annotation, test. Here is the result of the execution:

[MethodMirror on 'testStart', MethodMirror on 'testStop']

The inspector found the methods testStart and testStop and ignored testWarmUp of the TestCase class, as per our requirements.

Reflection in action

We have seen how introspection helps us find methods marked with annotations. Now we need to call each marked method to run the actual tests. We will do that using reflection. Let's make a MethodInvoker class to show reflection in action:

library executor;

import 'dart:mirrors';

class MethodInvoker implements Function {
  // Invoke the method
Each MethodMirror method has the owner property, which points to the owner object in the hierarchy. The owner ofMethodMirror in our case is ClassMirror. In the preceding code, we created a new instance of the class with an empty constructor and then we invoked the method of inst by name. In both cases, the second parameter was an empty list of method parameters.   Now, we introduce MethodInvoker to the main code. In addition to TypeInspector, we create the instance of MethodInvoker. One by one, we step through the methods and send each of them to invoker. We print Success only if no exceptions occur.   To prevent the program from terminating if any of the tests failed, we wrap invoker in the try-catch block, as shown in the following code:   library test.framework;   import 'type_inspector.dart'; import 'method_invoker.dart'; import 'engine_case.dart';   main() {   TypeInspector inspector = new TypeInspector(TestCase); List methods = inspector.getAnnotatedMethods(test); MethodInvoker invoker = new MethodInvoker(); methods.forEach((method) { try { invoker(method);   print('Success ${method.simpleName}');   } on Exception catch(ex) { print(ex);   } on Error catch(ex) {   print("$ex : ${ex.stackTrace}");   }   });   }   [ 23 ]     Advanced Techniques and Reflection As a result, we will get the following code:   Success Symbol("testStart")   Success Symbol("testStop")   To prove that the program will not terminate in the case of an exception in the tests, we will change the code in TestCase to break it, as follows:   // Start engine @test testStart() {   engine.start();   // !!! Broken for reason   if (engine.started) throw new Exception("Engine must start");   }   When we run the program, the code for testStart fails, but the program continues executing until all the tests are finished, as shown in the following code:   Exception: Engine must start   Success Symbol("testStop")   And now our test library is ready for use. 
It uses introspection and reflection to observe and invoke the marked methods of any class.

Summary

This concludes our coverage of the advanced techniques in Dart. You now know that generics produce safer and clearer code, annotations with reflection help execute code dynamically, and errors and exceptions play an important role in finding bugs that are detected at runtime.

In the next chapter, we will talk about the creation of objects and how and when to create them using best practices from the programming world.