Christmas Light Sequencer

Packt
25 Feb 2015
20 min read
In this article, Sai Yamanoor and Srihari Yamanoor, authors of the book Raspberry Pi Mechatronics Projects Hotshot, have picked a Christmas-themed project to demonstrate controlling appliances connected to a local network using the Raspberry Pi. We will design automation and control for the Christmas lights in our homes. We decorate our homes with lights for festive occasions, and this article enables us to build some fantastic projects around that. We will build a local server to control the devices, using the web.py framework to design the web server. We'd like to dedicate this article to the memory of Aaron Swartz, who was the founder of the web.py framework.

Mission briefing

In this article, we will set up a local web server on the Raspberry Pi and use this web server framework to control the GPIO pins from a web page. The Raspberry Pi on top of the tree is just an ornament for decoration.

Why is it awesome?

We celebrate festive occasions by decorating our homes. The decorations reflect our taste, and they can be enhanced using the Raspberry Pi. This article involves interfacing AC-powered devices to the Raspberry Pi. You should exercise extreme caution while interfacing the devices, and it is strongly recommended that you stick to the recommended devices.

Your objectives

In this article, we will work on the following aspects:

Interfacing the Christmas tree lights and other decorative equipment to the Raspberry Pi
Setting up the digitally-addressable RGB matrix
Interfacing an audio device
Setting up the web server
Interfacing devices to the web server

Mission checklist

This article is based on a broad concept. You are free to choose decorative items of your own interest. We chose the following items for demonstration (item – estimated cost):

Christmas tree x 1 – 30 USD
Outdoor decoration (optional) – 30 USD
Santa Claus figurine x 1 – 20 USD
Digitally addressable strip x 1 – approximately 30 USD
Power Switch Tail 2 from Adafruit Industries (http://www.adafruit.com/product/268) – approximately 25 USD
Arduino Uno (any variant) – approximately 20-30 USD

Interface the devices to the Raspberry Pi

It is important to exercise caution while connecting electrical appliances to the Raspberry Pi. If you don't know what you are doing, please skip this section. Adult supervision is required while connecting appliances.

In this task, we will look into interfacing decorative appliances (operated with an AC power supply), such as the Christmas tree, to the Raspberry Pi. It is important to interface AC appliances to the Raspberry Pi in accordance with safety practices. It is possible to connect AC appliances to the Raspberry Pi using solid state relays; however, if the prototype boards aren't connected properly, they are a potential hazard. Hence, we use the Power Switch Tail II sold by Adafruit Industries. The Power Switch Tail II is rated for 110V. According to the specifications provided on the Adafruit website, the Power Switch Tail's relay can switch up to 15A resistive loads, and it can be controlled by providing a 3-12V DC signal. We will look into controlling the lights on a Christmas tree in this task.

Power Switch Tail II – Photo courtesy: Adafruit.com

Prepare for lift off

We have to connect the Power Switch Tail II to the Raspberry Pi to test it. The following Fritzing schematic shows the connection of the switch to the Raspberry Pi using the Pi Cobbler.
Pin 25 is connected to in+, while the in- pin is connected to the Ground pin of the Raspberry Pi. The Pi Cobbler breakout board is connected to the Raspberry Pi as shown in the following image:

The Raspberry Pi connection to the Power Switch Tail II using the Pi Cobbler

Engage thrusters

In order to test the device, there are two options to control the GPIO pins of the Raspberry Pi: the quick2wire GPIO library or the RPi.GPIO library. The main difference between the two is that the former does not require the Python script to be run with root user privileges (for those who are not familiar with root privileges, this means the script does not need to be run using sudo). In the case of the RPi.GPIO library, it is possible to set the ownership of the pins to avoid executing the script as root.

Once the installation is complete, let's turn the lights on the tree on and off with a three-second interval. The code for it is given as follows:

    # Import the RPi.GPIO module.
    import RPi.GPIO as GPIO
    # Import the delay function.
    from time import sleep
    # Set to BCM GPIO numbering.
    GPIO.setmode(GPIO.BCM)
    # BCM pin 25 is the output.
    GPIO.setup(25, GPIO.OUT)
    # Initialise pin 25 to low (False) so that the Christmas tree lights are switched off.
    GPIO.output(25, False)
    while 1:
        GPIO.output(25, False)
        sleep(3)
        GPIO.output(25, True)
        sleep(3)

In the preceding code, we get started by importing the RPi.GPIO module and the sleep function from the time module to introduce a delay between turning the lights on and off:

    import RPi.GPIO as GPIO
    # Import the delay function.
    from time import sleep

We need to set the mode in which the GPIO pins are being used. There are two modes, namely the board GPIO mode and the BCM GPIO mode (more information is available at http://sourceforge.net/p/raspberry-gpio-python/wiki/). The former refers to the pin numbers on the Raspberry Pi board, while the latter refers to the pin numbers found on the Broadcom chipset. In this example, we adopt the BCM chipset's pin description. We set pin 25 to be an output pin and set it to False so that the Christmas tree lights are switched off at the start of the program:

    GPIO.setup(25, GPIO.OUT)
    GPIO.output(25, False)

In the following loop, we switch the lights on and off with a three-second interval:

    while 1:
        GPIO.output(25, True)
        sleep(3)
        GPIO.output(25, False)
        sleep(3)

When pin 25 is set to high, the device is turned on, and it is turned off when the pin is set to low, with a three-second interval between the two.

Connecting multiple appliances to the Raspberry Pi

Let's consider a scenario where we have to control multiple appliances using the Raspberry Pi. It is possible to connect a maximum of 15 devices to the GPIO interface of the Raspberry Pi. (There are 17 GPIO pins on the Raspberry Pi Model B, but two of those pins, namely GPIO14 and GPIO15, are set to be UART in the default state. This can be changed after startup. It is also possible to connect a GPIO expander to connect more devices to the Raspberry Pi.) In the case of appliances that need to be connected to the 110V AC mains, it is recommended that you use multiple Power Switch Tails to adhere to safety practices. In the case of decorative lights that operate using a battery (for example, a two-foot Christmas tree) or appliances that operate at low voltage levels of 12V DC, a simple transistor circuit and a relay can be used to connect the devices.
A sample circuit is shown in the figure that follows:

A transistor switching circuit

In the preceding circuit, since the GPIO pins operate at 3.3V levels, we connect the GPIO pin to the base of the NPN transistor. The collector pin of the transistor is connected to one end of the relay. The transistor acts as a switch: when the GPIO pin is set to high, the collector is connected to the emitter (which in turn is connected to the ground) and hence energizes the relay. Relays usually have three terminals, namely the common terminal, the normally open terminal, and the normally closed terminal. When the relay is not energized, the common terminal is connected to the normally closed terminal. Upon energization, the normally open terminal is connected to the common terminal, thus turning on the appliance. The freewheeling diode across the relay is used to protect the circuit from any reverse current caused by the switching of the relay.

The transistor switching circuit aids in operating an appliance that runs at 12V DC using the Raspberry Pi's GPIO pins (the GPIO pins of the Raspberry Pi operate at 3.3V levels). The relay and the transistor switching circuit enable controlling high-current devices using the Raspberry Pi. It is possible to use an array of relays (as shown in the following image) and control an array of decorative lighting arrangements. It would be cool to control the lighting arrangements according to the music that is being played on the Raspberry Pi (a project idea for the holidays!). The relay board (shown in the following image) operates at 5V DC and comes with the circuitry described earlier in this section. We can make use of the board by powering it up using a 5V power supply and connecting the GPIO pins to the pins highlighted in red. As explained earlier, each relay can be energized by setting the corresponding GPIO pin to high, as sketched in the example that follows.

A relay board
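As an illustration of driving such a relay board, the following is a minimal sketch (not code from the original article) that cycles through several decorative appliances connected to different GPIO pins; the pin numbers used here are assumptions for illustration and should be adjusted to match your wiring.

    import RPi.GPIO as GPIO
    from time import sleep

    # BCM pin numbers driving the relay board inputs (assumed wiring for illustration).
    RELAY_PINS = [25, 24, 23, 18]

    GPIO.setmode(GPIO.BCM)
    for pin in RELAY_PINS:
        GPIO.setup(pin, GPIO.OUT)
        GPIO.output(pin, False)  # start with every appliance switched off

    try:
        while True:
            # Energize each relay in turn for three seconds, then switch it off.
            for pin in RELAY_PINS:
                GPIO.output(pin, True)
                sleep(3)
                GPIO.output(pin, False)
    except KeyboardInterrupt:
        GPIO.cleanup()  # release the pins when the script is stopped with Ctrl + C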
Objective complete – mini debriefing

In this section, we discussed controlling decorative lights and other holiday appliances by running a Python script on the Raspberry Pi. Let's move on to the next section to set up the digitally addressable RGB LED strip!

Setting up the digitally addressable RGB matrix

In this task, we will talk about the options available for LED lighting. We will discuss two types of LED strips, namely analog RGB LED strips and digitally-addressable RGB LED strips. A sample of the digitally addressable RGB LED strip is shown in the image that follows:

A digitally addressable RGB LED strip

Prepare for lift off

As the name explains, digitally-addressable RGB LED strips are those where the colour of each RGB LED can be individually controlled (in the case of the analog strip, the colors cannot be individually controlled).

Where can I buy them?

There are different models of the digitally addressable RGB LED strips based on different chips, such as the LPD6803, LPD8806, and WS2811. The strips are sold in reels of a maximum length of 5 meters. Some sources to buy the LED strips include Adafruit (http://www.adafruit.com/product/306) and Banggood (http://www.banggood.com/5M-5050-RGB-Dream-Color-6803-IC-LED-Strip-Light-Waterproof-IP67-12V-DC-p-931386.html), and they cost about 50 USD for a reel. Some vendors (including Adafruit) sell them in strips of one meter as well.

Engage thrusters

Let's review how to control and use these digitally-addressable RGB LED strips.

How does it work?

Most digitally addressable RGB strips come with terminals for powering the LEDs, a clock pin, and a data pin. The LEDs are serially connected to each other and are controlled through the SPI (Serial Peripheral Interface). The RGB LEDs on the strip are controlled by a chip that latches data from the microcontroller/Raspberry Pi onto the LEDs with reference to the clock cycles received on the clock pin. In the case of the LPD8806 strip, each chip can control about two LEDs, and it can control each channel of the RGB LED using a seven-bit PWM channel. More information on the function of the RGB LED strip is available at https://learn.adafruit.com/digital-led-strip.

It is possible to break the LED strip into individual segments. Each segment contains about two LEDs, and Adafruit Industries has provided an excellent tutorial on separating the individual segments of the LED strip (https://learn.adafruit.com/digital-led-strip/advanced-separating-strips).

Lighting up the RGB LED strip

There are two ways of connecting the RGB LED strip. It can either be connected to an Arduino that is controlled by the Raspberry Pi, or be controlled by the Raspberry Pi directly.

An Arduino-based control

It is assumed that you are familiar with programming microcontrollers, especially those on the Arduino platform.

An Arduino connection to the digitally addressable interface

In the preceding figure, the LED strip is powered by an external power supply (the tiny green adapter represents the external power supply). The recommended power supply for the RGB LED strip is 5V/2A per meter of LEDs (while writing this article, we got an old computer power supply to power up the LEDs). The Clock (CI) and Data (DI) pins of the first segment of the RGB strip are connected to pins D2 and D3 respectively. (We are doing this since we will test the example from Adafruit Industries, available at https://github.com/adafruit/LPD8806/tree/master/examples.) Since the RGB strip consists of multiple segments that are serially connected, the Clock Out (CO) and Data Out (DO) pins of the first segment are connected to the Clock In (CI) and Data In (DI) pins of the second segment, and so on.

Let's review the example, strandtest.pde, to test the RGB LED strip. The example makes use of software SPI (bit banging of the clock and data pins for the lighting effects). It is also possible to use the SPI interface of the Arduino platform. In the example, we need to set the number of LEDs used for the test; for example, we need to set the number of LEDs on the strip to 64 for a two-meter strip. Here is how to do this. The following line needs to be changed:

    int nLEDs = 64;

Once the code is uploaded, the RGB matrix should light up, as shown in this image:

8 x 8 RGB matrix lit up

Let's quickly review the Arduino sketch from Adafruit. We get started by setting up an LPD8806 object as follows:

    // nLEDs refers to the number of LEDs in the strip. This cannot exceed 160 LEDs/5m due to current draw.
    LPD8806 strip = LPD8806(nLEDs, dataPin, clockPin);

In the setup() section of the Arduino sketch, we initialize the RGB strip as follows:

    // Start up the LED strip
    strip.begin();

    // Update the strip; to start, they are all 'off'
    strip.show();

As soon as we enter the main loop, lighting routines such as colorChase and rainbow are executed. We can make use of this Arduino sketch to implement serial port commands to control the lighting scripts using the Raspberry Pi, as sketched below. This task merely provides some ideas for connecting and lighting up the RGB LED strip. You should familiarize yourself with the working principles of the RGB LED strip.
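For example, the strandtest sketch could be modified so that the Arduino listens for single-character commands on its serial port and switches lighting effects accordingly. The following is a minimal sketch of the Raspberry Pi side using the pyserial module (an illustration, not code from the article); the device path, baud rate, and command characters are assumptions and would have to match whatever protocol you implement on the Arduino.

    import serial
    from time import sleep

    # Open the serial connection to the Arduino (the device path and baud rate are assumptions;
    # check for /dev/ttyACM* or /dev/ttyUSB* on your Raspberry Pi).
    arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
    sleep(2)  # give the Arduino time to reset after the port is opened

    # Hypothetical single-character commands understood by the modified sketch:
    # 'c' -> colorChase effect, 'r' -> rainbow effect, 'o' -> all LEDs off.
    for command in ['c', 'r', 'o']:
        arduino.write(command.encode())
        sleep(10)  # let each effect run for a while

    arduino.close()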
The Raspberry Pi has an SPI port, and hence it is also possible to control the RGB strip directly from the Raspberry Pi.

Objective complete – mini debriefing

In this task, we reviewed options for decorative lighting and controlling them using the Raspberry Pi and Arduino.

Interface of an audio device

In this task, we will work on installing MP3 and WAV file audio player tools on the Raspbian operating system.

Prepare for lift off

The Raspberry Pi is equipped with a 3.5mm audio jack, and speakers can be connected to that output. In order to get started, we install the ALSA utilities package and a command-line MP3 player:

    sudo apt-get install alsa-utils
    sudo apt-get install mpg321

Engage thrusters

In order to use the alsa-utils or mpg321 players, we have to activate the BCM2835's sound drivers, and this can be done using the modprobe command:

    sudo modprobe snd_bcm2835

After activating the drivers, it is possible to play WAV files using the aplay command (aplay is a command-line player available as part of the alsa-utils package):

    aplay testfile.wav

An MP3 file can be played using the mpg321 command (a command-line MP3 player):

    mpg321 testfile.mp3

In the preceding examples, the commands were executed in the directory where the WAV or MP3 file was located. In the Linux environment, it is possible to stop playing a file by pressing Ctrl + C.

Objective complete – mini debriefing

We were able to install sound utilities in this task. Later, we will use the installed utilities to play audio from a web page. It is also possible to play sound files on the Raspberry Pi using modules available in Python, such as the Snack sound toolkit and Pygame.
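Since we will later trigger audio playback from a web page, it helps to see how these command-line players can be invoked from Python. The following is a minimal sketch (an illustration, not code from the article) using the subprocess module to call the mpg321 and aplay players installed above; the file paths are placeholders.

    import subprocess

    # Play an MP3 file with the command-line mpg321 player (the path is a placeholder).
    subprocess.call(['mpg321', '/home/pi/test.mp3'])

    # Play a WAV file with aplay from the alsa-utils package.
    subprocess.call(['aplay', '/home/pi/test.wav'])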
Installing the web server

In this section, we will install a local web server on the Raspberry Pi. There are different web server frameworks that can be installed on the Raspberry Pi, including Apache v2.0, Boost, the REST framework, and so on.

Prepare for lift off

As mentioned earlier, we will build a web server based on the web.py framework. This section is entirely referenced from the web.py tutorials (http://webpy.github.io/). In order to install web.py, a Python module installer such as pip or easy_install is required. We will install it using the following command:

    sudo apt-get install python-setuptools

Engage thrusters

The web.py framework can be installed using the easy_install tool:

    sudo easy_install web.py

Once the installation is complete, it is time to test it with a Hello World! example. We will open a new file using a text editor (such as the one available with Python IDLE) and get started with a Hello World! example for the web.py framework using the following steps.

The first step is to import the web.py framework:

    import web

The next step is defining the class that will handle the landing page. In this case, it is index:

    urls = ('/', 'index')

We need to define what needs to be done when one tries to access the URL. We would like to return the Hello world! text:

    class index:
        def GET(self):
            return "Hello world!"

The next step is to ensure that a web page is set up using the web.py framework when the Python script is launched:

    if __name__ == '__main__':
        app = web.application(urls, globals())
        app.run()

When everything is put together, this is the code we'll see:

    import web

    urls = ('/', 'index')

    class index:
        def GET(self):
            return "Hello world!"

    if __name__ == '__main__':
        app = web.application(urls, globals())
        app.run()

We should be able to start the web page by executing the Python script:

    python helloworld.py

We should be able to launch the website from the IP address of the Raspberry Pi. For example, if the IP address is 10.0.0.10, the web page can be accessed at http://10.0.0.10:8080 and it displays the text Hello world. Yay!

A Hello world! example using the web.py framework

Objective complete – mission debriefing

We built a simple web page to display the Hello world text. In the next task, we will be interfacing the Christmas tree and other decorative appliances to our web page so that we can control them from anywhere on the local network. It is possible to change the default port number for web page access by launching the Python script as follows:

    python helloworld.py 1234

Now, the web page can be accessed at http://<IP_Address_of_the_Pi>:1234.

Interfacing the web server

In this task, we will learn to interface one decorative appliance and a speaker. We will create a form and buttons on an HTML page to control the devices.

Prepare for lift off

In this task, we will review the code (available along with this article) required to interface decorative appliances and lighting arrangements to a web page controlled over a local network. Let's get started by opening the file using a text editing tool (Python IDLE's text editor or any other text editor).

Engage thrusters

We will import the following modules to get started with the program:

    import web
    from web import form
    import RPi.GPIO as GPIO
    import os

The GPIO module is initialized, the pin numbering scheme is set, all appliances are turned off by setting the GPIO pins to low (False), and any global variables are declared:

    # Set BCM numbering
    GPIO.setmode(GPIO.BCM)
    # Initialize the pins that have to be controlled
    GPIO.setup(25, GPIO.OUT)
    GPIO.output(25, False)
    # Global variable holding the state of the Christmas tree lights
    state = False

This is followed by defining the template location:

    urls = ('/', 'index')
    render = web.template.render('templates')

The buttons used in the web page are also defined:

    appliances_form = form.Form(
        form.Button("appbtn", value="tree", class_="btntree"),
        form.Button("appbtn", value="Santa", class_="btnSanta"),
        form.Button("appbtn", value="audio", class_="btnaudio")
    )

In this example, we are using three buttons, and their name is appbtn. A value is assigned to each button (along with a class) that determines the desired action when the button is clicked. For example, when the Christmas tree button is clicked, the lights need to be turned on. This action can be executed based on the value that is returned during the button press.

The home page is defined in the index class. The GET method is used to render the web page and POST for the button click actions:

    class index:
        def GET(self):
            form = appliances_form()
            return render.index(form, "Raspberry Pi Christmas lights controller")

        def POST(self):
            global state
            userData = web.input()
            if userData.appbtn == "tree":
                state = not state
            elif userData.appbtn == "Santa":
                pass  # do something here for another appliance
            elif userData.appbtn == "audio":
                os.system("mpg321 /home/pi/test.mp3")
            GPIO.output(25, state)
            raise web.seeother('/')

In the POST method, we need to monitor the button clicks and perform an action accordingly. For example, when the button with the tree value is returned, we can toggle the Boolean value state. This in turn switches the state of GPIO pin 25.
Earlier, we connected the Power Switch Tail to pin 25. The index page file that contains the form and buttons is as follows:

    $def with (form, title)
    <html>
    <head>
        <title>$title</title>
        <link rel="stylesheet" type="text/css" href="/static/styles.css">
    </head>
    <body>
        <p><center><h1>Christmas Lights Controller</h1></center>
        <br />
        <br />
        <form class="form" method="post">
            $:form.render()
        </form>
    </body>
    </html>

The styles of the buttons used on the web page are described as follows in styles.css:

    form .btntree {
        margin-left: 200px;
        margin-right: auto;
        background: transparent url("images/topic_button.png") no-repeat top left;
        width: 186px;
        height: 240px;
        padding: 0px;
        position: absolute;
    }

    form .btnSanta {
        margin-left: 600px;
        margin-right: auto;
        background: transparent url("images/Santa-png.png") no-repeat top left;
        width: 240px;
        height: 240px;
        padding: 40px;
        position: absolute;
    }

    body {
        background-image: url('bg-snowflakes-3.gif');
    }

The web page looks like what is shown in the following figure. Yay! We have a Christmas lights controller interface.

Objective complete – mini debriefing

We have written a simple web page that interfaces a Christmas tree and an RGB tree and plays MP3 files. This is a great project for a holiday weekend. It is possible to view this web page from anywhere on the Internet and turn these appliances on/off (fans of the TV show The Big Bang Theory might like this idea). Step-by-step instructions on setting this up are available at http://www.everydaylinuxuser.com/2013/06/connecting-to-raspberry-pi-from-outside.html.

Summary

In this article, we have accomplished the following:

Interfacing the RGB matrix
Interfacing AC appliances to the Raspberry Pi
Designing a web page
Interfacing devices to the web page

Resources for Article:

Further resources on this subject:

Raspberry Pi Gaming Operating Systems [article]
Calling your fellow agents [article]
Our First Project – A Basic Thermometer [article]
The Raspberry Pi and Raspbian

Packt
25 Feb 2015
14 min read
In this article by William Harrington, author of the book Learning Raspbian, we will learn about the Raspberry Pi, the Raspberry Pi Foundation, and Raspbian, the official Linux-based operating system of the Raspberry Pi. In this article, we will cover:

The Raspberry Pi
History of the Raspberry Pi
The Raspberry Pi hardware
The Raspbian operating system
Raspbian components

The Raspberry Pi

Despite first impressions, the Raspberry Pi is not a tasty snack. The Raspberry Pi is a small, powerful, and inexpensive single board computer developed over several years by the Raspberry Pi Foundation. If you are looking for a low-cost, small, easy-to-use computer for your next project, or are interested in learning how computers work, then the Raspberry Pi is for you.

The Raspberry Pi was designed as an educational device and was inspired by the success of the BBC Micro in teaching computer programming to a generation. The Raspberry Pi Foundation set out to do the same in today's world, where you don't need to know how to write software to use a computer. At the time of printing, the Raspberry Pi Foundation had shipped over 2.5 million units, and it is safe to say that they have exceeded their expectations!

The Raspberry Pi Foundation

The Raspberry Pi Foundation is a not-for-profit charity and was founded in 2006 by Eben Upton, Rob Mullins, Jack Lang, and Alan Mycroft. The aim of this charity is to promote the study of computer science to a generation that didn't grow up with the BBC Micro or the Commodore 64. They became concerned about the lack of devices that a hobbyist could use to learn and experiment with. The home computer was often ruled out, as it was so expensive, leaving hobbyists and children with nothing to develop their skills with.

History of the Raspberry Pi

Any new product goes through many iterations before mass production. In the case of the Raspberry Pi, it all began in 2006 when several concept versions of the Raspberry Pi based on the Atmel 8-bit ATMega664 microcontroller were developed. Another concept based on a USB memory stick with an ARM processor (similar to what is used in the current Raspberry Pi) was created after that. It took six years of hardware development to create the Raspberry Pi that we know and love today! The official logo of the Raspberry Pi is shown in the following screenshot:

It wasn't until August 2011 that 50 boards of the Alpha version of the Raspberry Pi were built. These boards were slightly larger than the current version to allow the Raspberry Pi Foundation to debug the device and confirm that it would all work as expected. Twenty-five beta versions of the Raspberry Pi were assembled in December 2011 and auctioned to raise money for the Raspberry Pi Foundation. Only a single small error was found with these, and it was corrected for the first production run.

The first production run consisted of 10,000 Raspberry Pi boards manufactured overseas in China and Taiwan. Unfortunately, there was a problem with the Ethernet jack on the Raspberry Pi being incorrectly substituted with an incompatible part. This led to some minor shipping delays, but all the Raspberry Pi boards were delivered within weeks of their due date. As a bonus, the foundation was able to upgrade the Model A Raspberry Pi to 256 MB of RAM instead of the 128 MB that was planned. This upgrade in memory size allowed the Raspberry Pi to perform even more amazing tasks, such as real-time image processing.
The Raspberry Pi is now manufactured in the United Kingdom, leading to the creation of many new jobs. The release of the Raspberry Pi was met with great fanfare, and the two original retailers of the Raspberry Pi, Premier Farnell and RS Components, sold out of the first batch within minutes.

The Raspberry Pi hardware

At the heart of the Raspberry Pi is the powerful Broadcom BCM2835 "system on a chip". The BCM2835 is similar to the chip at the heart of almost every smartphone and set top box in the world that uses the ARM architecture. The BCM2835 CPU on the Raspberry Pi runs at 700 MHz, and its performance is roughly equivalent to a 300 MHz Pentium II computer that was available back in 1999. To put this in perspective, the guidance computer used in the Apollo missions was less powerful than a pocket calculator!

Block diagram of the Raspberry Pi

The Raspberry Pi comes with either 256 MB or 512 MB of RAM, depending on which model you buy. Hopefully, this will increase in future versions!

Graphic capabilities

Graphics in the Raspberry Pi are provided by a VideoCore 4 GPU. The performance of the graphics processing unit (GPU) is roughly equivalent to that of the original Xbox, launched in 2001, which cost many hundreds of dollars. These might seem like very low specifications, but they are enough to play Quake 3 at 1080p and full HD movies.

There are two ways to connect a display to the Raspberry Pi: the first is using a composite video cable and the second is using HDMI. The composite output is useful as you are able to use any old TV as a monitor. The HDMI output is recommended, however, as it provides superior video quality. A VGA connection is not provided on the Raspberry Pi as it would be cost prohibitive; however, it is possible to use an HDMI to VGA/DVI converter for users who have VGA or DVI monitors. The Raspberry Pi also supports an LCD touchscreen. An official version has not been released yet, although many unofficial ones are available. The Raspberry Pi Foundation says that they expect to release one this year.

The Raspberry Pi models

The Raspberry Pi has several different variants: the Model A and the Model B. The Model A is a low-cost version and unfortunately omits the USB hub chip, which also functions as a USB to Ethernet converter. The Raspberry Pi Foundation has also just released the Raspberry Pi Model B+, which has extra USB ports and resolves many of the power issues surrounding the Model B and its USB ports.

Parameters       Model A         Model B         Model B+
CPU              BCM2835         BCM2835         BCM2835
RAM              256 MB          512 MB          512 MB
USB ports        1               2               4
Ethernet ports   0               1               1
Price (USD)      ~$25            ~$35            ~$35
Available since  February 2012   February 2012   July 2014

Differences between the Raspberry Pi models

Did you know that the Raspberry Pi is so popular that if you search for raspberry pie in Google, they will actually show you results for the Raspberry Pi!

Accessories

The success of the Raspberry Pi has encouraged many other groups to design accessories for the Raspberry Pi, and users to use them. These accessories range from a camera to a controller for an automatic CNC machine. Some of these accessories include:

Raspberry Pi camera – http://www.raspberrypi.org/tag/camera-board/
VGA board – http://www.suptronics.com/RPI.html
CNC controller – http://code.google.com/p/picnc/
Autopilot – http://www.emlid.com/
Case – http://shortcrust.net/

Raspbian

No matter how good the hardware of the Raspberry Pi is, without an operating system it is just a piece of silicon, fiberglass, and a few other materials.
There are several different operating systems for the Raspberry Pi, including RISC OS, Pidora, Arch Linux, and Raspbian. Currently, Raspbian is the most popular Linux-based operating system for the Raspberry Pi. Raspbian is an open source operating system based on Debian, which has been modified specifically for the Raspberry Pi (thus the name Raspbian). Raspbian includes customizations that are designed to make the Raspberry Pi easier to use, and it includes many different software packages out of the box. Raspbian is designed to be easy to use and is the recommended operating system for beginners to start off with their Raspberry Pi.

Debian

The Debian operating system was created in August 1993 by Ian Murdock and is one of the original distributions of Linux. As Raspbian is based on the Debian operating system, it shares almost all the features of Debian, including its large repository of software packages. There are over 35,000 free software packages available for your Raspberry Pi, and they are available for use right now! An excellent resource for more information on Debian, and therefore Raspbian, is the Debian administrator's handbook. The handbook is available at http://debian-handbook.info.

Open source software

The majority of the software that makes up Raspbian on the Raspberry Pi is open source. Open source software is software whose source code is available for modification or enhancement by anyone. The Linux kernel and most of the other software that makes up Raspbian is licensed under the GPLv2 license. This means that the software is made available to you at no cost, and that the source code that makes up the software is available for you to do with what you want. The GPLv2 license also disclaims any warranty. The following extract from the GPLv2 license preamble gives you a good idea of the spirit of free software:

"The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users…. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things."

Raspbian components

There are many components that make up a modern Linux distribution. These components work together to provide you with all the modern features you expect in a computer. There are several key components that Raspbian is built from. These components are:

The Raspberry Pi bootloader
The Linux kernel
Daemons
The shell
Shell utilities
The X.Org graphical server
The desktop environment

The Raspberry Pi bootloader

When your Raspberry Pi is powered on, a lot of things happen behind the scenes. The role of the bootloader is to initialize the hardware in the Raspberry Pi to a known state and then to start loading the Linux kernel. In the case of the Raspberry Pi, this is done by the first and second stage bootloaders. The first stage bootloader is programmed into the ROM of the Raspberry Pi during manufacture and cannot be modified. The second and third stage bootloaders are stored on the SD card and are automatically run by the previous stage bootloader.
The Linux kernel

The Linux kernel is one of the most fundamental parts of Raspbian. It manages every part of the operation of your Raspberry Pi, from displaying text on the screen to receiving keystrokes when you type on your keyboard. The Linux kernel was created by Linus Torvalds, who started working on the kernel in April 1991. Since then, groups of volunteers and organizations have worked together to continue the development of the kernel and make it what it is today. Did you know that the cost to rewrite the Linux kernel to where it was in 2011 would be over $3 billion USD?

If you want to use a hardware device by connecting it to your Raspberry Pi, the kernel needs to know what it is and how to use it. The vast majority of devices on the market are supported by the Linux kernel, with more being added all the time. A good example of this is when you plug a USB drive into your Raspberry Pi: the kernel automatically detects the USB drive and notifies a daemon that automatically makes the files available to you.

When the kernel has finished loading, it automatically runs a program called init. This program is designed to finish the initialization of the Raspberry Pi and then load the rest of the operating system. It starts by loading all the daemons into the background, followed by the graphical user interface.

Daemons

A daemon is a piece of software that runs behind the scenes to provide the operating system with different features. Some examples of daemons include the Apache web server; Cron, a job scheduler that is used to run programs automatically at different times; and Autofs, a daemon that automatically mounts removable storage devices such as USB drives. A distribution such as Raspbian needs more than just the kernel to work. It also needs other software that allows the user to interact with the kernel and to manage the rest of the operating system. The core operating system consists of a collection of programs and scripts that make this happen.

The shell

After all the daemons have loaded, init launches a shell. A shell is an interface to your Raspberry Pi that allows you to monitor and control it using commands typed in using a keyboard. Don't be fooled by this interface, despite the fact that it looks exactly like what was used in computers 30 years ago: the shell is one of the most powerful parts of Raspbian. There are several shells available in Linux; Raspbian uses the Bourne again shell (bash), which is by far the most common shell used in Linux. Bash is an extremely powerful piece of software, and one of its most powerful features is its ability to run scripts. A script is simply a collection of commands stored in a file that can do things such as run a program, read keys from the keyboard, and many other things. Later on in this book, you will see how to use bash to make the most of your Raspberry Pi!

Shell utilities

A command interpreter is not of much use without any commands to run. While bash provides some very basic commands, all the other commands are shell utilities. These shell utilities together form one of the most important parts of Raspbian (essential, as without these utilities the system would crash). They provide many features that range from copying files and creating directories to the Advanced Packaging Tool (APT), a package manager application that allows you to install and remove software from your Raspberry Pi. You will learn more about APT later in this book.
The X.Org graphical server

After the shell and daemons are loaded, by default the X.Org graphical server is automatically started. The role of X.Org is to provide you with a common platform on which to build a graphical user interface. X.Org handles everything from moving your mouse pointer and listening and responding to your key presses, to actually drawing the applications you are running onto the screen.

The desktop environment

It is difficult to use any computer without a desktop environment. A desktop environment lets you interact with your computer using more than just your keyboard, surf the Internet, view pictures and movies, and many other things. A GUI normally uses windows, menus, and a mouse to do this. Raspbian includes a graphical user interface called the Lightweight X11 Desktop Environment, or LXDE. LXDE is used in Raspbian as it was specifically designed to run on devices such as the Raspberry Pi, which have only limited resources. Later in this book, you will learn how to customize and use LXDE to make the most of your Raspberry Pi.

A screenshot of the LXDE desktop environment

Summary

This article introduced the Raspberry Pi and Raspbian. It discussed the history, hardware, and components of the Raspberry Pi, and it is a great resource for understanding the Raspberry Pi.

Resources for Article:

Further resources on this subject:

Raspberry Pi Gaming Operating Systems [article]
Learning NServiceBus: Preparing for Failure [article]
Clusters, Parallel Computing, and Raspberry Pi: A Brief Background [article]
Static Data Management

Packt
25 Feb 2015
29 min read
In this article by Loiane Groner, author of the book Mastering Ext JS, Second Edition, we will start implementing the application's core features, starting with static data management. What exactly is this? Every application has information that is not directly related to the core business, but this information is used by the core business logic somehow.

There are two types of data in every application: static data and dynamic data. For example, the types of categories, languages, cities, and countries can exist independently of the core business and can be used by the core business information as well; this is what we call static data, because it does not change very often. And there is the dynamic data, which is the information that changes in the application, what we call core business data. Clients, orders, and sales would be examples of dynamic or core business data.

We can treat this static information as though it belongs to independent MySQL tables (since we are using MySQL as the database server), and we can perform all the actions we can do on a MySQL table.

Creating a Model

As usual, we are going to start by creating the Models. First, let's list the tables we will be working with and their columns:

Actor: actor_id, first_name, last_name, last_update
Category: category_id, name, last_update
Language: language_id, name, last_update
City: city_id, city, country_id, last_update
Country: country_id, country, last_update

We could create one Model for each of these entities with no problem at all; however, we want to reuse as much code as possible. Take another look at the list of tables and their columns. Notice that all tables have one column in common—the last_update column.

That being said, we can create a super Model that contains this field. When we implement the actor and category models, we can extend the super Model, in which case we do not need to declare the column. Don't you think?

Abstract Model

In OOP, there is a concept called inheritance, which is a way to reuse the code of existing objects. Ext JS uses an OOP approach, so we can apply the same concept in Ext JS applications. If you take a look back at the code we have already implemented, you will notice that we are already applying inheritance in most of our classes (with the exception of the util package), but we are creating classes that inherit from Ext JS classes. Now, we will start creating our own super classes.

As all the models that we will be working with have the last_update column in common (if you take a look, all the Sakila tables have this column), we can create a super Model with this field. So, we will create a new file under app/model/staticData named Base.js:

    Ext.define('Packt.model.staticData.Base', {
        extend: 'Packt.model.Base', //#1

        fields: [
            {
                name: 'last_update',
                type: 'date',
                dateFormat: 'Y-m-j H:i:s'
            }
        ]
    });

This Model has only one column, that is, last_update. In the tables, the last_update column has the type timestamp, so the type of the field needs to be date, and we will also apply the date format 'Y-m-j H:i:s', which is years, months, days, hours, minutes, and seconds, following the same format as we have in the database (2006-02-15 04:34:33). When we create each Model representing the tables, we will not need to declare the last_update field again. Look again at the code at line #1.
We are not extending the default Ext.data.Model class, but another Base class (security.Base).

Adapting the Base Model schema

Create a file named Base.js inside the app/model folder with the following content in it:

    Ext.define('Packt.model.Base', {
        extend: 'Ext.data.Model',

        requires: [
            'Packt.util.Util'
        ],

        schema: {
            namespace: 'Packt.model', //#1
            urlPrefix: 'php',
            proxy: {
                type: 'ajax',
                api: {
                    read: '{prefix}/{entityName:lowercase}/list.php',
                    create: '{prefix}/{entityName:lowercase}/create.php',
                    update: '{prefix}/{entityName:lowercase}/update.php',
                    destroy: '{prefix}/{entityName:lowercase}/destroy.php'
                },
                reader: {
                    type: 'json',
                    rootProperty: 'data'
                },
                writer: {
                    type: 'json',
                    writeAllFields: true,
                    encode: true,
                    rootProperty: 'data',
                    allowSingle: false
                },
                listeners: {
                    exception: function(proxy, response, operation){
                        Packt.util.Util.showErrorMsg(response.responseText);
                    }
                }
            }
        }
    });

Instead of using Packt.model.security, we are going to use only Packt.model. The Packt.model.security.Base class will look simpler now, as follows:

    Ext.define('Packt.model.security.Base', {
        extend: 'Packt.model.Base',

        idProperty: 'id',

        fields: [
            { name: 'id', type: 'int' }
        ]
    });

It is very similar to the staticData.Base Model we are creating for this article. The difference is in the field that is common for the staticData package (last_update) and the security package (id).

Having a single schema for the application now means the entityName of the Models will be created based on their name after 'Packt.model'. This means that the User and Group models we created will have the entityName security.User and security.Group respectively. However, we do not want to break the code we have implemented already, and for this reason we want the User and Group Model classes to have the entity names User and Group. We can do this by adding entityName: 'User' to the User Model and entityName: 'Group' to the Group Model. We will do the same for the specific models we will be creating next.

Having a super Base Model for all models within the application means our models will follow a pattern. The proxy template is also common for all models, and this means our server-side code will also follow a pattern. This is good for organizing the application and for future maintenance.

Specific models

Now we can create all the models representing each table. Let's start with the Actor Model. We will create a new class named Packt.model.staticData.Actor; therefore, we need to create a new file named Actor.js under app/model/staticData, as follows:

    Ext.define('Packt.model.staticData.Actor', {
        extend: 'Packt.model.staticData.Base', //#1

        entityName: 'Actor', //#2

        idProperty: 'actor_id', //#3

        fields: [
            { name: 'actor_id' },
            { name: 'first_name'},
            { name: 'last_name'}
        ]
    });

There are three important things we need to note in the preceding code. First, this Model is extending (#1) from the Packt.model.staticData.Base class, which extends from the Packt.model.Base class, which in turn extends from the Ext.data.Model class.
This means this Model inherits all the attributes and behavior of the classes Packt.model.staticData.Base, Packt.model.Base, and Ext.data.Model.

Second, as we created a super Model with the schema Packt.model, the default entityName created for this Model would be staticData.Actor. We are using entityName to help the proxy compile the url template with the entityName. To make our life easier, we are going to override entityName as well (#2).

The third point is idProperty (#3). By default, idProperty has the value "id". This means that when we declare a Model with a field named "id", Ext JS already knows that this is the unique field of this Model. When it is different from "id", we need to specify it using the idProperty configuration. As none of the Sakila tables have a unique field called "id"—it is always the name of the entity + "_id"—we will need to declare this configuration in all models.

Now we can do the same for the other models. We need to create four more classes:

Packt.model.staticData.Category
Packt.model.staticData.Language
Packt.model.staticData.City
Packt.model.staticData.Country

At the end, we will have six Model classes (one super Model and five specific models) created inside the app/model/staticData package. If we create a UML class diagram for the Model classes, we will have the following diagram:

The Actor, Category, Language, City, and Country Models extend the Packt.model.staticData Base Model, which extends from Packt.model.Base, which in turn extends the Ext.data.Model class.

Creating a Store

The next step is to create the stores for each Model. As we did with the Model, we will try to create a generic Store as well (in this article, we will create generic code for all screens, so creating a super Model, Store, and View is part of that capability). Although the common configurations are not in the Store, but in the Proxy (which we declared inside the schema in the Packt.model.Base class), having a super Store class can help us to listen to events that are common to all the static data stores.

We will create a super Store named Packt.store.staticData.Base. As we need a Store for each Model, we will create the following stores:

Packt.store.staticData.Actors
Packt.store.staticData.Categories
Packt.store.staticData.Languages
Packt.store.staticData.Cities
Packt.store.staticData.Countries

At the end of this topic, we will have created all the previous classes. If we create a UML diagram for them, we will have something like the following diagram:

All the Store classes extend from the Base Store.

Now that we know what we need to create, let's get our hands dirty!

Abstract Store

The first class we need to create is the Packt.store.staticData.Base class. Inside this class, we will only declare autoLoad as true so that all the subclasses of this Store are loaded when the application launches:

    Ext.define('Packt.store.staticData.Base', {
        extend: 'Ext.data.Store',

        autoLoad: true
    });

All the specific stores that we will create will extend this Store. Creating a super Store like this can feel pointless; however, we do not know whether, during future maintenance, we will need to add some common Store configuration. As we will use MVC for this module, another reason is that inside the Controller we can also listen to Store events (available since Ext JS 4.2). If we want to listen to the same event on a set of stores and execute exactly the same method, having a super Store will save us some lines of code.
Specific Store

Our next step is to implement the Actors, Categories, Languages, Cities, and Countries stores. So let's start with the Actors Store:

    Ext.define('Packt.store.staticData.Actors', {
        extend: 'Packt.store.staticData.Base', //#1

        model: 'Packt.model.staticData.Actor' //#2
    });

After the definition of the Store, we need to extend from the Ext JS Store class. As we are using a super Store, we can extend directly from the super Store (#1), which means extending from the Packt.store.staticData.Base class.

Next, we need to declare the fields or the Model that this Store is going to represent. In our case, we always declare the Model (#2). Using a Model inside the Store is good for reuse purposes. The fields configuration is recommended just in case we need to create a very specific Store with specific data that we are not planning to reuse throughout the application, as in a chart or a report.

For the other stores, the only thing that is going to be different is the name of the Store and the Model. However, if you need the code to compare with yours or simply want to get the complete source code, you can download the code bundle from this book or get it at https://github.com/loiane/masteringextjs.

Creating an abstract GridPanel for reuse

Now is the time to implement the views. We have to implement five views: one to perform the CRUD operations for Actor, one for Category, one for Language, one for City, and one for Country. The following screenshot represents the final result we want to achieve after implementing the Actors screen. And the following screenshot represents the final result we want to achieve after implementing the Categories screen.

Did you notice anything similar between these two screens? Let's take a look again: the top toolbar is the same (1); there is a Live Search capability (2); there is a filter plugin (4); and the Last Update and widget columns are also common (3). Going a little bit further, both GridPanels can be edited using a cell editor (similar to MS Excel capabilities, where you can edit a single cell by clicking on it). The only things different between these two screens are the columns that are specific to each screen (5).

Does this mean we can reuse a good part of the code if we use inheritance by creating a super GridPanel with all these common capabilities? Yes! So this is what we are going to do. Let's create a new class named Packt.view.staticData.BaseGrid, as follows:

    Ext.define('Packt.view.staticData.BaseGrid', {
        extend: 'Ext.ux.LiveSearchGridPanel', //#1
        xtype: 'staticdatagrid',

        requires: [
            'Packt.util.Glyphs' //#2
        ],

        columnLines: true,    //#3
        viewConfig: {
            stripeRows: true //#4
        },

        //more code here
    });

We will extend the Ext.ux.LiveSearchGridPanel class instead of Ext.grid.Panel. The Ext.ux.LiveSearchGridPanel class already extends the Ext.grid.Panel class and also adds the Live Search toolbar (2). The LiveSearchGridPanel class is a plugin that is distributed with the Ext JS SDK, so we do not need to worry about adding it manually to our project (you will learn how to add third-party plugins to the project later in this book). As we will also add a toolbar with the Add, Save Changes, and Cancel Changes buttons, we need to require the util.Glyphs class we created (#2). Configurations #3 and #4 display the border of each cell of the grid and alternate between a white background and a light gray background for the rows.
Like any other component that is responsible for displaying information in Ext JS, the GridPanel's "Panel" piece is only the shell. The View is responsible for displaying the columns in a GridPanel, and we can customize it using the viewConfig (#4).

The next step is to create an initComponent method.

To initComponent or not?

While browsing other developers' code, we might see some who use initComponent when declaring an Ext JS class and some who do not (as we have done until now). So what is the difference between using it and not using it?

When declaring an Ext JS class, we usually configure it according to the application's needs. The classes might become parent classes for other classes or not. If they become a parent class, some of the configurations will be overridden, while some will not. Usually, we declare the ones that we expect to override in the class as configurations, and we declare inside the initComponent method the ones we do not want to be overridden.

As there are a few configurations we do not want to be overridden, we will declare them inside initComponent, as follows:

    initComponent: function() {
        var me = this;

        me.selModel = {
            selType: 'cellmodel' //#5
        };

        me.plugins = [
            {
                ptype: 'cellediting',  //#6
                clicksToEdit: 1,
                pluginId: 'cellplugin'
            },
            {
                ptype: 'gridfilters'  //#7
            }
        ];

        //docked items

        //columns

        me.callParent(arguments); //#8
    }

We can define how the user can select information from the GridPanel: the default configuration is the Selection RowModel class. As we want the user to be able to edit cell by cell, we will use the Selection CellModel class (#5) and also the CellEditing plugin (#6), which is part of the Ext JS SDK. For the CellEditing plugin, we configure the cell to be available to edit when the user clicks on the cell (if we need the user to double-click, we can change to clicksToEdit: 2). To help us later in the Controller, we also assign an ID to this plugin.

To be able to filter the information (the Live Search will only highlight the matching records), we will use the Filters plugin (#7). The Filters plugin is also part of the Ext JS SDK.

The callParent method (#8) will call initConfig from the superclass Ext.ux.LiveSearchGridPanel, passing the arguments we defined. It is a common mistake to forget to include the callParent call when overriding the initComponent method. If the component does not work, make sure you are calling the callParent method!

Next, we are going to declare dockedItems.
As all GridPanels will have the same toolbar, we can declare dockedItems in the super class we are creating, as follows:

    me.dockedItems = [
        {
            xtype: 'toolbar',
            dock: 'top',
            itemId: 'topToolbar', //#9
            items: [
                {
                    xtype: 'button',
                    itemId: 'add', //#10
                    text: 'Add',
                    glyph: Packt.util.Glyphs.getGlyph('add')
                },
                {
                    xtype: 'tbseparator'
                },
                {
                    xtype: 'button',
                    itemId: 'save',
                    text: 'Save Changes',
                    glyph: Packt.util.Glyphs.getGlyph('saveAll')
                },
                {
                    xtype: 'button',
                    itemId: 'cancel',
                    text: 'Cancel Changes',
                    glyph: Packt.util.Glyphs.getGlyph('cancel')
                },
                {
                    xtype: 'tbseparator'
                },
                {
                    xtype: 'button',
                    itemId: 'clearFilter',
                    text: 'Clear Filters',
                    glyph: Packt.util.Glyphs.getGlyph('clearFilter')
                }
            ]
        }
    ];

We will have Add, Save Changes, Cancel Changes, and Clear Filters buttons. Note that the toolbar (#9) and each of the buttons (#10) have an itemId declared. As we are going to use the MVC approach in this example, we will declare a Controller. The itemId configuration has a responsibility similar to the reference that we declare when working with a ViewController. We will discuss the importance of itemId more when we declare the Controller.

When declaring buttons inside a toolbar, we can omit the xtype: 'button' configuration, since the button is the default component for toolbars.

Inside the Glyphs class, we need to add the following attributes inside its config:

    saveAll: 'xf0c7',
    clearFilter: 'xf0b0'

And finally, we will add the two columns that are common to all the screens (the Last Update column and the delete Widget Column (#13)) along with the columns already declared in each specific GridPanel:

    me.columns = Ext.Array.merge( //#11
        me.columns,               //#12
        [{
            xtype    : 'datecolumn',
            text     : 'Last Update',
            width    : 150,
            dataIndex: 'last_update',
            format: 'Y-m-j H:i:s',
            filter: true
        },
        {
            xtype: 'widgetcolumn', //#13
            width: 45,
            sortable: false,       //#14
            menuDisabled: true,    //#15
            itemId: 'delete',
            widget: {
                xtype: 'button',   //#16
                glyph: Packt.util.Glyphs.getGlyph('destroy'),
                tooltip: 'Delete',
                scope: me,                //#17
                handler: function(btn) {  //#18
                    me.fireEvent('widgetclick', me, btn);
                }
            }
        }]
    );

In the preceding code, we merge (#11) me.columns (#12) with two other columns and assign this value to me.columns again. We want all child grids to have these two columns plus the specific columns for each child grid. If the columns configuration from the BaseGrid class were outside initConfig, then when a child class declared its own columns configuration the value would be overridden. If we declare the columns configuration inside initComponent, a child class would not be able to add its own columns configuration, so we need to merge these two configurations (the columns from the child class (#12) with the two columns we want each child class to have).
For the delete button, we are going to use a Widget Column (#13) (introduced in Ext JS 5). Until Ext JS 4, the only way to have a button inside a Grid Column was using an Action Column. We are going to use a button (#16) to represent a Widget Column. Because it is a Widget Column, there is no reason to make this column sortable (#14), and we can also disable its menu (#15). Specific GridPanels for each table Our last stop before we implement the Controller is the specific GridPanels. We have already created the super GridPanel that contains most of the capabilities that we need. Now we just need to declare the specific configurations for each GridPanel. We will create five GridPanels that will extend from the Packt.view.staticData.BaseGrid class, as follows: Packt.view.staticData.Actors Packt.view.staticData.Categories Packt.view.staticData.Languages Packt.view.staticData.Cities Packt.view.staticData.Countries Let's start with the Actors GridPanel, as follows: Ext.define('Packt.view.staticData.Actors', {     extend: 'Packt.view.staticData.BaseGrid',     xtype: 'actorsgrid',        //#1       store: 'staticData.Actors', //#2       columns: [         {             text: 'Actor Id',             width: 100,             dataIndex: 'actor_id',             filter: {                 type: 'numeric'   //#3             }         },         {             text: 'First Name',             flex: 1,             dataIndex: 'first_name',             editor: {                 allowBlank: false, //#4                 maxLength: 45      //#5             },             filter: {                 type: 'string'     //#6             }         },         {             text: 'Last Name',             width: 200,             dataIndex: 'last_name',             editor: {                 allowBlank: false, //#7                 maxLength: 45      //#8             },             filter: {                 type: 'string'     //#9             }         }     ] }); Each specific class has its own xtype (#1). We also need to execute an UPDATE query in the database to update the menu table with the new xtypes we are creating: UPDATE `sakila`.`menu` SET `className`='actorsgrid' WHERE `id`='5'; UPDATE `sakila`.`menu` SET `className`='categoriesgrid' WHERE `id`='6'; UPDATE `sakila`.`menu` SET `className`='languagesgrid' WHERE `id`='7'; UPDATE `sakila`.`menu` SET `className`='citiesgrid' WHERE `id`='8'; UPDATE `sakila`.`menu` SET `className`='countriesgrid' WHERE `id`='9'; The first declaration that is specific to the Actors GridPanel is the Store (#2). We are going to use the Actors Store. Because the Actors Store is inside the staticData folder (store/staticData), we also need to pass the name of the subfolder; otherwise, Ext JS will think that this Store file is inside the app/store folder, which is not true. Then we need to declare the columns specific to the Actors GridPanel (we do not need to declare the Last Update and the Delete Action Column because they are already in the super GridPanel). What you need to pay attention to now are the editor and filter configurations for each column. The editor is for editing (cellediting plugin). We will only apply this configuration to the columns we want the user to be able to edit, and the filter (filters plugin) is the configuration that we will apply to the columns we want the user to be able to filter information from. 
For example, for the id column, we do not want the user to be able to edit it as it is a sequence provided by the MySQL database auto increment, so we will not apply the editor configuration to it. However, the user can filter the information based on the ID, so we will apply the filter configuration (#3). We want the user to be able to edit the other two columns: first_name and last_name, so we will add the editor configuration. We can perform client validations as we can do on a field of a form too. For example, we want both fields to be mandatory (#4 and #7) and the maximum number of characters the user can enter is 45 (#5 and #8). And at last, as both columns are rendering text values (string), we will also apply filter (#6 and #9). For other filter types, please refer to the Ext JS documentation as shown in the following screenshot. The documentation provides an example and more configuration options that we can use: And that is it! The super GridPanel will provide all the other capabilities. Summary In this article, we covered how to implement screens that look very similar to the MySQL Table Editor. The most important concept we covered in this article is implementing abstract classes, using the inheritance concept from OOP. We are used to using these concepts on server-side languages, such as PHP, Java, .NET, and so on. This article demonstrated that it is also important to use these concepts on the Ext JS side; this way, we can reuse a lot of code and also implement generic code that provides the same capability for more than one screen. We created a Base Model and Store. We used GridPanel and Live Search grid and filter plugin for the GridPanel as well. You learned how to perform CRUD operations using the Store capabilities. Resources for Article: Further resources on this subject: So, What Is EXT JS? [article] The Login Page Using EXT JS [article] Improving Code Quality [article]
Agile data modeling with Neo4j

Packt
25 Feb 2015
18 min read
In this article by Sumit Gupta, author of the book Neo4j Essentials, we will discuss data modeling in Neo4j, which is evolving and flexible enough to adapt to changing business requirements. It captures the new data sources, entities, and their relationships as they naturally occur, allowing the database to easily adapt to the changes, which in turn results in an extremely agile development and provides quick responsiveness to changing business requirements. (For more resources related to this topic, see here.) Data modeling is a multistep process and involves the following steps: Define requirements or goals in the form of questions that need to be executed and answered by the domain-specific model. Once we have our goals ready, we can dig deep into the data and identify entities and their associations/relationships. Now, as we have our graph structure ready, the next step is to form patterns from our initial questions/goals and execute them against the graph model. This whole process is applied in an iterative and incremental manner, similar to what we do in agile, and has to be repeated again whenever we change our goals or add new goals/questions, which need to be answered by your graph model. Let's see in detail how data is organized/structured and implemented in Neo4j to bring in the agility of graph models. Based on the principles of graph data structure available at http://en.wikipedia.org/wiki/Graph_(abstract_data_type), Neo4j implements the property graph data model at storage level, which is efficient, flexible, adaptive, and capable of effectively storing/representing any kind of data in the form of nodes, properties, and relationships. Neo4j not only implements the property graph model, but has also evolved the traditional model and added the feature of tagging nodes with labels, which is now referred to as the labeled property graph. Essentially, in Neo4j, everything needs to be defined in either of the following forms: Nodes: A node is the fundamental unit of a graph, which can also be viewed as an entity, but based on a domain model, it can be something else too. Relationships: These defines the connection between two nodes. Relationships also have types, which further differentiate relationships from one another. Properties: Properties are viewed as attributes and do not have their own existence. They are related either to nodes or to relationships. Nodes and relationships can both have their own set of properties. Labels: Labels are used to construct a group of nodes into sets. Nodes that are labeled with the same label belong to the same set, which can be further used to create indexes for faster retrieval, mark temporary states of certain nodes, and there could be many more, based on the domain model. Let's see how all of these four forms are related to each other and represented within Neo4j. A graph essentially consists of nodes, which can also have properties. Nodes are linked to other nodes. The link between two nodes is known as a relationship, which also can have properties. Nodes can also have labels, which are used for grouping the nodes. Let's take up a use case to understand data modeling in Neo4j. John is a male and his age is 24. He is married to a female named Mary whose age is 20. John and Mary got married in 2012. Now, let's develop the data model for the preceding use case in Neo4j: John and Mary are two different nodes. Marriage is the relationship between John and Mary. Age of John, age of Mary, and the year of their marriage become the properties. 
Male and Female become the labels. Easy, simple, flexible, and natural… isn't it? The data structure in Neo4j is adaptive and effectively can model everything that is not fixed and evolves over a period of time. The next step in data modeling is fetching the data from the data model, which is done through traversals. Traversals are another important aspect of graphs, where you need to follow paths within the graph starting from a given node and then following its relationships with other nodes. Neo4j provides two kinds of traversals: breadth first available at http://en.wikipedia.org/wiki/Breadth-first_search and depth first available at http://en.wikipedia.org/wiki/Depth-first_search. If you are from the RDBMS world, then you must now be wondering, "What about the schema?" and you will be surprised to know that Neo4j is a schemaless or schema-optional graph database. We do not have to define the schema unless we are at a stage where we want to provide some structure to our data for performance gains. Once performance becomes a focus area, then you can define a schema and create indexes/constraints/rules over data. Unlike the traditional models where we freeze requirements and then draw our models, Neo4j embraces data modeling in an agile way so that it can be evolved over a period of time and is highly responsive to the dynamic and changing business requirements. Read-only Cypher queries In this section, we will discuss one of the most important aspects of Neo4j, that is, read-only Cypher queries. Read-only Cypher queries are not only the core component of Cypher but also help us in exploring and leveraging various patterns and pattern matching constructs. It either begins with MATCH, OPTIONAL MATCH, or START, which can be used in conjunction with the WHERE clause and further followed by WITH and ends with RETURN. Constructs such as ORDER BY, SKIP, and LIMIT can also be used with WITH and RETURN. We will discuss in detail about read-only constructs, but before that, let's create a sample dataset and then we will discuss constructs/syntax of read-only Cypher queries with illustration. Creating a sample dataset – movie dataset Let's perform the following steps to clean up our Neo4j database and insert some data which will help us in exploring various constructs of Cypher queries: Open your Command Prompt or Linux shell and open the Neo4j shell by typing <$NEO4J_HOME>/bin/neo4j-shell. Execute the following commands on your Neo4j shell for cleaning all the previous data: //Delete all relationships between Nodes MATCH ()-[r]-() delete r; //Delete all Nodes MATCH (n) delete n; Now we will create a sample dataset, which will contain movies, artists, directors, and their associations. 
Execute the following set of Cypher queries in your Neo4j shell to create the list of movies:
CREATE (:Movie {Title : 'Rocky', Year : '1976'});
CREATE (:Movie {Title : 'Rocky II', Year : '1979'});
CREATE (:Movie {Title : 'Rocky III', Year : '1982'});
CREATE (:Movie {Title : 'Rocky IV', Year : '1985'});
CREATE (:Movie {Title : 'Rocky V', Year : '1990'});
CREATE (:Movie {Title : 'The Expendables', Year : '2010'});
CREATE (:Movie {Title : 'The Expendables II', Year : '2012'});
CREATE (:Movie {Title : 'The Karate Kid', Year : '1984'});
CREATE (:Movie {Title : 'The Karate Kid II', Year : '1986'});
Execute the following set of Cypher queries in your Neo4j shell to create the list of artists:
CREATE (:Artist {Name : 'Sylvester Stallone', WorkedAs : ["Actor", "Director"]});
CREATE (:Artist {Name : 'John G. Avildsen', WorkedAs : ["Director"]});
CREATE (:Artist {Name : 'Ralph Macchio', WorkedAs : ["Actor"]});
CREATE (:Artist {Name : 'Simon West', WorkedAs : ["Director"]});
Execute the following set of Cypher queries in your Neo4j shell to create the relationships between artists and movies:
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky II"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky III"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky IV"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky V"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables"}) CREATE (artist)-[:ACTED_IN {Role : "Barney Ross"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables II"}) CREATE (artist)-[:ACTED_IN {Role : "Barney Ross"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky II"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky III"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky IV"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "Rocky"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "Rocky V"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "The Karate Kid"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G.
Avildsen"}), (movie:Movie {Title: "The Karate Kid II"}) CREATE (artist)-[:DIRECTED]->(movie); Match (artist:Artist {Name : "Ralph Macchio"}), (movie:Movie {Title: "The Karate Kid"}) CREATE (artist)-[:ACTED_IN {Role:"Daniel LaRusso"}]->(movie); Match (artist:Artist {Name : "Ralph Macchio"}), (movie:Movie {Title: "The Karate Kid II"}) CREATE (artist)-[:ACTED_IN {Role:"Daniel LaRusso"}]->(movie); Match (artist:Artist {Name : "Simon West"}), (movie:Movie {Title: "The Expendables II"}) CREATE (artist)-[:DIRECTED]->(movie); Next, browse your data through the Neo4j browser. Click on Get some data from the left navigation pane and then execute the query by clicking on the right arrow sign that will appear on the extreme right corner just below the browser navigation bar, and it should look something like this: Now, let's understand the different pieces of read-only queries and execute those against our movie dataset. Working with the MATCH clause MATCH is the most important clause used to fetch data from the database. It accepts a pattern, which defines "What to search?" and "From where to search?". If the latter is not provided, then Cypher will scan the whole tree and use indexes (if defined) in order to make searching faster and performance more efficient. Working with nodes Let's start asking questions from our movie dataset and then form Cypher queries, execute them on <$NEO4J_HOME>/bin/neo4j-shell against the movie dataset and get the results that will produce answers to our questions: How do we get all nodes and their properties? Answer: MATCH (n) RETURN n; Explanation: We are instructing Cypher to scan the complete database and capture all nodes in a variable n and then return the results, which in our case will be printed on our Neo4j shell. How do we get nodes with specific properties or labels? Answer: Match with label, MATCH (n:Artist) RETURN n; or MATCH (n:Movies) RETURN n; Explanation: We are instructing Cypher to scan the complete database and capture all nodes, which contain the value of label as Artist or Movies. Answer: Match with a specific property MATCH (n:Artist {WorkedAs:["Actor"]}) RETURN n; Explanation: We are instructing Cypher to scan the complete database and capture all nodes that contain the value of label as Artist and the value of property WorkedAs is ["Actor"]. Since we have defined the WorkedAs collection, we need to use square brackets, but in all other cases, we should not use square brackets. We can also return specific columns (similar to SQL). For example, the preceding statement can also be formed as MATCH (n:Artist {WorkedAs:["Actor"]}) RETURN n.name as Name;. Working with relationships Let's understand the process of defining relationships in the form of Cypher queries in the same way as you did in the previous section while working with nodes: How do we get nodes that are associated or have relationships with other nodes? Answer: MATCH (n)-[r]-(n1) RETURN n,r,n1; Explanation: We are instructing Cypher to scan the complete database and capture all nodes, their relationships, and nodes with which they have relationships in variables n, r, and n1 and then further return/print the results on your Neo4j shell. Also, in the preceding query, we have used - and not -> as we do not care about the direction of relationships that we retrieve. How do we get nodes, their associated properties that have some specific type of relationship, or the specific property of a relationship? 
Answer: MATCH (n)-[r:ACTED_IN {Role : "Rocky Balboa"}]->(n1) RETURN n,r,n1; Explanation: We are instructing Cypher to scan the complete database and capture all nodes, their relationships, and nodes, which have a relationship as ACTED_IN and with the property of Role as Rocky Balboa. Also, in the preceding query, we do care about the direction (incoming/outgoing) of a relationship, so we are using ->. For matching multiple relations replace [r:ACTED_IN] with [r:ACTED_IN | DIRECTED] and use single quotes or escape characters wherever there are special characters in the name of relationships. How do we get a coartist? Answer: MATCH (n {Name : "Sylvester Stallone"})-[r]->(x)<-[r1]-(n1) return n.Name as Artist,type(r),x.Title as Movie, type(r1), n1.Name as Artist2; Explanation: We are trying to find out all artists that are related to Sylvester Stallone in some manner or the other. Once you run the preceding query, you will see something like the following image, which should be self-explanatory. Also, see the usage of as and type. as is similar to the SQL construct and is used to define a meaningful name to the column presenting the results, and type is a special keyword that gives the type of relationship between two nodes. How do we get the path and number of hops between two nodes? Answer: MATCH p = (:Movie{Title:"The Karate Kid"})-[:DIRECTED*0..4]-(:Movie{Title:"Rocky V"}) return p; Explanation: Paths are the distance between two nodes. In the preceding statement, we are trying to find out all paths between two nodes, which are between 0 (minimum) and 4 (maximum) hops away from each other and are only connected through the relationship DIRECTED. You can also find the path, originating only from a particular node and re-write your query as MATCH p = (:Movie{Title:"The Karate Kid"})-[:DIRECTED*0..4]-() return p;. Integration of the BI tool – QlikView In this section, we will talk about the integration of the BI tool—QlikView with Neo4j. QlikView is available only on the Windows platform, so this section is only applicable for Windows users. Neo4j as an open source database exposes its core APIs for developers to write plugins and extends its intrinsic capabilities. Neo4j JDBC is one such plugin that enables the integration of Neo4j with various BI / visualization and ETL tools such as QlikView, Jaspersoft, Talend, Hive, Hbase, and many more. Let's perform the following steps for integrating Neo4j with QlikView on Windows: Download, install, and configure the following required software: Download the Neo4j JDBC driver directly from https://github.com/neo4j-contrib/neo4j-jdbc as the source code. You need to compile and create a JAR file or you can also directly download the compiled sources from http://dist.neo4j.org/neo4j-jdbc/neo4j-jdbc-2.0.1-SNAPSHOT-jar-with-dependencies.jar. Depending upon your Windows platform (32 bit or 64 bit), download the QlikView Desktop Personal Edition. In this article, we will be using QlikView Desktop Personal Edition 11.20.12577.0 SR8. Install QlikView and follow the instructions as they appear on your screen. After installation, you will see the QlikView icon in your Windows start menu. QlikView leverages QlikView JDBC Connector for integration with JDBC data sources, so our next step would be to install QlikView JDBC Connector. 
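Before wiring up QlikView, it can help to see what the Neo4j JDBC driver gives us at the code level. The following is a minimal sketch (not part of the original walkthrough) of querying Neo4j over JDBC from plain Java, assuming the neo4j-jdbc 2.x driver mentioned above is on the classpath, the server is reachable at the same jdbc:neo4j://localhost:7474/ URL that we will shortly configure in QlikView, and the movie dataset created earlier is loaded; the class name is illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class Neo4jJdbcExample {
    public static void main(String[] args) throws SQLException {
        // The JDBC-4 compliant neo4j-jdbc driver registers itself for jdbc:neo4j:// URLs.
        try (Connection con = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
             Statement stmt = con.createStatement();
             // Cypher is passed as the query text; here we list the movie titles.
             ResultSet rs = stmt.executeQuery("MATCH (m:Movie) RETURN m.Title as Title")) {
            while (rs.next()) {
                System.out.println(rs.getString("Title"));
            }
        }
    }
}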
Let's perform the following steps to install and configure the QlikView JDBC Connector: Download QlikView JDBC Connector either from http://www.tiq-solutions.de/display/enghome/ENJDBC or from https://s3-eu-west-1.amazonaws.com/tiq-solutions/JDBCConnector/JDBCConnector_Setup.zip. Open the downloaded JDBCConnector_Setup.zip file and install the provided connector. Once the installation is complete, open JDBC Connector from your Windows Start menu and click on Active for a 30-day trial (if you haven't already done so during installation). Create a new profile of the name Qlikview-Neo4j and make it your default profile. Open the JVM VM Options tab and provide the location of jvm.dll, which can be located at <$JAVA_HOME>/jre/bin/client/jvm.dll. Click on Open Log-Folder to check the logs related to the Database connections. You can configure the Logging Level and also define the JVM runtime options such as -Xmx and -Xms in the textbox provided for Option in the preceding screenshot. Browse through the JDBC Driver tab, click on Add Library, and provide the location of your <$NEO4J-JDBC driver>.jar, and add the dependent JAR files. Instead of adding individual libraries, we can also add a folder containing the same list of libraries by clicking on the Add Folder option. We can also use non JDBC-4 compliant drivers by mentioning the name of the driver class in the Advanced Settings tab. There is no need to do that, however, if you are setting up a configuration profile that uses a JDBC-4 compliant driver. Open the License Registration tab and request Online Trial license, which will be valid for 30 days. Assuming that you are connected to the Internet, the trial license will be applied immediately. Save your settings and close QlikView JDBC Connector configuration. Open <$NEO4J_HOME>binneo4jshell and execute the following set of Cypher statements one by one to create sample data in your Neo4j database; then, in further steps, we will visualize this data in QlikView: CREATE (movies1:Movies {Title : 'Rocky', Year : '1976'}); CREATE (movies2:Movies {Title : 'Rocky II', Year : '1979'}); CREATE (movies3:Movies {Title : 'Rocky III', Year : '1982'}); CREATE (movies4:Movies {Title : 'Rocky IV', Year : '1985'}); CREATE (movies5:Movies {Title : 'Rocky V', Year : '1990'}); CREATE (movies6:Movies {Title : 'The Expendables', Year : '2010'}); CREATE (movies7:Movies {Title : 'The Expendables II', Year : '2012'}); CREATE (movies8:Movies {Title : 'The Karate Kid', Year : '1984'}); CREATE (movies9:Movies {Title : 'The Karate Kid II', Year : '1986'}); Open the QlikView Desktop Personal Edition and create a new view by navigating to File | New. The Getting Started wizard may appear as we are creating a new view. Close this wizard. Navigate to File | EditScript and change your database to JDBCConnector.dll (32). In the same window, click on Connect and enter "jdbc:neo4j://localhost:7474/" in the "url" box. Leave the username and password as empty and click on OK. You will see that a CUSTOM CONNECT TO statement is added in the box provided. Next insert the highlighted Cypher statements in the provided window just below the CUSTOM CONNECT TO statement. Save the script and close the EditScript window. Now, on your Qlikviewsheet, execute the script by pressing Ctrl + R on our keyboard. Next, add a new TableObject on your Qlikviewsheet, select "MovieTitle" from the provided fields and click on OK. And we are done!!!! You will see the data appearing in the listbox in the newly created Table Object. 
The data is fetched from the Neo4j database and QlikView is used to render this data. The same process is used for connecting to other JDBC-compliant BI / visualization / ETL tools such as Jasper, Talend, Hive, Hbase, and so on. We just need to define appropriate JDBC Type-4 drivers in JDBC Connector. We can also use ODBC-JDBC Bridge provided by EasySoft at http://www.easysoft.com/products/data_access/odbc_jdbc_gateway/index.html. EasySoft provides the ODBC-JDBC Gateway, which facilitates ODBC access from applications such as MS Access, MS Excel, Delphi, and C++ to Java databases. It is a fully functional ODBC 3.5 driver that allows you to access any JDBC data source from any ODBC-compatible application. Summary In this article, you have learned the basic concepts of data modeling in Neo4j and have walked you through the process of BI integration with Neo4j. Resources for Article: Further resources on this subject: Working Neo4j Embedded Database [article] Introducing Kafka [article] First Steps With R [article]
Introducing Web Application Development in Rails

Packt
25 Feb 2015
8 min read
In this article by Syed Fazle Rahman, author of the book Bootstrap for Rails,we will learn how to present your application in the best possible way, which has been the most important factor for every web developer for ages. In this mobile-first generation, we are forced to go with the wind and make our application compatible with mobiles, tablets, PCs, and every possible display on Earth. Bootstrap is the one stop solution for all woes that developers have been facing. It creates beautiful responsive designs without any extra efforts and without any advanced CSS knowledge. It is a true boon for every developer. we will be focusing on how to beautify our Rails applications through the help of Bootstrap. We will create a basic Todo application with Rails. We will explore the folder structure of a Rails application and analyze which folders are important for templating a Rails Application. This will be helpful if you want to quickly revisit Rails concepts. We will also see how to create views, link them, and also style them. The styling in this article will be done traditionally through the application's default CSS files. Finally, we will discuss how we can speed up the designing process using Bootstrap. In short, we will cover the following topics: Why Bootstrap with Rails? Setting up a Todo Application in Rails Analyzing folder structure of a Rails application Creating views Styling views using CSS Challenges in traditionally styling a Rails Application (For more resources related to this topic, see here.) Why Bootstrap with Rails? Rails is one the most popular Ruby frameworks which is currently at its peak, both in terms of demand and technology trend. With more than 3,100 members contributing to its development, and tens of thousands of applications already built using it, Rails has created a standard for every other framework in the Web today. Rails was initially developed by David Heinemeier Hansson in 2003 to ease his own development process in Ruby. Later, he became generous enough to release Rails to the open source community. Today, it is popularly known as Ruby on Rails. Rails shortens the development life cycle by moving the focus from reinventing the wheel to innovating new features. It is based on the convention of the configurations principle, which means that if you follow the Rails conventions, you would end up writing much less code than you would otherwise write. Bootstrap, on the other hand, is one of the most popular frontend development frameworks. It was initially developed at Twitter for some of its internal projects. It makes the life of a novice web developer easier by providing most of the reusable components that are already built and are ready to use. Bootstrap can be easily integrated with a Rails development environment through various methods. We can directly use the .css files provided by the framework, or can extend it through its Sass version and let Rails compile it. Sass is a CSS preprocessor that brings logic and functionality into CSS. It includes features like variables, functions, mixins, and others. Using the Sass version of Bootstrap is a recommended method in Rails. It gives various options to customize Bootstrap's default styles easily. Bootstrap also provides various JavaScript components that can be used by those who don't have any real JavaScript knowledge. These components are required in almost every modern website being built today. Bootstrap with Rails is a deadly combination. 
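As a quick illustration of the Sass route mentioned above, the integration usually boils down to one gem plus a couple of asset directives. This is a sketch of the commonly used bootstrap-sass setup rather than part of this article's plain-CSS approach, so gem versions, file names, and paths may differ in your project:

# Gemfile – then run bundle install
gem 'bootstrap-sass'

// app/assets/stylesheets/application.css.scss
// (rename application.css to application.css.scss so Rails compiles it with Sass)
@import "bootstrap-sprockets";
@import "bootstrap";

// app/assets/javascripts/application.js – needed for Bootstrap's JavaScript components
//= require jquery
//= require bootstrap-sprockets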
You can build applications faster and invest more time to think about functionality, rather than rewrite codes. Setting up a Todo application in Rails I assume that you already have basic knowledge of Rails development. You should also have Rails and Ruby installed in your machine to start with. Let's first understand what this Todo application will do. Our application will allow us to create, update, and delete items from the Todo list. We will first analyze the folders that are created while scaffolding this application and which of them are necessary for templating the application. So, let's dip our feet into the water: First, we need to select our workspace, which can be any folder inside your system. Let's create a folder named Bootstrap_Rails_Project. Now, open the terminal and navigate to this folder. It's time to create our Todo application. Write the following command to create a Rails application named TODO: rails new TODO This command will execute a series of various other commands that are necessary to create a Rails application. So, just wait for sometime before it stops executing all the codes. If you are using a newer version of Rails, then this command will also execute bundle install command at the end. Bundle install command is used to install other dependencies. The output for the preceding command is as follows: Now, you should have a new folder inside Bootstrap_Rails_Project named TODO, which was created by the preceding code. Here is the output: Analyzing folder structure of a Rails application Let's navigate to the TODO folder to check how our application's folder structure looks like: Let me explain to you some of the important folders here: The first one is the app folder. The assets folder inside the app folder is the location to store all the static files like JavaScript, CSS, and Images. You can take a sneak peek inside them to look at the various files. The controllers folder handles various requests and responses of the browser. The helpers folder contains various helper methods both for the views and controllers. The next folder mailers, contains all the necessary files to send an e-mail. The models folder contains files that interact with the database. Finally, we have the views folder, which contains all the .erb files that will be complied to HTML files. So, let's start the Rails server and check out our application on the browser: Navigate to the TODO folder in the terminal and then type the following command to start a server: rails server You can also use the following command: rails s You will see that the server is deployed under the port 3000. So, type the following URL to view the application: http://localhost:3000. You can also use the following URL: http://0.0.0.0:3000. If your application is properly set up, you should see the default page of Rails in the browser: Creating views We will be using Rails' scaffold method to create models, views, and other necessary files that Rails needs to make our application live. Here's the set of tasks that our application should perform: It should list out the pending items Every task should be clickable, and the details related to that item should be seen in a new view We can edit that item's description and some other details We can delete that item The task looks pretty lengthy, but any Rails developer would know how easy it is to do. We don't actually have to do anything to achieve it. We just have to pass a single scaffold command, and the rest will be taken care of. 
Close the Rails server using Ctrl + C keys and then proceed as follows: First, navigate to the project folder in the terminal. Then, pass the following command: rails g scaffold todo title:string description:text completed:boolean This will create a new model called todo that has various fields like title, description, and completed. Each field has a type associated with it. Since we have created a new model, it has to be reflected in the database. So, let's migrate it: rake db:create db:migrate The preceding code will create a new table inside a new database with the associated fields. Let's analyze what we have done. The scaffold command has created many HTML pages or views that are needed for managing the todo model. So, let's check out our application. We need to start our server again: rails s Go to the localhost page http://localhost:3000 at port number 3000. You should still see the Rails' default page. Now, type the URL: http://localhost:3000/todos. You should now see the application, as shown in the following screenshot: Click on New Todo, you will be taken to a form which allows you to fill out various fields that we had earlier created. Let's create our first todo and click on submit. It will be shown on the listing page: It was easy, wasn't it? We haven't done anything yet. That's the power of Rails, which people are crazy about. Summary This article was to brief you on how to develop and design a simple Rails application without the help of any CSS frontend frameworks. We manually styled the application by creating an external CSS file styles.css and importing it into the application using another CSS file application.css. We also discussed the complexities that a novice web designer might face on directly styling the application. Resources for Article: Further resources on this subject: Deep Customization of Bootstrap [article] The Bootstrap grid system [article] Getting Started with Bootstrap [article]
URL Routing and Template Rendering

Packt
24 Feb 2015
11 min read
In this article by Ryan Baldwin, the author of Clojure Web Development Essentials, however, we will start building our application, creating actual endpoints that process HTTP requests, which return something we can look at. We will: Learn what the Compojure routing library is and how it works Build our own Compojure routes to handle an incoming request What this chapter won't cover, however, is making any of our HTML pretty, client-side frameworks, or JavaScript. Our goal is to understand the server-side/Clojure components and get up and running as quickly as possible. As a result, our templates are going to look pretty basic, if not downright embarrassing. (For more resources related to this topic, see here.) What is Compojure? Compojure is a small, simple library that allows us to create specific request handlers for specific URLs and HTTP methods. In other words, "HTTP Method A requesting URL B will execute Clojure function C. By allowing us to do this, we can create our application in a sane way (URL-driven), and thus architect our code in some meaningful way. For the studious among us, the Compojure docs can be found at https://github.com/weavejester/compojure/wiki. Creating a Compojure route Let's do an example that will allow the awful sounding tech jargon to make sense. We will create an extremely basic route, which will simply print out the original request map to the screen. Let's perform the following steps: Open the home.clj file. Alter the home-routes defroute such that it looks like this: (defroutes home-routes   (GET "/" [] (home-page))   (GET "/about" [] (about-page))   (ANY "/req" request (str request))) Start the Ring Server if it's not already started. Navigate to http://localhost:3000/req. It's possible that your Ring Server will be serving off a port other than 3000. Check the output on lein ring server for the serving port if you're unable to connect to the URL listed in step 4. You should see something like this: Using defroutes Before we dive too much into the anatomy of the routes, we should speak briefly about what defroutes is. The defroutes macro packages up all of the routes and creates one big Ring handler out of them. Of course, you don't need to define all the routes for an application under a single defroutes macro. You can, and should, spread them out across various namespaces and then incorporate them into the app in Luminus' handler namespace. Before we start making a bunch of example routes, let's move the route we've already created to its own namespace: Create a new namespace hipstr.routes.test-routes (/hipstr/routes/test_routes.clj) . Ensure that the namespace makes use of the Compojure library: (ns hipstr.routes.test-routes   (:require [compojure.core :refer :all])) Next, use the defroutes macro and create a new set of routes, and move the /req route we created in the hipstr.routes.home namespace under it: (defroutes test-routes   (ANY "/req" request (str request))) Incorporate the new test-routes route into our application handler. In hipstr.handler, perform the following steps: Add a requirement to the hipstr.routes.test-routes namespace: (:require [compojure.core :refer [defroutes]]   [hipstr.routes.home :refer [home-routes]]   [hipstr.routes.test-routes :refer [test-routes]]   …) Finally, add the test-routes route to the list of routes in the call to app-handler: (def app (app-handler   ;; add your application routes here   [home-routes test-routes base-routes] We've now created a new routing namespace. 
It's with this namespace where we will create the rest of the routing examples. Anatomy of a route So what exactly did we just create? We created a Compojure route, which responds to any HTTP method at /req and returns the result of a called function, in our case a string representation of the original request map. Defining the method The first argument of the route defines which HTTP method the route will respond to; our route uses the ANY macro, which means our route will respond to any HTTP method. Alternatively, we could have restricted which HTTP methods the route responds to by specifying a method-specific macro. The compojure.core namespace provides macros for GET, POST, PUT, DELETE, HEAD, OPTIONS, and PATCH. Let's change our route to respond only to requests made using the GET method: (GET "/req" request (str request)) When you refresh your browser, the entire request map is printed to the screen, as we'd expect. However, if the URL and the method used to make the request don't match those defined in our route, the not-found route in hipstr.handler/base-routes is used. We can see this in action by changing our route to listen only to the POST methods: (POST "/req" request (str request)) If you try and refresh the browser again, you'll notice we don't get anything back. In fact, an "HTTP 404: Page Not Found" response is returned to the client. If we POST to the URL from the terminal using curl, we'll get the following expected response: # curl -d {} http://localhost:3000/req {:ssl-client-cert nil, :go-bowling? "YES! NOW!", :cookies {}, :remote-addr "0:0:0:0:0:0:0:1", :params {}, :flash nil, :route-params {}, :headers {"user-agent" "curl/7.37.1", "content-type" "application/x-www-form-urlencoded", "content-length" "2", "accept" "*/*", "host" "localhost:3000"}, :server-port 3000, :content-length 2, :form-params {}, :session/key nil, :query-params {}, :content-type "application/x-www-form-urlencoded", :character-encoding nil, :uri "/req", :server-name "localhost", :query-string nil, :body #<HttpInput org.eclipse.jetty.server.HttpInput@38dea1>, :multipart-params {}, :scheme :http, :request-method :post, :session {}} Defining the URL The second component of the route is the URL on which the route is served. This can be anything we want and as long as the request to the URL matches exactly, the route will be invoked. There are, however, two caveats we need to be aware of: Routes are tested in order of their declaration, so order matters. The trailing slash isn't handled well. Compojure will always strip the trailing slash from the incoming request but won't redirect the user to the URL without the trailing slash. As a result an HTTP 404: Page Not Found response is returned. So never base anything off a trailing slash, lest ye peril in an ocean of confusion. Parameter destructuring In our previous example we directly refer to the implicit incoming request and pass that request to the function constructing the response. This works, but it's nasty. Nobody ever said, I love passing around requests and maintaining meaningless code and not leveraging URLs, and if anybody ever did, we don't want to work with them. Thankfully, Compojure has a rather elegant destructuring syntax that's easier to read than Clojure's native destructuring syntax. Let's create a second route that allows us to define a request map key in the URL, then simply prints that value in the response: (GET "/req/:val" [val] (str val)) Compojure's destructuring syntax binds HTTP request parameters to variables of the same name. 
In the previous syntax, the key :val will be in the request's :params map. Compojure will automatically map the value of {:params {:val...}} to the symbol val in [val]. In the end, you'll get the following output for the URL http://localhost:3000/req/holy-moly-molly: That's pretty slick but what if there is a query string? For example, http://localhost:3000/req/holy-moly-molly!?more=ThatsAHotTomalle. We can simply add the query parameter more to the vector, and Compojure will automatically bring it in: (GET "/req/:val" [val more] (str val "<br>" more)) Destructuring the request What happens if we still need access to the entire request? It's natural to think we could do this: (GET "/req/:val" [val request] (str val "<br>" request)) However, request will always be nil because it doesn't map back to a parameter key of the same name. In Compojure, we can use the magical :as key: (GET "/req/:val" [val :as request] (str val "<br>" request)) This will now result in request being assigned the entire request map, as shown in the following screenshot: Destructuring unbound parameters Finally, we can bind any remaining unbound parameters into another map using &. Take a look at the following example code: (GET "/req/:val/:another-val/:and-another"   [val & remainders] (str val "<br>" remainders)) Saving the file and navigating to http://localhost:3000/req/holy-moly-molly!/what-about/susie-q will render both val and the map with the remaining unbound keys :another-val and :and-another, as seen in the following screenshot: Constructing the response The last argument in the route is the construction of the response. Whatever the third argument resolves to will be the body of our response. For example, in the following route: (GET "/req/:val" [val] (str val)) The third argument, (str val), will echo whatever the value passed in on the URL is. So far, we've simply been making calls to Clojure's str but we can just as easily call one of our own functions. Let's add another route to our hipstr.routes.test-routes, and write the following function to construct its response: (defn render-request-val [request-map & [request-key]]   "Simply returns the value of request-key in request-map,   if request-key is provided; Otherwise return the request-map.   If request-key is provided, but not found in the request-map,   a message indicating as such will be returned." (str (if request-key         (if-let [result ((keyword request-key) request-map)]           result           (str request-key " is not a valid key."))         request-map))) (defroutes test-routes   (POST "/req" request (render-request-val request))   ;no access to the full request map   (GET "/req/:val" [val] (str val))   ;use :as to get access to full request map   (GET "/req/:val" [val :as full-req] (str val "<br>" full-req))   ;use :as to get access to the remainder of unbound symbols   (GET "/req/:val/:another-val/:and-another" [val & remainders]     (str val "<br>" remainders))   ;use & to get access to unbound params, and call our route   ;handler function   (GET "/req/:key" [key :as request]     (render-request-val request key))) Now when we navigate to http://localhost:3000/req/server-port, we'll see the value of the :server-port key in the request map… or wait… we should… what's wrong? If this doesn't seem right, it's because it isn't. Why is our /req/:val route getting executed? As stated earlier, the order of routes is important. 
Because /req/:val with the GET method is declared earlier, it's the first route to match our request, regardless of whether or not :val is in the HTTP request map's parameters. Routes are matched on URL structure, not on parameters keys. As it stands right now, our /req/:key will never get matched. We'll have to change it as follows: ;use & to get access to unbound params, and call our route handler function (GET "/req/:val/:another-val/:and-another" [val & remainders]   (str val "<br>" remainders))   ;giving the route a different URL from /req/:val will ensure its   execution   (GET "/req/key/:key" [key :as request] (render-request-val   request key))) Now that our /req/key/:key route is logically unique, it will be matched appropriately and render the server-port value to screen. Let's try and navigate to http://localhost:3000/req/key/server-port again: Generating complex responses What if we want to create more complex responses? How might we go about doing that? The last thing we want to do is hardcode a whole bunch of HTML into a function, it's not 1995 anymore, after all. This is where the Selmer library comes to the rescue. Summary In this article we have learnt what Compojure is, what a Compojure routing library is and how it works. You have also learnt to build your own Compojure routes to handle an incoming request, within which you learnt how to use defroutes, the anatomy of a route, destructuring parameter and how to define the URL. Resources for Article: Further resources on this subject: Vmware Vcenter Operations Manager Essentials - Introduction To Vcenter Operations Manager [article] Websockets In Wildfly [article] Clojure For Domain-Specific Languages - Design Concepts With Clojure [article]
Actors and Pawns

Packt
23 Feb 2015
7 min read
In this article by William Sherif, author of the book Learning C++ by Creating Games with UE4, we will really delve into UE4 code. At first, it is going to look daunting. The UE4 class framework is massive, but don't worry. The framework is massive, so your code doesn't have to be. You will find that you can get a lot done and a lot onto the screen using relatively less code. This is because the UE4 engine code is so extensive and well programmed that they have made it possible to get almost any game-related task done easily. Just call the right functions, and voila, what you want to see will appear on the screen. The entire notion of a framework is that it is designed to let you get the gameplay you want, without having to spend a lot of time in sweating out the details. (For more resources related to this topic, see here.) Actors versus pawns A Pawn is an object that represents something that you or the computer's Artificial Intelligence (AI) can control on the screen. The Pawn class derives from the Actor class, with the additional ability to be controlled by the player directly or by an AI script. When a pawn or actor is controlled by a controller or AI, it is said to be possessed by that controller or AI. Think of the Actor class as a character in a play. Your game world is going to be composed of a bunch of actors, all acting together to make the gameplay work. The game characters, Non-player Characters (NPCs), and even treasure chests will be actors. Creating a world to put your actors in Here, we will start from scratch and create a basic level into which we can put our game characters. The UE4 team has already done a great job of presenting how the world editor can be used to create a world in UE4. I want you to take a moment to create your own world. First, create a new, blank UE4 project to get started. To do this, in the Unreal Launcher, click on the Launch button beside your most recent engine installation, as shown in the following screenshot: That will launch the Unreal Editor. The Unreal Editor is used to visually edit your game world. You're going to spend a lot of time in the Unreal Editor, so please take your time to experiment and play around with it. I will only cover the basics of how to work with the UE4 editor. You will need to let your creative juices flow, however, and invest some time in order to become familiar with the editor. To learn more about the UE4 editor, take a look at the Getting Started: Introduction to the UE4 Editor playlist, which is available at https://www.youtube.com/playlist?list=PLZlv_N0_O1gasd4IcOe9Cx9wHoBB7rxFl. Once you've launched the UE4 editor, you will be presented with the Projects dialog. The following screenshot shows the steps to be performed with numbers corresponding to the order in which they need to be performed: Perform the following steps to create a project: Select the New Project tab at the top of the screen. Click on the C++ tab (the second subtab). Then select Basic Code from the available projects listing. Set the directory where your project is located (mine is Y:Unreal Projects). Choose a hard disk location with a lot of space (the final project will be around 1.5 GB). Name your project. I called mine GoldenEgg. Click on Create Project to finalize project creation. Once you've done this, the UE4 launcher will launch Visual Studio. There will only be a couple of source files in Visual Studio, but we're not going to touch those now. 
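Even though we are not editing the generated sources yet, it is useful to know roughly what a UE4 C++ class looks like, since every Actor or Pawn we eventually write follows the same shape. The sketch below is illustrative only: the class and file names are made up, and the GOLDENEGG_API export macro is assumed from the project name GoldenEgg used above.

// MyFirstActor.h – a minimal Actor subclass (illustrative sketch)
#pragma once

#include "GameFramework/Actor.h"
#include "MyFirstActor.generated.h"

UCLASS()
class GOLDENEGG_API AMyFirstActor : public AActor
{
    GENERATED_BODY()

public:
    AMyFirstActor();

    // Called once when the actor enters the world
    virtual void BeginPlay() override;

    // Called every frame while the actor is allowed to tick
    virtual void Tick(float DeltaSeconds) override;
};

// MyFirstActor.cpp
#include "GoldenEgg.h"      // the project's own header generated by UE4
#include "MyFirstActor.h"

AMyFirstActor::AMyFirstActor()
{
    PrimaryActorTick.bCanEverTick = true; // allow Tick() to run
}

void AMyFirstActor::BeginPlay()
{
    Super::BeginPlay();
}

void AMyFirstActor::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
}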
Make sure that Development Editor is selected from the Configuration Manager dropdown at the top of the screen, as shown in the following screenshot: Now launch your project by pressing Ctrl + F5 in Visual Studio. You will find yourself in the Unreal Engine 4 editor, as shown in the following screenshot: The UE4 editor We will explore the UE4 editor here. We'll start with the controls since it is important to know how to navigate in Unreal. Editor controls If you've never used a 3D editor before, the controls can be quite hard to learn. These are the basic navigation controls while in edit mode: Use the arrow keys to move around in the scene Press Page Up or Page Down to go up and down vertically Left mouse click + drag it left or right to change the direction you are facing Left mouse click + drag it up or down to dolly (move the camera forward and backward, same as pressing up/down arrow keys) Right mouse click + drag to change the direction you are facing Middle mouse click + drag to pan the view Right mouse click and the W, A, S, and D keys to move around the scene Play mode controls Click on the Play button in the bar at the top, as shown in the following screenshot. This will launch the play mode. Once you click on the Play button, the controls change. In play mode, the controls are as follows: The W, A, S, and D keys for movement The left or right arrow keys to look toward the left and right, respectively The mouse's motion to change the direction in which you look The Esc key to exit play mode and return to edit mode What I suggest you do at this point is try to add a bunch of shapes and objects into the scene and try to color them with different materials. Adding objects to the scene Adding objects to the scene is as easy as dragging and dropping them in from the Content Browser tab. The Content Browser tab appears, by default, docked at the left-hand side of the window. If it isn't seen, simply select Window and navigate to Content Browser in order to make it appear. Make sure that the Content Browser is visible in order to add objects to your level Next, select the Props folder on the left-hand side of the Content Browser. Drag and drop things from the Content Browser into your game world To resize an object, press R on your keyboard. The manipulators around the object will appear as boxes, which denotes resize mode. Press R on your keyboard to resize an object In order to change the material that is used to paint the object, simply drag and drop a new material from the Content Browser window inside the Materials folder. Drag and drop a material from the Content Browser's Materials folder to color things with a new color Materials are like paints. You can coat an object with any material you want by simply dragging and dropping the material you desire onto the object you desire it to be coated on. Materials are only skin-deep: they don't change the other properties of an object (such as weight). Starting from scratch If you want to start creating a level from scratch, simply click on File and navigate to New Level..., as shown here: You can then select between Default and Empty Level. I think selecting Empty Level is a good idea, for the reasons that are mentioned later. The new level will be completely black in color to start with. Try dragging and dropping some objects from the Content Browser tab again. 
This time, I added a resized shapes / box for the ground plane and textured it with moss, a couple of Props / SM_Rocks, Particles / P_Fire, and most importantly, a light source. Be sure to save your map. Here's a snapshot of my map (how does yours look?): Summary In this article, we reviewed how the realistic environments are created with actors and monsters which are part of the game and also seen how the various kind of levels are created from the scratch. Resources for Article: Further resources on this subject: Program structure, execution flow, and runtime objects [article] What is Quantitative Finance? [article] Creating and Utilizing Custom Entities [article]
Selenium Testing Tools

Packt
23 Feb 2015
8 min read
In this article by Raghavendra Prasad MG, author of the book Learning Selenium Testing Tools, Third Edition, you will be introduced to Selenium IDE and Selenium WebDriver: their installation, their basic and advanced features, and the implementation of an automation framework, along with the programming language basics required for automation. Automation is a key success factor for any software organization, so everybody is looking at free, community-supported tools such as Selenium. Anybody who is willing to learn and work on automation with Selenium can take the tool from the basics to an advanced level with this book, and the book should remain a long-term reference for the reader. (For more resources related to this topic, see here.) Key features of the book Following are the key features of the book: The book contains information from the basic level to advanced levels, and the reader does not need any prior knowledge as a prerequisite. The book contains real-time examples that help the reader relate the material to their own real-time scenarios. The book contains the basics of Java that are required for Selenium automation, so the reader need not go through other books that are full of information that is nowhere required for Selenium automation. The book covers automation framework design and implementation, which will definitely help the reader build and implement his/her own automation framework. What you will learn from this book? There are a lot of things you will learn from the book. A few of them are mentioned as follows: History of Selenium and its evolution Working with previous versions of Selenium, that is, Selenium IDE Selenium WebDriver – from basic to advanced state Basics of Java (only what is required for Selenium automation) WebElement handling with Selenium WebDriver Page Object Factory implementation Automation framework types, design, and implementation Utilities building for automation Who this book is for? The book is for manual testers. Any software professional who wants to make a career in Selenium automation testing can also use this book. This book also helps automation testers and automation architects who want to build or implement automation frameworks on the Selenium automation tool. What this book covers The book covers the following major topics: Selenium IDE Selenium IDE is a Firefox add-on developed originally by Shinya Kasatani as a way to use the original Selenium Core code without having to copy Selenium Core onto the server. Selenium Core is the set of key JavaScript modules that allows Selenium to drive the browser. It has been developed using JavaScript so that it can interact with the DOM (Document Object Model) using native JavaScript calls. Selenium IDE has been developed to allow testers and developers to record their actions as they follow the workflow that they need to test. Locators Locators show how we can find elements on the page to be used in our tests. We will use XPath, CSS, link text, and ID to find elements on the page so that we can interact with them. In the last chapter we managed to work against a page which had decent locators. In HTML, it is seen as a good practice to make sure that every element you need to interact with has an ID attribute and a name attribute.
Selenium WebDriver

The primary feature of Selenium WebDriver is the integration of the WebDriver API and its design to provide a simpler, more concise programming interface, in addition to addressing some limitations in the Selenium-RC API. Selenium WebDriver was developed to better support dynamic web pages where elements of a page may change without the page itself being reloaded. WebDriver's goal is to supply a well-designed, object-oriented API that provides improved support for modern advanced web application testing problems.

Finding elements

When working with WebDriver on a web application, we will need to find elements on the page. This is core to being able to work with the application: all the methods for performing actions on the web application, such as typing and clicking, require that we find the element first.

Finding an element on the page by its ID

The first item that we are going to look at is finding an element by ID. Searching for elements by ID is one of the easiest ways to find an element. We start with findElementById(). This method is a helper method that sets an argument for a more generic findElement call. We will now see how we can use it in action. The method's signature looks like the following line of code:

findElementById(String using);

The using variable takes the ID of the element that you wish to look for. It will return a WebElement object that we can then work with.

Using findElementById()

We find an element on the page by using the findElementById() method that is on each of the browser driver classes. findElement calls will return a WebElement object that we can perform actions on. Follow these steps to see how it works:

Open your Java IDE (IntelliJ IDEA and Eclipse are the ones that are mostly used).
We are going to use the following command:

WebElement element = ((FindsById)driver).findElementById("verifybutton");

Run the test from the IDE. It will look like the following screenshot:

Page Objects

In this section of the article, we are going to have a look at how we can apply some best practices to tests. You will learn how to make maintainable test suites that will allow you to update tests in seconds. We will have a look at creating your own DSL so that people can see intent. We will create tests using the Page Object pattern.

Working with FirefoxDriver

FirefoxDriver is the easiest driver to use, since everything that we need is bundled with the Java client bindings. We do the basic task of loading the browser and typing into the page as follows:

Update the setUp() method to load the FirefoxDriver():

driver = new FirefoxDriver();

Now we need to find an element.
We will find the one with the ID nextBid:

WebElement element = driver.findElement(By.id("nextBid"));

Now we need to type into that element as follows:

element.sendKeys("100");

Run your test and it should look like the following:

import org.openqa.selenium.*;
import org.openqa.selenium.firefox.*;
import org.testng.annotations.*;

public class TestChapter6 {

  WebDriver driver;

  @BeforeTest
  public void setUp(){
    driver = new FirefoxDriver();
    driver.get("http://book.theautomatedtester.co.uk/chapter4");
  }

  @AfterTest
  public void tearDown(){
    driver.quit();
  }

  @Test
  public void testExamples(){
    WebElement element = driver.findElement(By.id("nextBid"));
    element.sendKeys("100");
  }
}

We are currently witnessing an explosion of mobile devices in the market. A lot of them are more powerful than your average computer was just over a decade ago. This means that, in addition to having nice, clean, responsive, and functional desktop applications, we are starting to have to make sure the same basic functionality is available to mobile devices. We are going to be looking at how we can set up mobile devices to be used with Selenium WebDriver. We will learn the following topics:

How to use the stock browser on Android
How to test with Opera Mobile
How to test on iOS

Understanding Selenium Grid

Selenium Grid is a version of Selenium that allows teams to set up a number of Selenium instances and then have one central point to send your Selenium commands to. This differs from what we saw with Selenium Remote WebDriver, where we always had to explicitly say where the Selenium Server is, as well as know which browsers that server can handle. With Selenium Grid, we just ask for a specific browser, and the hub that is part of Selenium Grid will route all the Selenium commands through to the Remote Control you want.

Summary

We have understood and learnt what Selenium is and its evolution from the IDE to WebDriver and Grid. In addition, we have learnt how to identify WebElements using WebDriver, its design patterns, and its locators. We have also learnt about automation framework design and implementation, and about mobile application automation on Android and iOS. Finally, we understood the concept of Selenium Grid.

Resources for Article:

Further resources on this subject:

Quick Start into Selenium Tests [article]
Getting Started With Selenium Webdriver and Python [article]
Exploring Advanced Interactions of WebDriver [article]


Customizing App Controller

Packt
23 Feb 2015
15 min read
This article by Nasir Naeem, the author of Learning System Center App Controller, introduces you to the App Controller administrative portal. Further instructions are provided to integrate the SCVMM server with App Controller. It also covers integrating an Azure cloud subscription, roles-based access, adding a VMM library share and a network share, and configuring the SSL certificate for the App Controller website.

System Center 2012 R2 App Controller provides a web-based portal to manage on-premises private clouds, Microsoft Azure subscriptions, and third-party cloud solutions through a single pane of glass. Before we can manage these solutions, we will need to connect to these resources by integrating them in the App Controller admin console. This article will walk you through the steps required for integration.

(For more resources related to this topic, see here.)

Logging in to the App Controller interface

In this section, we will log in to the App Controller web portal for the first time. Perform the following steps to open the App Controller admin console:

1. Before attempting to log on to the App Controller server, ensure that Microsoft Silverlight is installed on the server. It can be downloaded from http://www.microsoft.com/silverlight/.
2. Log on to the App Controller server.
3. Launch Internet Explorer, type https://localhost, and press the Enter key.
4. If a warning message saying There is a problem with this website's security certificate shows up, select the Continue to this website (not recommended) link.
5. You will now be presented with the logon screen. Provide administrative credentials to log on; to start with, these are the service account details of the App Controller administrator.
6. Click on the Sign In button in the browser.
7. Depending on the speed of your system, the Microsoft System Center 2012 R2 App Controller Admin portal will open. The Admin portal is based on Silverlight technology and looks very similar to the Virtual Machine Manager console, as follows:

The App Controller Admin console is divided into seven sections. By default, the Overview page shows up every time we log in to the admin portal. We can manage multiple subscriptions and common tasks on the Overview page. There are three main categories on the Overview page: Status contains the private clouds created in the VMM server, Public Clouds displays the Microsoft Azure subscriptions being managed by App Controller, and Hosting Service Providers shows third-party service providers.

Integrating the Virtual Machine Manager server for private cloud management

In this section, we will integrate our previously installed Virtual Machine Manager server with the App Controller server. Perform the following steps to complete the task:

1. Log on to the App Controller server. Launch Internet Explorer and log in with an account that has local administrative access on the App Controller server.
2. On the Overview page, under the Private Clouds subsection of the Status section, click on the Connect a Virtual Machine Manager server link. In future, we can instead click on the Settings link in the left pane, select Connections in the submenu, click on Connect in the middle pane, and select SCVMM from the pop-up menu.
3. In the pop-up dialog box, provide a Connection Name, Description, Server name, and a Port for VMM communication. Ensure that you select Automatically import SSL certificates. Then click on the OK button.
After a couple of minutes, VMM integration will be completed and the Private Clouds section will be populated with the current configuration set in the VMM server. We can also see the configuration of the configured clouds in Virtual Machine Manager. By clicking on Clouds in the left pane, Contoso Cloud can be seen in the middle pane with a description and cloud name assigned. To see the compute limitations set on the cloud, we can change the View option to show information cards by clicking on the Show items as cards button in the top-right corner.

Configuring a Microsoft Azure subscription

In this section, we will configure the on-premises App Controller deployment to connect to a Windows Azure subscription. The following capabilities will be enabled for our private cloud users in both the private and public cloud:

Start virtual machines
Stop virtual machines
Shut down virtual machines
Restart virtual machines
Connect to virtual machines
Modify existing virtual machines
Copy existing virtual machines to Azure
Deploy virtual machines
Deploy cloud services
Add virtual machines to cloud services
Modify existing services
View and manage jobs

To connect App Controller to Windows Azure, we have to first create a self-signed certificate, then export the certificate package with the private key and also export the certificate without the private key. Next, we need to upload the certificate without the private key to the Windows Azure management portal and import the certificate package, along with the subscription ID, into App Controller. Perform the following steps to complete the task:

1. Log on to the App Controller server and launch the IIS Manager console.
2. Left-click on the server name in the console. Double-click on Server Certificates in the IIS feature section, as follows:
3. In the Actions pane on the right side of the console, click on the Create Self-Signed Certificate link.
4. In the Create Self-Signed Certificate wizard, provide a friendly name such as AzureManagementCertificate and store it in the Personal store. Then click on OK.
5. Launch MMC by typing MMC in the Run dialog box. Add the Certificates snap-in from Add/Remove Snap-ins. Select Computer account for managing the certificate store. Then click on Next.
6. In the Select the computer you want this snap-in to manage dialog page, select Local computer. Next, click on Finish and then click on OK.
7. Back in the MMC console, expand Certificates (Local Computer). Expand Personal, then highlight Certificates. In the middle pane, we can see the list of certificates available.
8. Right-click on AzureManagementCertificate. Select All Tasks and click on Export…. Click on Next to start the Certificate Export Wizard.
9. Make sure the Yes, export the private key option is selected. Then click on Next.
10. Leave the default settings for Export File Format and click on Next.
11. Provide a strong password to protect the PFX package. Then click on Next.
12. Provide a path to store the package locally and click on Next. Finally, click on Finish.
13. Now log on to the Windows Azure portal. Sign in with your Azure administrative ID details.
14. In the Management Portal, select Settings in the left pane. In the middle pane, click on the MANAGEMENT CERTIFICATES link. Then click on the UPLOAD A MANAGEMENT CERTIFICATE link.
15. Browse for the CER file without the private key. Wait for the upload to complete.
16. Take a note of the Subscription ID in the Management Certificates section. This will be used during the App Controller connection configuration.

Now, we are ready to connect App Controller to the Azure subscription.
The Azure cloud subscription will use certificate authentication. The certificate uploaded in the previous step will be used to secure the traffic between App Controller and the Azure cloud.

1. Back in the App Controller admin portal, click on Clouds in the left pane.
2. Click on the down arrow on the Connect button in the middle pane. Then select Windows Azure subscription.
3. Provide a friendly name for the subscription, along with values for the Description and Subscription ID fields. For the Management certificate field, supply the PFX package file containing the private key, together with the Management certificate password. Then click on OK.

After a couple of minutes, the Azure subscription will be added to the App Controller environment. Now we can manage the Services and Virtual Machines attached to this subscription. In the Cloud section, we can also see the new Windows Azure connection, as shown in the following screenshot:

Configuring roles-based access

In this section, we will be adding a new tenant user to App Controller. This user will be assigned particular settings to manage their environment. I have created a standard domain user called Contoso_Tenant01 for demo purposes. This account will be given full administrative access to the Contoso Cloud only. Perform the following steps to complete this task:

1. Log on to the Virtual Machine Manager server. Launch the VMM console.
2. Select Settings in the left pane. Expand Security and select User Roles. In the ribbon, click on Create User Role.
3. After the Create User Role wizard launches, provide the Name and Description and then click on Next. I have used Contoso Cloud Administrator and Administrator of Contoso Cloud.
4. Select Tenant Administrator and click on Next.
5. In the Members section, add a security group or individual user accounts. I have added the Contoso_Tenant01 account to the members list. Then click on Next.
6. In the Scope section of the wizard, select the checkbox next to Contoso Cloud and click on Next.
7. In the Quotas for the Contoso Cloud section, adjust the role-level and member-level quotas as required. We will be using the default setting of Use Maximum for all settings. Then click on Next.
8. In the Networking section of the wizard, add the logical network belonging to Contoso by clicking on the Add button. Then click on Next.
9. In the Resources section of the wizard, add the resources that this tenant administrator can use. I have selected the OS profiles, Small HW hardware profile, VM Template, and Service Template available in the list. Then click on Next.
10. In the Permissions section of the wizard, we can specify the tasks that this user account can perform in the environment. Switch to Contoso Cloud in the middle pane. Click on the Select All button. Then click on Next.
11. In the Run As Accounts section of the Create User Role wizard, specify a privileged account that is required in Contoso Cloud and click on Next.
12. In the Summary section of the wizard, review the specified settings and then click on Finish.

Adding a new VMM Library share

In this section, we will add a new library share to SCVMM. Perform the following steps to complete the task:

1. To specify a dedicated folder for this user to upload data to, I have created a folder called Contoso_Cloud in the root of the system drive. Give full permission to the VMM computer account and the VMM service account on the Security tab and share the folder.
2. Add the new share to the VMM Library by clicking on Library in the left pane. Right-click on the VMM server name and select Add Library Shares.
3. Select the checkbox next to the Contoso_Cloud share. Then, click on Next and Finish.
4. Now click on Settings in the left pane. Select User Roles in the Security section. Right-click on Contoso Cloud Administrator and select Properties.
5. Switch to the Resources section in the left pane. Click on the Browse button in the Specify the library location where this user can upload data section. Select the Contoso_Cloud folder from the Select destination folder dialog box and click on OK.
6. Now log on to the App Controller server. Open a new session in Internet Explorer and browse to the App Controller admin portal. Log on with the new account; in our case, it is domainname\Contoso_Tenant01.

After logging on, the Contoso_Tenant01 account can only see the items that are allowed in the Virtual Machine Manager server.

Adding a network share

In this section, we will be adding a network share to the App Controller server. This share will be used as a local cache during the download or upload of virtual machines. It can be any folder on the local network, as long as the App Controller service account has the ability to make changes to the content of the shared folder. Perform the following steps to complete this task:

1. Log on to the App Controller server.
2. Launch Internet Explorer and log in with administrative credentials.
3. We also need a shared folder with the correct permissions assigned, so launch Windows Explorer and create a folder. We will be creating a folder called SCAC_Share in the root of the system drive.
4. Once the folder is created, right-click on the folder name and select Properties. Switch to the Security tab and add the App Controller service account. Give the full control permission to the service account. In our case, the account name is srv_scac_acc. Click on Apply and then on OK.
5. Repeat the same process by switching to the Sharing tab. Click on the Share button. Add the service account if it is not already present and then click on the Share button. Now click on the Done button and click on the Close button in the folder properties dialog box.
6. Now go back to the Internet Explorer browser and select the Overview page in the App Controller admin portal. Under the Next Steps section, in the Common Tasks subsection, click on the Add a network file share link.
7. Provide the UNC path to the folder that we created in step 3. The naming syntax is \\<servername>\<sharename>. Then click on OK.

A confirmation message will show up at the bottom of the screen when the task is completed. We can verify the addition of the share by clicking on Library in the left pane and expanding Shares in the middle pane. We can also add more shares or remove listed shares in the Library's Shares section.

Configuring SSL certificate for the App Controller website

In this section, we will change the default self-signed SSL certificate to one that is generated by our internal certificate authority (CA). Building a PKI infrastructure is out of the scope of this book; please look at the TechNet articles on creating a PKI infrastructure. Perform the following steps to complete this task:

1. I will try to explain the tasks that have to be completed to get a certificate from the internal CA. To get the CA certificate published, log on to the CA server and launch the Certsrv.msc console.
2. Expand the server name. Right-click on Certificate Templates and make a duplicate copy of the Web Server template. Ensure that Server Authentication is listed in the Extensions tab. Give the template a unique name.
I have used Generic Web SSL Certificate.
3. In the Security tab, grant the App Controller server the Enroll permission.
4. Then right-click on Certificate Templates in the Certsrv console. Select New, and then select Certificate Template to Issue. From the list, select the new template.
5. Now, reboot the App Controller server.
6. After the reboot, launch the MMC console, add the Certificates snap-in, and ensure that it shows the Local Computer store. Then expand Personal and expand Certificates.
7. Right-click on Certificates, select All Tasks, and then Request New Certificate.
8. Select the new template we just published and click on the Add more information link. Change Type from Full DN to Common Name. Specify appcontroller.contoso.internal. Give this certificate a friendly name and then click on OK.
9. Back in the Certificate Enrollment wizard, click on Enroll.
10. Log on to the App Controller server and launch the Internet Information Services (IIS) Manager console. The IIS Manager console can also be launched by pressing the Windows key and typing InetMgr.exe.
11. Expand the server name and also expand Sites. Right-click on the AppController website. Select Edit Bindings….
12. In the Site Bindings dialog box, select https and then click on the Edit button.
13. Select the appcontroller Webserver Cert from the drop-down list. Verify that the certificate is correct by clicking on the View button. Click on the Select button and then click on the OK button.
14. Now that we have a valid certificate assigned to the website in IIS Manager, create a Host (A) record in the DNS service. Specify appcontroller.contoso.internal as the FQDN, with the IP address of the App Controller server.
15. Make sure Silverlight is installed on the testing machine. Launch Internet Explorer and browse to https://appcontroller.contoso.internal. After a couple of seconds, the App Controller logon screen will show up.
16. Take a look at the browser address bar; the certificate error should have disappeared. We no longer get a warning message before the logon screen. We can also verify the certificate assigned to this website by going to File | Properties for the site and clicking on the Certificates button.

Customizing App Controller branding

In some scenarios, corporate branding is required. It is very simple to change the branding on the App Controller management portal pages. The following screenshot highlights the areas that can be changed by altering or replacing specific files on the App Controller server:

Both files are typically located at C:\Program Files\Microsoft System Center 2012 R2\App Controller\wwwroot. Let's take a look at the following steps:

1. To replace the top-left logo, create a file with the name SC2012_WebHeaderLeft_AC.png, with dimensions of 213 x 38 pixels and a transparent background.
2. To replace the top-right logo, create a file with the name SC2012_WebHeaderRight_AC.png, with dimensions of 108 x 16 pixels and a transparent background.
3. Override the existing files on the App Controller server with the new files.
4. Close the browser window. Open a new browser window and try to log in to the App Controller portal. The newly added logo files will be shown at the top of the logon dialog box.

The same new branding logos will be displayed after logging on to the App Controller Management portal, as shown in the following screenshot:

Summary

In this article, we integrated Virtual Machine Manager with App Controller. We also attached a Windows Azure subscription to App Controller.
We added a network share to the App Controller environment and saw how to configure roles-based access for users. We also changed the SSL certificate of the App Controller admin portal.

Resources for Article:

Further resources on this subject:

Google App Engine [article]
Securing Your Twilio App [article]
Advanced Programming And Control [article]


Sprites in Action

Packt
20 Feb 2015
6 min read
In this article by Milcho G. Milchev, author of the book SFML Essentials, we will see how we can use SFML to create a customized animation using a sequence of images. We will also see how SFML renders an animation.

Animation exists in many forms. The traditional approach to animation is drawing a sequence of images which differ slightly from each other, and showing them on a screen one after the other. Even though this approach is still widely used, there are more elegant alternatives. For example, drawing (or modelling in 3D) only the limbs of a character and then animating how they move relative to time is a technique that saves a lot of time for artists. It also creates smoother results, because not every frame of the animation has to be redrawn. In this book, we are going to explore only the traditional approach, since it is the simpler solution for programmers, and in many cases it is enough to bring life to any sprite.

(For more resources related to this topic, see here.)

The setup

As we established earlier, the traditional approach involves a set of images that need to change over time. For our example, we will use a crystal, which rotates around its centre. Typically, an animation is kept in a single file (a sprite sheet), where each frame of the animation is stored, and in most cases, each frame is the same size, that is, the size of the object. In our example, the sprite is 32 x 32 pixels and has eight frames, which play over one second. Here is what the sprite sheet looks like:

The following screenshot shows our animation setup in code:

First of all, note that we are using the AssetManager class to load our sprite sheet. The next line sets the texture rectangle of the sprite to target the first image in our sprite sheet. Here is what this means in terms of the sprite sheet texture:

Next, we will move this texture rectangle once in a while to simulate a rotating crystal. In the previous code, we set the number of frames to eight (as many as there are in the sprite sheet), and set the time of the animation to one second in total, which means that each frame stays on screen for about 0.125 seconds (the animation duration divided by the number of frames). We know what we want to do now, so let's do it:

In the code, we first measure the delta time since the last frame and add it to the accumulated time. The last two lines of the code actually do all the work. The first one looks intimidating at first glance, but it is simply a way to choose the correct frame, based on how much time has passed and how long the animation is. The formula timeAsSeconds / animationDuration gives us the time relative to the animation duration. So let's say that 0.4 seconds have passed and our animation duration is 1 second. This leaves us with 0.4 seconds in local animation time. Multiply this 0.4 seconds by the number of frames, and we get the following result:

0.4 * 8 = 3.2

This gives us which frame we should be on at the moment, and how long we have been there. The current frame index is the whole part of 3.2 (which is three), and the fractional part (0.2) is how long we have been on that frame. In this case, we are only interested in the current frame, so we will take that by casting the whole expression to int. This rounds the number down if the number is positive (which it always is in this case). The last part, % frameNum, is there to restart the animation when it reaches beyond its last frame.
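To make the description above concrete, here is a minimal SFML 2.x sketch of the setup and update logic; the variable names (frameNum, animationDuration, timeAsSeconds, animFrame) and the sprite sheet filename are assumptions based on the description, and the book itself loads the texture through its AssetManager class rather than directly as shown here:

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(640, 480), "Rotating crystal");

    sf::Texture texture;
    texture.loadFromFile("crystal.png");        // the 8-frame, 32 x 32 sprite sheet

    const sf::Vector2i spriteSize(32, 32);
    const int   frameNum = 8;                   // frames in the sheet
    const float animationDuration = 1.0f;       // seconds for a full cycle

    sf::Sprite sprite(texture);
    sprite.setTextureRect(sf::IntRect(0, 0, spriteSize.x, spriteSize.y));

    sf::Clock frameClock;
    float timeAsSeconds = 0.f;                  // accumulated animation time

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        // measure the delta time since the last frame and accumulate it
        timeAsSeconds += frameClock.restart().asSeconds();

        // choose the current frame and point the texture rectangle at it
        int animFrame = static_cast<int>((timeAsSeconds / animationDuration) * frameNum) % frameNum;
        sprite.setTextureRect(sf::IntRect(animFrame * spriteSize.x, 0, spriteSize.x, spriteSize.y));

        window.clear();
        window.draw(sprite);
        window.display();
    }
    return 0;
}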
So in the case where 2.3 seconds have passed, we have the following result:

2.3 * 8 = 18.4

We do not have a 19th frame to show, so we show the frame which corresponds to that in our local scale [0…7]. In this case:

18 / 8 = 2 (with a remainder of 2)

Since the % operator takes the remainder of a division, we are left to show the frame with index two, which is the third frame. (We start counting from zero as programmers, remember?) The last line of the code sets the texture rectangle to the current frame. The process is quite straightforward: since we only have frames along the x axis, we do not need to worry about the y coordinate of the rectangle, so we set it to zero. The x coordinate is computed by animFrame * spriteSize.x, which multiplies the current frame by the width of the frame. In this case, the current frame is two and the frame's width is 32, so we get:

2 * 32 = 64

Here is what the texture rectangle will look like:

The last thing we need to do is render the sprite inside the render frame and we are done. If everything goes smoothly, we should have a rotating crystal on the screen with eight frames. With this technique, we can animate sprites of all kinds, no matter how many frames they have or how long the animation is. There are problems with the current approach, though: the code looks messy, and it is only useful for a single animation. What if we want multiple animations for a sprite (rotating the crystal in a vertical direction as well), and we want to be able to switch between them? Currently, we would have to duplicate all our code for each animation and each animated sprite. In the next section, we will talk about how to avoid these issues by building a fully featured animation system that requires as little code duplication as possible.

Summary

Sprite animations seem quite easy now, don't they? Just keep in mind that there is a lot more to explore when it comes to animation. Not only are there different techniques for doing them, but perfecting what we've developed so far might also take some time. Fortunately, what we have so far will work as is in the majority of cases, so I would say that you are pretty much set to go. If you want to dig deeper, buy the book SFML Essentials and follow it in a simple, step-by-step fashion to use the SFML library to create realistic-looking animations and to develop 2D and 3D games with SFML.

Resources for Article:

Further resources on this subject:

Vmware Vcenter Operations Manager Essentials - Introduction To Vcenter Operations Manager [article]
Translating a File In Sdl Trados Studio [article]
Adding Finesse To Your Game [article]

Puppet Language and Style

Packt
20 Feb 2015
18 min read
In this article by Thomas Uphill, author of the book Puppet Cookbook Third Edition, we will cover the following recipes:

Installing a package before starting a service
Installing, configuring, and starting a service
Using regular expressions in if statements
Using selectors and case statements
Creating a centralized Puppet infrastructure
Creating certificates with multiple DNS names

(For more resources related to this topic, see here.)

Installing a package before starting a service

To show how ordering works, we'll create a manifest that installs httpd and then ensures that the httpd service is running.

How to do it...

We start by creating a manifest that defines the service:

service {'httpd':
  ensure  => running,
  require => Package['httpd'],
}

The service definition references a package resource named httpd; we now need to define that resource:

package {'httpd':
  ensure => 'installed',
}

How it works...

In this example, the package will be installed before the service is started. Using require within the definition of the httpd service ensures that the package is installed first, regardless of the order within the manifest file.

Capitalization

Capitalization is important in Puppet. In our previous example, we created a package named httpd. If we wanted to refer to this package later, we would capitalize its type (package) as follows:

Package['httpd']

To refer to a class, for example, the something::somewhere class, which has already been included/defined in your manifest, you can reference it with the full path as follows:

Class['something::somewhere']

When you have a defined type, for example the following defined type:

example::thing {'one':}

The preceding resource may be referenced later as follows:

Example::Thing['one']

Knowing how to reference previously defined resources is necessary for the next section on metaparameters and ordering.

Learning metaparameters and ordering

All the manifests that will be used to define a node are compiled into a catalog. A catalog is the code that will be applied to configure a node. It is important to remember that manifests are not applied to nodes sequentially; there is no inherent order to the application of manifests. With this in mind, in the previous httpd example, what if we wanted to ensure that the httpd process started after the httpd package was installed? We couldn't rely on the httpd service coming after the httpd package in the manifests. What we have to do is use metaparameters to tell Puppet the order in which we want resources applied to the node. Metaparameters are parameters that can be applied to any resource and are not specific to any one resource type. They are used for catalog compilation and as hints to Puppet, but not to define anything about the resource to which they are attached. When dealing with ordering, there are four metaparameters used:

before
require
notify
subscribe

The before and require metaparameters specify a direct ordering; notify implies before and subscribe implies require. The notify metaparameter is only applicable to services; what notify does is tell a service to restart after the notifying resource has been applied to the node (this is most often a package or file resource). In the case of files, once the file is created on the node, a notify parameter will restart any services mentioned. The subscribe metaparameter has the same effect but is defined on the service; the service will subscribe to the file.
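The next recipe puts notify to work; purely for comparison, here is a minimal sketch (not taken from the book) of the same restart-on-change relationship expressed with subscribe on the service side, reusing the httpd example:

# The service subscribes to the file; a change to the file restarts httpd
file {'/etc/httpd/conf.d/cookbook.conf':
  ensure  => file,
  require => Package['httpd'],
}

service {'httpd':
  ensure    => running,
  subscribe => File['/etc/httpd/conf.d/cookbook.conf'],
}

Whichever side the relationship is declared on, the effect is the same: when the configuration file changes, the httpd service is restarted.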
Trifecta

The relationship between package and service previously mentioned is an important and powerful paradigm of Puppet. Adding one more resource type, file, into the fold creates what puppeteers refer to as the trifecta. Almost all system administration tasks revolve around these three resource types. As a system administrator, you install a package, configure the package with files, and then start the service.

Diagram of the trifecta (the file resources require the package for their directory; the service requires the files and the package)

Idempotency

A key concept of Puppet is that the state of the system when a catalog is applied to a node cannot affect the outcome of the Puppet run. In other words, at the end of a Puppet run (if the run was successful), the system will be in a known state, and any further application of the catalog will result in a system that is in the same state. This property of Puppet is known as idempotency. Idempotency is the property that no matter how many times you do something, it remains in the same state as the first time you did it. For instance, if you had a light switch and you gave the instruction to turn it on, the light would turn on. If you gave the instruction again, the light would remain on.

Installing, configuring, and starting a service

There are many examples of this pattern online. In our simple example, we will create an Apache configuration file under /etc/httpd/conf.d/cookbook.conf. The /etc/httpd/conf.d directory will not exist until the httpd package is installed. After this file is created, we would want httpd to restart to notice the change; we can achieve this with a notify parameter.

How to do it...

We will need the same definitions as our last example; we need the package installed and the service running. We now need two more things: the configuration file and the index page (index.html). For this, we follow these steps:

As in the previous example, we ensure the service is running and specify that the service requires the httpd package:

service {'httpd':
  ensure  => running,
  require => Package['httpd'],
}

We then define the package as follows:

package {'httpd':
  ensure => installed,
}

Now, we create the /etc/httpd/conf.d/cookbook.conf configuration file; the /etc/httpd/conf.d directory will not exist until the httpd package is installed. The require metaparameter tells Puppet that this file requires the httpd package to be installed before it is created:

file {'/etc/httpd/conf.d/cookbook.conf':
  content => "<VirtualHost *:80>\nServername cookbook\nDocumentRoot /var/www/cookbook\n</VirtualHost>\n",
  require => Package['httpd'],
  notify  => Service['httpd'],
}

We then go on to create an index.html file for our virtual host in /var/www/cookbook. This directory won't exist yet, so we need to create it as well, using the following code:

file {'/var/www/cookbook':
  ensure => directory,
}

file {'/var/www/cookbook/index.html':
  content => "<html><h1>Hello World!</h1></html>\n",
  require => File['/var/www/cookbook'],
}

How it works…

The require attribute on the file resources tells Puppet that we need the /var/www/cookbook directory created before we can create the index.html file. The important concept to remember is that we cannot assume anything about the target system (node). We need to define everything on which the target depends. Anytime you create a file in a manifest, you have to ensure that the directory containing that file exists.
Anytime you specify that a service should be running, you have to ensure that the package providing that service is installed. In this example, using metaparameters, we can be confident that no matter what state the node is in before running Puppet, after Puppet runs, the following will be true:

httpd will be running
The VirtualHost configuration file will exist
httpd will restart and be aware of the VirtualHost file
The DocumentRoot directory will exist
An index.html file will exist in the DocumentRoot directory

Using regular expressions in if statements

Another kind of expression you can test in if statements and other conditionals is the regular expression. A regular expression is a powerful way to compare strings using pattern matching.

How to do it…

This is one example of using a regular expression in a conditional statement. Add the following to your manifest:

if $::architecture =~ /64/ {
  notify { '64Bit OS Installed': }
} else {
  notify { 'Upgrade to 64Bit': }
  fail('Not 64 Bit')
}

How it works…

Puppet treats the text supplied between the forward slashes as a regular expression, specifying the text to be matched. If the match succeeds, the if expression will be true and so the code between the first set of curly braces will be executed. In this example, we used a regular expression because different distributions have different ideas on what to call 64-bit; some use amd64, while others use x86_64. The only thing we can count on is the presence of the number 64 within the fact.

Some facts that have version numbers in them are treated as strings by Puppet, for instance, $::facterversion. On my test system, this is 2.0.1, but when I try to compare that with 2, Puppet fails to make the comparison:

Error: comparison of String with 2 failed at /home/thomas/.puppet/manifests/version.pp:1 on node cookbook.example.com

If you wanted instead to do something if the text does not match, use !~ rather than =~:

if $::kernel !~ /Linux/ {
  notify { 'Not Linux, could be Windows, MacOS X, AIX, or ?': }
}

There's more…

Regular expressions are very powerful, but can be difficult to understand and debug. If you find yourself using a regular expression so complex that you can't see at a glance what it does, think about simplifying your design to make it easier. However, one particularly useful feature of regular expressions is the ability to capture patterns.

Capturing patterns

You can not only match text using a regular expression, but also capture the matched text and store it in a variable:

$input = 'Puppet is better than manual configuration'
if $input =~ /(.*) is better than (.*)/ {
  notify { "You said '${0}'. Looks like you're comparing ${1} to ${2}!": }
}

The preceding code produces this output:

You said 'Puppet is better than manual configuration'. Looks like you're comparing Puppet to manual configuration!

The variable $0 stores the whole matched text (assuming the overall match succeeded). If you put brackets around any part of the regular expression, it creates a group, and any matched groups will also be stored in variables. The first matched group will be $1, the second $2, and so on, as shown in the preceding example.

Regular expression syntax

Puppet's regular expression syntax is the same as Ruby's, so resources that explain Ruby's regular expression syntax will also help you with Puppet. You can find a good introduction to Ruby's regular expression syntax at this website: http://www.tutorialspoint.com/ruby/ruby_regular_expressions.htm.
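As a small illustrative sketch (not from the book) that combines a fact with a captured group, the following pulls the major version number out of $::operatingsystemrelease; the exact format of that fact varies between distributions, so treat the pattern purely as an example:

# Capture the leading digits of the release fact, for example '6' from '6.6'
if $::operatingsystemrelease =~ /^(\d+)\./ {
  notify { "Major release: ${1}": }
}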
Using selectors and case statements

Although you could write any conditional statement using if, Puppet provides a couple of extra forms to help you express conditionals more easily: the selector and the case statement.

How to do it…

Here are some examples of selector and case statements:

Add the following code to your manifest:

$systemtype = $::operatingsystem ? {
  'Ubuntu' => 'debianlike',
  'Debian' => 'debianlike',
  'RedHat' => 'redhatlike',
  'Fedora' => 'redhatlike',
  'CentOS' => 'redhatlike',
  default  => 'unknown',
}

notify { "You have a ${systemtype} system": }

Add the following code to your manifest:

class debianlike {
  notify { 'Special manifest for Debian-like systems': }
}

class redhatlike {
  notify { 'Special manifest for RedHat-like systems': }
}

case $::operatingsystem {
  'Ubuntu', 'Debian': {
    include debianlike
  }
  'RedHat', 'Fedora', 'CentOS', 'Springdale': {
    include redhatlike
  }
  default: {
    notify { "I don't know what kind of system you have!": }
  }
}

How it works…

Our example demonstrates both the selector and the case statement, so let's see in detail how each of them works.

Selector

In the first example, we used a selector (the ? operator) to choose a value for the $systemtype variable depending on the value of $::operatingsystem. This is similar to the ternary operator in C or Ruby, but instead of choosing between two possible values, you can have as many values as you like. Puppet will compare the value of $::operatingsystem to each of the possible values we have supplied: Ubuntu, Debian, and so on. These values could be regular expressions (for example, for a partial string match, or to use wildcards), but in our case, we have just used literal strings. As soon as it finds a match, the selector expression returns whatever value is associated with the matching string. If the value of $::operatingsystem is Fedora, for example, the selector expression will return the redhatlike string, and this will be assigned to the variable $systemtype.

Case statement

Unlike selectors, the case statement does not return a value. case statements come in handy when you want to execute different code depending on the value of some expression. In our second example, we used the case statement to include either the debianlike or redhatlike class, depending on the value of $::operatingsystem. Again, Puppet compares the value of $::operatingsystem to a list of potential matches. These could be regular expressions or strings, or as in our example, comma-separated lists of strings. When it finds a match, the associated code between curly braces is executed. So, if the value of $::operatingsystem is Ubuntu, then the code that includes debianlike will be executed.

There's more…

Once you've got a grip on the basic use of selectors and case statements, you may find the following tips useful.

Regular expressions

As with if statements, you can use regular expressions with selectors and case statements, and you can also capture the values of the matched groups and refer to them using $1, $2, and so on:

case $::lsbdistdescription {
  /Ubuntu (.+)/: {
    notify { "You have Ubuntu version ${1}": }
  }
  /CentOS (.+)/: {
    notify { "You have CentOS version ${1}": }
  }
  default: {}
}

Defaults

Both selectors and case statements let you specify a default value, which is chosen if none of the other options match (the style guide suggests you always have a default clause defined):

$lunch = 'Filet mignon.'
$lunchtype = $lunch ?
{
  /fries/ => 'unhealthy',
  /salad/ => 'healthy',
  default => 'unknown',
}

notify { "Your lunch was ${lunchtype}": }

The output is as follows:

t@mylaptop ~ $ puppet apply lunchtype.pp
Notice: Your lunch was unknown
Notice: /Stage[main]/Main/Notify[Your lunch was unknown]/message: defined 'message' as 'Your lunch was unknown'

When the default action shouldn't normally occur, use the fail() function to halt the Puppet run.

Using the in operator

How to do it…

The following steps will show you how to use the in operator:

Add the following code to your manifest:

if $::operatingsystem in [ 'Ubuntu', 'Debian' ] {
  notify { 'Debian-type operating system detected': }
} elsif $::operatingsystem in [ 'RedHat', 'Fedora', 'SuSE', 'CentOS' ] {
  notify { 'RedHat-type operating system detected': }
} else {
  notify { 'Some other operating system detected': }
}

Run Puppet:

t@cookbook:~/.puppet/manifests$ puppet apply in.pp
Notice: Compiled catalog for cookbook.example.com in environment production in 0.03 seconds
Notice: Debian-type operating system detected
Notice: /Stage[main]/Main/Notify[Debian-type operating system detected]/message: defined 'message' as 'Debian-type operating system detected'
Notice: Finished catalog run in 0.02 seconds

There's more…

The value of an in expression is Boolean (true or false), so you can assign it to a variable:

$debianlike = $::operatingsystem in [ 'Debian', 'Ubuntu' ]

if $debianlike {
  notify { 'You are in a maze of twisty little packages, all alike': }
}

Creating a centralized Puppet infrastructure

A configuration management tool such as Puppet is best used when you have many machines to manage. If all the machines can reach a central location, using a centralized Puppet infrastructure might be a good solution. Unfortunately, Puppet doesn't scale well with a large number of nodes. If your deployment has fewer than 800 servers, a single Puppet master should be able to handle the load, assuming your catalogs are not complex (that is, each catalog takes less than 10 seconds to compile). If you have a larger number of nodes, I suggest the load balancing configuration described in Mastering Puppet, Thomas Uphill, Packt Publishing.

A Puppet master is a Puppet server that acts as an X509 certificate authority for Puppet and distributes catalogs (compiled manifests) to client nodes. Puppet ships with a built-in web server called WEBrick, which can handle a very small number of nodes. In this section, we will see how to use that built-in server to control a very small number (fewer than 10) of nodes.

Getting ready

The Puppet master process is started by running puppet master; most Linux distributions have start and stop scripts for the Puppet master in a separate package. To get started, we'll create a new Debian server named puppet.example.com.

How to do it...

Install Puppet on the new server and then use Puppet to install the Puppet master package:

# puppet resource package puppetmaster ensure='installed'
Notice: /Package[puppetmaster]/ensure: created
package { 'puppetmaster':
  ensure => '3.7.0-1puppetlabs1',
}

Now start the Puppet master service and ensure it will start at boot:

# puppet resource service puppetmaster ensure=true enable=true
service { 'puppetmaster':
  ensure => 'running',
  enable => 'true',
}

How it works...

The Puppet master package includes the start and stop scripts for the Puppet master service. We use Puppet to install the package and start the service.
Once the service is started, we can point another node at the Puppet master (you might need to disable the host-based firewall on your machine). From another node, run puppet agent to start a puppet agent, which will contact the server and request a new certificate:

t@ckbk:~$ sudo puppet agent -t
Info: Creating a new SSL key for cookbook.example.com
Info: Caching certificate for ca
Info: Creating a new SSL certificate request for cookbook.example.com
Info: Certificate Request fingerprint (SHA256): 06:C6:2B:C4:97:5D:16:F2:73:82:C4:A9:A7:B1:D0:95:AC:69:7B:27:13:A9:1A:4C:98:20:21:C2:50:48:66:A2
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled

Now on the Puppet server, sign the new key:

root@puppet:~# puppet cert list
"cookbook.example.com" (SHA256) 06:C6:2B:C4:97:5D:16:F2:73:82:C4:A9:A7:B1:D0:95:AC:69:7B:27:13:A9:1A:4C:98:20:21:C2:50:48:66:A2
root@puppet:~# puppet cert sign cookbook.example.com
Notice: Signed certificate request for cookbook.example.com
Notice: Removing file Puppet::SSL::CertificateRequest cookbook.example.com at '/var/lib/puppet/ssl/ca/requests/cookbook.example.com.pem'

Return to the cookbook node and run Puppet again:

t@ckbk:~$ sudo puppet agent -vt
Info: Caching certificate for cookbook.example.com
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for cookbook.example.com
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for cookbook
Info: Applying configuration version '1410401823'
Notice: Finished catalog run in 0.04 seconds

There's more...

When we ran puppet agent, Puppet looked for a host named puppet.example.com (since our test node is in the example.com domain); if it couldn't find that host, it would then look for a host named Puppet. We can specify the server to contact with the --server option to puppet agent. When we installed the Puppet master package and started the Puppet master service, Puppet created default SSL certificates based on our hostname.

Creating certificates with multiple DNS names

By default, Puppet will create an SSL certificate for your Puppet master that contains only the fully qualified domain name of the server. Depending on how your network is configured, it can be useful for the server to be known by other names as well. In this recipe, we'll make a new certificate for our Puppet master that has multiple DNS names.

Getting ready

Install the Puppet master package if you haven't already done so. You will then need to start the Puppet master service at least once to create a certificate authority (CA).

How to do it...

The steps are as follows:

1. Stop the running Puppet master process with the following command:

# service puppetmaster stop
[ ok ] Stopping puppet master.
2. Delete (clean) the current server certificate:

# puppet cert clean puppet
Notice: Revoked certificate with serial 6
Notice: Removing file Puppet::SSL::Certificate puppet at '/var/lib/puppet/ssl/ca/signed/puppet.pem'
Notice: Removing file Puppet::SSL::Certificate puppet at '/var/lib/puppet/ssl/certs/puppet.pem'
Notice: Removing file Puppet::SSL::Key puppet at '/var/lib/puppet/ssl/private_keys/puppet.pem'

3. Create a new Puppet certificate using puppet certificate generate with the --dns-alt-names option:

root@puppet:~# puppet certificate generate puppet --dns-alt-names puppet.example.com,puppet.example.org,puppet.example.net --ca-location local
Notice: puppet has a waiting certificate request
true

4. Sign the new certificate:

root@puppet:~# puppet cert --allow-dns-alt-names sign puppet
Notice: Signed certificate request for puppet
Notice: Removing file Puppet::SSL::CertificateRequest puppet at '/var/lib/puppet/ssl/ca/requests/puppet.pem'

5. Restart the Puppet master process:

root@puppet:~# service puppetmaster restart
[ ok ] Restarting puppet master.

How it works...

When your puppet agents connect to the Puppet server, they look for a host called Puppet, and then they look for a host called Puppet.[your domain]. If your clients are in different domains, then you need your Puppet master to reply to all the names correctly. By removing the existing certificate and generating a new one, you can have your Puppet master reply to multiple DNS names.

Summary

Configuration management has become a requirement for system administrators. Knowing how to use configuration management tools, such as Puppet, enables administrators to take full advantage of automated provisioning systems and cloud resources. There is a natural progression from performing a task, to scripting the task, to creating a module for it in Puppet, that is, Puppetizing the task.

Resources for Article:

Further resources on this subject:

Designing Puppet Architectures [article]
External Tools and the Puppet Ecosystem [article]
Puppet: Integrating External Tools [article]


The Spark Programming Model

Packt
20 Feb 2015
13 min read
In this article by Nick Pentreath, author of the book Machine Learning with Spark, we will delve into a high-level overview of Spark's design and introduce the SparkContext object as well as the Spark shell, which we will use to interactively explore the basics of the Spark programming model.

While this section provides a brief overview and examples of using Spark, we recommend that you read the following documentation to get a detailed understanding:

Spark Quick Start: http://spark.apache.org/docs/latest/quick-start.html
Spark Programming Guide, which covers Scala, Java, and Python: http://spark.apache.org/docs/latest/programming-guide.html

(For more resources related to this topic, see here.)

SparkContext and SparkConf

The starting point of writing any Spark program is SparkContext (or JavaSparkContext in Java). SparkContext is initialized with an instance of a SparkConf object, which contains various Spark cluster-configuration settings (for example, the URL of the master node). Once initialized, we will use the various methods found in the SparkContext object to create and manipulate distributed datasets and shared variables. The Spark shell (in both Scala and Python, which is unfortunately not supported in Java) takes care of this context initialization for us, but the following lines of code show an example of creating a context running in the local mode in Scala:

val conf = new SparkConf().setAppName("Test Spark App").setMaster("local[4]")
val sc = new SparkContext(conf)

This creates a context running in the local mode with four threads, with the name of the application set to Test Spark App. If we wish to use default configuration values, we could also call the following simple constructor for our SparkContext object, which works in exactly the same way:

val sc = new SparkContext("local[4]", "Test Spark App")

The Spark shell

Spark supports writing programs interactively using either the Scala or Python REPL (that is, the Read-Eval-Print Loop, or interactive shell). The shell provides instant feedback as we enter code, as this code is immediately evaluated. In the Scala shell, the return result and its type are also displayed after a piece of code is run. To use the Spark shell with Scala, simply run ./bin/spark-shell from the Spark base directory. This will launch the Scala shell and initialize SparkContext, which is available to us as the Scala value, sc. Your console output should look similar to the following screenshot:

To use the Python shell with Spark, simply run the ./bin/pyspark command. Like the Scala shell, the Python SparkContext object should be available as the Python variable sc. You should see an output similar to the one shown in this screenshot:

Resilient Distributed Datasets

The core of Spark is a concept called the Resilient Distributed Dataset (RDD). An RDD is a collection of "records" (strictly speaking, objects of some type) that is distributed or partitioned across many nodes in a cluster (for the purposes of the Spark local mode, the single multithreaded process can be thought of in the same way). An RDD in Spark is fault-tolerant; this means that if a given node or task fails (for some reason other than erroneous user code, such as hardware failure or loss of communication), the RDD can be reconstructed automatically on the remaining nodes and the job will still complete.
Creating RDDs

RDDs can be created from existing collections, for example, in the Scala Spark shell that you launched earlier:

val collection = List("a", "b", "c", "d", "e")
val rddFromCollection = sc.parallelize(collection)

RDDs can also be created from Hadoop-based input sources, including the local filesystem, HDFS, and Amazon S3. A Hadoop-based RDD can utilize any input format that implements the Hadoop InputFormat interface, including text files, other standard Hadoop formats, HBase, Cassandra, and many more. The following code is an example of creating an RDD from a text file located on the local filesystem:

val rddFromTextFile = sc.textFile("LICENSE")

The preceding textFile method returns an RDD where each record is a String object that represents one line of the text file.

Spark operations

Once we have created an RDD, we have a distributed collection of records that we can manipulate. In Spark's programming model, operations are split into transformations and actions. Generally speaking, a transformation operation applies some function to all the records in the dataset, changing the records in some way. An action typically runs some computation or aggregation operation and returns the result to the driver program where SparkContext is running.

Spark operations are functional in style. For programmers familiar with functional programming in Scala or Python, these operations should seem natural. For those without experience in functional programming, don't worry; the Spark API is relatively easy to learn.

One of the most common transformations that you will use in Spark programs is the map operator. This applies a function to each record of an RDD, thus mapping the input to some new output. For example, the following code fragment takes the RDD we created from a local text file and applies the size function to each record in the RDD. Remember that we created an RDD of Strings. Using map, we can transform each string to an integer, thus returning an RDD of Ints:

val intsFromStringsRDD = rddFromTextFile.map(line => line.size)

You should see output similar to the following line in your shell; this indicates the type of the RDD:

intsFromStringsRDD: org.apache.spark.rdd.RDD[Int] = MappedRDD[5] at map at <console>:14

In the preceding code, we saw the => syntax used. This is the Scala syntax for an anonymous function, which is a function that is not a named method (that is, one defined using the def keyword in Scala or Python, for example). The line => line.size syntax means that we are applying a function where the input variable is to the left of the => operator, and the output is the result of the code to the right of the => operator. In this case, the input is line, and the output is the result of calling line.size. In Scala, this function that maps a string to an integer is expressed as String => Int. This syntax saves us from having to separately define functions every time we use methods such as map; this is useful when the function is simple and will only be used once, as in this example.

Now, we can apply a common action operation, count, to return the number of records in our RDD:

intsFromStringsRDD.count

The result should look something like the following console output:

14/01/29 23:28:28 INFO SparkContext: Starting job: count at <console>:17
...
14/01/29 23:28:28 INFO SparkContext: Job finished: count at <console>:17, took 0.019227 s
res4: Long = 398

Perhaps we want to find the average length of each line in this text file.
We can first use the sum function to add up all the lengths of all the records and then divide the sum by the number of records:

val sumOfRecords = intsFromStringsRDD.sum
val numRecords = intsFromStringsRDD.count
val aveLengthOfRecord = sumOfRecords / numRecords

The result will be as follows:

aveLengthOfRecord: Double = 52.06030150753769

Spark operations, in most cases, return a new RDD, with the exception of most actions, which return the result of a computation (such as Long for count and Double for sum in the preceding example). This means that we can naturally chain together operations to make our program flow more concise and expressive. For example, the same result as the one in the preceding line of code can be achieved using the following code:

val aveLengthOfRecordChained = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count

An important point to note is that Spark transformations are lazy. That is, invoking a transformation on an RDD does not immediately trigger a computation. Instead, transformations are chained together and are effectively only computed when an action is called. This allows Spark to be more efficient by only returning results to the driver when necessary so that the majority of operations are performed in parallel on the cluster. This means that if your Spark program never uses an action operation, it will never trigger an actual computation, and you will not get any results. For example, the following code will simply return a new RDD that represents the chain of transformations:

val transformedRDD = rddFromTextFile.map(line => line.size).filter(size => size > 10).map(size => size * 2)

This returns the following result in the console:

transformedRDD: org.apache.spark.rdd.RDD[Int] = MappedRDD[8] at map at <console>:14

Notice that no actual computation happens and no result is returned. If we now call an action, such as sum, on the resulting RDD, the computation will be triggered:

val computation = transformedRDD.sum

You will now see that a Spark job is run, and it results in the following console output:

...
14/11/27 21:48:21 INFO SparkContext: Job finished: sum at <console>:16, took 0.193513 s
computation: Double = 60468.0

The complete list of transformations and actions possible on RDDs as well as a set of more detailed examples are available in the Spark programming guide (located at http://spark.apache.org/docs/latest/programming-guide.html#rdd-operations), and the API documentation (the Scala API documentation is located at http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.RDD).

Caching RDDs

One of the most powerful features of Spark is the ability to cache data in memory across a cluster. This is achieved through use of the cache method on an RDD:

rddFromTextFile.cache

Calling cache on an RDD tells Spark that the RDD should be kept in memory. The first time an action is called on the RDD that initiates a computation, the data is read from its source and put into memory. Hence, the first time such an operation is called, the time it takes to run the task is partly dependent on the time it takes to read the data from the input source. However, when the data is accessed the next time (for example, in subsequent queries in analytics or iterations in a machine learning model), the data can be read directly from memory, thus avoiding expensive I/O operations and speeding up the computation, in many cases, by a significant factor.
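A quick way to see this speed-up for yourself is to time the same action before and after the cache has been populated. The following is a minimal sketch for the Scala shell; the time helper is not part of the Spark API, just a small utility defined here for illustration, and the exact timings will depend on your machine:

// Simple timing utility: runs a block of code and prints how long it took
def time[A](block: => A): A = {
  val start = System.nanoTime()
  val result = block
  println("took " + (System.nanoTime() - start) / 1e6 + " ms")
  result
}

// First count: reads the file from disk and fills the in-memory cache
time { rddFromTextFile.count }

// Second count: served from the cached partitions, so it should be noticeably faster
time { rddFromTextFile.count }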
If we now call the count or sum function on our cached RDD, we will see that the RDD is loaded into memory:

val aveLengthOfRecordChained = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count

Indeed, in the following output, we see that the dataset was cached in memory on the first call, taking up approximately 62 KB and leaving us with around 270 MB of memory free:

...
14/01/30 06:59:27 INFO MemoryStore: ensureFreeSpace(63454) called with curMem=32960, maxMem=311387750
14/01/30 06:59:27 INFO MemoryStore: Block rdd_2_0 stored as values to memory (estimated size 62.0 KB, free 296.9 MB)
14/01/30 06:59:27 INFO BlockManagerMasterActor$BlockManagerInfo: Added rdd_2_0 in memory on 10.0.0.3:55089 (size: 62.0 KB, free: 296.9 MB)
...

Now, we will call the same function again:

val aveLengthOfRecordChainedFromCached = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count

We will see from the console output that the cached data is read directly from memory:

...
14/01/30 06:59:34 INFO BlockManager: Found block rdd_2_0 locally
...

Spark also allows more fine-grained control over caching behavior. You can use the persist method to specify what approach Spark uses to cache data. More information on RDD caching can be found here: http://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence.

Broadcast variables and accumulators

Another core feature of Spark is the ability to create two special types of variables: broadcast variables and accumulators. A broadcast variable is a read-only variable that is made available from the driver program that runs the SparkContext object to the nodes that will execute the computation. This is very useful in applications that need to make the same data available to the worker nodes in an efficient manner, such as machine learning algorithms. Spark makes creating broadcast variables as simple as calling a method on SparkContext as follows:

val broadcastAList = sc.broadcast(List("a", "b", "c", "d", "e"))

The console output shows that the broadcast variable was stored in memory, taking up approximately 488 bytes, and it also shows that we still have 270 MB available to us:

14/01/30 07:13:32 INFO MemoryStore: ensureFreeSpace(488) called with curMem=96414, maxMem=311387750
14/01/30 07:13:32 INFO MemoryStore: Block broadcast_1 stored as values to memory (estimated size 488.0 B, free 296.9 MB)
broadCastAList: org.apache.spark.broadcast.Broadcast[List[String]] = Broadcast(1)

A broadcast variable can be accessed from nodes other than the driver program that created it (that is, the worker nodes) by calling value on the variable:

sc.parallelize(List("1", "2", "3")).map(x => broadcastAList.value ++ x).collect

This code creates a new RDD with three records from a collection (in this case, a Scala List) of ("1", "2", "3"). In the map function, it returns a new collection with the relevant record from our new RDD appended to the broadcastAList that is our broadcast variable.

Notice that we used the collect method in the preceding code. This is a Spark action that returns the entire RDD to the driver as a Scala (or Python or Java) collection. We will often use collect when we wish to apply further processing to our results locally within the driver program. Note that collect should generally only be used in cases where we really want to return the full result set to the driver and perform further processing.
If we try to call collect on a very large dataset, we might run out of memory on the driver and crash our program. It is preferable to perform as much heavy-duty processing on our Spark cluster as possible, preventing the driver from becoming a bottleneck. In many cases, however, collecting results to the driver is necessary, such as during iterations in many machine learning models.

On inspecting the result, we will see that for each of the three records in our new RDD, we now have a record that is our original broadcasted List, with the new element appended to it (that is, there is now either "1", "2", or "3" at the end):

...
14/01/31 10:15:39 INFO SparkContext: Job finished: collect at <console>:15, took 0.025806 s
res6: Array[List[Any]] = Array(List(a, b, c, d, e, 1), List(a, b, c, d, e, 2), List(a, b, c, d, e, 3))

An accumulator is also a variable that is broadcasted to the worker nodes. The key difference between a broadcast variable and an accumulator is that while the broadcast variable is read-only, the accumulator can be added to. There are limitations to this: in particular, the addition must be an associative operation so that the global accumulated value can be correctly computed in parallel and returned to the driver program. Each worker node can only access and add to its own local accumulator value, and only the driver program can access the global value. Accumulators are also accessed within the Spark code using the value method.

For more details on broadcast variables and accumulators, see the Shared Variables section of the Spark Programming Guide: http://spark.apache.org/docs/latest/programming-guide.html#shared-variables.

Summary

In this article, we learned the basics of Spark's programming model and API using the interactive Scala console.

Resources for Article: Further resources on this subject: Ridge Regression [article] Clustering with K-Means [article] Machine Learning Examples Applicable to Businesses [article]

Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging

Packt
20 Feb 2015
26 min read
In this article by Chandan Pandey, the author of Spring Integration Essentials, we will explore the out-of-the-box capabilities that the Spring Integration framework provides for a seamless flow of messages across heterogeneous components and see what Spring Integration has in the box when it comes to real-world integration challenges. We will cover Spring Integration's support for external components and we will cover the following topics in detail: Aggregators File exchange over FTP/FTPS Social integration Enterprise messaging (For more resources related to this topic, see here.) Aggregators The aggregators are the opposite of splitters - they combine multiple messages and present them as a single message to the next endpoint. This is a very complex operation, so let's start by a real life scenario. A news channel might have many correspondents who can upload articles and related images. It might happen that the text of the articles arrives much sooner than the associated images - but the article must be sent for publishing only when all relevant images have also arrived. This scenario throws up a lot of challenges; partial articles should be stored somewhere, there should be a way to correlate incoming components with existing ones, and also there should be a way to identify the completion of a message. Aggregators are there to handle all of these aspects - some of the relevant concepts that are used are MessageStore, CorrelationStrategy, and ReleaseStrategy. Let's start with a code sample and then we will dive down to explore each of these concepts in detail: <int:aggregator   input-channel="fetchedFeedChannelForAggregatior"   output-channel="aggregatedFeedChannel"   ref="aggregatorSoFeedBean"   method="aggregateAndPublish"   release-strategy="sofeedCompletionStrategyBean"   release-strategy-method="checkCompleteness"   correlation-strategy="soFeedCorrelationStrategyBean"   correlation-strategy-method="groupFeedsBasedOnCategory"   message-store="feedsMySqlStore "   expire-groups-upon-completion="true">   <int:poller fixed-rate="1000"></int:poller> </int:aggregator> Hmm, a pretty big declaration! And why not—a lot of things combine together to act as an aggregator. Let's quickly glance at all the tags used: int:aggregator: This is used to specify the Spring framework's namespace for the aggregator. input-channel: This is the channel from which messages will be consumed. output-channel: This is the channel to which messages will be dropped after aggregation. ref: This is used to specify the bean having the method that is called on the release of messages. method: This is used to specify the method that is invoked when messages are released. release-strategy: This is used to specify the bean having the method that decides whether aggregation is complete or not. release-strategy-method: This is the method having the logic to check for completeness of the message. correlation-strategy: This is used to specify the bean having the method to correlate the messages. correlation-strategy-method: This is the method having the actual logic to correlate the messages. message-store: This is used to specify the message store, where messages are temporarily stored until they have been correlated and are ready to release. This can be in memory (which is default) or can be a persistence store. If a persistence store is configured, message delivery will be resumed across a server crash. 
Java class can be defined as an aggregator and, as described in the previous bullet points, the method and ref parameters decide which method of bean (referred by ref) should be invoked when messages have been aggregated as per CorrelationStrategy and released after fulfilment of ReleaseStrategy. In the following example, we are just printing the messages before passing them on to the next consumer in the chain: public class SoFeedAggregator {   public List<SyndEntry> aggregateAndPublish(List<SyndEntry>     messages) {     //Do some pre-processing before passing on to next channel     return messages;   } } Let's get to the details of the three most important components that complete the aggregator. Correlation strategy Aggregator needs to group the messages—but how will it decide the groups? In simple words, CorrelationStrategy decides how to correlate the messages. The default is based on a header named CORRELATION_ID. All messages having the same value for the CORRELATION_ID header will be put in one bracket. Alternatively, we can designate any Java class and its method to define a custom correlation strategy or can extend Spring Integration framework's CorrelationStrategy interface to define it. If the CorrelationStrategy interface is implemented, then the getCorrelationKey() method should be implemented. Let's see our correlation strategy in the feeds example: public class CorrelationStrategy {   public Object groupFeedsBasedOnCategory(Message<?> message) {     if(message!=null){       SyndEntry entry = (SyndEntry)message.getPayload();       List<SyndCategoryImpl> categories=entry.getCategories();       if(categories!=null&&categories.size()>0){         for (SyndCategoryImpl category: categories) {           //for simplicity, lets consider the first category           return category.getName();         }       }     }     return null;   } } So how are we correlating our messages? We are correlating the feeds based on the category name. The method must return an object that can be used for correlating the messages. If a user-defined object is returned, it must satisfy the requirements for a key in a map such as defining hashcode() and equals(). The return value must not be null. Alternatively, if we would have wanted to implement it by extending framework support, then it would have looked like this: public class CorrelationStrategy implements CorrelationStrategy {   public Object getCorrelationKey(Message<?> message) {     if(message!=null){       …             return category.getName();           }         }       }       return null;     }   } } Release strategy We have been grouping messages based on correlation strategy—but when will we release it for the next component? This is decided by the release strategy. Similar to the correlation strategy, any Java POJO can define the release strategy or we can extend framework support. Here is the example of using the Java POJO class: public class CompletionStrategy {   public boolean checkCompleteness(List<SyndEntry> messages) {     if(messages!=null){       if(messages.size()>2){         return true;       }     }     return false;   } } The argument of a message must be of type collection and it must return a Boolean indication whether to release the accumulated messages or not. For simplicity, we have just checked for the number of messages from the same category—if it's greater than two, we release the messages. Message store Until an aggregated message fulfils the release criteria, the aggregator needs to store them temporarily. 
This is where message stores come into the picture. Message stores can be of two types: in-memory and persistence store. Default is in memory, and if this is to be used, then there is no need to declare this attribute at all. If a persistent message store needs to be used, then it must be declared and its reference should be given to the message-store attribute. A MySQL message store can be declared and referenced as follows: <bean id="feedsMySqlStore"   class="org.springframework.integration.jdbc.JdbcMessageStore">   <property name="dataSource" ref="feedsSqlDataSource"/> </bean> Data source is Spring framework's standard JDBC data source. The greatest advantage of using persistence store is recoverability—if the system recovers from a crash, all in-memory aggregated messages will not be lost. Another advantage is capacity—memory is limited, which can accommodate a limited number of messages for aggregation, but the database can have a much bigger space. FTP/FTPS FTP, or File Transfer Protocol, is used to transfer files across networks. FTP communications consist of two parts: server and client. The client establishes a session with the server, after which it can download or upload files. Spring Integration provides components that act as a client and connect to the FTP server to communicate with it. What about the server—which server will it connect to? If you have access to any public or hosted FTP server, use it. Else, the easiest way for trying out the example in this section is to set up a local instance of the FTP server. FTP setup is out of the scope of this article. Prerequisites To use Spring Integration components for FTP/FTPS, we need to add a namespace to our configuration file and then add the Maven dependency entry in the pom.xml file. Once these entries are in place, a session factory that encapsulates the details needed to connect to the FTP server should be configured, as follows: <bean id="ftpClientSessionFactory"   class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="21"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> The DefaultFtpSessionFactory class is at work here, and it takes the following parameters: Host that is running the FTP server Port at which it's running the server Username Password for the server A session pool for the factory is maintained and an instance is returned when required. Spring takes care of validating that a stale session is never returned. Downloading files from the FTP server Inbound adapters can be used to read the files from the server. The most important aspect is the session factory that we just discussed in the preceding section. The following code snippet configures an FTP inbound adapter that downloads a file from a remote directory and makes it available for processing: <int-ftp:inbound-channel-adapter   channel="ftpOutputChannel"   session-factory="ftpClientSessionFactory"   remote-directory="/"   local-directory="C:\Chandan\Projects\siexample\ftp\ftplocalfolder"   auto-create-local-directory="true"   delete-remote-files="true"   filename-pattern="*.txt"   local-filename-generator-expression="#this.toLowerCase() + '.trns'">   <int:poller fixed-rate="1000"/> </int-ftp:inbound-channel-adapter> Let's quickly go through the tags used in this code: int-ftp:inbound-channel-adapter: This is the namespace support for the FTP inbound adapter. channel: This is the channel on which the downloaded files will be put as a message.
session-factory: This is a factory instance that encapsulates details for connecting to a server. remote-directory: This is the directory on the server where the adapter should listen for the new arrival of files. local-directory: This is the local directory where the downloaded files should be dumped. auto-create-local-directory: If enabled, this will create the local directory structure if it's missing. delete-remote-files: If enabled, this will delete the files on the remote directory after it has been downloaded successfully. This will help in avoiding duplicate processing. filename-pattern: This can be used as a filter, but only files matching the specified pattern will be downloaded. local-filename-generator-expression: This can be used to generate a local filename. An inbound adapter is a special listener that listens for events on the remote directory, for example, an event fired on the creation of a new file. At this point, it will initiate the file transfer. It creates a payload of type Message<File> and puts it on the output channel. By default, the filename is retained and a file with the same name as the remote file is created in the local directory. This can be overridden by using local- filename-generator-expression. Incomplete files On the remote server, there could be files that are still in the process of being written. Typically, there the extension is different, for example, filename.actualext.writing. The best way to avoid reading incomplete files is to use the filename pattern that will copy only those files that have been written completely. Uploading files to the FTP server Outbound adapters can be used to write files to the server. The following code snippet reads a message from a specified channel and writes it inside the FTP server's remote directory. The remote server session is determined as usual by the session factory. Make sure the username configured in the session object has the necessary permission to write to the remote directory. The following configuration sets up a FTP adapter that can upload files in the specified directory:   <int-ftp:outbound-channel-adapter channel="ftpOutputChannel"     remote-directory="/uploadfolder"     session-factory="ftpClientSessionFactory"     auto-create-directory="true">   </int-ftp:outbound-channel-adapter> Here is a brief description of the tags used: int-ftp:outbound-channel-adapter: This is the namespace support for the FTP outbound adapter. channel: This is the name of the channel whose payload will be written to the remote server. remote-directory: This is the remote directory where files will be put. The user configured in the session factory must have appropriate permission. session-factory: This encapsulates details for connecting to the FTP server. auto-create-directory: If enabled, this will automatically create a remote directory if it's missing, and the given user should have sufficient permission. The payload on the channel need not necessarily be a file type; it can be one of the following: java.io.File: A Java file object byte[]: This is a byte array that represents the file contents java.lang.String: This is the text that represents the file contents Avoiding partially written files Files on the remote server must be made available only when they have been written completely and not when they are still partial. Spring uses a mechanism of writing the files to a temporary location and its availability is published only when it has been completely written. 
By default, the suffix is .writing, but it can be changed using the temporary-file-suffix property. This can be completely disabled by setting use-temporary-file-name to false. FTP outbound gateway Gateway, by definition, is a two-way component: it accepts input and provides a result for further processing. So what is the input and output in the case of FTP? It issues commands to the FTP server and returns the result of the command. The following configuration will issue an ls command with the -1 option to the server. The result is a list of string objects containing the filename of each file that will be put on the reply-channel. The code is as follows: <int-ftp:outbound-gateway id="ftpGateway"     session-factory="ftpClientSessionFactory"     request-channel="commandInChannel"     command="ls"     command-options="-1"     reply-channel="commandOutChannel"/> The tags are pretty simple: int-ftp:outbound-gateway: This is the namespace support for the FTP outbound gateway session-factory: This is the wrapper for details needed to connect to the FTP server command: This is the command to be issued command-options: This is the option for the command reply-channel: This is the response of the command that is put on this channel FTPS support For FTPS support, all that is needed is to change the factory class—an instance of org.springframework.integration.ftp.session.DefaultFtpsSessionFactory should be used. Note the s in DefaultFtpsSessionFactory. Once the session is created with this factory, it's ready to communicate over a secure channel. Here is an example of a secure session factory configuration: <bean id="ftpSClientFactory"   class="org.springframework.integration.ftp.session.DefaultFtpsSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="22"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> Although it is obvious, I would remind you that the FTP server must be configured to support a secure connection and open the appropriate port. Social integration Any application in today's context is incomplete if it does not provide support for social messaging. Spring Integration provides in-built support for many social interfaces such as e-mails, Twitter feeds, and so on. Let's discuss the implementation of Twitter in this section. Prior to Version 2.1, Spring Integration was dependent on the Twitter4J API for Twitter support, but now it leverages Spring's social module for Twitter integration. Spring Integration provides an interface for receiving and sending tweets as well as searching and publishing the search results in messages. Twitter uses OAuth for authentication purposes. An app must be registered before we start Twitter development on it. Prerequisites Let's look at the steps that need to be completed before we can use a Twitter component in our Spring Integration example: Twitter account setup: A Twitter account is needed. Perform the following steps to get the keys that will allow the user to use Twitter using the API: Visit https://apps.twitter.com/. Sign in to your account. Click on Create New App. Enter the details such as Application name, Description, Website, and so on. All fields are self-explanatory and appropriate help has also been provided. The value for the field Website need not be a valid one—put an arbitrary website name in the correct format. Click on the Create your application button.
If the application is created successfully, a confirmation message will be shown and the Application Management page will appear, as shown here: Go to the Keys and Access Tokens tab and note the details for Consumer Key (API Key) and Consumer Secret (API Secret) under Application Settings, as shown in the following screenshot: You need additional access tokens so that applications can use Twitter using APIs. Click on Create my access token; it takes a while to generate these tokens. Once it is generated, note down the value of Access Token and Access Token Secret. Go to the Permissions tab and provide permission to Read, Write and Access direct messages. After performing all these steps, and with the required keys and access token, we are ready to use Twitter. Let's store these in the twitterauth.properties property file: twitter.oauth.apiKey= lnrDlMXSDnJumKLFRym02kHsy twitter.oauth.apiSecret= 6wlriIX9ay6w2f6at6XGQ7oNugk6dqNQEAArTsFsAU6RU8F2Td twitter.oauth.accessToken= 158239940-FGZHcbIDtdEqkIA77HPcv3uosfFRnUM30hRix9TI twitter.oauth.accessTokenSecret= H1oIeiQOlvCtJUiAZaachDEbLRq5m91IbP4bhg1QPRDeh The next step towards Twitter integration is the creation of a Twitter template. This is similar to the datasource or connection factory for databases, JMS, and so on. It encapsulates details to connect to a social platform. Here is the code snippet: <context:property-placeholder location="classpath:twitterauth.properties"/> <bean id="twitterTemplate" class="org.springframework.social.twitter.api.impl.TwitterTemplate">   <constructor-arg value="${twitter.oauth.apiKey}"/>   <constructor-arg value="${twitter.oauth.apiSecret}"/>   <constructor-arg value="${twitter.oauth.accessToken}"/>   <constructor-arg value="${twitter.oauth.accessTokenSecret}"/> </bean> As I mentioned, the template encapsulates all the values. Here is the order of the arguments: apiKey apiSecret accessToken accessTokenSecret With all the setup in place, let's now do some real work. After adding namespace support for the int-twitter components to the configuration file, an inbound channel adapter for receiving tweets can be declared as follows: <int-twitter:inbound-channel-adapter   twitter-template="twitterTemplate"   channel="twitterChannel"> </int-twitter:inbound-channel-adapter> The components in this code are covered in the following bullet points: int-twitter:inbound-channel-adapter: This is the namespace support for Twitter's inbound channel adapter. twitter-template: This is the most important aspect. The Twitter template encapsulates which account to use to poll the Twitter site. The details given in the preceding code snippet are fake; they should be replaced with real connection parameters. channel: Messages are dumped on this channel. These adapters are further used for other applications, such as for searching messages, retrieving direct messages, and retrieving tweets that mention your account, and so on. Let's have a quick look at the code snippets for these adapters. I will not go into detail for each one; they are similar to what has been discussed previously. Search: This adapter helps to search the tweets for the parameter configured in the query tag. The code is as follows: <int-twitter:search-inbound-channel-adapter id="testSearch"   twitter-template="twitterTemplate"   query="#springintegration"   channel="twitterSearchChannel"> </int-twitter:search-inbound-channel-adapter> Retrieving Direct Messages: This adapter allows us to receive the direct message for the account in use (account configured in Twitter template).
The code is as follows: <int-twitter:dm-inbound-channel-adapter   id="testdirectMessage"   twitter-template="twiterTemplate"   channel="twitterDirectMessageChannel"> </int-twitter:dm-inbound-channel-adapter> Retrieving Mention Messages: This adapter allows us to receive messages that mention the configured account via the @user tag (account configured in the Twitter template). The code is as follows: <int-twitter:mentions-inbound-channel-adapter   id="testmentionMessage"   twitter-template="twiterTemplate"   channel="twitterMentionMessageChannel"> </int-twitter:mentions-inbound-channel-adapter> Sending tweets Twitter exposes outbound adapters to send messages. Here is a sample code:   <int-twitter:outbound-channel-adapter     twitter-template="twitterTemplate"     channel="twitterSendMessageChannel"/> Whatever message is put on the twitterSendMessageChannel channel is tweeted by this adapter. Similar to an inbound gateway, the outbound gateway provides support for sending direct messages. Here is a simple example of an outbound adapter: <int-twitter:dm-outbound-channel-adapter   twitter-template="twitterTemplate"   channel="twitterSendDirectMessage"/> Any message that is put on the twitterSendDirectMessage channel is sent to the user directly. But where is the name of the user to whom the message will be sent? It is decided by a header in the message TwitterHeaders.DM_TARGET_USER_ID. This must be populated either programmatically, or by using enrichers or SpEL. For example, it can be programmatically added as follows: Message message = MessageBuilder.withPayload("Chandan")   .setHeader(TwitterHeaders.DM_TARGET_USER_ID,   "test_id").build(); Alternatively, it can be populated by using a header enricher, as follows: <int:header-enricher input-channel="twitterIn"   output-channel="twitterOut">   <int:header name="twitter_dmTargetUserId" value=" test_id "/> </int:header-enricher> Twitter search outbound gateway As gateways provide a two-way window, the search outbound gateway can be used to issue dynamic search commands and receive the results as a collection. If no result is found, the collection is empty. Let's configure a search outbound gateway, as follows:   <int-twitter:search-outbound-gateway id="twitterSearch"     request-channel="searchQueryChannel"     twitter-template="twitterTemplate"     search-args-expression="#springintegration"     reply-channel="searchQueryResultChannel"/> And here is what the tags covered in this code mean: int-twitter:search-outbound-gateway: This is the namespace for the Twitter search outbound gateway request-channel: This is the channel that is used to send search requests to this gateway twitter-template: This is the Twitter template reference search-args-expression: This is used as arguments for the search reply-channel: This is the channel on which searched results are populated This gives us enough to get started with the social integration aspects of the spring framework. Enterprise messaging Enterprise landscape is incomplete without JMS—it is one of the most commonly used mediums of enterprise integration. Spring provides very good support for this. Spring Integration builds over that support and provides adapter and gateways to receive and consume messages from many middleware brokers such as ActiveMQ, RabbitMQ, Rediss, and so on. Spring Integration provides inbound and outbound adapters to send and receive messages along with gateways that can be used in a request/reply scenario. Let's walk through these implementations in a little more detail. 
A basic understanding of the JMS mechanism and its concepts is expected. It is not possible to cover even the introduction of JMS here. Let's start with the prerequisites. Prerequisites To use Spring Integration messaging components, namespace support and the relevant Maven dependency should be added. Namespace support for the int-jms components must be added to the configuration file, and the Maven entry can be provided using the following code snippet: <dependency>   <groupId>org.springframework.integration</groupId>   <artifactId>spring-integration-jms</artifactId>   <version>${spring.integration.version}</version> </dependency> After adding these two dependencies, we are ready to use the components. But before we can use an adapter, we must configure an underlying message broker. Let's configure ActiveMQ. Add the following in pom.xml:   <dependency>     <groupId>org.apache.activemq</groupId>     <artifactId>activemq-core</artifactId>     <version>${activemq.version}</version>     <exclusions>       <exclusion>         <artifactId>spring-context</artifactId>         <groupId>org.springframework</groupId>       </exclusion>     </exclusions>   </dependency>   <dependency>     <groupId>org.springframework</groupId>     <artifactId>spring-jms</artifactId>     <version>${spring.version}</version>     <scope>compile</scope>   </dependency> After this, we are ready to create a connection factory and JMS queue that will be used by the adapters to communicate. First, create a session factory. As you will notice, this is wrapped in Spring's CachingConnectionFactory, but the underlying provider is ActiveMQ: <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">   <property name="targetConnectionFactory">     <bean class="org.apache.activemq.ActiveMQConnectionFactory">       <property name="brokerURL" value="vm://localhost"/>     </bean>   </property> </bean> Let's create a queue that can be used to retrieve and put messages: <bean   id="feedInputQueue"   class="org.apache.activemq.command.ActiveMQQueue">   <constructor-arg value="queue.input"/> </bean> Now, we are ready to send and retrieve messages from the queue. Let's look at each of these operations one by one. Receiving messages – the inbound adapter Spring Integration provides two ways of receiving messages: polling and event listener. Both of them are based on the underlying Spring framework's comprehensive support for JMS. JmsTemplate is used by the polling adapter, while MessageListener is used by the event-driven adapter. As the name suggests, a polling adapter keeps polling the queue for the arrival of new messages and puts the message on the configured channel if it finds one. On the other hand, in the case of the event-driven adapter, it's the responsibility of the server to notify the configured adapter.
The polling adapter Let's start with a code sample: <int-jms:inbound-channel-adapter   connection-factory="connectionFactory"   destination="feedInputQueue"   channel="jmsProcessedChannel">   <int:poller fixed-rate="1000" /> </int-jms:inbound-channel-adapter> This code snippet contains the following components: int-jms:inbound-channel-adapter: This is the namespace support for the JMS inbound adapter connection-factory: This is the encapsulation for the underlying JMS provider setup, such as ActiveMQ destination: This is the JMS queue where the adapter is listening for incoming messages channel: This is the channel on which incoming messages should be put There is a poller element, so it's obvious that it is a polling-based adapter. It can be configured in one of two ways: by providing a JMS template or using a connection factory along with a destination. I have used the latter approach. The preceding adapter has a polling queue mentioned in the destination and once it gets any message, it puts the message on the channel configured in the channel attribute. The event-driven adapter Similar to polling adapters, event-driven adapters also need a reference either to an implementation of the interface AbstractMessageListenerContainer or need a connection factory and destination. Again, I will use the latter approach. Here is a sample configuration: <int-jms:message-driven-channel-adapter   connection-factory="connectionFactory"   destination="feedInputQueue"   channel="jmsProcessedChannel"/> There is no poller sub-element here. As soon as a message arrives at its destination, the adapter is invoked, which puts it on the configured channel. Sending messages – the outbound adapter Outbound adapters convert messages on the channel to JMS messages and put them on the configured queue. To convert Spring Integration messages to JMS messages, the outbound adapter uses JmsSendingMessageHandler. This is is an implementation of MessageHandler. Outbound adapters should be configured with either JmsTemplate or with a connection factory and destination queue. Keeping in sync with the preceding examples, we will take the latter approach, as follows: <int-jms:outbound-channel-adapter   connection-factory="connectionFactory"   channel="jmsChannel"   destination="feedInputQueue"/> This adapter receives the Spring Integration message from jmsChannel, converts it to a JMS message, and puts it on the destination. Gateway Gateway provides a request/reply behavior instead of a one-way send or receive. For example, after sending a message, we might expect a reply or we may want to send an acknowledgement after receiving a message. The inbound gateway Inbound gateways provide an alternative to inbound adapters when request-reply capabilities are expected. An inbound gateway is an event-based implementation that listens for a message on the queue, converts it to Spring Message, and puts it on the channel. Here is a sample code: <int-jms:inbound-gateway   request-destination="feedInputQueue"   request-channel="jmsProcessedChannel"/> However, this is what an inbound adapter does—even the configuration is similar, except the namespace. So, what is the difference? The difference lies in replying back to the reply destination. Once the message is put on the channel, it will be propagated down the line and at some stage a reply would be generated and sent back as an acknowledgement. The inbound gateway, on receiving this reply, will create a JMS message and put it back on the reply destination queue. 
Then, where is the reply destination? The reply destination is decided in one of the following ways: Original message has a property JMSReplyTo, if it's present it has the highest precedence. The inbound gateway looks for a configured, default-reply-destination which can be configured either as a name or as a direct reference of a channel. For specifying channel as direct reference default-reply-destination tag should be used. An exception will be thrown by the gateway if it does not find either of the preceding two ways. The outbound gateway Outbound gateways should be used in scenarios where a reply is expected for the send messages. Let's start with an example: <int-jms:outbound-gateway   request-channel="jmsChannel"   request-destination="feedInputQueue"   reply-channel="jmsProcessedChannel" /> The preceding configuration will send messages to request-destination. When an acknowledgement is received, it can be fetched from the configured reply-destination. If reply-destination has not been configured, JMS TemporaryQueues will be created. Summary In this article, we covered out-of-the-box component provided by the Spring Integration framework such as aggregator. This article also showcased the simplicity and abstraction that Spring Integration provides when it comes to handling complicated integrations, be it file-based, HTTP, JMS, or any other integration mechanism. Resources for Article: Further resources on this subject: Modernizing Our Spring Boot App [article] Home Security by Beaglebone [article] Integrating With Other Frameworks [article]

Android and UDOO for Home Automation

Packt
20 Feb 2015
6 min read
This article written by Emanuele Palazzetti, the author of Getting Started with UDOO, will teach us about home automation and using sensors to monitor the CO2 emission by the wood heater in our home. (For more resources related to this topic, see here.) During the last few years, the maker culture greatly improved the way hobbyists, students, and, more in general, technology enthusiasts create new hardware devices. The advent of prototyping boards such as Arduino together with a widespread open source philosophy, changed how and where ideas are realized. If you're a maker or you have a friend that really like building prototypes and devices, probably both of you have already transformed the garage or the personal studio in a home lab. This process is so spontaneous that nowadays newcomers in the Makers Community begin to create their first device at home. UDOO and Home Automation Like other communities, the makers family self-sustained their ideas joining their passion with the Do It Yourself (DIY) philosophy, which led makers to build and use creative devices in their everyday life. The DIY movement was the key factor that triggered the Home Automation made with open source platforms, making this process funnier and maker-friendly. Indeed, if we take a look at some projects released on the Internet we can find thousands of prototypes, usually composed by an Arduino together with a Raspberry Pi, or any other combinations of similar prototyping boards. However, in this scenario, we have a new alternative proposed by the UDOO board—a stand alone computer that provides a multidevelopment-platform solution for Linux and Android operating systems with an integrated Arduino Due. Thanks to its flexibility, UDOO combines the best from two different worlds: the hardware makers and the software programmers. Applying the UDOO board to home automation is a natural process because of its power and versatility. Furthermore, the capability to install the Android operating system increases the capabilities of this board, because it offers out-of-the-box mobile applications ecosystem that can be easily reused in our appliances, enhancing the user experience of our prototype. Since 2011, Android supports the interaction with an Arduino compatible device through the Android Accessory Development Kit (ADK) reference. Designed to implement compelling accessories that could be connected to Android-powered devices, this reference defines the Android Open Accessory (AOA) protocol used to communicate with an Arduino device through an USB On-The-Go (OTG) port or via Bluetooth. UDOO makes use of this protocol and opens a serial communication between two processors soldered in the same board, which run the Android operating system and the microcontroller actions respectively. Mastering the ADK is not so easy, especially during the first approach. For this reason, the community has developed an Android library that simplifies a lot the use of the ADK: the Android ADKToolkit (http://adktoolkit.org). To put our hands-on code, we can imagine a scenario in which we want to monitor the carbon dioxide emissions (CO2) produced by the wood heater in our home. The first step is to connect the CO2 sensor to our board and use the manufacturer library, if available, in the Arduino sketch to retrieve the detected CO2. 
Using the ADK implementation, we can send this value back to the main UDOO's processor that runs the Android operating system, so it can use its powerful APIs, and a more powerful processor, to visualize or compute the collected data. The following example is a part of an Arduino sketch used to send sensor data back to the Android application using the ADK implementation:

uint8_t buffer[128];
int co2Sensor = 0;

void loop() {
  Usb.Task();
  if (adk.isReady()) {
    // Hypothetical function that
    // gets the value from the sensor
    co2Sensor = readFromCO2Sensor();
    buffer[0] = co2Sensor;
    // Sends the value to Android
    adk.write(1, buffer);
    delay(1000);
  }
}

On the Android side, using a traditional AsyncTask method or a more powerful ExecutorService method, we should read this value from the buffer. This task is greatly simplified through the ADKToolkit library, which requires the initialization of the AdkManager class that holds the connection and exposes a handler to open or close the communication with Arduino. This instance could be initialized during the onCreate() activity callback with the following code:

private AdkManager mAdkManager;

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.hello_world);
  mAdkManager = new AdkManager(this);
  mAdkManager.open();
}

In a separate thread, we can use the mAdkManager instance to read data from the ADK buffer. The following is an example that reads the message from the buffer and uses the getInt() method to parse the value:

private class SensorThread implements Runnable {
  @Override
  public void run() {
    // Reads detected CO2 from ADK
    AdkMessage response = mAdkManager.read();
    int co2 = response.getInt();
    // Continues the execution
    doSomething(co2);
  }
}

Once we retrieve the detected CO2 value from the sensor, we can use it in our application. For instance, we may plot this value using an Android open source chart library or send the collected data to an external web service. The preceding snippets are just examples to show how easy it is to implement communication in UDOO between the Android operating system and the onboard Arduino. With a powerful operating system such as Android and one of the most widespread prototyping platforms provided in a single board, UDOO can play a key role in our home automation projects.

Summary

As makers, if we manage to gain enough experience in the home automation field, chances are that we will be able to develop and build a high-end system for our own house, flexible enough to be easily extended without any further knowledge. When I wrote Getting Started with UDOO, the idea was to make a comprehensive guide and a collection of examples to help developers quickly grasp the key concepts of this prototyping board, focusing on Android development to bring back to light the advantages provided by this widespread platform when used in our prototypes and not only in our mobile devices.

Resources for Article: Further resources on this subject: Android Virtual Device Manager [Article] Writing Tag Content [Article] The Arduino Mobile Robot [Article]

Visualize This!

Packt
19 Feb 2015
17 min read
This article, written by Michael Phillips, the author of the book TIBCO Spotfire: A Comprehensive Primer, discusses how human beings are fundamentally visual in the way they process information. The invention of writing was as much about visually representing our thoughts to others as it was about record keeping and accountancy. In the modern world, we are bombarded with formalized visual representations of information, from the ubiquitous opinion poll pie chart to clever and sophisticated infographics. The website http://data-art.net/resources/history_of_vis.php provides an informative and entertaining quick history of data visualization. If you want a truly breathtaking demonstration of the power of data visualization, seek out Hans Rosling's The best stats you've ever seen at http://ted.com.

(For more resources related to this topic, see here.)

We will spend time getting to know some of Spotfire's data capabilities. It's important that you continue to think about data; how it's structured, how it's related, and where it comes from. Building good visualizations requires visual imagination, but it also requires data literacy. This article is all about getting you to think about the visualization of information and empowering you to use Spotfire to do so. Apart from learning the basic features and properties of the various Spotfire visualization types, there is much more to learn about the seamless interactivity that Spotfire allows you to build into your analyses. We will be taking a close look at 7 of the 16 visualization types provided by Spotfire, but these 7 visualization types are the most commonly used. We will cover the following topics:

Displaying information quickly in tabular form
Enriching your visualizations with color categorization
Visualizing categorical information using bar charts
Dividing a visualization across a trellis grid
Key Spotfire concept—marking
Visualizing trends using line charts
Visualizing proportions using pie charts
Visualizing relationships using scatter plots
Visualizing hierarchical relationships using treemaps
Key Spotfire concept—filters
Enhancing tabular presentations using graphical tables

Now let's have some fun!

Displaying information quickly in tabular form

While working through the data examples, we used the Spotfire Table visualization, but now we're going to take a closer look. People will nearly always want to see the "underlying data", the details behind any visualization you create. The Table visualization meets this need. It's very important not to confuse a table in the general data sense with the Spotfire Table visualization; the underlying data table remains immutable and complete in the background. The Table visualization is a highly manipulatable view of the underlying data table and should be treated as a visualization, not a data table. The data used here is BaseballPlayerData.xls.

There is always more than one way to do the same thing in Spotfire, and this is particularly true for the manipulation of visualizations. Let's start with some very quick manipulations:

First, insert a table visualization by going to the Insert menu, selecting New Visualization, and then Table.
To move a column, left-click on the column name, hold, and drag it.
To sort by a column, left-click on the column name.
To sort by more than one column, left-click on the first column name and then press Shift + left-click on the subsequent columns in order of sort precedence.
To widen or narrow a column, hover the mouse over the right-hand edge of the column title until you see the cursor change to a two-way arrow, and then click and drag it. These and other properties of the Table visualization are also accessed via visualization properties. As you work through the various Spotfire visualizations, you'll notice that some types have more options than others, but there are common trends and an overall consistency in conventions. Visualization properties can be opened in a number of ways: By right-clicking on the visualization, a table in this case, and selecting Properties. By going to the Edit menu and selecting Visualization Properties. By clicking on the Visualization Properties icon, as shown in the following screenshot, in the icon tray below the main menu bar. It's beyond the scope of this book to explore every property and option. The context-sensitive help provided by Spotfire is excellent and explains all the options in glorious detail. I'd like to highlight four important properties of the Table visualization: The General property allows you to change the table visualization title, not the name of the underlying data table. It also allows you to hide the title altogether. The Data property allows you to switch the underlying data table, if you have more than one table loaded into your analysis. The Columns property allows you to hide columns and order the columns you do want to show. The Show/Hide Items property allows you to limit what is shown by a rule you define, such as top five hitters. After clicking on the Add button, you select the relevant column from a dropdown list, choose Rule type (Top), and finally, choose Value for the rule (5). The resulting visualization will only show the rows of data that meet the rule you defined. Enriching your visualizations with color categorization Color is a strong feature in Spotfire and an important visualization tool, often underestimated by report creators. It can be seen as merely a nice-to-have customization, but paying attention to color can be the difference between creating a stimulating and intuitive data visualization rather than an uninspiring and even confusing corporate report. Take some pride and care in the visual aesthetics of your analytics creations! Let's take a look at the color properties of the Table visualization. Open the Table visualization properties again, select Colors, and then Add the column Runs. Now, you can play with a color gradient, adding points by clicking on the Add Point button and customizing the colors. It's as easy as left-clicking on any color box and then selecting from a prebuilt palette or going into a full RGB selection dialog by choosing More Colors…. The result is a heatmap type effect for runs scored, with yellow representing low run totals, transitioning to green as the run total approaches the average value in the data, and becoming blue for the highest run totals. Visualizing categorical information using bar charts We saw how the Table visualization is perfect for showing and ordering detailed information. It's quite similar to a spreadsheet. The Bar Chart visualization is very good for visualizing categorical information, that is, where you have categories with supporting hard numbers—sales by region, for example. The region is the category, whereas the sales is the hard number or fact. Bar charts are typically used to show a distribution. 
Depending on your data or your analytic requirement, the bars can be ordered by value, placed side by side, stacked on top of each other, or arranged vertically or horizontally. There is a special case of the category and value combination and that is where you want to plot the frequencies of a set of numerical values. This type of bar chart is referred to as a histogram, and although it is number against number, it is still, in essence, a distribution plot. It is very common in fact to transform the continuous number range in such cases into a set of discrete bins or categories for the plot. For example, you could take some demographic data and plot age as the category and the number of people at that age as the value (the frequency) on a bar chart. The result, for a general population, would approach a bell-shaped curve. Let's create a bar chart using the baseball data. The data we will use is BaseballPlayerData.xls, which you can download from http://www.insidespotfire.com. Create a new page by right-clicking on any page tab and selecting New Page. You can also select New Page from the Insert menu or click on the new page icon in the icon bar below the main menu. Create a Bar Chart visualization by left-clicking on the bar chart icon or by selecting New Visualization and then Bar Chart from the Insert menu. Spotfire will automatically create a default chart, that is, rarely exactly what you want, so the next step is to configure the chart. Two distributions might be interesting to look at: the distribution of home runs across all the teams and the distribution of player salaries across all the teams. The axes are easy to change; simply use the axes selectors.   If the bars are vertical, it means that the category—Team, in our case—should be on the horizontal axis, with the value—Home Runs or Salary—on the vertical axis, representing the height of the bars.   We're going to pick Home Runs from the vertical axis selector and then an appropriate aggregation dropdown, which is highlighted in red in the screenshot. Sum would be a valid option, but let's go with Avg (Average). Similarly, select Team from the horizontal axis dropdown selector. The vertical, or value, axis must be an aggregation because there is more than one home run value for each category. You must decide if you want a sum, an average, a minimum, and so on. You can modify the visualization properties just as you did for the Table visualization. Some of the options are the same; some are specific to the bar chart. We're going to select the Sort bars by value option in the Appearance property. This will order the bars in descending order of value. We're also going to check the option Vertically under Scale labels | Show labels for the Category Axis property. There are two more actions to perform: create an identical bar chart except with average salary as the value axis, and give each bar chart an appropriate title (Visualization Properties|General|Title:). To copy an existing visualization, simply right-click on it and select Duplicate Visualization. We can now compare the distribution of home run average and salary average across all the baseball teams, but there's a better way to do this in a single visualization using color. Close the salary distribution bar chart by left-clicking on X in the upper right-hand corner of the visualization (X appears when you hover the mouse) or right-clicking on the visualization and selecting Close. 
Now, open the home run bar chart visualization properties, go to the Colors property, and color by Avg(Salary). Select a Gradient color mode, and add a median point by clicking on the Add Point button and selecting Median from the dropdown list of options on the added point. Finally, choose a suitable heat map range of colors, something like blue (min) through pale yellow (median) through red (max). You will still see the distribution of home runs across the baseball teams, but now you will have a superimposed salary heat map. Texas and Cleveland appear to be getting much more bang for their buck than the NY Yankees.

Dividing a visualization across a trellis grid

Trellising, whereby you divide a series of visualizations into individual panels, is a useful technique when you want to subdivide your analysis. In the example we've been working with, we might, for instance, want to split the visualization by league. Open the visualization properties for the home runs distribution bar chart colored by salary and select the Trellis property. Go to Panels and split by League (use the dropdown column selector). Spotfire allows you to build layers of information with even basic visualizations such as the bar chart. In one chart, we see the home run distribution by team, the salary distribution by team, and the breakdown by league.

Key Spotfire concept – marking

It's time to introduce one of the most important Spotfire concepts, called marking, which is central to the interactivity that makes Spotfire such a powerful analysis tool. Marking refers to the action of selecting data in a visualization. Every element you see is selectable, or markable: a single row or multiple rows in a table, a single bar or multiple bars in a bar chart. You need to understand two aspects of marking. First, there is the visual effect, or the color(s) you see, when you mark (select) visualization elements. Second, there is the behavior that follows marking: what happens to data and the display of data when you mark something.

How to change the marking color

From Spotfire v5.5 onward, you can choose, on a visualization-by-visualization basis, between two distinct visual effects for marking:

- Use a separate color for marked items: all marked items are uniformly colored with the marking color, and all unmarked items retain their existing color.
- Keep existing color attributes and fade out unmarked items: all marked items keep their existing color, and all unmarked items also keep their existing color but with a high degree of fade applied, leaving the marked items strongly highlighted.

The second option is not available in versions older than v5.5 but is the default from v5.5 onward. The setting is made in the visualization's Appearance property by checking or unchecking the option Use separate color for marked items. The default color when using a separate color for marked items is dark green, but this can be changed by going to Edit | Document Properties | Markings | Edit. The newer option has the advantage of retaining any underlying coloring you defined, but you might not like how the rest of the chart is washed out. Which approach you choose depends on what information you think is critical for your particular situation.

When you create a new analysis, a default marking is created and applied to every visualization you create. You can change the color of the marking in Document Properties, which is found in the Edit menu.
Just open Document Properties, click on the Markings tab, select the marking, click on the Edit button, and change the color. You can also create as many markings as you need, giving them convenient names for reference purposes, but we'll focus on using just one for now.

How to set the marking behavior of a visualization

Marking behavior depends fundamentally on data relationships. The data within a single data table is intrinsically related; the data in separate data tables must be explicitly related before you configure marking behavior for visualizations based on separate datasets. When you mark something in a visualization, five things can happen, depending on the data involved and how you have configured your visualizations:

- Two visualizations with the same underlying data table (they can be on different pages in the analysis file) and the same marking scheme applied: marking data on one visualization will automatically mark the same data on the other.
- Two visualizations with related underlying data tables and the same marking scheme applied: the same as the previous behavior, but subject to differences in data granularity. For example, marking a baseball team in one visualization will mark all the team's players in another visualization that is based on a more detailed table related by team.
- Two visualizations with the same or related data tables, where one has been configured with a data dependency on the marking in the other: nothing will display in the marking-dependent visualization other than what is marked in the reference visualization.
- Visualizations with unrelated underlying data tables: no marking interaction will occur, and the visualizations will mark completely independently of one another.
- Two visualizations with the same or related underlying data tables but with different marking schemes applied: marking data on one visualization will not show on the other because the marking schemes are different.

Here's how we set these behaviors. Open the visualization properties of the bar chart we have been working with and navigate to the Data property. You'll notice that two settings refer to marking: Marking and Limit data using markings.

- Use the dropdown under Marking to select the marking to be used for the visualization. Having no marking is an option. Visualizations with the same marking will display synchronous selection, subject to the data relation conditions described earlier.
- The options under Limit data using markings determine how the visualization will be limited to marking elsewhere in the analysis. The default here is no dependency. If you select a marking, then the visualization will only display data selected elsewhere with that marking.

It's not a good idea to use the same marking for both Marking and Limit data using markings. If you are using the limit data setting, select no marking, or create a second marking and select it under Marking.

You're possibly a bit confused by now. Fortunately, marking is much harder to describe than to use, so let's build a tangible example. Start a new analysis: close any analysis you have open and create a new one, loading the player-level baseball data (BaseballPlayerData.xls). Add two bar charts and a table. You can rearrange the layout by left-clicking on the title bar of a visualization, holding, and dragging it. Position the visualizations any way you wish, but you could place the two bar charts side by side, with the table below them spanning both.
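If you find yourself rebuilding this kind of demo page often, the same skeleton can also be scripted. The sketch below is purely illustrative and rests on several assumptions: it uses Spotfire's IronPython automation API (with Document predefined in the script editor), a data table assumed to be named BaseballPlayerData, and the analysis' default marking as the shared marking. Verify the class and property names shown here (BarChart, TablePlot, MarkingReference, MarkedRows) against the API reference for your Spotfire version before relying on them.

```python
# Illustrative IronPython sketch: one bar chart and one details table that
# share a marking, mirroring the example built above.
# Assumes a data table named "BaseballPlayerData" (adjust to your analysis).
from Spotfire.Dxp.Application.Visuals import BarChart, TablePlot

page = Document.ActivePageReference
data = Document.Data.Tables["BaseballPlayerData"]
marking = Document.Data.Markings.DefaultMarkingReference

# "Home Runs" bar chart: selections here go into the shared marking
homeRuns = page.Visuals.AddNew[BarChart]()
homeRuns.Title = "Home Runs"
homeRuns.Data.DataTableReference = data
homeRuns.Data.MarkingReference = marking
homeRuns.XAxis.Expression = "<[Team]>"
homeRuns.YAxis.Expression = "Avg([Home Runs])"

# "Details" table: limited to whatever is currently marked elsewhere
details = page.Visuals.AddNew[TablePlot]()
details.Title = "Details"
details.Data.DataTableReference = data
details.Data.MarkedRows.Add(marking)   # equivalent of "Limit data using markings"
# In the GUI, you would also set the table's own Marking to (None),
# as in the property settings listed next.
```

The Roster bar chart would follow the same pattern as Home Runs, with Count([Player Name]) on the value axis and Position on the color axis.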
Save your analysis file at this point and at regular intervals. It's good practice to save regularly as you build an analysis; it will save you a lot of grief if your PC fails in any way, and there is no autosave function in Spotfire.

For the first bar chart, set the following visualization properties:

- General | Title: Home Runs
- Data | Marking: Marking
- Data | Limit data using markings: nothing checked
- Appearance | Orientation: Vertical bars
- Appearance | Sort bars by value: checked
- Category Axis | Columns: Team
- Value Axis | Columns: Avg(Home Runs)
- Colors | Columns: Avg(Salary)
- Colors | Color mode: Gradient, with an added point for the median (Max = strong red; Median = pale yellow; Min = strong blue)
- Labels | Show labels for: Marked Rows
- Labels | Types of labels | Complete bar: checked

For the second bar chart, set the following visualization properties:

- General | Title: Roster
- Data | Marking: Marking
- Data | Limit data using markings: nothing checked
- Appearance | Orientation: Horizontal bars
- Appearance | Sort bars by value: checked
- Category Axis | Columns: Team
- Value Axis | Columns: Count(Player Name)
- Colors | Columns: Position
- Colors | Color mode: Categorical

For the table, set the following visualization properties:

- General | Title: Details
- Data | Marking: (None)
- Data | Limit data using markings: Marking checked
- Columns: Team, Player Name, Games Played, Home Runs, Salary, Position

Now start selecting visualization elements with your mouse. You can click on elements such as bars or segments of bars, or you can click and drag a rectangular block around multiple elements. When you select a bar on the Home Runs bar chart, the corresponding team bar is automatically selected on the Roster bar chart, and details for all the players in that team display in the Details table. When you select a bar segment on the Roster bar chart, the corresponding team bar is automatically selected on the Home Runs bar chart, and only the players in the selected position for the selected team appear in the Details table.

There are some very useful additional functions associated with marking, which you can access by right-clicking on a marked item: Unmark, Invert, Delete, Filter To, and Filter Out. You can also unmark by left-clicking on any blank space in the visualization. Play with this analysis file until you are comfortable with the marking concept and functionality.

Summary

This article is a small taste of the book TIBCO Spotfire: A Comprehensive Primer. You've seen how the Table visualization is an easy and traditional way to display detailed information in tabular form and how the Bar Chart visualization is excellent for visualizing categorical information, such as distributions. You've learned how to enrich visualizations with color categorization and how to divide a visualization across a trellis grid. You've also been introduced to the key Spotfire concept of marking. Apart from gaining a functional understanding of these Spotfire concepts and techniques, you should have gained some insight into the science and art of data visualization.

Resources for Article:

Further resources on this subject:

- The Spotfire Architecture Overview [article]
- Interacting with Data for Dashboards [article]
- Setting Up and Managing E-mails and Batch Processing [article]