How-To Tutorials

Building a Puppet Module Skeleton

Packt
13 May 2016
5 min read
In this article by Scott Coulton, author of the book Puppet for Containerization, we are going to look at how to construct a Puppet module with the correct file structure, unit tests, and gems. One of the most important things in development is having a solid foundation, and writing a Puppet module is no different. This topic is extremely important for the rest of the book, as we will be reusing the code over and over again to build all our modules. Let's look at how to build a module with the Puppet module generator.

The Puppet module generator

One of the best things about working with Puppet is the number of tools out there, both from the community and from Puppetlabs itself. The Puppet module generator is a tool developed by Puppetlabs that follows best practices to create a module skeleton. The best thing about this tool is that it is bundled with every Puppet agent install, so we don't need to install any extra software.

So, let's log in to the Vagrant box that we built in the last chapter. Change the directory to the root of our Vagrant repo and then use the vagrant up && vagrant ssh command. Now that we are logged in to the box, let's change to the root user (sudo -i) and change the directory to /vagrant. The reason for this is that this folder is mapped to our local box, so we can use our favorite text editor later in the chapter. Once we're in /vagrant, we can run the command to build our Puppet module skeleton. The command is puppet module generate <AUTHOR>-consul; for me, it looks like this: puppet module generate scottyc-consul. The script will then ask a few questions, such as the version, author name, description, where the source lives, and so on. These questions are very important when you want to publish a module to the Forge (https://forge.puppetlabs.com/), but for now, let's just answer them as per the following example:

Now that we have our Puppet module skeleton, let's look at what the structure looks like:

Now, we are going to add a few files to help us with unit tests. The first file is .fixtures.yml. This file is used by rspec-puppet to pull down any module dependencies into the spec/fixtures directory when we run our unit tests. For this module, the .fixtures.yml file should look as shown in the following screenshot:

The next file that we are going to add is a .rspec file. This is the file that rspec-puppet uses when it requires spec_helper, and it sets the pattern for our unit test folder structure. The file contents should look as shown in this screenshot:
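As a rough guide to what those two helper files typically contain for a module such as this one (the garethr/docker source for the docker dependency and the spec pattern below are assumptions, not values taken from the article; match them to whatever your metadata.json actually declares), something like:

    # .fixtures.yml
    fixtures:
      forge_modules:
        docker: "garethr/docker"
      symlinks:
        consul: "#{source_dir}"

    # .rspec
    --format documentation
    --color
    --pattern spec/*/*_spec.rb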
Now that we have our folder structure, let's install the gems that we need to run our unit tests. My personal preference is to install the gems on the Vagrant box; if you want to use your local machine, that's fine as well. So, let's log in to our Vagrant box (cd into the root of our Vagrant repo, use the vagrant ssh command, and then change to the root user using sudo -i). First, we will install Ruby with yum install -y ruby. Once that is complete, let's cd into /vagrant/<your modules folder> and then run gem install bundler && bundle install. You should get the following output:

As you can see from the preceding screenshot, we got some warnings. This is because we ran gem install as root. We would not do that on a production system, but as this is our development box, it won't pose an issue.

Now that we have all the gems that we need for our unit tests, let's add some basic facts to /spec/classes/init_spec.rb. The facts we are going to add are osfamily and operatingsystemrelease, so the file will look as shown in this screenshot:

The last file that we will edit is the metadata.json file in the root of the repo. This file defines our module dependencies. For this module, we have one dependency, docker, so we need to add it at the bottom of the metadata.json file, as shown in the following screenshot:

The last thing we need to do is put everything in its place inside our Vagrant repo. We do that by creating a folder called modules in the root of our Vagrant repo and then issuing the mv <AUTHOR>-consul/ modules/consul command. Note that we removed the author name because we need the module to replicate what it would look like on a Puppet master. Now that we have our basic module skeleton ready, we can start with some coding.

Summary

In this article, we covered how to build a module with the Puppet module generator. For more information on Puppet, you can check out other books by Packt Publishing, such as:

- Learning Puppet
- Puppet 3: Beginner's Guide
- Mastering Puppet

Resources for Article:

Further resources on this subject:

- Puppet and OS Security Tools [article]
- Puppet Language and Style [article]
- Module, Facts, Types and Reporting tools in Puppet [article]

The Parallel Universe of R

Packt
12 May 2016
10 min read
This article by Simon Chapple, author of the book Mastering Parallel Programming with R, helps us understand the intricacies of parallel computing. Here, we'll take a look into Delores' Crystal Ball at what the future holds for massively parallel computation, which is likely to have a significant impact on the world of R programming, particularly when applied to big data.

Three steps to successful parallelization

The following three-step distilled guidance is intended to help you decide what form of parallelism might be best suited to your particular algorithm/problem and summarizes what you learned throughout this article. Necessarily, it applies a level of generalization, so approach these guidelines with due consideration:

1. Determine the type of parallelism that may best apply to your algorithm. Is the problem you are solving more computationally bound or data bound? If the former, then your problem may be amenable to GPUs; if the latter, then your problem may be more amenable to cluster-based computing; and if your problem requires a complex processing chain, then consider using the Spark framework. Can you divide the problem data/space to achieve a balanced workload across all processes, or do you need to employ an adaptive load-balancing scheme, for example, a task farm-based approach? Does your problem/algorithm naturally divide spatially? If so, consider whether a grid-based parallel approach can be used. Perhaps your problem is on an epic scale? If so, maybe you can develop your message passing-based code and run it on a supercomputer. Is there an implied sequential dependency between tasks; that is, do processes need to cooperate and share data during their computation, or can each separately divided task be executed entirely independently of the others? A large proportion of parallel algorithms typically have a work distribution phase, a parallel computation phase, and a result aggregation phase. To reduce the overhead of the startup and close-down phases, consider whether a tree-based approach to work distribution and result aggregation may be appropriate in your case.

2. Ensure that the basis of the compute in your algorithm has an optimal implementation. Profile your code in serial to determine whether there are any bottlenecks, and target these for improvement. Is there an existing parallel implementation similar to your algorithm that you can use directly or adapt? Review CRAN Task View: High-Performance and Parallel Computing with R at https://cran.r-project.org/web/views/HighPerformanceComputing.html; in particular, take a look at the subsection entitled Parallel Computing: Applications, a snapshot of which at the time of writing can be seen in the following figure:

Figure 1: CRAN provides various parallelized packages you can use in your own program.

3. Test and evaluate the parallel efficiency of your implementation. Use the P-estimated form of Amdahl's Law to predict the level of scalability you can achieve (a short sketch of the formula follows this list). Test your algorithm at varying degrees of parallelism, particularly odd numbers that trigger edge-case behaviors, and don't forget to run with just a single process. Running with more processes than processors can help trigger lurking deadlock/race conditions (this is most applicable to message passing-based implementations). Where possible, to reduce overhead, ensure that your method of deployment/initialization places the data being consumed locally to each parallel process.
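As a concrete illustration of the scalability prediction mentioned in step 3, here is a minimal R sketch (not taken from the book) of Amdahl's Law in its standard form, where P is the proportion of the program that can be parallelized, ideally estimated from profiling or measured timings rather than guessed, and n is the number of processes:

    # Predicted speedup for a program whose parallelizable fraction is P,
    # run on n processes (Amdahl's Law).
    amdahl_speedup <- function(P, n) {
      1 / ((1 - P) + P / n)
    }

    # Example: if profiling suggests 95% of the run time parallelizes,
    # 16 processes give roughly a 9x speedup, while 128 processes give
    # only about 17x, because the serial 5% dominates.
    amdahl_speedup(0.95, c(1, 16, 128))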
What does the future hold?

Obviously, this final section is at considerable risk of "crystal ball gazing" and getting it wrong. However, there are a number of clear directions in which we can see both hardware and software developing, and they make it clear that parallel programming will play an ever more important role in our computational future. Besides, it has now become critical for us to be able to process vast amounts of information within a short window of time in order to ensure our own individual and collective safety. For example, we are experiencing increased momentum towards significant climate change and extreme weather events, and we will therefore require increasingly accurate weather prediction to help us deal with this; that will only be possible with highly efficient parallel algorithms. In order to gaze into the future, we need to look back at the past. The hardware technology available to parallel computing has evolved at a phenomenal pace through the years, and the levels of performance that can be achieved today by single-chip designs are truly staggering in terms of recent history.

The history of HPC

For an excellent infographic review of the development of computing performance, I would urge you to visit the following web page: http://pages.experts-exchange.com/processing-power-compared/

This beautifully illustrates how, for example, an iPhone 4 released in 2010 has near-equivalent performance to the Cray 2 supercomputer from 1985, at around 1.5 gigaflops, and the Apple Watch released in 2015 has around twice the performance of the iPhone 4 and Cray 2!

While chip manufacturers have managed to maintain the famous Moore's Law, which predicts transistor count doubling every two years, we are now at 14 nanometers (nm) in chip production, giving us around 100 complex processing cores in a single chip. In July 2015, IBM announced a prototype chip at 7 nm (1/10,000th the width of a human hair). Some scientists suggest that quantum tunneling effects will start to have an impact at 5 nm (which Intel expects to bring to market by 2020), although a number of research groups have demonstrated individual transistors as small as 1 nm in the lab using materials such as graphene. What all of this suggests is that placing 1,000 independent high-performance computational cores, together with a sufficient amount of high-speed cache memory, inside a single-chip package comparable in size to today's chips could be possible within the next 10 years.

NVIDIA and Intel are arguably at the forefront of dedicated HPC chip development, with their respective offerings used in the world's fastest supercomputers; they can also be embedded in your desktop computer. NVIDIA produces Tesla, whose K80 GPU-based accelerator currently peaks at 1.87 teraflops double precision and 5.6 teraflops single precision, utilizing 4,992 cores (dual processor) and 24 GB of on-board RAM. Intel produces Xeon Phi, the collective family brand name for its Many Integrated Core (MIC) architecture; the new Knights Landing is expected to peak at 3 teraflops double precision and 6 teraflops single precision, utilizing 72 cores (single processor) and 16 GB of highly integrated on-chip fast memory when it is released, likely in fall 2015.
The successors to these chips, namely NVIDIA's Volta and Intel's Knights Hill, will be the foundation for the next generation of American $200 million supercomputers in 2018, delivering around 150 to 300 petaflops of peak performance (around 150 million iPhone 4s), as compared to China's TIANHE-2, ranked as the fastest supercomputer in the world in 2015, with a peak performance of around 50 petaflops from 3.1 million cores.

At the other extreme, within the somewhat smaller and less expensive world of mobile devices, most currently use between two and four cores, though mixed multicore capability such as ARM's big.LITTLE octa-core design makes eight cores available. This is already on the increase with, for example, MediaTek's new SoC, the MT6797, which has 10 main processing cores split into a pair and two groups of four cores, with different clock speeds and power requirements, to serve as the basis for next-generation mobile phones. Top-end mobile devices therefore exhibit a rich heterogeneous architecture, with mixed-power cores, separate sensor chips, GPUs, and Digital Signal Processors (DSPs) to direct different aspects of the workload to the most power-efficient component. Mobile phones increasingly act as the communications hub and signal-processing gateway for a plethora of additional devices, such as biometric wearables and the rapidly expanding number of ultra-low-power Internet of Things (IoT) sensing devices smartening all aspects of our local environment. While we are still a little way off from running R itself natively on mobile devices, the time will come when we seek to harness the distributed computing power of all our mobile devices. In 2014 alone, around 1.25 billion smartphones were sold. That's a lot of crowd-sourced compute power, and it potentially far outstrips any dedicated supercomputer on the planet, either existing or planned.

The software that enables us to utilize parallel systems, which, as we noted, are increasingly heterogeneous, continues to evolve. In this book, we have examined how you can utilize OpenCL from R to gain access to both the GPU and the CPU, making it possible to perform mixed computation across both components and exploit the particular strengths of each for certain types of processing. Indeed, another related initiative, Heterogeneous System Architecture (HSA), which enables even lower-level access to the spectrum of processor capabilities, may well gain traction over the coming years and help promote the uptake of OpenCL and its counterparts.

HSA Foundation

The HSA Foundation was founded by a cross-industry group led by AMD, ARM, Imagination, MediaTek, Qualcomm, Samsung, and Texas Instruments. Its stated goal is to help support the creation of applications that seamlessly blend scalar processing on the CPU, parallel processing on the GPU, and optimized processing on the DSP via high-bandwidth shared memory access, enabling greater application performance at low power consumption. To enable this, the HSA Foundation is defining key interfaces for parallel computation using CPUs, GPUs, DSPs, and other programmable and fixed-function devices, thus supporting a diverse set of high-level programming languages and creating the next generation in general-purpose computing.
You can find the recently released version 1.0 of the HSA specification at the following link: http://www.hsafoundation.com/html/HSA_Library.htm

Hybrid parallelism

As a final wrap-up, I thought I would show how you can overcome some of the inherent single-threaded nature of R even further and demonstrate a hybrid approach to parallelism that combines two of the different techniques we covered previously within a single R program. We've also discussed how heterogeneous computing is potentially the way of the future. This example refers to the code we would develop to utilize MPI through pbdMPI together with ROpenCL to enable us to exploit both the CPU and GPU simultaneously. While this is a slightly contrived example, and both devices compute the same dist() function, the intention is to show you just how far you can take things with R to get the most out of all your available compute resources. Basically, all we need to do is top and tail our OpenCL implementation of the dist() function with the appropriate pbdMPI initialization and termination and run the script with mpiexec on two processes, as follows:

    # Initialise both ROpenCL and pbdMPI
    require(ROpenCL)
    library(pbdMPI, quietly = TRUE)
    init()

    # Select device based on my MPI rank
    r <- comm.rank()
    if (r == 0) { # use gpu
      device <- 1
    } else {      # use cpu
      device <- 2
    }
    ...
    # Execute the OpenCL dist() function on my assigned device
    comm.print(sprintf("%d executing on device %s", r, getDeviceType(deviceID)),
               all.rank = TRUE)
    res <- teval(openclDist(kernel))
    comm.print(sprintf("%d done in %f secs", r, res$Duration), all.rank = TRUE)
    finalize()

This is simple and very effective!

Summary

In this article, we gazed into the crystal ball and saw the prospects for the combination of heterogeneous compute hardware that is here today and that will expand in capability even further in the future, not only in our supercomputers and laptops but also in our personal devices. Parallelism is the only way these systems can be utilized effectively. As the volume of new quantified-self and environmentally derived data increases, and the number of cores in our compute architectures continues to rise, so does the importance of being able to write parallel programs to make use of it all—job security for parallel programmers looks good for many years to come!

Resources for Article:

Further resources on this subject:

- Multiplying Performance with Parallel Computing [article]
- Training and Visualizing a neural network with R [article]
- Big Data Analysis (R and Hadoop) [article]

Working with LED Lamps

Packt
12 May 2016
11 min read
In this article by Samarth Shah and Utsav Shah, the authors of Arduino BLINK Blueprints, you will learn some cool things to do with LEDs, such as creating a mood lamp and developing an LED night lamp.

Creating a mood lamp

Lighting is one of the biggest opportunities for homeowners to effectively influence the ambience of their home, whether for comfort and convenience or to set the mood for guests. In this section, we will make a simple yet effective mood lamp using our own Arduino. We will be using an RGB LED to create our mood lamp. An RGB (Red, Green, and Blue) LED has all three LEDs in one single package, so we don't need to use three different LEDs to get different colors. Also, by mixing the values, we can simulate many colors using some sophisticated programming; it is said that we can produce 16.7 million different colors.

Using an RGB LED

An RGB LED is simply three LEDs crammed into a single package, with four pins: one common pin plus one pin for each color. In the Common Anode RGB LED used here, the three internal LEDs share their anode, so the longest lead is the common pin and is connected to the power supply, while the other three pins are driven by the Arduino (the code later inverts the values accordingly). Be sure to use a current-limiting resistor to protect the LED from burning out. Here, we will mix colors as we mix paint on a palette or mix audio with a mixing board, but to get a different color, we will write a different analog voltage to the pins of the LED.

Why do RGB LEDs change color?

As your eye has three types of light receptor (red, green, and blue), you can mix any color you like by varying the quantities of red, green, and blue light. Your eyes and brain process the amounts of red, green, and blue and convert them into a color of the spectrum. If we set the brightness of all our LEDs to the same level, the overall color of the light will be white. If we turn off the red LED, then only the green and blue LEDs will be on, which will make cyan. We can control the brightness of all three LEDs, making it possible to produce any color. Also, the three different LEDs inside a single RGB LED might have different voltage and current levels, which you can find in the datasheet. For example, a red LED typically needs 2 V, while green and blue LEDs may drop up to 3-4 V.

Designing a mood lamp

Now, we are all set to use our RGB LED in our mood lamp. We will start by designing the circuit. In our mood lamp, we will make a smooth transition between multiple colors. For that, we will need the following components:

- An RGB LED
- 270 Ω resistors (for limiting the current supplied to the LED)
- A breadboard

As we did earlier, we need one pin to control one LED. Here, our RGB LED consists of three LEDs, so we need three different control pins and, similarly, three current-limiting resistors, one per LED. Usually, this resistor's value can be between 100 Ω and 1,000 Ω. If we use a resistor with a value higher than 1,000 Ω, minimal current will flow through the circuit, resulting in negligible light emission from the LED, so it is advisable to use a resistor with a suitable resistance; 220 Ω or 470 Ω is usually preferred as a current-limiting value (a rough calculation below shows why). As discussed in the earlier section, we want to control the voltage applied to each pin, so we will have to use the PWM pins (3, 5, 6, 9, 10, and 11).
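As a rough sanity check on those resistor values (this calculation is not from the original article; the 20 mA figure is a typical maximum for small indicator LEDs), the usual formula is R = (V_supply - V_forward) / I:

    R = (5 V - 2 V) / 0.02 A = 150 Ω   (using the ~2 V forward drop of the red die)

So anything from roughly 150 Ω upwards keeps the current under 20 mA, and the 220 Ω to 470 Ω values suggested above simply trade a little brightness for a wider safety margin; the green and blue dies, with their higher forward voltages, draw even less current through the same resistor.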
The following schematic controls the red LED from pin 11, the blue LED from pin 10, and the green LED from pin 9. Hook up the circuit using the resistors, breadboard, and your RGB LED. Once you have made the connections, write the following code in the editor window of the Arduino IDE:

    int redLed = 11;
    int blueLed = 10;
    int greenLed = 9;

    void setup() {
      pinMode(redLed, OUTPUT);
      pinMode(blueLed, OUTPUT);
      pinMode(greenLed, OUTPUT);
    }

    void loop() {
      setColor(255, 0, 0);     // Red
      delay(500);
      setColor(255, 0, 255);   // Magenta
      delay(500);
      setColor(0, 0, 255);     // Blue
      delay(500);
      setColor(0, 255, 255);   // Cyan
      delay(500);
      setColor(0, 255, 0);     // Green
      delay(500);
      setColor(255, 255, 0);   // Yellow
      delay(500);
      setColor(255, 255, 255); // White
      delay(500);
    }

    void setColor(int red, int green, int blue) {
      // For a common anode LED, we need to subtract each value from 255.
      red = 255 - red;
      green = 255 - green;
      blue = 255 - blue;
      analogWrite(redLed, red);
      analogWrite(greenLed, green);
      analogWrite(blueLed, blue);
    }

We are using very simple code to change the color of the LED at fixed intervals; here, a new color is set roughly every half a second. This code won't give you a smooth transition between colors, but with it you will be able to run the RGB LED. Now we will modify this code to transition smoothly between colors. For a smooth transition, we will use the following code:

    int redLed = 11;
    int greenLed = 10;
    int blueLed = 9;

    int redValue = 0;
    int greenValue = 0;
    int blueValue = 0;

    void setup() {
      randomSeed(analogRead(0));
    }

    void loop() {
      redValue = random(0, 256);   // Randomly generate 0 to 255
      greenValue = random(0, 256); // Randomly generate 0 to 255
      blueValue = random(0, 256);  // Randomly generate 0 to 255
      analogWrite(redLed, redValue);
      analogWrite(greenLed, greenValue);
      analogWrite(blueLed, blueValue);

      // Increment all the values one by one after setting the random values.
      for (redValue = 0; redValue < 255; redValue++) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }
      for (greenValue = 0; greenValue < 255; greenValue++) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }
      for (blueValue = 0; blueValue < 255; blueValue++) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }

      // Decrement all the values one by one to turn off all the LEDs.
      for (redValue = 255; redValue > 0; redValue--) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }
      for (greenValue = 255; greenValue > 0; greenValue--) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }
      for (blueValue = 255; blueValue > 0; blueValue--) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }
    }

We want our mood lamp to repeat the same sequence of colors again and again, so we use the randomSeed() function. The randomSeed() function initializes the pseudo-random number generator, which will start at an arbitrary point and repeat the same sequence again and again; this sequence is very long and random, but it is always the same. Here, pin 0 is left unconnected, so when we seed the generator using analogRead(0), it returns an arbitrary reading, which is useful for initializing the random number generator with a pretty fair random number.
The random(min, max) function generates a random number between the min and max values provided as parameters. In the analogWrite() function, the number should be between 0 and 255, so we set min and max to 0 and 255 respectively. We assign the random values to redValue, greenValue, and blueValue, which we then write to the pins. Once a random number is generated, we increment or decrement the generated value in steps of 1, which smooths the transition between colors.

Now we are all set to use this as a mood lamp in our home. But before that, we need to design the outer body of our lamp. We can use white paper (folded into a cube shape) to put around our RGB LED. White paper acts as a diffuser, which makes sure the light is mixed together. Alternatively, you can use anything that diffuses light and makes things look beautiful! If you want to make a smaller version of the mood lamp, make a hole in a ping-pong ball, extend the RGB LED with jump wires, put the LED in the ball, and you are ready to make your home look beautiful.

Developing an LED night lamp

So now we have developed our mood lamp, but it will turn on only when we connect a power supply to the Arduino. It won't turn on or off depending on the darkness of the environment, and to turn it off, we have to disconnect the power supply from the Arduino. In this section, we will learn how to use switches with Arduino.

Introduction to switches

A switch is one of the most elementary and easy-to-overlook components. Switches do only one thing: they either open a circuit or close it. Mainly, there are two types of switches:

- Momentary switches: switches that require continuous actuation, like a keyboard key or the reset button on the Arduino board.
- Maintained switches: switches that, once actuated, remain actuated, like a wall switch.

Normally, switches are of the NO (Normally Open) type, so when the switch is actuated, it closes the path and acts as a perfect piece of conducting wire. Apart from this, based on how they work, there are many kinds of switches out there, such as toggle, rotary, DIP, rocker, membrane, and so on. Here, we will use a normal push-button switch with four pins. In our push-button switch, contacts A-D and B-C are shorted. We will connect our circuit between A and C, so whenever you press the switch, the circuit is completed and current flows through it. We will read the input from the button using the digitalRead() function. We will connect one pin (pin A) to 5 V, and the other pin (pin C) to Arduino's digital input pin 2, so whenever the key is pressed, it sends a 5 V signal to pin 2.

Pixar lamp

We will add a few more things to the mood lamp we discussed to make it more robust and easy to use. Along with the switch, we will add a light-sensing circuit to make it automatic. We will use a Light Dependent Resistor (LDR) to sense the light and control the lamp. Basically, an LDR is a resistor whose resistance changes as the light intensity changes; in most LDRs, the resistance drops as the light increases. To get a value that changes with the light level, we need to connect our LDR as per the following circuit. Here, we are using a voltage divider circuit to measure the change in light intensity. As the light intensity changes, the resistance of the LDR changes, which in turn changes the voltage across the resistor. We can read the voltage from any analog pin using analogRead().
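For reference, this is standard voltage-divider behaviour rather than something spelled out in the article, and the 10 kΩ fixed resistor is an assumed, typical value. Assuming the LDR sits between 5 V and the analog pin, with the fixed resistor from the pin to ground (which matches the way the code below treats low readings as darkness), the voltage seen by the pin is:

    Vout = 5 V * R_fixed / (R_fixed + R_LDR)

With a 10 kΩ fixed resistor, bright light (LDR around 1 kΩ) gives roughly 4.5 V, while darkness (LDR of several hundred kΩ) pulls Vout down towards 0 V. On a 5 V Arduino Uno, analogRead() maps 0-5 V onto 0-1023, which is why a threshold such as the 400 used in the next sketch corresponds to roughly 2 V.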
Once you have connected the circuit as shown, write the following code in the editor:

    int LDR = 0;                  // We will be getting input from pin A0
    int LDRValue = 0;
    int light_sensitivity = 400;  // This is the approx value of light surrounding your LDR
    int LED = 13;

    void setup() {
      Serial.begin(9600);         // Start the serial monitor at 9600 baud
      pinMode(LED, OUTPUT);
    }

    void loop() {
      LDRValue = analogRead(LDR); // Read the LDR's value through pin A0
      Serial.println(LDRValue);   // Print the LDR values to the serial monitor
      if (LDRValue < light_sensitivity) {
        digitalWrite(LED, HIGH);
      } else {
        digitalWrite(LED, LOW);
      }
      delay(50);                  // Delay before the LDR value is read again
    }

In the preceding code, we are reading the value from our LDR on analog pin A0. Whenever the value read from pin A0 drops below a certain threshold value, we turn on the LED. So whenever the light level (lux value) around the LDR drops below the set value, it will turn on the LED, or in our case, the mood lamp.

Similarly, we will add a switch to our mood lamp to make it fully functional as a Pixar lamp. Connect one pin of the push button to 5 V and the other pin to digital pin 2. We will turn on the lamp only when the room is dark and the switch is on. So we make the following changes to the previous code: in the setup function, initialize pin 2 as an input, and in the loop, add the following code:

    buttonState = digitalRead(pushSwitch); // pushSwitch is initialized as 2.
    if (buttonState == HIGH) {
      // Turn on the lamp
    } else {
      // Turn off the lamp.
      // Turn off all LEDs one by one to switch the lamp off smoothly.
      for (redValue = 255; redValue > 0; redValue--) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }
      for (greenValue = 255; greenValue > 0; greenValue--) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }
      for (blueValue = 255; blueValue > 0; blueValue--) {
        analogWrite(redLed, redValue);
        analogWrite(greenLed, greenValue);
        analogWrite(blueLed, blueValue);
        delay(10);
      }
    }

So, we have now incorporated an LDR and a switch into our lamp to use it as a normal lamp.

Summary

In this article, we created a mood lamp with an RGB LED and built it into a Pixar-style lamp, along with developing an LED night lamp.

Resources for Article:

Further resources on this subject:

- Arduino Development [article]
- Getting Started with Arduino [article]
- First Look and Blinking Lights [article]

Continuous Deployment and How it's Different from Continuous Delivery

Packt
12 May 2016
5 min read
In this article by Nikhil Pathania, the author of the book Learning Continuous Integration with Jenkins, we'll discuss Continuous Deployment. Continuous Deployment is a very loosely understood term in the IT industry, and still more confusing are the differences between Continuous Deployment and Continuous Delivery. This article aims to define Continuous Deployment and examine how it differs from Continuous Delivery.

What is Continuous Deployment?

"The process of continuously deploying production-ready features into the production environment, or to the end user, is termed Continuous Deployment."

Continuous Deployment in a holistic sense means the process of making production-ready features go live instantly, without any intervention. This includes building features in an agile manner, integrating and testing them continuously, and deploying them to the production environment without any break. On the other hand, Continuous Deployment in a literal sense means the task of deploying any given package continuously to any given environment; the task of deploying packages to the testing server and the production server conveys this literal meaning. The following illustration helps us understand the various terminologies we have just discussed. In the diagram, there are various steps that software code goes through, from its inception to its utilization (development to production). Each step has a tool associated with it, and each one is part of some methodology or other.

How Continuous Deployment is different from Continuous Delivery

First, the features are developed; then they go through a cycle of Continuous Integration and testing of all kinds. Anything that passes the various tests is considered a production-ready feature. These production-ready features are then labeled in Artifactory or are kept separately to segregate them from features that are not production-ready. This is very similar to a manufacturing production line: the raw product goes through phases of modification and testing, and finally the finished product is packaged and stored in the warehouses. From the warehouses, depending on the orders, it gets shipped to various places; the product doesn't get shipped immediately after it's packaged. We can safely call this practice Continuous Delivery. The following illustration depicts the Continuous Delivery life cycle:

A Continuous Deployment life cycle, on the other hand, looks somewhat as shown in the following illustration. The deployment phase is immediate, without any break: the production-ready features are deployed to production straight away.

Who needs Continuous Deployment?

One might have the following questions rolling in their mind: how can I achieve Continuous Deployment in my organization? What would be the challenges? How much testing do I need to incorporate and automate? The list goes on. Technical challenges are one thing, but what's more important is to ask whether we really need it. Do we really need Continuous Deployment? The answer is: not always, and not in every case. From our definition of Continuous Deployment and our understanding from the previous topic, production-ready features get deployed instantly to the production environment. In many organizations, however, it's the business that decides whether to make a feature live, and when.
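To make that gate concrete, here is a minimal, hypothetical sketch using the Jenkins Pipeline DSL (this is not from the book, which builds its examples with regular Jenkins jobs; the stage names and the deploy.sh script are assumptions). In Continuous Delivery, a human approval sits between the staging and production deployments; removing that single input step is essentially what turns the same pipeline into Continuous Deployment:

    node {
        stage('Build and unit test') {
            sh 'mvn clean verify'
        }
        stage('Deploy to staging') {
            sh './deploy.sh staging'        // hypothetical deployment script
        }
        stage('Approval') {
            // Continuous Delivery: the business decides when to go live.
            input 'Deploy this build to production?'
        }
        stage('Deploy to production') {
            sh './deploy.sh production'
        }
    }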
Therefore, think of Continuous Deployment as an option, and not a compulsion. On the other hand, Continuous Delivery, which means creating production-ready features in a continuous way, should be the motto for any organization.

Continuous Deployment is easier to achieve in an organization that has just started; in other words, an organization that does not own a large codebase, has a small infrastructure, and performs fewer releases per day. Conversely, it's difficult for organizations with massive projects to move to Continuous Deployment. Nevertheless, organizations with large projects should first target Continuous Integration, then Continuous Delivery, and lastly Continuous Deployment.

Frequent downtime of the production environment with Continuous Deployment

Continuous Deployment, though a necessity in some organizations, may not be a piece of cake. There are a few practical challenges that may surface while performing frequent releases to the production server. Let's look at some of the hurdles. Deployment to any application server or web server requires downtime, and the following activities take place during a downtime (I have listed some general tasks; they may vary from project to project):

- Bringing down the services
- Deploying the package
- Bringing up the services
- Performing sanity checks

The Jenkins job that performs the deployment to a production server may include all these steps. Nevertheless, running deployments every now and then on the production server may result in frequent unavailability of the services. The solution to this problem is to use a clustered production environment, as shown in the following illustration. This is a very generic example wherein the web server sits behind a reverse proxy server, such as NGINX, which also performs load balancing. The web server is a clustered environment (it has many nodes running the same services) to make it highly available. Usually, a clustered web application server has a master-slave architecture, wherein a single node manager controls a few node agents. When deployment takes place on the web application server, the changes, restart activities, and sanity checks happen on each node, one at a time, thus resolving the downtime issues.

Summary

In this article, we covered an introduction to Continuous Deployment and how it differs from Continuous Delivery. We also discussed the requirements for Continuous Deployment and the frequent downtime of a production environment that comes with it.

Resources for Article:

Further resources on this subject:

- Maven and Jenkins Plugin [article]
- Exploring Jenkins [article]
- Jenkins Continuous Integration [article]

How to achieve Continuous Integration

Packt
12 May 2016
22 min read
In this article by Nikhil Pathania, the author of the book Learning Continuous Integration with Jenkins, we'll learn how to achieve Continuous Integration. Implementing Continuous Integration involves using various DevOps tools; ideally, a DevOps engineer is responsible for implementing it. This article introduces the reader to the constituents of Continuous Integration and the means to achieve it.

Development operations

DevOps stands for development operations, and the people who manage these operations are called DevOps engineers. All of the following tasks fall under development operations:

- Build and release management
- Deployment management
- Version control system administration
- Software configuration management
- All sorts of automation
- Implementing continuous integration
- Implementing continuous testing
- Implementing continuous delivery
- Implementing continuous deployment
- Cloud management and virtualization

I assume that the preceding tasks need no explanation. A DevOps engineer accomplishes them using a set of tools; these tools are loosely called DevOps tools (Continuous Integration tools, agile tools, team collaboration tools, defect tracking tools, continuous delivery tools, cloud management tools, and so on). A DevOps engineer has the capability to install and configure the DevOps tools to facilitate development operations. Hence, the name DevOps.

Use a version control system

This is the most basic and the most important requirement for implementing Continuous Integration. A version control system, sometimes also called a revision control system, is a tool used to manage your code history. It can be centralized or distributed. Two famous centralized version control systems are SVN and IBM Rational ClearCase; in the distributed segment, we have tools such as Git. Ideally, everything that is required to build software must be version controlled. A version control tool offers many features, such as labeling, branching, and so on.

When using a version control system, keep branching to a minimum. A few companies have only one main branch, and all development activity happens on that. Nevertheless, most companies follow some branching strategy. This is because there is always a possibility that part of a team may work on one release while others work on another; at other times, there is a need to support older release versions. Such scenarios always lead companies to use multiple branches.

For example, imagine a project that has an integration branch, a release branch, a hotfix branch, and a production branch. The development team works on the release branch, checking code out of and into that branch. There can be more than one release branch on which development runs in parallel; let's say sprint 1 and sprint 2. Once sprint 2 is near completion (assuming that all the local builds on the sprint 2 branch were successful), it is merged into the integration branch. Automated builds run when something is checked in on the integration branch, and the code is then packaged and deployed to the testing environments. If the testing passes with flying colors and the business is ready to move the release to production, automated systems take the code and merge it with the production branch.

Typical branching strategies

From here, the code is then deployed to production.
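As a rough illustration only (the article does not prescribe specific commands, and the branch and issue names below simply mirror the strategy just described), such a flow might look like this in Git:

    # Sprint work happens on its own release branch, cut from integration
    git checkout -b release/sprint-2 integration
    # ... develop, commit, and run local builds on release/sprint-2 ...

    # Near the end of the sprint, bring it back into the integration branch
    git checkout integration
    git merge release/sprint-2        # CI builds and test deployments run from here

    # Once the business signs off, promote the tested code to production
    git checkout production
    git merge integration

    # An urgent production fix is developed on a hotfix branch
    git checkout -b hotfix/issue-123 production
    # ... fix, build, test ...
    git checkout production  && git merge hotfix/issue-123
    git checkout integration && git merge hotfix/issue-123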
The reason for maintaining a separate branch for production comes from the desire to keep the code neat, with a smaller number of versions. The production branch is always in sync with the hotfix branch: any instant fix required on the production code is developed on the hotfix branch, and the hotfix changes are then merged into the production branch as well as the integration branch. The moment sprint 1 is ready, it is first rebased onto the integration branch and then merged into it, and it follows the same steps thereafter.

An example to understand VCS

Let's say I add a file named Profile.txt to version control with some initial details, such as the name, age, and employee ID. To modify the file, I have to check it out. This is more like reserving the file for editing. Why reserve? In a development environment, a single file may be used by many developers, so, in order to facilitate organized use, we have the option to reserve a file using the check-out operation. Let's assume that I check out the file and make some modifications by adding another line. After the modification, I perform a check-in operation, and the new version contains the newly added line. Similarly, every time you or someone else modifies a file, a new version gets created.

Types of version control systems

We have already seen that a version control system is a tool used to record changes made to a file or set of files over time. The advantage is that you can recall specific versions of your file or set of files, and almost every type of file can be version controlled. It's always good to use a Version Control System (VCS), and almost everyone uses one nowadays. You can revert an entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, see who introduced an issue and when, and more. Using a VCS also generally means that if you screw things up or lose files, you can easily recover. Looking back at the history of version control tools, we can observe that they fall into three categories:

- Local version control systems
- Centralized version control systems
- Distributed version control systems

Centralized version control systems

Initially, when VCSs came into existence some 40 years ago, they were mostly personal, such as the one that comes with Microsoft Office Word, wherein you can version control the file you are working on. The reason was that, in those times, software development activity was minuscule in magnitude and was mostly done by individuals. But, with the arrival of large software development teams working in collaboration, the need for a centralized VCS was felt. Hence came VCS tools such as ClearCase and Perforce. Some of the advantages of a centralized VCS are as follows:

- All the code resides on a centralized server, so it's easy to administer and provides a greater degree of control.
- These newer VCSs also bring with them features such as labeling, branching, and baselining, to name a few, which help people collaborate better.
- In a centralized VCS, the developers should always be connected to the network. As a result, the VCS at any given point in time always represents the up-to-date code.

The following diagram illustrates a centralized VCS:

A centralized version control system

Distributed version control systems

Another type of VCS is the distributed VCS. Here, there is a central repository containing all the software solution code.
Instead of creating a branch, developers completely clone the central repository onto their local machine and then create a branch out of the local clone. Once they are done with their work, the developer first merges their branch with the integration branch and then syncs the local clone with the central repository. You could argue that it is a combination of a local VCS and a central VCS. An example of a distributed VCS is Git.

A distributed version control system

Use a repository tool

As part of the software development life cycle, the source code is continuously built into binary artifacts using Continuous Integration. Therefore, there should be a place to store these built packages for later use. The answer is to use a repository tool. But what is a repository tool? It's a version control system for binary files. Do not confuse this with the version control system discussed in the previous sections: the former is responsible for versioning the source code, and the latter for binary files, such as .rar, .war, .exe, .msi, and so on. For example, Maven always downloads the plugins required to build the code into a folder; rather than downloading the plugins again and again, they can be managed using a repository tool.

As soon as a build gets created and passes all the checks, it should be uploaded to the repository tool. From there, developers and testers can manually pick the builds, deploy them, and test them; or, if automated deployment is in place, the builds are automatically deployed to the respective test environment. So, what's the advantage of using a build repository? A repository tool does the following:

- Every time a build gets generated, it is stored in the repository tool. There are many advantages of storing the build artifacts, and one of the most important is that they sit in a centralized location from where they can be accessed when needed.
- It can store the third-party binary plugins and modules that are required by the build tools, so the build tool need not download them every time a build runs. The repository tool is connected to the online source and keeps the plugin repository updated.
- It records what was built, when, and by whom, for every build package.
- It creates a staging area to manage releases better. This also helps in speeding up the Continuous Integration process.

In a Continuous Integration environment, each build generates a package, and the frequency at which building and packaging happen is high. As a result, there is a huge pile of packages. Using a repository tool makes it possible to store all the packages in one place, and in this way developers get the liberty to choose what to promote and what not to promote to higher environments.

Use a Continuous Integration tool

What is a Continuous Integration tool? It is nothing more than an orchestrator. A Continuous Integration tool sits at the center of the Continuous Integration system and is connected to the version control system, build tool, repository tool, testing and production environments, quality analysis tool, test automation tool, and so on. All it does is orchestrate these tools. There are many Continuous Integration tools: Build Forge, Bamboo, and TeamCity to name a few. But the prime focus of our topic is Jenkins.

Basically, a Continuous Integration tool consists of various pipelines, and each pipeline has its own purpose. There are pipelines that take care of Continuous Integration.
Some take care of testing, some take care of deployments, and so on. Technically, a pipeline is a flow of jobs, and each job is a set of tasks that run sequentially. Scripting is an integral part of a Continuous Integration tool and performs various kinds of tasks. A task may be as simple as copying a folder or file from one location to another, or it can be a complex Perl script used to monitor a machine for file modifications. Nevertheless, scripts are increasingly being replaced by the growing number of plugins available in Jenkins. Now, you need not script to build Java code; there are plugins available for it. All you need to do is install and configure a plugin to get the job done. Technically, plugins are nothing but small modules written in Java, but they remove the burden of scripting from the developer's head.

Creating a self-triggered build

The next important thing is the self-triggered automated build. Build automation is simply a series of automated steps that compile the code and generate executables; it can take the help of build tools such as Ant and Maven. Self-triggered automated builds are the most important part of a Continuous Integration system. There are two main factors that call for an automated build mechanism:

- Speed
- Catching integration or code issues as early as possible

There are projects where 100 to 200 builds happen per day. In such cases, speed plays an important factor, and if the builds are automated, a lot of time is saved. Things become even more interesting if the triggering of the build is made self-driven, without any manual intervention: an auto-triggered build on every code change further saves time. When builds are frequent and fast, the probability of finding errors (build errors, compilation errors, and integration errors) is also higher, and they are found sooner.

Automate the packaging

There is a possibility that a build may have many components. Let's take, for example, a build that has a .rar file as an output. Along with it, there are some UNIX configuration files, release notes, some executables, and also some database changes. All these different components need to be together. The task of creating a single archive or a single medium out of many components is called packaging. This, again, can be automated using the Continuous Integration tools and can save a lot of time.

Using build tools

IT projects can be on various platforms, such as Java, .NET, Ruby on Rails, C, and C++ to name a few, and in a few places you may see a collection of technologies. No matter what, every programming language, excluding the scripting languages, has compilers that compile the code from a high-level language to a machine-level language. Ant and Maven are the most common build tools used for projects based on Java; for the .NET lovers, there are MSBuild and TFS Build. Coming to the UNIX and Linux world, you have make and omake, and also clearmake in case you are using IBM Rational ClearCase as the version control tool. Let's look at the important ones.

Maven

Maven is a build tool used mostly to compile Java code. The output is mostly .jar files or, in some cases, .war files, depending on the requirements. It uses Java libraries and Maven plugins in order to compile the code. The code to be built is described using an XML file that contains information about the project being built, its dependencies, and so on. Maven can be easily integrated into Continuous Integration tools, such as Jenkins, using plugins.
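For readers who haven't seen one, that XML project description is the pom.xml. The following is a minimal, hypothetical example (the group, artifact, and JUnit version are placeholders rather than anything taken from the book); running mvn package against it compiles the code, runs the tests, and produces the .jar:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>

      <groupId>com.example.ci</groupId>      <!-- placeholder coordinates -->
      <artifactId>sample-app</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>jar</packaging>

      <dependencies>
        <!-- unit-test dependency so the build can run tests on every commit -->
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.12</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </project>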
MSBuild

MSBuild is a tool used to build Visual Studio projects. MSBuild is bundled with Visual Studio and is a functional replacement for nmake. MSBuild works on project files that have an XML syntax similar to that of Apache Ant. Its fundamental structure and operation are similar to those of the UNIX make utility: the user defines what the input will be (the various source files) and the output (usually a .exe or .msi), but the utility itself decides what to do and the order in which to do it.

Automating the deployments

Imagine a mechanism where the automated packaging has produced a package that contains .war files, database scripts, and some UNIX configuration files. Now, the task is to deploy all three artifacts to their respective environments: the .war file must be deployed to the application server, the UNIX configuration files should sit on the respective UNIX machine, and lastly, the database scripts should be executed on the database server. The deployment of such packages containing multiple components is usually done manually in almost every organization that does not have automation in place. Manual deployment is slow and prone to human error. This is where an automated deployment mechanism helps. Automated deployment goes hand in hand with the automated build process: the previous scenario can be achieved using an automated build and deployment solution that builds each component in parallel, packages them, and then deploys them in parallel. Using tools such as Jenkins, this is possible. However, this idea introduces some challenges, which are as follows:

- There is a considerable amount of scripting required to orchestrate the build, packaging, and deployment of a release containing multiple components. These scripts are themselves a large amount of code to maintain, requiring time and resources.
- In most cases, deployment is not as simple as placing files in a directory. For example, imagine that the deployment requires you to install a Java-based application on a Linux machine. Before we go ahead and deploy the application, the target machine must have the correct version of Java installed; and what if the machine is a new one and doesn't have Java at all? The deployment automation mechanism should also make sure that the target machine has the correct configuration before the artifacts are deployed.

Most of the preceding challenges can be handled using a Continuous Integration tool such as Jenkins. The field of managing the configuration on n number of machines is called configuration management, and there are tools, such as Chef and Puppet, to do this.

Automating the testing

Testing is an important part of the software development life cycle. In order to maintain quality software, it is necessary that the software solution goes through various test scenarios. Giving less importance to testing can result in customer dissatisfaction and a delayed product. Since testing is a manual, time-consuming, and repetitive task, automating the testing process can significantly increase the speed of software delivery. However, automating the testing process is a bit more difficult than automating the build, release, and deployment processes. It usually takes a lot of effort to automate nearly all the test cases used in a project; it is an activity that matures over time. Hence, when we begin to automate the testing, we need to take a few factors into consideration. Test cases that are of great value and easy to automate must be considered first: for example, automate the tests where the steps are the same but run every time with different data.
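A small, hypothetical JUnit 4 example of that same-steps-different-data pattern (the Calculator class and its add method are invented purely for illustration); a CI server such as Jenkins would run this on every build via the build tool:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class AdditionTest {

        @Parameters
        public static Collection<Object[]> data() {
            // The same test steps run once for every row of data.
            return Arrays.asList(new Object[][] {
                { 1, 1, 2 }, { 2, 3, 5 }, { -4, 4, 0 }
            });
        }

        private final int a, b, expected;

        public AdditionTest(int a, int b, int expected) {
            this.a = a;
            this.b = b;
            this.expected = expected;
        }

        @Test
        public void addsTwoNumbers() {
            assertEquals(expected, new Calculator().add(a, b)); // Calculator is hypothetical
        }
    }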
Also automate the tests where a piece of software functionality is checked on various platforms, as well as the tests that involve a software application running on different configurations.

Previously, the world was mostly dominated by desktop applications, and automating the testing of a GUI-based system was quite difficult. This called for scripting languages in which the manual mouse and keyboard entries were scripted and executed to test the GUI application. Nevertheless, today the software world is completely dominated by web- and mobile-based applications, which are easy to test through an automated approach using a test automation tool.

Once the code is built, packaged, and deployed, testing should run automatically to validate the software. Traditionally, the process followed is to have an environment each for SIT, UAT, PT, and pre-production. First, the release goes through SIT, which stands for System Integration Testing; here, testing is performed on the integrated code to check its functionality as a whole. If it passes, the code is deployed to the next environment, UAT, where it goes through a user acceptance test; then, similarly, it can lastly be deployed in PT, where it goes through performance testing. In this way, the testing is prioritized. It is not always possible to automate all of the testing, but the idea is to automate whatever testing is possible. The method just discussed requires many environments and a number of automated deployments into those environments. To avoid this, we can use another method in which there is only one environment where the build is deployed; the basic tests are run automatically, and the long-running tests are triggered manually afterwards.

Use static code analysis

Static code analysis, also commonly called white-box testing, is a form of software testing that looks at the structural qualities of the code: for example, it answers how robust or maintainable the code is. Static code analysis is performed without actually executing programs. It is different from functional testing, which looks into the functional aspects of software and is dynamic. Static code analysis is the evaluation of software's inner structures: for example, is there a piece of code used repetitively? Does the code contain lots of commented-out lines? How complex is the code? Using the metrics defined by the user, an analysis report can be generated that shows the code quality in terms of maintainability. It doesn't question the code's functionality. Some static code analysis tools, such as SonarQube, come with a dashboard that shows various metrics and statistics for each run. Usually, as part of Continuous Integration, static code analysis is triggered every time a build runs. As discussed in the previous sections, static code analysis can also be run before a developer tries to check in their code, so low-quality code can be stopped right at the initial stage. These tools support many languages, such as Java, C/C++, Objective-C, C#, PHP, Flex, Groovy, JavaScript, Python, PL/SQL, COBOL, and so on.
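To make the kind of findings such tools report a little more tangible, here is a small, invented Java fragment showing the sorts of issues a static analyzer would typically flag; the class itself is purely illustrative:

    public class InvoiceService {

        public double totalWithTax(double amount) {
            // Typically flagged: magic number; the tax rate should be a named constant.
            return amount + amount * 0.18;
        }

        public double discountedTotalWithTax(double amount, double discount) {
            // Typically flagged: duplicated logic; this repeats totalWithTax() instead of reusing it.
            double discounted = amount - discount;
            return discounted + discounted * 0.18;

            // Typically flagged: commented-out code left behind.
            // return discounted * 1.2;
        }
    }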
For example, the SAS Enterprise application has a GUI used to create packages and perform code promotions from environment to environment. It also offers a few APIs to do the same through the command line. If one has to automate the packaging and code promotion process, scripting languages are the way to do it.
Perl
One of my favorites, Perl is an open source scripting language mainly used for text manipulation. The main reasons for its popularity are as follows:
- It comes free and preinstalled with any Linux or Unix OS
- It is also freely available for Windows
- It is simple and fast to script using Perl
- It works on Windows, Linux, and Unix platforms
Though it was meant to be just a scripting language for processing files, it has seen a wide range of uses in system administration, build, release, and deployment automation, and much more. Another reason for its popularity is its impressive collection of third-party modules. I would like to expand on the advantage of Perl's multi-platform capability. There are situations where the Jenkins server is on a Windows machine and the destination machines (where the code needs to be deployed) are Linux machines. This is where Perl helps: a single script written on the Jenkins master will run on both the Jenkins master and the Jenkins slaves. However, there are various other popular scripting languages that you can use, such as Ruby, Python, and Shell, to name a few.
Test in a production-like environment
Ideally, testing such as SIT, UAT, and PT, to name a few, is performed in an environment that is different from production. Hence, there is every possibility that code that has passed these quality checks may still fail in production. Therefore, it is advisable to perform end-to-end testing of the code in a production-like environment, commonly referred to as the pre-production environment. In this way, we can best be assured that the code won't fail in production. However, there is a challenge here. For example, consider an application that runs in various web browsers, both on mobile devices and on PCs. To test such an application effectively, we would need to simulate the entire production environment used by the end users. This calls for multiple build configurations and complex deployments, which would otherwise be manual. Continuous Integration systems need to take care of this: at the click of a button, various environments should be created, each reflecting an environment used by the customers, followed by deployment and testing.
Backward traceability
If something fails, there should be an ability to see when the failure happened, who caused it, and what caused it. This is called backward traceability. How do we achieve it? Let's see:
- By introducing automated notifications after each build. The moment a build completes, the Continuous Integration tool automatically responds to the development team with the report card.
- As seen in the Scrum methodology, the software is developed in pieces called backlogs. Whenever developers check in code, they need to apply a label to the checked-in code. This label can be the backlog number. Hence, when a build or a deployment fails, it can be traced back to the code that caused it using the backlog number.
- Labeling each build also helps in tracing back a failure.
Using a defect tracking tool
Defect tracking tools are a means to track and manage bugs, issues, tasks, and so on.
Earlier, projects mostly used Excel sheets to track their defects. However, as the magnitude of projects increased in terms of the number of test cycles and the number of developers, it became absolutely important to use a defect tracking tool. Some of the popular defect tracking tools are Atlassian JIRA and Bugzilla. The quality analysis market has seen the emergence of various bug tracking systems, or defect management tools, over the years. A defect tracking tool offers the following features:
- It allows you to raise or create defects and tasks, with various fields to define the defect or the task.
- It allows you to assign the defect to the team or individual responsible for the change.
- It tracks progress through the life cycle stages, that is, the workflow.
- It provides the ability to comment on a defect or a task, watch the progress of a defect, and so on.
- It provides metrics. For example, how many tickets were raised in a month? How much time was spent on resolving the issues? All these metrics are of significant importance to the business.
- It allows you to attach a defect to a particular release or build for better traceability.
The previously mentioned features are a must for a bug tracking system. There are many other features that a defect tracking tool may offer, such as voting, estimated time to resolve, and so on.
Summary
In this article, we learned how various DevOps tools go hand in hand to achieve Continuous Integration and, of course, help projects go agile. We can fairly conclude that Continuous Integration is an engineering practice where each chunk of code is immediately built and unit-tested locally, then integrated and again built and tested on the integration branch.
Resources for Article:
Further resources on this subject: Exploring Jenkins [article] Jenkins Continuous Integration [article] Maven and Jenkins Plugin [article]


Playing Tic-Tac-Toe against an AI

Packt
11 May 2016
30 min read
In this article by Ivo Gabe de Wolff, author of the book TypeScript Blueprints, we will build a game in which the computer will play well. The game is called Tic-Tac-Toe. The game is played by two players on a grid, usually three by three. The players try to place three of their symbols in a row (horizontal, vertical, or diagonal). The first player places crosses, the second player places circles. If the board is full and no one has three symbols in a row, it is a draw. (For more resources related to this topic, see here.) The game is usually played on a three by three grid and the target is to have three symbols in a row. To make the application more interesting, we will make the dimensions and the row length variable. We will not create a graphical interface for this application. We will only build the game mechanics and the artificial intelligence (AI). An AI is a player controlled by the computer. If implemented correctly, the computer should never lose on a standard three by three grid. When the computer plays against the computer, it will result in a draw. We will also write various unit tests for the application. We will build the game as a command line application. That means you can play the game in a terminal. You can interact with the game only with text input. It's player one's turn! Choose one out of these options: 1X|X|-+-+- |O|-+-+- | |   2X| |X-+-+- |O|-+-+- | |   3X| |-+-+-X|O|-+-+- | |   4X| |-+-+- |O|X-+-+- | |   5X| |-+-+- |O|-+-+-X| |   6X| |-+-+- |O|-+-+- |X|   7X| |-+-+- |O|-+-+- | |X
Creating the project structure
We will locate the source files in lib and the tests in lib/test. We use gulp to compile the project and AVA to run tests. We can install the dependencies of our project with NPM:
npm init -y
npm install ava gulp gulp-typescript --save-dev
In gulpfile.js, we configure gulp to compile our TypeScript files.
var gulp = require("gulp");
var ts = require("gulp-typescript");

var tsProject = ts.createProject("./lib/tsconfig.json");

gulp.task("default", function() {
  return tsProject.src()
    .pipe(ts(tsProject))
    .pipe(gulp.dest("dist"));
});
Configure TypeScript
We can download type definitions for NodeJS with NPM:
npm install @types/node --save-dev
We must exclude browser files in TypeScript. In lib/tsconfig.json, we add the configuration for TypeScript:
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs"
  }
}
For applications that run in the browser, you will probably want to target ES5, since ES6 is not supported in all browsers. However, this application will only be executed in NodeJS, so we do not have such limitations. You have to use NodeJS 6 or later for ES6 support.
Adding utility functions
Since we will work a lot with arrays, we can use some utility functions. First, we create a function that flattens a two dimensional array into a one dimensional array.
export function flatten<U>(array: U[][]) {
  return (<U[]>[]).concat(...array);
}
Next, we create a function that replaces a single element of an array with a specified value. We will use functional programming in this article again, so we must use immutable data structures. We can use map for this, since this function provides both the element and the index to the callback. With this index, we can determine whether that element should be replaced.
export function arrayModify<U>(array: U[], index: number, newValue: U) {
  return array.map((oldValue, currentIndex) => currentIndex === index ?
newValue : oldValue); } We also create a function that returns a random integer under a certain upper bound. export function randomInt(max: number) { return Math.floor(Math.random() * max); } We will use these functions in the next sessions. Creating the models In lib/model.ts, we will create the model for our game. The model should contain the game state. We start with the player. The game is played by two players. Each field of the grid contains the symbol of a player or no symbol. We will model the grid as a two dimensional array, where each field can contain a player. export type Grid = Player[][]; A player is either Player1, Player2 or no player. export enum Player { Player1 = 1, Player2 = -1, None = 0 } We have given these members values so we can easily get the opponent of a player. export function getOpponent(player: Player): Player { return -player; } We create a type to represent an index of the grid. Since the grid is two dimensional, such an index requires two values. export type Index = [number, number]; We can use this type to create two functions that get or update one field of the grid. We use functional programming in this article, so we will not modify the grid. Instead, we return a new grid with one field changed. export function get(grid: Grid, [rowIndex, columnIndex]: Index) { const row = grid[rowIndex]; if (!row) return undefined; return row[columnIndex]; } export function set(grid: Grid, [row, column]: Index, value: Player) { return arrayModify(grid, row, arrayModify(grid[row], column, value) ); } Showing the grid To show the game to the user, we must convert a grid to a string. First, we will create a function that converts a player to a string, then a function that uses the previous function to show a row, finally a function that uses these functions to show the complete grid. The string representation of a grid should have lines between the fields. We create these lines with standard characters (+, -, and |). This gives the following result: X|X|O-+-+- |O|-+-+-X| | To convert a player to the string, we must get his symbol. For Player1, that is a cross and for Player2, a circle. If a field of the grid contains no symbol, we return a space to keep the grid aligned. function showPlayer(player: Player) { switch (player) { case Player.Player1: return "X"; case Player.Player2: return "O"; default: return ""; } } We can use this function to the tokens of all fields in a row. We add a separator between these fields. function showRow(row: Player[]) { return row.map(showPlayer).reduce((previous, current) => previous + "|" + current); } Since we must do the same later on, but with a different separator, we create a small helper function that does this concatenation based on a separator. const concat = (separator: string) => (left: string, right: string) => left + separator + right; This function requires the separator and returns a function that can be passed to reduce. We can now use this function in showRow. function showRow(row: Player[]) { return row.map(showPlayer).reduce(concat("|")); } We can also use this helper function to show the entire grid. First we must compose the separator, which is almost the same as showing a single row. Next, we can show the grid with this separator. export function showGrid(grid: Grid) { const separator = "n" + grid[0].map(() =>"-").reduce(concat("+")) + "n"; return grid.map(showRow).reduce(concat(separator)); } Creating operations on the grid We will now create some functions that do operations on the grid. 
These functions check whether the board is full, whether someone has won, and what options a player has. We can check whether the board is full by looking at all fields. If no field exists that has no symbol on it, the board is full, as every field has a symbol. export function isFull(grid: Grid) { for (const row of grid) { for (const field of row) { if (field === Player.None) return false; } } return true; } To check whether a user has won, we must get a list of all horizontal, vertical and diagonal rows. For each row, we can check whether a row consists of a certain amount of the same symbols on a row. We store the grid as an array of the horizontal rows, so we can easily get those rows. We can also get the vertical rows relatively easily. function allRows(grid: Grid) { return [ ...grid, ...grid[0].map((field, index) => getVertical(index)), ... ];   function getVertical(index: number) { return grid.map(row => row[index]); } } Getting a diagonal row requires some more work. We create a helper function that will walk on the grid from a start point, in a certain direction. We distinguish two different kinds of diagonals: a diagonal that goes to the lower-right and a diagonal that goes to the lower-left. For a standard three by three game, only two diagonals exist. However, a larger grid may have more diagonals. If the grid is 5 by 5, and the users should get three in a row, ten diagonals with a length of at least three exist: 0, 0 to 4, 40, 1 to 3, 40, 2 to 2, 41, 0 to 4, 32, 0 to 4, 24, 0 to 0, 43, 0 to 0, 32, 0 to 0, 24, 1 to 1, 44, 2 to 2, 4 The diagonals that go toward the lower-right, start at the first column or at the first horizontal row. Other diagonals start at the last column or at the first horizontal row. In this function, we will just return all diagonals, even if they only have one element, since that is easy to implement. We implement this with a function that walks the grid to find the diagonal. That function requires a start position and a step function. The step function increments the position for a specific direction. function allRows(grid: Grid) { return [ ...grid, ...grid[0].map((field, index) => getVertical(index)), ...grid.map((row, index) => getDiagonal([index, 0], stepDownRight)), ...grid[0].slice(1).map((field, index) => getDiagonal([0, index + 1], stepDownRight)), ...grid.map((row, index) => getDiagonal([index, grid[0].length - 1], stepDownLeft)), ...grid[0].slice(1).map((field, index) => getDiagonal([0, index], stepDownLeft)) ];   function getVertical(index: number) { return grid.map(row => row[index]); }   function getDiagonal(start: Index, step: (index: Index) => Index) { const row: Player[] = []; for (let index = start; get(grid, index) !== undefined; index = step(index)) { row.push(get(grid, index)); } return row; } function stepDownRight([i, j]: Index): Index { return [i + 1, j + 1]; } function stepDownLeft([i, j]: Index): Index { return [i + 1, j - 1]; } function stepUpRight([i, j]: Index): Index { return [i - 1, j + 1]; } } To check whether a row has a certain amount of the same elements on a row, we will create a function with some nice looking functional programming. The function requires the array, the player, and the index at which the checking starts. That index will usually be zero, but during recursion we can set it to a different value. originalLength contains the original length that a sequence should have. The last parameter, length, will have the same value in most cases, but in recursion we will change the value. 
We start with some base cases. Every row contains a sequence of zero symbols, so we can always return true in such a case. function isWinningRow(row: Player[], player: Player, index: number, originalLength: number, length: number): boolean { if (length === 0) { return true; } If the row does not contain enough elements to form a sequence, the row will not have such a sequence and we can return false. if (index + length > row.length) { return false; } For other cases, we use recursion. If the current element contains a symbol of the provided player, this row forms a sequence if the next length—1 fields contain the same symbol. if (row[index] === player) { return isWinningRow(row, player, index + 1, originalLength, length - 1); } Otherwise, the row should contain a sequence of the original length in some other position. return isWinningRow(row, player, index + 1, originalLength, originalLength); } If the grid is large enough, a row could contain a long enough sequence after a sequence that was too short. For instance, XXOXXX contains a sequence of length three. This function handles these rows correctly with the parameters originalLength and length. Finally, we must create a function that returns all possible sets that a player can do. To implement this function, we must first find all indices. We filter these indices to indices that reference an empty field. For each of these indices, we change the value of the grid into the specified player. This results in a list of options for the player. export function getOptions(grid: Grid, player: Player) { const rowIndices = grid.map((row, index) => index); const columnIndices = grid[0].map((column, index) => index);   const allFields = flatten(rowIndices.map( row => columnIndices.map(column =><Index> [row, column]) ));   return allFields .filter(index => get(grid, index) === Player.None) .map(index => set(grid, index, player)); } The AI will use this to choose the best option and a human player will get a menu with these options. Creating the grid Before the game can be started, we must create an empty grid. We will write a function that creates an empty grid with the specified size. export function createGrid(width: number, height: number) { const grid: Grid = []; for (let i = 0; i < height; i++) { grid[i] = []; for (let j = 0; j < width; j++) { grid[i][j] = Player.None; } } return grid; } In the next section, we will add some tests for the functions that we have written. These functions work on the grid, so it will be useful to have a function that can parse a grid based on a string. We will separate the rows of a grid with a semicolon. Each row contains tokens for each field. For instance, "XXO; O ;X  " results in this grid: X|X|O-+-+- |O|-+-+-X| | We can implement this by splitting the string into an array of lines. For each line, we split the line into an array of characters. We map these characters to a Player value. export function parseGrid(input: string) { const lines = input.split(";"); return lines.map(parseLine);   function parseLine(line: string) { return line.split("").map(parsePlayer); } function parsePlayer(character: string) { switch (character) { case "X": return Player.Player1; case "O": return Player.Player2; default: return Player.None; } } } In the next section we will use this function to write some tests. Adding tests We will use AVA to write tests for our application. Since the functions do not have side effects, we can easily test them. In lib/test/winner.ts, we test the findWinner function. 
First, we check whether the function recognizes the winner in some simple cases. import test from "ava"; import { Player, parseGrid, findWinner } from "../model";   test("player winner", t => { t.is(findWinner(parseGrid("   ;XXX;   "), 3), Player.Player1); t.is(findWinner(parseGrid("   ;OOO;   "), 3), Player.Player2); t.is(findWinner(parseGrid("   ;   ;   "), 3), Player.None); }); We can also test all possible three-in-a-row positions in the three by three grid. With this test, we can find out whether horizontal, vertical, and diagonal rows are checked correctly. test("3x3 winner", t => { const grids = [     "XXX;   ;   ",     "   ;XXX;   ",     "   ;   ;XXX",     "X  ;X  ;X  ",     " X ; X ; X ",     "  X;  X;  X",     "X  ; X ;  X",     "  X; X ;X  " ]; for (const grid of grids) { t.is(findWinner(parseGrid(grid), 3), Player.Player1); } });   We must also test that the function does not claim that someone won too often. In the next test, we validate that the function does not return a winner for grids that do not have a winner. test("3x3 no winner", t => { const grids = [     "XXO;OXX;XOO",     "   ;   ;   ",     "XXO;   ;OOX",     "X  ;X  ; X " ]; for (const grid of grids) { t.is(findWinner(parseGrid(grid), 3), Player.None); } }); Since the game also supports other dimensions, we should check these too. We check that all diagonals of a four by three grid are checked correctly, where the length of a sequence should be two. test("4x3 winner", t => { const grids = [     "X   ; X  ;    ",     " X  ;  X ;    ",     "  X ;   X;    ",     "    ;X   ; X  ",     "  X ;   X;    ",     " X  ;  X ;    ",     "X   ; X  ;    ",     "    ;   X;  X " ]; for (const grid of grids) { t.is(findWinner(parseGrid(grid), 2), Player.Player1); } }); You can of course add more test grids yourself. Add tests before you fix a bug. These tests should show the wrong behavior related to the bug. When you have fixed the bug, these tests should pass. This prevents the bug returning in a future version. Random testing Instead of running the test on some predefined set of test cases, you can also write tests that run on random data. You cannot compare the output of a function directly with an expected value, but you can check some properties of it. For instance, getOptions should return an empty list if and only if the board is full. We can use this property to test getOptions and isFull. First, we create a function that randomly chooses a player. To higher the chance of a full grid, we add some extra weight on the players compared to an empty field. import test from "ava"; import { createGrid, Player, isFull, getOptions } from "../model"; import { randomInt } from "../utils";   function randomPlayer() { switch (randomInt(4)) { case 0: case 1: return Player.Player1; case 2: case 3: return Player.Player2; default: return Player.None; } } We create 10000 random grids with this function. The dimensions and the fields are chosen randomly. test("get-options", t => { for (let i = 0; i < 10000; i++) { const grid = createGrid(randomInt(10) + 1, randomInt(10) + 1) .map(row => row.map(randomPlayer)); Next, we check whether the property that we describe holds for this grid. const options = getOptions(grid, Player.Player1) t.is(isFull(grid), options.length === 0); We also check that the function does not give the same option twice. for (let i = 1; i < options.length; i++) { for (let j = 0; j < i; j++) { t.notSame(options[i], options[j]); } } } }); Depending on how critical a function is, you can add more tests. 
In this case, you could check that only one field is modified in an option or that only an empty field can be modified in an option. Now you can run the tests using gulp && ava dist/test.You can add this to your package.json file. In the scripts section, you can add commands that you want to run. With npm run xxx, you can run task xxx. npm testthat was added as shorthand for npm run test, since the test command is often used. { "name": "article-7", "version": "1.0.0", "scripts": { "test": "gulp && ava dist/test"   }, ... Implementing the AI using Minimax We create an AI based on Minimax. The computer cannot know what his opponent will do in the next steps, but he can check what he can do in the worst-case. The minimum outcome of these worst cases is maximized by this algorithm. This behavior has given Minimax its name. To learn how Minimax works, we will take a look at the value or score of a grid. If the game is finished, we can easily define its value: if you won, the value is 1; if you lost, -1 and if it is a draw, 0. Thus, for player 1 the next grid has value 1 and for player 2 the value is -1. X|X|X-+-+-O|O|-+-+-X|O| We will also define the value of a grid for a game that has not been finished. We take a look at the following grid: X| |X-+-+-O|O|-+-+-O|X| It is player 1's turn. He can place his stone on the top row, and he would win, resulting in a value of 1. He can also choose to lay his stone on the second row. Then the game will result in a draft, if player 2 is not dumb, with score 0. If he chooses to place the stone on the last row, player 2 can win resulting in -1. We assume that player 1 is smart and that he will go for the first option. Thus, we could say that the value of this unfinished game is 1. We will now formalize this. In the previous paragraph, we have summed up all options for the player. For each option, we have calculated the minimum value that the player could get if he would choose that option. From these options, we have chosen the maximum value. Minimax chooses the option with the highest value of all options. Implementing Minimax in TypeScript As you can see, the definition of Minimax looks like you can implement it with recursion. We create a function that returns both the best option and the value of the game. A function can only return a single value, but multiple values can be combined into a single value in a tuple, which is an array with these values. First, we handle the base cases. If the game is finished, the player has no options and the value can be calculated directly. import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";   export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] { const winner = findWinner(grid, rowLength); if (winner === player) { return [undefined, 1]; } else if (winner !== Player.None) { return [undefined, -1]; } else if (isFull(grid)) { return [undefined, 0]; Otherwise, we list all options. For all options, we calculate the value. The value of an option is the same as the opposite of the value of the option for the opponent. Finally, we choose the option with the best value. } else { let options = getOptions(grid, player); const opponent = getOpponent(player); return options.map<[Grid, number]>( option => [option, -(minimax(option, rowLength, opponent)[1])] ).reduce( (previous, current) => previous[1] < current[1] ? current : previous ); } } When you use tuple types, you should explicitly add a type definition for it. 
Since tuples are arrays too, an array type is automatically inferred. When you add the tuple as return type, expressions after the return keyword will be inferred as these tuples. For options.map, you can mention the union type as a type argument or by specifying it in the callback function (options.map((option): [Grid, number] => ...);). You can easily see that such an AI can also be used for other kinds of games. Actually, the minimax function has no direct reference to Tic-Tac-Toe, only findWinner, isFull and getOptions are related to Tic-Tac-Toe. Optimizing the algorithm The Minimax algorithm can be slow. Choosing the first set, especially, takes a long time since the algorithm tries all ways of playing the game. We will use two techniques to speed up the algorithm. First, we can use the symmetry of the game. When the board is empty it does not matter whether you place a stone in the upper-left corner or the lower-right corner. Rotating the grid around the center 180 degrees gives an equivalent board. Thus, we only need to take a look at half the options when the board is empty. Secondly, we can stop searching for options if we found an option with value 1. Such an option is already the best thing to do. Implementing these techniques gives the following function: import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";   export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] { const winner = findWinner(grid, rowLength); if (winner === player) { return [undefined, 1]; } else if (winner !== Player.None) { return [undefined, -1]; } else if (isFull(grid)) { return [undefined, 0]; } else { let options = getOptions(grid, player); const gridSize = grid.length * grid[0].length; if (options.length === gridSize) { options = options.slice(0, Math.ceil(gridSize / 2)); } const opponent = getOpponent(player); let best: [Grid, number]; for (const option of options) { const current: [Grid, number] = [option, -(minimax(option, rowLength, opponent)[1])]; if (current[1] === 1) { return current; } else if (best === undefined || current[1] > best[1]) { best = current; } } return best; } } This will speed up the AI. In the next sections we will implement the interface for the game and we will write some tests for the AI. Creating the interface NodeJS can be used to create servers. You can also create tools with a command line interface (CLI). For instance, gulp, NPM and typings are command line interfaces built with NodeJS. We will use NodeJS to create the interface for our game. Handling interaction The interaction from the user can only happen by text input in the terminal. When the game starts, it will ask the user some questions about the configuration: width, height, row length for a sequence, and the player(s) that are played by the computer. The highlighted lines are the input of the user. Tic-Tac-Toe Width3 Height3 Row length2 Who controls player 1?1You 2Computer1 Who controls player 2?1You 2Computer1 During the game, the game asks the user which of the possible options he wants to do. All possible steps are shown on the screen, with an index. The user can type the index of the option he wants. X| |-+-+-O|O|-+-+- |X|   It's player one's turn! Choose one out of these options: 1X|X|-+-+-O|O|-+-+- |X|   2X| |X-+-+-O|O|-+-+- |X|   3X| |-+-+-O|O|X-+-+- |X|   4X| |-+-+-O|O|-+-+-X|X|   5X| |-+-+-O|O|-+-+- |X|X A NodeJS application has three standard streams to interact with the user. Standard input, stdin, is used to receive input from the user. 
Standard output, stdout, is used to show text to the user. Standard error, stderr, is used to show error messages to the user. You can access these streams with process.stdin, process.stdout and process.stderr. You have probably already used console.log to write text to the console. This function writes the text to stdout. We will use console.log to write text to stdout and we will not use stderr. We will create a helper function that reads a line from stdin. This is an asynchronous task, the function starts listening and resolves when the user hits enter. In lib/cli.ts, we start by importing the types and function that we have written. import { Grid, Player, getOptions, getOpponent, showGrid, findWinner, isFull, createGrid } from "./model"; import { minimax } from "./ai"; We can listen to input from stdin using the data event. The process sends either the string or a buffer, an efficient way to store binary data in memory. With once, the callback will only be fired once. If you want to listen to all occurrences of the event, you can use on. function readLine() { return new Promise<string>(resolve => { process.stdin.once("data", (data: string | Buffer) => resolve(data.toString())); }); } We can easily use readLinein async functions. For instance, we can now create a function that reads, parses and validates a line. We can use this to read the input of the user, parse it to a number, and finally check that the number is within a certain range. This function will return the value if it passes the validator. Otherwise it shows a message and retries. async function readAndValidate<U>(message: string, parse: (data: string) => U, validate: (value: U) => boolean): Promise<U> { const data = await readLine(); const value = parse(data); if (validate(value)) { return value; } else { console.log(message); return readAndValidate(message, parse, validate); } } We can use this function to show a question where the user has various options. The user should type the index of his answer. This function validates that the index is within bounds. We will show indices starting at 1 to the user, so we must carefully handle these. async function choose(question: string, options: string[]) { console.log(question); for (let i = 0; i < options.length; i++) { console.log((i + 1) + "t" + options[i].replace(/n/g, "nt")); console.log(); } return await readAndValidate( `Enter a number between 1 and ${ options.length }`, parseInt, index => index >= 1 && index <= options.length ) - 1; } Creating players A player could either be a human or the computer. We create a type that can contain both kinds of players. type PlayerController = (grid: Grid) => Grid | Promise<Grid>; Next we create a function that creates such a player. For a user, we must first know whether he is the first or the second player. Then we return an async function that asks the player which step he wants to do. const getUserPlayer = (player: Player) => async (grid: Grid) => { const options = getOptions(grid, player); const index = await choose("Choose one out of these options:", options.map(showGrid)); return options[index]; }; For the AI player, we must know the player index and the length of a sequence. We use these variables and the grid of the game to run the Minimax algorithm. const getAIPlayer = (player: Player, rowLength: number) => (grid: Grid) => minimax(grid, rowLength, player)[0]; Now we can create a function that asks the player whether a player should be played by the user or the computer. 
async function getPlayer(index: number, player: Player, rowLength: number): Promise<PlayerController> { switch (await choose(`Who controls player ${ index }?`, ["You", "Computer"])) { case 0: return getUserPlayer(player); default: return getAIPlayer(player, rowLength); } } We combine these functions in a function that handles the whole game. First, we must ask the user to provide the width, height and length of a sequence. export async function game() { console.log("Tic-Tac-Toe"); console.log(); console.log("Width"); const width = await readAndValidate("Enter an integer", parseInt, isFinite); console.log("Height"); const height = await readAndValidate("Enter an integer", parseInt, isFinite); console.log("Row length"); const rowLength = await readAndValidate("Enter an integer", parseInt, isFinite); We ask the user which players should be controlled by the computer. const player1 = await getPlayer(1, Player.Player1, rowLength); const player2 = await getPlayer(2, Player.Player2, rowLength); The user can now play the game. We do not use a loop, but we use recursion to give the player their turns. return play(createGrid(width, height), Player.Player1);   async function play(grid: Grid, player: Player): Promise<[Grid, Player]> { In every step, we show the grid. If the game is finished, we show which player has won. console.log(); console.log(showGrid(grid)); console.log();   const winner = findWinner(grid, rowLength); if (winner === Player.Player1) { console.log("Player 1 has won!"); return <[Grid, Player]> [grid, winner]; } else if (winner === Player.Player2) { console.log("Player 2 has won!"); return <[Grid, Player]>[grid, winner]; } else if (isFull(grid)) { console.log("It's a draw!"); return <[Grid, Player]>[grid, Player.None]; } If the game is not finished, we ask the current player or the computer which set he wants to do. console.log(`It's player ${ player === Player.Player1 ? "one's" : "two's" } turn!`);   const current = player === Player.Player1 ? player1 : player2; return play(await current(grid), getOpponent(player)); } } In lib/index.ts, we can start the game. When the game is finished, we must manually exit the process. import { game } from "./cli";   game().then(() => process.exit()); We can compile and run this in a terminal: gulp && node --harmony_destructuring dist At the time of writing, NodeJS requires the --harmony_destructuring flag to allow destructuring, like [x, y] = z. In future versions of NodeJS, this flag will be removed and you can run it without it. Testing the AI We will add some tests to check that the AI works properly. For a standard three by three game, the AI should never lose. That means when an AI plays against an AI, it should result in a draw. We can add a test for this. In lib/test/ai.ts, we import AVA and our own definitions. import test from "ava"; import { createGrid, Grid, findWinner, isFull, getOptions, Player } from "../model"; import { minimax } from "../ai"; import { randomInt } from "../utils"; We create a function that simulates the whole gameplay. type PlayerController = (grid: Grid) => Grid; function run(grid: Grid, a: PlayerController, b: PlayerController): Player { const winner = findWinner(grid, 3); if (winner !== Player.None) return winner; if (isFull(grid)) return Player.None; return run(a(grid), b, a); } We write a function that executes a step for the AI. 
const aiPlayer = (player: Player) => (grid: Grid) => minimax(grid, 3, player)[0]; Now we create the test that validates that a game where the AI plays against the AI results in a draw. test("AI vs AI", t => { const result = run(createGrid(3, 3), aiPlayer(Player.Player1), aiPlayer(Player.Player2)) t.is(result, Player.None); }); Testing with a random player We can also test what happens when the AI plays against a random player or when a player plays against the AI. The AI should win or it should result in a draw. We run these multiple times; what you should always do when you use randomization in your test. We create a function that creates the random player. const randomPlayer = (player: Player) => (grid: Grid) => { const options = getOptions(grid, player); return options[randomInt(options.length)]; }; We write the two tests that both run 20 games with a random player and an AI. test("random vs AI", t => { for (let i = 0; i < 20; i++) { const result = run(createGrid(3, 3), randomPlayer(Player.Player1), aiPlayer(Player.Player2)) t.not(result, Player.Player1); } });   test("AI vs random", t => { for (let i = 0; i < 20; i++) { const result = run(createGrid(3, 3), aiPlayer(Player.Player1), randomPlayer(Player.Player2)) t.not(result, Player.Player2); } }); We have written different kinds of tests: Tests that check the exact results of single function Tests that check a certain property of results of a function Tests that check a big component Always start writing tests for small components. If the AI tests should fail, that could be caused by a mistake in hasWinner, isFull or getOptions, so it is hard to find the location of the error. Only testing small components is not enough; bigger tests, such as the AI tests, are closer to what the user will do. Bigger tests are harder to create, especially when you want to test the user interface. You must also not forget that tests cannot guarantee that your code runs correctly, it just guarantees that your test cases work correctly. Summary In this article, we have written an AI for Tic-Tac-Toe. With the command line interface, you can play this game against the AI or another human. You can also see how the AI plays against the AI. We have written various tests for the application. You have learned how Minimax works for turn-based games. You can apply this to other turn-based games as well. If you want to know more on strategies for such games, you can take a look at game theory, the mathematical study of these games. Resources for Article: Further resources on this subject: Basic Website using Node.js and MySQL database [article] Data Science with R [article] Web Typography [article]

Make Your Presentation with IPython

Oli Huggins
11 May 2016
5 min read
The IPython ecosystem is very powerful, integrating many functionalities, and it has become very popular. Some people use it as a full IDE environment, for example, the Rodeo, which was coded based on IPython. Some use IPython as a full platform, including coding presentations and even showing them. IPython is able to combine codes, LaTeX, markdown, and HTML excerpts; display videos; and include graphics and interactive JavaScript objects. Here we will focus on the method of converting IPython notebooks to single HTML files, which can be displayed in any of the several modern browsers (for example, Chrome and Firefox). IPython notebooks integrate an amazing tool called Reveal.js. Using nbconvert (which brings the possibility to export .ipynb to other formats such as markdown, latex, pdf, and also slides), the slides saved with IPython generate a Reveal.js-powered HTML slide show. Requirements Generally a single pip install should do the trick: pip install ipython['all'] If you do not have the IPython notebook installed, you can also upgrade it: pip install --upgrade ipython['all'] It is also necessary to have a copy of Reveal.js inside the same directory where you will save the converted IPython notebook, considering that you will present your slides without a running IPython session. Using Notebooks Start running your IPython notebook with the ipython notebook command in your console, and it will display a session of IPython in your default browser. To start coding your presentation, in the top-right corner, you will find a cascade menu called new. Just select Notebook and it will display a new file, which can be saved with your preferred filename File > Rename.... Now, you have to select what type of content you will create. For this usage, you must select View > Cell toolbar > Slideshow or simply click on the CellToolbar button and select Slideshow. This will display in the top-right corner of each cell a Slide Type menu, which can be chosen between: Slide: Creates a new slide Sub-Slide: Creates a sub-slide, which will be displayed under the main slide Fragment: This will not change the slide but will insert each cell as a fragment Skip: It will not be displayed Notes: Yes, with Reveal.js you can write your notes and use them when presenting The content of each cell may be changed in the upper menu, where the default is code, but you can select between Markdown content, Raw NBConvert, and Heading. You have already started your presentation process, where you can insert codes, figures, plots, videos, and much more. You can also include math equations using LaTeX coding, by simply changing the cell type to Markdown and type, for example: $$c = sqrt{a^2 + b^2}$$ To include images, you must code this: from IPython.display import Image And choose skip in the Slide Type menu. After this, you can select your disk image. Image('my_figure.png') Changing the Behavior and Customizing Your Presentation By default, IPython will display your entry cell and its output, but you can also change this behavior. Include a cell at the top of your notebook. Change the cell type to Raw NBConvert, including the following CSS code, and change Slide Type to Skip: <style type="text/css"> .input_prompt, .input_area, .output_prompt { display:none !important; } </style> You can customize whatever you want with CSS codes, changing the default parameters set with Reveal.js. 
For example, to change the header font to Times, you just have to include the following snippet in the same first cell: <style type="text/css"> .reveal h1, .reveal h2 { font-family:times } </style> All of your CSS code may be included in a separate CSS file called custom.css and saved in the same directory where your HTML presentation will stand. Generating Your Presentation After all of your efforts in creating your presentation with Notebook, it is time to generate your HTML file. Using nbconvert, it is possible to set a lot of parameters (see ipython nbconvert --help to see them all). ipython nbconvert presentation.ipynb --to slides This command will generate a file called presentation.slides.html, which can be displayed with a browser. If you want to directly run your IPython notebook, you may use this: ipython nbconvert presentation.ipynb --to slides --post serve An interesting note about presenting your .html file is that you can use the Reveal.js shortcuts; for example the key S will open another browser window displaying the elapsed time and a preview of the current slide, next slide, and the Notes associated with that specific slide. The key B turns your screen black, in case you want to avoid distracting people. If you press Esc or O, you will escape from full-screen to toggle overview. Much more can be done by using Reveal.js together with IPython notebooks. For example, it is actually possible to live run code inside your presentation using the extension RISE, which simulates live IPython. Conclusion The usage of IPython as a slide show platform is a way to show code, results, and other formatted math text very directly. Otherwise, if you want a fancier presentation, maybe this approach will push you to further customize CSS code (it is also possible to use external themes to generate HTML files with nbconvert appending the argument—a theme that can be found on the Web). Have a great time using IPython while coding your presentation and sharing your .ipynb files for the public with IPython Slides Viewer. About the author Arnaldo D'Amaral Pereira Granja Russo has a PhD in Biological Oceanography and is a researcher at the Instituto Ambiental Boto Flipper (institutobotoflipper.org). While not teaching or programming he is surfing, climbing or paragliding. You can find him at www.ciclotux.org or on Github here.


Testing Your Application with cljs.test

Packt
11 May 2016
13 min read
In this article written by David Jarvis, Rafik Naccache, and Allen Rohner, authors of the book Learning ClojureScript, we'll take a look at how to configure our ClojureScript application or library for testing. As usual, we'll start by creating a new project for us to play around with: $ lein new figwheel testing (For more resources related to this topic, see here.) We'll be playing around in a test directory. Most JVM Clojure projects will have one already, but since the default Figwheel template doesn't include a test directory, let's make one first (following the same convention used with source directories, that is, instead of src/$PROJECT_NAME we'll create test/$PROJECT_NAME): $ mkdir –p test/testing We'll now want to make sure that Figwheel knows that it has to watch the test directory for file modifications. To do that, we will edit the the dev build in our project.clj project's :cljsbuild map so that it's :source-paths vector includes both src and test. Your new dev build configuration should look like the following: {:id "dev" :source-paths ["src" "test"] ;; If no code is to be run, set :figwheel true for continued automagical reloading :figwheel {:on-jsload "testing.core/on-js-reload"} :compiler {:main testing.core :asset-path "js/compiled/out" :output-to "resources/public/js/compiled/testing.js" :output-dir "resources/public/js/compiled/out" :source-map-timestamp true}} Next, we'll get the old Figwheel REPL going so that we can have our ever familiar hot reloading: $ cd testing $ rlwrap lein figwheel Don't forget to navigate a browser window to http://localhost:3449/ to get the browser REPL to connect. Now, let's create a new core_test.cljs file in the test/testing directory. By convention, most libraries and applications in Clojure and ClojureScript have test files that correspond to source files with the suffix _test. In this project, this means that test/testing/core_test.cljs is intended to contain the tests for src/testing/core.cljs. Let's get started by just running tests on a single file. Inside core_test.cljs, let's add the following code: (ns testing.core-test (:require [cljs.test :refer-macros [deftest is]])) (deftest i-should-fail (is (= 1 0))) (deftest i-should-succeed (is (= 1 1))) This code first requires two of the most important cljs.test macros, and then gives us two simple examples of what a failed test and a successful test should look like. At this point, we can run our tests from the Figwheel REPL: cljs.user=> (require 'testing.core-test) ;; => nil cljs.user=> (cljs.test/run-tests 'testing.core-test) Testing testing.core-test FAIL in (i-should-fail) (cljs/test.js?zx=icyx7aqatbda:430:14) expected: (= 1 0) actual: (not (= 1 0)) Ran 2 tests containing 2 assertions. 1 failures, 0 errors. ;; => nil At this point, what we've got is tolerable, but it's not really practical in terms of being able to test a larger application. We don't want to have to test our application in the REPL and pass in our test namespaces one by one. The current idiomatic solution for this in ClojureScript is to write a separate test runner that is responsible for important executions and then run all of your tests. Let's take a look at what this looks like. Let's start by creating another test namespace. 
Let's call this one app_test.cljs, and we'll put the following in it: (ns testing.app-test (:require [cljs.test :refer-macros [deftest is]])) (deftest another-successful-test (is (= 4 (count "test")))) We will not do anything remarkable here; it's just another test namespace with a single test that should pass by itself. Let's quickly make sure that's the case at the REPL: cljs.user=> (require 'testing.app-test) nil cljs.user=> (cljs.test/run-tests 'testing.app-test) Testing testing.app-test Ran 1 tests containing 1 assertions. 0 failures, 0 errors. ;; => nil Perfect. Now, let's write a test runner. Let's open a new file that we'll simply call test_runner.cljs, and let's include the following: (ns testing.test-runner (:require [cljs.test :refer-macros [run-tests]] [testing.app-test] [testing.core-test])) ;; This isn't strictly necessary, but is a good idea depending ;; upon your application's ultimate runtime engine. (enable-console-print!) (defn run-all-tests [] (run-tests 'testing.app-test 'testing.core-test)) Again, nothing surprising. We're just making a single function for us that runs all of our tests. This is handy for us at the REPL: cljs.user=> (testing.test-runner/run-all-tests) Testing testing.app-test Testing testing.core-test FAIL in (i-should-fail) (cljs/test.js?zx=icyx7aqatbda:430:14) expected: (= 1 0) actual: (not (= 1 0)) Ran 3 tests containing 3 assertions. 1 failures, 0 errors. ;; => nil Ultimately, however, we want something we can run at the command line so that we can use it in a continuous integration environment. There are a number of ways we can go about configuring this directly, but if we're clever, we can let someone else do the heavy lifting for us. Enter doo, the handy ClojureScript testing plugin for Leiningen. Using doo for easier testing configuration doo is a library and Leiningen plugin for running cljs.test in many different JavaScript environments. It makes it easy to test your ClojureScript regardless of whether you're writing for the browser or for the server, and it also includes file watching capabilities such as Figwheel so that you can automatically rerun tests on file changes. The doo project page can be found at https://github.com/bensu/doo. To configure our project to use doo, first we need to add it to the list of plugins in our project.clj file. Modify the :plugins key so that it looks like the following: :plugins [[lein-figwheel "0.5.2"] [lein-doo "0.1.6"] [lein-cljsbuild "1.1.3" :exclusions [[org.clojure/clojure]]]] Next, we will add a new cljsbuild build configuration for our test runner. Add the following build map after the dev build map on which we've been working with until now: {:id "test" :source-paths ["src" "test"] :compiler {:main testing.test-runner :output-to "resources/public/js/compiled/testing_test.js" :optimizations :none}} This configuration tells Cljsbuild to use both our src and test directories, just like our dev profile. It adds some different configuration elements to the compiler options, however. First, we're not using testing.core as our main namespace anymore—instead, we'll use our test runner's namespace, testing.test-runner. We will also change the output JavaScript file to a different location from our compiled application code. Lastly, we will make sure that we pass in :optimizations :none so that the compiler runs quickly and doesn't have to do any magic to look things up. 
Note that our currently running Figwheel process won't know about the fact that we've added lein-doo to our list of plugins or that we've added a new build configuration. If you want to make Figwheel aware of doo in a way that'll allow them to play nicely together, you should also add doo as a dependency to your project. Once you've done that, exit the Figwheel process and restart it after you've saved the changes to project.clj. Lastly, we need to modify our test runner namespace so that it's compatible with doo. To do this, open test_runner.cljs and change it to the following: (ns testing.test-runner (:require [doo.runner :refer-macros [doo-tests]] [testing.app-test] [testing.core-test])) ;; This isn't strictly necessary, but is a good idea depending ;; upon your application's ultimate runtime engine. (enable-console-print!) (doo-tests 'testing.app-test 'testing.core-test) This shouldn't look too different from our original test runner—we're just importing from doo.runner rather than cljs.test and using doo-tests instead of a custom runner function. The doo-tests runner works very similarly to cljs.test/run-tests, but it places hooks around the tests to know when to start them and finish them. We're also putting this at the top-level of our namespace rather than wrapping it in a particular function. The last thing we're going to need to do is to install a JavaScript runtime that we can use to execute our tests with. Up until now, we've been using the browser via Figwheel, but ideally, we want to be able to run our tests in a headless environment as well. For this purpose. we recommend installing PhantomJS (though other execution environments are also fine). If you're on OS X and have Homebrew installed (http://www.brew.sh), installing PhantomJS is as simple as typing brew install phantomjs. If you're not on OS X or don't have Homebrew, you can find instructions on how to install PhantomJS on the project's website at http://phantomjs.org/. The key thing is that the following should work: $ phantomjs -v 2.0.0 Once you've got PhantomJS installed, you can now invoke your test runner from the command line with the following: $ lein doo phantom test once ;; ====================================================================== ;; Testing with Phantom: Testing testing.app-test Testing testing.core-test FAIL in (i-should-fail) (:) expected: (= 1 0) actual: (not (= 1 0)) Ran 3 tests containing 3 assertions. 1 failures, 0 errors. Subprocess failed Let's break down this command. The first part, lein doo, just tells Leiningen to invoke the doo plugin. Next, we have phantom, which tells doo to use PhantomJS as its running environment. The doo plugin supports a number of other environments, including Chrome, Firefox, Internet Explorer, Safari, Opera, SlimerJS, NodeJS, Rhino, and Nashorn. Be aware that if you're interested in running doo on one of these other environments, you may have to configure and install additional software. For instance, if you want to run tests on Chrome, you'll need to install Karma as well as the appropriate Karma npm modules to enable Chrome interaction. Next we have test, which refers to the cljsbuild build ID we set up earlier. Lastly, we have once, which tells doo to just run tests and not to set up a filesystem watcher. If, instead, we wanted doo to watch the filesystem and rerun tests on any changes, we would just use lein doo phantom test. Testing fixtures The cljs.test project has support for adding fixtures to your tests that can run before and after your tests. 
Test fixtures are useful for establishing isolated states between tests—for instance, you can use tests to set up a specific database state before each test and to tear it down afterward. You can add them to your ClojureScript tests by declaring them with the use-fixtures macro within the testing namespace you want fixtures applied to. Let's see what this looks like in practice by changing one of our existing tests and adding some fixtures to it. Modify app-test.cljs to the following: (ns testing.app-test (:require [cljs.test :refer-macros [deftest is use-fixtures]])) ;; Run these fixtures for each test. ;; We could also use :once instead of :each in order to run ;; fixtures once for the entire namespace instead of once for ;; each individual test. (use-fixtures :each {:before (fn [] (println "Setting up tests...")) :after (fn [] (println "Tearing down tests..."))}) (deftest another-successful-test ;; Give us an idea of when this test actually executes. (println "Running a test...") (is (= 4 (count "test")))) Here, we've added a call to use-fixtures that prints to the console before and after running the test, and we've added a println call to the test itself so that we know when it executes. Now when we run this test, we get the following: $ lein doo phantom test once ;; ====================================================================== ;; Testing with Phantom: Testing testing.app-test Setting up tests... Running a test... Tearing down tests... Testing testing.core-test FAIL in (i-should-fail) (:) expected: (= 1 0) actual: (not (= 1 0)) Ran 3 tests containing 3 assertions. 1 failures, 0 errors. Subprocess failed Note that our fixtures get called in the order we expect them to. Asynchronous testing Due to the fact that client-side code is frequently asynchronous and JavaScript is single threaded, we need to have a way to support asynchronous tests. To do this, we can use the async macro from cljs.test. Let's take a look at an example using an asynchronous HTTP GET request. First, let's modify our project.clj file to add cljs-ajax to our dependencies. Our dependencies project key should now look something like this: :dependencies [[org.clojure/clojure "1.8.0"] [org.clojure/clojurescript "1.7.228"] [cljs-ajax "0.5.4"] [org.clojure/core.async "0.2.374" :exclusions [org.clojure/tools.reader]]] Next, let's create a new async_test.cljs file in our test.testing directory. Inside it, we will add the following code: (ns testing.async-test (:require [ajax.core :refer [GET]] [cljs.test :refer-macros [deftest is async]])) (deftest test-async (GET "http://www.google.com" ;; will always fail from PhantomJS because ;; `Access-Control-Allow-Origin` won't allow ;; our headless browser to make requests to Google. {:error-handler (fn [res] (is (= (:status-text res) "Request failed.")) (println "Test finished!"))})) Note that we're not using async in our test at the moment. Let's try running this test with doo (don't forget that you have to add testing.async-test to test_runner.cljs!): $ lein doo phantom test once ... Testing testing.async-test ... Ran 4 tests containing 3 assertions. 1 failures, 0 errors. Subprocess failed Now, our test here passes, but note that the println async code never fires, and our additional assertion doesn't get called (looking back at our previous examples, since we've added a new is assertion we should expect to see four assertions in the final summary)! 
If we actually want our test to validate the error-handler callback within the context of the test, we need to wrap it in an async block. Doing so gives us a test that looks like the following:

(deftest test-async
  (async done
    (GET "http://www.google.com"
         ;; will always fail from PhantomJS because
         ;; `Access-Control-Allow-Origin` won't allow
         ;; our headless browser to make requests to Google.
         {:error-handler
          (fn [res]
            (is (= (:status-text res) "Request failed."))
            (println "Test finished!")
            (done))})))

Now, let's try to run our tests again:

$ lein doo phantom test once

...

Testing testing.async-test
Test finished!

...

Ran 4 tests containing 4 assertions.
1 failures, 0 errors.
Subprocess failed

Awesome! This time we see the printed statement from our callback, and we can see that cljs.test properly ran all four of our assertions.

Asynchronous fixtures

One final "gotcha" on testing: the fixtures we talked about earlier in this article do not handle asynchronous code automatically. This means that if you have a :before fixture that executes asynchronous logic, your tests can begin running before your fixture has completed! To get around this, all you need to do is wrap your :before fixture in an async block, just like with asynchronous tests. Consider the following, for instance:

(use-fixtures :once
  {:before #(async done ... (done))
   :after #(do ...)})

Summary

This concludes our section on cljs.test. Testing, whether in ClojureScript or any other language, is a critical software engineering practice that ensures your application behaves the way you expect and protects you and your fellow developers from accidentally introducing bugs. With cljs.test and doo, you have the power and flexibility to test your ClojureScript application in multiple browsers and JavaScript environments and to integrate your tests into a larger continuous testing framework.

Resources for Article:

Further resources on this subject:
  Clojure for Domain-specific Languages - Design Concepts with Clojure [article]
  Visualizing my Social Graph with d3.js [article]
  Improving Performance with Parallel Programming [article]

Exploring Performance Issues in Node.js/Express Applications

Packt
11 May 2016
16 min read
Node.js is an exciting platform for developing web applications, application servers, all sorts of network servers and clients, and general-purpose programming. It is designed for extreme scalability in networked applications through an ingenious combination of server-side JavaScript, asynchronous I/O, asynchronous programming built around JavaScript anonymous functions, and a single-threaded, event-driven architecture. Companies large and small are adopting Node.js; PayPal, for example, is converting its application stack over to it. An up-and-coming application model, the MEAN stack, combines MongoDB (or MySQL) with Express, AngularJS and, of course, Node.js. A look through current job listings demonstrates how important the MEAN stack and Node.js in general have become.

Node.js is claimed to be a lean, low-overhead software platform. The excellent performance is supposedly because Node.js eschews the accepted wisdom of more traditional platforms, such as Java EE and its complexity. Instead of relying on a thread-oriented architecture to fill the CPU cores of the server, Node.js has a simple single-threaded architecture, avoiding the overhead and complexity of threads. Using threads to implement concurrency often comes with admonitions such as "expensive and error-prone", "the error-prone synchronization primitives of Java", or "designing concurrent software can be complex and error-prone". The complexity comes from access to shared variables and the various strategies needed to avoid deadlock and competition between threads. The synchronization primitives of Java are an example of such a strategy, and obviously many programmers find them hard to use. There's a tendency to create frameworks, such as java.util.concurrent, to tame the complexity of threaded concurrency, but some might argue that papering over complexity does not make things simpler.

Adopting Node.js is not a magic wand that will instantly make performance problems disappear forever. The development team must approach this intelligently, or else you'll end up with one core on the server running flat out and the other cores twiddling their thumbs. Your manager will want to know how you're going to fully utilize the server hardware. And, because Node.js is single-threaded, your code must return from event handlers quickly, or else your application will be frequently blocked and will provide poor performance. Your manager will want to know how you'll deliver the promised high transaction rate.

In this article by David Herron, author of the book Node.JS Web Development - Third Edition, we will explore this issue. We'll write a program with an artificially heavy computational load. The naive Fibonacci function we'll use is elegantly simple, but extremely recursive, and can take a long time to compute.

(For more resources related to this topic, see here.)

Node.js installation

Before launching into writing code, we need to install Node.js on our laptop. Proceed to the Node.js downloads page by going to http://nodejs.org/ and clicking on the downloads link. It's preferable to install Node.js from the package management system for your computer. While the Downloads page offers precompiled binary Node.js packages for popular computer systems (Windows, Mac OS X, Linux, and so on), installing from the package management system makes it easier to update the install later. The Downloads page has a link to instructions for using package management systems to install Node.js.
Once you've installed Node.js, you can quickly test it by running a couple of commands:

$ node --help

This prints out helpful information about using the Node.js command-line tool.

$ npm help

npm is the default package management system for Node.js and is automatically installed along with Node.js. It lets us download Node.js packages from the Internet, using them as the building blocks for our applications.

Next, let's create a directory and develop an Express application within it to calculate Fibonacci numbers:

$ mkdir fibonacci
$ cd fibonacci
$ npm install express-generator@4.x
$ ./node_modules/.bin/express . --ejs
$ npm install

The application will be written against the current Express version, version 4.x. Specifying the version number this way ensures compatibility. The express command generated a starting application for us. You can inspect the package.json file to see what will be installed, and the last command installs those packages. What we have in front of us is a minimal Express application.

Our first stop is not to create an Express application, but to gather some basic data about computation-dominant code in Node.js.

Heavy-weight computation

Let's start the exploration by creating a Node.js module named math.js, containing:

var fibonacci = exports.fibonacci = function(n) {
    if (n === 1) return 1;
    else if (n === 2) return 1;
    else return fibonacci(n-1) + fibonacci(n-2);
}

Then, create another file named fibotimes.js containing this:

var math = require('./math');
var util = require('util');

for (var num = 1; num < 80; num++) {
    util.log('Fibonacci for '+ num +' = '+ math.fibonacci(num));
}

Running this script produces the following output:

$ node fibotimes.js
31 Jan 14:41:28 - Fibonacci for 1 = 1
31 Jan 14:41:28 - Fibonacci for 2 = 1
31 Jan 14:41:28 - Fibonacci for 3 = 2
31 Jan 14:41:28 - Fibonacci for 4 = 3
31 Jan 14:41:28 - Fibonacci for 5 = 5
31 Jan 14:41:28 - Fibonacci for 6 = 8
31 Jan 14:41:28 - Fibonacci for 7 = 13
…
31 Jan 14:42:27 - Fibonacci for 38 = 39088169
31 Jan 14:42:28 - Fibonacci for 39 = 63245986
31 Jan 14:42:31 - Fibonacci for 40 = 102334155
31 Jan 14:42:34 - Fibonacci for 41 = 165580141
31 Jan 14:42:40 - Fibonacci for 42 = 267914296
31 Jan 14:42:50 - Fibonacci for 43 = 433494437
31 Jan 14:43:06 - Fibonacci for 44 = 701408733

This quickly calculates the first 40 or so members of the Fibonacci sequence. After the 40th member, it starts taking a couple of seconds per result and quickly degrades from there. It is untenable to execute code of this sort on a single-threaded system that relies on a quick return to the event loop. That's an important point, because the Node.js design requires that event handlers return quickly to the event loop. The single-threaded event loop does everything in Node.js, and event handlers that return quickly to it keep it humming. A correctly written application can sustain a tremendous request throughput, but a badly written application can prevent Node.js from fulfilling that promise.

This Fibonacci function demonstrates algorithms that churn away at their calculation without ever letting Node.js process the event loop. Calculating fibonacci(44) requires 16 seconds of calculation, which is an eternity for a modern web service. With any server that's bogged down like this, not processing events, the perceived performance is zilch. Your manager will be rightfully angry.
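To see the blockage for yourself, a quick throwaway script along these lines can help. This is our own illustrative sketch rather than part of the book's code (the filename and timing values are arbitrary); it reuses the math.js module created above:

// block-demo.js -- watch the heartbeat stall while fibonacci(44) runs
var math = require('./math');

// Print a heartbeat every half second via the event loop.
setInterval(function() {
    console.log('tick ' + new Date().toISOString());
}, 500);

// After one second, start the heavy synchronous calculation.
setTimeout(function() {
    console.log('starting fibonacci(44)...');
    console.log('result: ' + math.fibonacci(44));
    // No ticks are printed while the calculation runs, because the
    // single thread never returns to the event loop until it finishes.
}, 1000);

// Stop with Ctrl+C once you've seen enough ticks.

Run it with node block-demo.js; the ticks resume only after the result prints, which is exactly the stall the users of our Express application would experience.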
This is a completely artificial example, because it's trivial to refactor the Fibonacci calculation for excellent performance. It is a stand-in for any algorithm that might monopolize the event loop. There are two general ways in Node.js to solve this problem:

- Algorithmic refactoring: Perhaps, like the Fibonacci function we chose, one of your algorithms is suboptimal and can be rewritten to be faster. Or, if not faster, it can be split into callbacks dispatched through the event loop. We'll look at one such method in a moment.
- Creating a backend service: Can you imagine a backend server dedicated to calculating Fibonacci numbers? Okay, maybe not, but it's quite common to implement backend servers to offload work from frontend servers, and we will implement a backend Fibonacci server at the end of this article.

But first, we need to set up an Express application that demonstrates the impact on the event loop.

An Express app to calculate Fibonacci numbers

To see the impact of a computation-heavy application on Node.js performance, let's write a simple Express application to do Fibonacci calculations. Express is a key Node.js technology, so this will also give you a little exposure to writing an Express application. We've already created the blank application, so let's make a couple of small changes so that it uses our Fibonacci algorithm.

Edit views/index.ejs to have this code:

<!DOCTYPE html>
<html>
  <head>
    <title><%= title %></title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1><%= title %></h1>
    <% if (typeof fiboval !== "undefined") { %>
    <p>Fibonacci for <%= fibonum %> is <%= fiboval %></p>
    <hr/>
    <% } %>
    <p>Enter a number to see its Fibonacci number</p>
    <form name='fibonacci' action='/' method='get'>
      <input type='text' name='fibonum' />
      <input type='submit' value='Submit' />
    </form>
  </body>
</html>

This simple template sets up an HTML form where we can enter a number. The number designates the desired member of the Fibonacci sequence to calculate. The template is written for the EJS template engine. You can see that <%= variable %> substitutes the named variable into the output, and JavaScript code is written in the template by enclosing it within <% %> delimiters. We use that to optionally print out the requested Fibonacci value if one is available.

Next, change routes/index.js to the following:

var express = require('express');
var router = express.Router();
var math = require('../math');

router.get('/', function(req, res, next) {
  if (req.query.fibonum) {
    res.render('index', {
      title: "Fibonacci Calculator",
      fibonum: req.query.fibonum,
      fiboval: math.fibonacci(req.query.fibonum)
    });
  } else {
    res.render('index', {
      title: "Fibonacci Calculator",
      fiboval: undefined
    });
  }
});

module.exports = router;

This router definition handles the home page for the Fibonacci calculator. The router.get function means this route handles HTTP GET operations on the / URL. If the req.query.fibonum value is set, the URL had a ?fibonum=# parameter, which would be produced by the form in index.ejs. In that case, the fiboval value is calculated by calling math.fibonacci, the function we showed earlier. Because we use that function, we can safely predict performance problems when larger Fibonacci values are requested.

On the res.render calls, the second argument is an object defining variables that will be made available to the index.ejs template. Notice how the two res.render calls differ in the values passed to the template, and how the rendered page will differ as a result.

There are no changes required in app.js. You can study that file, and bin/www, if you're curious how Express applications work.
In the meantime, you can simply run it:

$ npm start

> fibonacci@0.0.0 start /Users/david/fibonacci
> node ./bin/www

Then visit http://localhost:3000 in your browser to see the Fibonacci calculator.

For small Fibonacci values, the result will return quickly. As implied by the timing results we looked at earlier, at around the 40th Fibonacci number it'll take a few seconds to calculate the result, and the 50th Fibonacci number will take 20 minutes or so. That's enough time to run a little experiment.

Open two browser windows onto http://localhost:3000. You'll see the Fibonacci calculator in each window. In one, request the value for 45 or more. In the other, enter 10, which under normal circumstances we know would return almost immediately. Instead, the second window won't respond until the first one finishes, unless, that is, your browser times out and throws an error. What's happening is that the Node.js event loop is blocked from processing events because the Fibonacci algorithm is running and never yields to the event loop. As soon as the Fibonacci calculation finishes, the event loop starts being processed again. It then receives and processes the request made from the second window.

Algorithmic refactoring

The problem here is applications that stop processing events. We might solve the problem by ensuring events are handled while still performing calculations. In other words, let's look at algorithmic refactoring. To prove that we have an artificial problem on our hands, add this function to math.js:

var fibonacciLoop = exports.fibonacciLoop = function(n) {
    var fibos = [];
    fibos[0] = 0;
    fibos[1] = 1;
    fibos[2] = 1;
    for (var i = 3; i <= n; i++) {
        fibos[i] = fibos[i-2] + fibos[i-1];
    }
    return fibos[n];
}

Change fibotimes.js to call this function, and the Fibonacci values will fly by so fast your head will spin.

Some algorithms aren't so simple to optimize as this. For such a case, it is possible to divide the calculation into chunks and then dispatch the computation of those chunks through the event loop. Consider the following code:

var fibonacciAsync = exports.fibonacciAsync = function(n, done) {
    if (n === 0) done(undefined, 0);
    else if (n === 1 || n === 2) done(undefined, 1);
    else {
        setImmediate(function() {
            fibonacciAsync(n-1, function(err, val1) {
                if (err) done(err);
                else setImmediate(function() {
                    fibonacciAsync(n-2, function(err, val2) {
                        if (err) done(err);
                        else done(undefined, val1+val2);
                    });
                });
            });
        });
    }
};

This converts the fibonacci function from a synchronous function to an asynchronous one with a callback. By using setImmediate, each stage of the calculation is managed through Node.js's event loop, and the server can easily handle other requests while churning away on a calculation. It does nothing to reduce the computation required; this is still the silly, inefficient Fibonacci algorithm. All we've done is spread the computation through the event loop.

To use this new Fibonacci function, we need to change the router function in routes/index.js to the following:

router.get('/', function(req, res, next) {
  if (req.query.fibonum) {
    math.fibonacciAsync(req.query.fibonum, function(err, fiboval) {
      res.render('index', {
        title: "Fibonacci Calculator",
        fibonum: req.query.fibonum,
        fiboval: fiboval
      });
    });
  } else {
    res.render('index', {
      title: "Fibonacci Calculator",
      fiboval: undefined
    });
  }
});

This makes an asynchronous call to fibonacciAsync, and when the calculation finishes, the result is sent to the browser.
With this change, the server no longer freezes when calculating a large Fibonacci number. The calculation, of course, still takes a long time, because fibonacciAsync is still an inefficient algorithm. At least other users of the application aren't blocked, because it regularly yields to the event loop. Repeat the same test used earlier: open two or more browser windows to the Fibonacci calculator, make a large request in one window, and the requests in the other window will be promptly answered.

Creating a backend REST service

The next way to mitigate computationally intensive code is to push the calculation to a backend process. To do that, we'll request computations from a backend Fibonacci server. While Express has a powerful templating system, making it suitable for delivering HTML web pages to browsers, it can also be used to implement a simple REST service. Express supports parameterized URLs in route definitions, so it can easily receive REST API arguments, and Express makes it easy to return data encoded in JSON.

Create a file named fiboserver.js containing this code:

var math = require('./math');
var express = require('express');
var logger = require('morgan');
var util = require('util');
var app = express();
app.use(logger('dev'));

app.get('/fibonacci/:n', function(req, res, next) {
  math.fibonacciAsync(Math.floor(req.params.n), function(err, val) {
    if (err) next('FIBO SERVER ERROR ' + err);
    else {
      util.log(req.params.n +': '+ val);
      res.send({ n: req.params.n, result: val });
    }
  });
});

app.listen(3333);

This is a stripped-down Express application that gets right to the point of providing a Fibonacci calculation service. The one route it supports does the Fibonacci computation using the same fibonacciAsync function used earlier. The res.send function is a flexible way to send data responses. As used here, it automatically detects the object, formats it as JSON text, and sends it with the correct content type.

Then, in package.json, add this to the scripts section:

"server": "node ./fiboserver"

Now, let's run it:

$ npm run server

> fibonacci@0.0.0 server /Users/david/fibonacci
> node ./fiboserver

Then, in a separate command window, use curl to request values from this service:

$ curl -f http://localhost:3333/fibonacci/10
{"n":"10","result":55}

Over in the window where the service is running, we'll see a log of GET requests and how long each took to process.

It's easy to create a small Node.js script to directly call this REST service. But let's instead move directly to changing our Fibonacci calculator application to do so. Make this change to routes/index.js:

router.get('/', function(req, res, next) {
  if (req.query.fibonum) {
    var httpreq = require('http').request({
      method: 'GET',
      host: "localhost",
      port: 3333,
      path: "/fibonacci/"+Math.floor(req.query.fibonum)
    }, function(httpresp) {
      httpresp.on('data', function(chunk) {
        var data = JSON.parse(chunk);
        res.render('index', {
          title: "Fibonacci Calculator",
          fibonum: req.query.fibonum,
          fiboval: data.result
        });
      });
      httpresp.on('error', function(err) { next(err); });
    });
    httpreq.on('error', function(err) { next(err); });
    httpreq.end();
  } else {
    res.render('index', {
      title: "Fibonacci Calculator",
      fiboval: undefined
    });
  }
});

Running the Fibonacci calculator service now requires starting both processes. In one command window, we run:

$ npm run server

And in the other command window:

$ npm start

In the browser, we visit http://localhost:3000 and see what looks like the same application, because no changes were made to views/index.ejs.
As you make requests in the browser window, the Fibonacci service window prints a log of the requests it receives and the values it sends back. You can, of course, repeat the same experiment as before: open two browser windows, request a large Fibonacci number in one, and make smaller requests in the other. You'll see that, because the server uses fibonacciAsync, it's able to respond to every request.

Why did we go through this trouble when we could just directly call fibonacciAsync? We can now push the CPU load for this heavy-weight calculation to a separate server. Doing so preserves CPU capacity on the frontend server, so it can attend to web browsers. The heavy computation can be kept separate, and you could even deploy a cluster of backend servers sitting behind a load balancer, evenly distributing requests. Decisions like this are made all the time to create multitier systems. What we've demonstrated is that it's possible to implement simple multitier REST services in a few lines of Node.js and Express.

Summary

While the Fibonacci algorithm we chose is artificially inefficient, it gave us an opportunity to explore common strategies to mitigate performance problems in Node.js. Optimizing the performance of our systems is as important as correctness, fixing bugs, mobile friendliness, and usability. Inefficient algorithms mean having to deploy more hardware to satisfy load requirements, costing more money and creating a bigger environmental impact. For real-world applications, optimizing away performance problems won't be as easy as it was for the Fibonacci calculator. We could have just used the fibonacciLoop function, since it provides all the performance we'd need, but we needed to explore more typical approaches to performing heavy-weight calculations in Node.js while still keeping the event loop rolling. The bottom line is that in Node.js, the event loop must run.

Resources for Article:

Further resources on this subject:
  Learning Node.js for Mobile Application Development [article]
  Node.js Fundamentals [article]
  Making a Web Server in Node.js [article]

Quick User Authentication Setup with Django

Packt
10 May 2016
21 min read
In this article by Asad Jibran Ahmed, author of the book Django Project Blueprints, we are going to start with a simple blogging platform in Django. In recent years, Django has emerged as one of the clear leaders in web frameworks. When most people decide to start using a web framework, their searches lead them to either Ruby on Rails or Django. Both are mature, stable, and extensively used. It appears that the decision to use one or the other depends mostly on which programming language you're familiar with. Rubyists go with RoR, and Pythonistas go with Django. In terms of features, both can be used to achieve the same results, although they have different approaches to how things are done.

One of the most popular platforms these days is Medium, widely used by a number of high-profile bloggers. Its popularity stems from its elegant theme and simple-to-use interface. I'll walk you through creating a similar application in Django, with a few surprise features that most blogging platforms don't have. This will give you a taste of things to come and show you just how versatile Django can be.

Before starting any software development project, it's a good idea to have a rough roadmap of what we would like to achieve. Here's a list of features that our blogging platform will have:

- Users should be able to register an account and create their blogs
- Users should be able to tweak the settings of their blogs
- There should be a simple interface for users to create and edit blog posts
- Users should be able to share their blog posts on other blogs on the platform

I know this seems like a lot of work, but Django comes with a couple of contrib packages that speed up our work considerably.

(For more resources related to this topic, see here.)

The contrib packages

The contrib packages are a part of Django that contains some very useful applications that the Django developers decided should be shipped with Django. The included applications provide an impressive set of features, including some that we'll be using in this application:

- Admin: This is a full-featured CMS that can be used to manage the content of a Django site. The admin application is an important reason for the popularity of Django. We'll use it to provide an interface for site administrators to moderate and manage the data in our application.
- Auth: This provides user registration and authentication without requiring us to do any work. We'll be using this module to allow users to sign up, sign in, and manage their profiles in our application.
- Sites: This framework provides utilities that help us run multiple Django sites using the same code base. We'll use this feature to allow each user to have their own blog with content that can be shared between multiple blogs.

There are a lot more goodies in the contrib module. I suggest you take a look at the complete list at https://docs.djangoproject.com/en/stable/ref/contrib/#contrib-packages. I usually end up using at least three of the contrib packages in all my Django projects. They provide often-required features, such as user registration and management, and free you to work on the core parts of your project, providing a solid foundation to build upon.

Setting up our development environment

Let's start by creating the directory structure for our project, setting up the virtual environment, and configuring some basic Django settings that need to be set up in every project. Let's call our blogging platform BlueBlog.

To start a new project, you need to first open up your terminal program.
In Mac OS X, it is the built-in Terminal. In Linux, the terminal is named differently for each distribution, but you should not have trouble finding it; try searching your program list for the word "terminal" and something relevant should show up. In Windows, the terminal program is called the Command Prompt. You'll need to start the relevant program depending on your operating system.

If you are using the Windows operating system, some things will need to be done differently from what the book shows. If you are using Mac OS X or Linux, the commands shown here should work without any problems.

Open the relevant terminal program for your operating system and start by creating the directory structure for our project; then cd into the root project directory using the following commands:

> mkdir -p blueblog
> cd blueblog

Next, let's create the virtual environment, install Django, and start our project:

> pyvenv blueblogEnv
> source blueblogEnv/bin/activate
> pip install django
> django-admin.py startproject blueblog src

With this out of the way, we're ready to start developing our blogging platform.

Database settings

Open up the settings found at $PROJECT_DIR/src/blueblog/settings.py in your favorite editor and make sure that the DATABASES settings variable matches this:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

In order to initialize the database file, run the following commands:

> cd src
> python manage.py migrate

Staticfiles settings

The last step in setting up our development environment is configuring the staticfiles contrib application. The staticfiles application provides a number of features that make it easy to manage the static files (CSS, images, and JavaScript) of your projects. While our usage will be minimal, you should look at the Django documentation for staticfiles in further detail, as it is used quite heavily in most real-world Django projects. You can find the documentation at https://docs.djangoproject.com/en/stable/howto/static-files/.

In order to set up the staticfiles application, we have to configure a few settings in the settings.py file. First, make sure that django.contrib.staticfiles is added to INSTALLED_APPS. Django should have done this by default. Next, set STATIC_URL to whatever URL you want your static files to be served from. I usually leave this at the default value, '/static/'. This is the URL that Django will put in your templates when you use the static template tag to get the path to a static file.

Base template

Next, let's set up a base template that all the other templates in our application will inherit from. I prefer to have templates that are used by more than one application of a project in a directory named templates in the project source folder. To set this up, add os.path.join(BASE_DIR, 'templates') to the DIRS array of the TEMPLATES configuration dictionary in the settings file, and then create a directory named templates in $PROJECT_ROOT/src. Next, using your favorite text editor, create a file named base.html in the new folder with the following content:

<html>
  <head>
    <title>BlueBlog</title>
  </head>
  <body>
    {% block content %}
    {% endblock %}
  </body>
</html>

Just as Python classes can inherit from other classes, Django templates can also inherit from other templates. And just as Python classes can have their methods overridden by subclasses, Django templates can also define blocks that child templates can override.
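For instance, a child template that overrides the content block might look like the following. This is a hypothetical example (about.html is not part of our project) included only to show the mechanics; the registration template we create later in this article follows the same pattern:

{% extends "base.html" %}

{% block content %}
<h1>About BlueBlog</h1>
<p>Any markup placed here replaces the content block of base.html.</p>
{% endblock %}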
Our base.html template provides one block, called content, for inheriting templates to override. The reason for using template inheritance is code reuse. We should put HTML that we want to be visible on every page of our site, such as headers, footers, copyright notices, meta tags, and so on, in the base template. Then, any template inheriting from it will automatically get all this common HTML included, and we will only need to override the HTML code for the block that we want to customize. You'll see this principle of creating and overriding blocks in base templates used throughout the projects in this book.

User accounts

With the database setup out of the way, let's start creating our application. If you remember, the first thing on our list of features is to allow users to register accounts on our site. As I've mentioned before, we'll be using the auth package from the Django contrib packages to provide user account features.

In order to use the auth package, we'll need to add it to our INSTALLED_APPS list in the settings file (found at $PROJECT_ROOT/src/blueblog/settings.py). In the settings file, find the line defining INSTALLED_APPS and make sure that the 'django.contrib.auth' string is part of the list. It should be by default, but if, for some reason, it's not there, add it manually. You'll see that Django has included the auth package and a couple of other contrib applications in the list by default. A new Django project includes these applications because almost all Django projects end up using them. If you need to add the auth application to the list, remember to use quotes to surround the application name.

We also need to make sure that the MIDDLEWARE_CLASSES list contains django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.auth.middleware.SessionAuthenticationMiddleware. These middleware classes give us access to the logged-in user in our views and also make sure that if I change the password for my account, I'm logged out from all other devices that I previously logged in to.

As you learn more about the various contrib applications and their purpose, you can start removing any that you know you won't need in your project. Now, let's add the URLs, views, and templates that allow users to register with our application.

The user accounts app

In order to create the various views, URLs, and templates related to user accounts, we'll start a new application. To do so, type the following in your command line:

> python manage.py startapp accounts

This should create a new accounts folder in the src folder. We'll add code that deals with user accounts in files found in this folder. To let Django know that we want to use this application in our project, add the application name (accounts) to the INSTALLED_APPS setting variable, making sure to surround it with quotes.

Account registration

The first feature that we will work on is user registration. Let's start by writing the code for the registration view in accounts/views.py. Make the contents of views.py match what is shown here:

from django.contrib.auth.forms import UserCreationForm
from django.core.urlresolvers import reverse
from django.views.generic import CreateView


class UserRegistrationView(CreateView):
    form_class = UserCreationForm
    template_name = 'user_registration.html'

    def get_success_url(self):
        return reverse('home')

I'll explain what each line of this code is doing in a bit.
First, I’d like you to get to a state where you can register a new user and see for yourself how the flow works. Next, we’ll create the template for this view. In order to create the template, you first need to create a new folder called templates in the accounts folder. The name of the folder is important as Django automatically searches for templates in folders of that name. To create this folder, just type the following: > mkdir accounts/templates Next, create a new file called user_registration.html in the templates folder and type in the following code: {% extends "base.html" %} {% block content %} <h1>Create New User</h1> <form action="" method="post">{% csrf_token %} {{ form.as_p }} <input type="submit" value="Create Account" /> </form> {% endblock %} Finally, remove the existing code in blueblog/urls.py and replace it with this: from django.conf.urls import include from django.conf.urls import url from django.contrib import admin from django.views.generic import TemplateView from accounts.views import UserRegistrationView urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'^$', TemplateView.as_view(template_name='base.html'), name='home'), url(r'^new-user/$', UserRegistrationView.as_view(), name='user_registration'), ] That’s all the code that we need to get user registration in our project! Let’s do a quick demonstration. Run the development server by typing as follows: > python manage.py runserver In your browser, visit http://127.0.0.1:8000/new-user/ and you’ll see a user registration form. Fill this in and click on submit. You’ll be taken to a blank page on successful registration. If there are some errors, the form will be shown again with the appropriate error messages. Let’s verify that our new account was indeed created in our database. For the next step, we will need to have an administrator account. The Django auth contrib application can assign permissions to user accounts. The user with the highest level of permission is called the super user. The super user account has free reign over the application and can perform any administrator actions. To create a super user account, run this command: > python manage.py createsuperuser As you already have the runserver command running in your terminal, you will need to quit it first by pressing Ctrl + C in the terminal. You can then run the createsuperuser command in the same terminal. After running the createsuperuser command, you’ll need to start the runserver command again to browse the site. If you want to keep the runserver command running and run the createsuperuser command in a new terminal window, you will need to make sure that you activate the virtual environment for this application by running the same source blueblogEnv/bin/activate command that we ran earlier when we created our new project. After you have created the account, visit http://127.0.0.1:8000/admin/ and log in with the admin account. You will see a link titled Users. Click on this, and you should see a list of users registered in our app. It will include the user that you just created. Congrats! In most other frameworks, getting to this point with a working user registration feature would take a lot more effort. Django, with it’s batteries included approach, allows us to do the same with a minimum of effort. Next, I’ll explain what each line of code that you wrote does. 
Generic views

Here's the code for the user registration view again:

class UserRegistrationView(CreateView):
    form_class = UserCreationForm
    template_name = 'user_registration.html'

    def get_success_url(self):
        return reverse('home')

Our view is pretty short for something that does so much work. That's because instead of writing code from scratch to handle all the work, we use one of the most useful features of Django, generic views. Generic views are base classes included with Django that provide functionality commonly required by a lot of web apps. The power of generic views comes from the ability to customize them to a great degree with ease. You can read more about Django generic views in the documentation available at https://docs.djangoproject.com/en/1.9/topics/class-based-views/.

Here, we're using the CreateView generic view. This generic view can display a ModelForm using a template and, on submission, can either redisplay the page with errors if the form data was invalid or call the save method on the form and redirect the user to a configurable URL. CreateView can be configured in a number of ways. If you want the ModelForm to be created automatically from some Django model, just set the model attribute to the model class, and the form will be generated automatically from the fields of the model. If you want the form to show only certain fields from the model, use the fields attribute to list the fields you want, exactly as you would when creating a ModelForm.

In our case, instead of having the ModelForm generated automatically, we're providing one of our own, UserCreationForm. We do this by setting the form_class attribute on the view. This form, which is part of the auth contrib app, provides the fields and a save method that can be used to create a new user. You'll see that this theme of composing solutions from small reusable parts provided by Django is a common practice in Django web app development and, in my opinion, one of the best features of the framework.

Finally, we define a get_success_url function that does a simple reverse URL lookup and returns the generated URL. CreateView calls this function to get the URL to redirect the user to when a valid form is submitted and saved successfully. To get something up and running quickly, we left out a real success page and just redirected the user to a blank page. We'll fix this later.

Templates and URLs

The template, which extends the base template that we created earlier, simply displays the form passed to it by CreateView using the form.as_p method, which you might have seen in the simple Django projects you may have worked on before. The urls.py file is a bit more interesting. You should be familiar with most of it—the parts where we include the admin site URLs and the one where we assign our view a URL. It's the usage of TemplateView that I want to explain here.

Like CreateView, TemplateView is another generic view provided to us by Django. As the name suggests, this view can render and display a template to the user. It has a number of customization options. The most important one is template_name, which tells it which template to render and display to the user. We could have created another view class that subclassed TemplateView and customized it by setting attributes and overriding functions, as we did for our registration view. However, I wanted to show you another method of using a generic view in Django.
If you only need to customize some basic parameters of a generic view—in this case, we only wanted to set the template_name parameter—you can just pass the values as keyword arguments to the as_view method of the class when including it in the urls.py file. Here, we pass the template name that the view renders when the user accesses its URL. As we just needed a placeholder URL to redirect the user to, we simply use the blank base.html template. This technique of customizing generic views by passing key/value pairs only makes sense when you're interested in customizing very basic attributes, as we do here. In case you want more complicated customizations, I advise you to subclass the view; otherwise, you will quickly get messy code that is difficult to maintain.

Login and logout

With registration out of the way, let's write code to provide users with the ability to log in and log out. To start, the user needs some way to get to the login and registration pages from any page on the site. To do this, we'll need to add header links to our template. This is the perfect opportunity to demonstrate how template inheritance can lead to much cleaner templates with less code. Add the following lines right after the body tag in our base.html file:

{% block header %}
<ul>
  <li><a href="">Login</a></li>
  <li><a href="">Logout</a></li>
  <li><a href="{% url "user_registration"%}">Register Account</a></li>
</ul>
{% endblock %}

If you open the home page for our site now (at http://127.0.0.1:8000/), you should see that we now have three links on what was previously a blank page. Click on the Register Account link. You'll see the registration form we had before and the same three links again. Note how we only added these links to the base.html template. However, as the user registration template extends the base template, it got those links without any effort on our part. This is where template inheritance really shines.

You might have noticed that the href for the login/logout links is empty. Let's start with the login part.

Login view

Let's define the URL first. In blueblog/urls.py, import the login view from the auth app:

from django.contrib.auth.views import login

Next, add this to the urlpatterns list:

url(r'^login/$', login, {'template_name': 'login.html'}, name='login'),

Then, create a new file in accounts/templates called login.html. Put in the following content:

{% extends "base.html" %}

{% block content %}
<h1>Login</h1>
<form action="{% url "login" %}" method="post">{% csrf_token %}
  {{ form.as_p }}

  <input type="hidden" name="next" value="{{ next }}" />
  <input type="submit" value="Submit" />
</form>
{% endblock %}

Finally, open up the blueblog/settings.py file and add the following line to the end of the file:

LOGIN_REDIRECT_URL = '/'

Let's go over what we've done here. First, notice that instead of writing our own code to handle the login feature, we used the view provided by the auth app. We import it using from django.contrib.auth.views import login. Next, we associate it with the login/ URL. If you remember the user registration part, we passed the template name to the home page view as a keyword parameter in the as_view() function. That approach is used for class-based views. For old-style view functions, we can pass a dictionary to the url function that is passed as keyword arguments to the view. Here, we use the template that we created in login.html.
If you look at the documentation for the login view (https://docs.djangoproject.com/en/stable/topics/auth/default/#django.contrib.auth.views.login), you'll see that on successfully logging in, it redirects the user to settings.LOGIN_REDIRECT_URL. By default, this setting has a value of /accounts/profile/. As we don't have such a URL defined, we change the setting to point to our home page URL instead.

Next, let's define the logout view.

Logout view

In blueblog/urls.py, import the logout view:

from django.contrib.auth.views import logout

Add the following to the urlpatterns list:

url(r'^logout/$', logout, {'next_page': '/login/'}, name='logout'),

That's it. The logout view doesn't need a template; it just needs to be configured with a URL to redirect the user to after logging them out. We just redirect the user back to the login page.

Navigation links

Having added the login/logout views, we need to make the links we added to our navigation menu earlier take the user to those views. Change the list of links that we had in templates/base.html to the following:

<ul>
  {% if request.user.is_authenticated %}
  <li><a href="{% url "logout" %}">Logout</a></li>
  {% else %}
  <li><a href="{% url "login" %}">Login</a></li>
  <li><a href="{% url "user_registration"%}">Register Account</a></li>
  {% endif %}
</ul>

This shows the Login and Register Account links to the user if they aren't already logged in. If they are logged in, which we check using the request.user.is_authenticated function, they are only shown the Logout link. You can test all of these links yourself and see how little code was needed to make such a major feature of our site work. This is all possible because of the contrib applications that Django provides.

Summary

In this article, we started a simple blogging platform in Django. We had a look at setting up the database, the staticfiles application, and the base template, and we created a user accounts app with registration, login/logout views, and navigation links.

Resources for Article:

Further resources on this subject:
  Setting up a Complete Django E-commerce store in 30 minutes [article]
  "D-J-A-N-G-O... The D is silent." - Authentication in Django [article]
  Test-driven API Development with Django REST Framework [article]

Managing Payment and Shipping with Magento 2

Packt
10 May 2016
24 min read
In this article by Bret Williams, author of the book Learning Magento 2 Administration, we will see how to manage payment gateways, shipping methods and orders with Magento 2. E-commerce doesn't work unless customers actually purchase a product or service. In order to make that happen on your Magento store, you need to take payments, provide shipping solutions, collect any required taxes and, of course, process orders. In this article, we're going to:

- Understand the checkout and payment process
- Discuss various payment methods you can offer your customers
- Configure table rate shipping and review other shipping options
- Manage the order process

(For more resources related to this topic, see here.)

It's extremely important that you take care to understand and manage these aspects of your online business, as this involves money — the customer's and yours. No matter how great your products or your pricing, if customers cannot purchase easily, understand your shipping and delivery, or feel in the least hesitant about completing their transaction, the customer leaves and neither they nor you achieve satisfactory results. Once an order is placed, you also have steps to take to process the purchase and make good on your obligation to fulfill your customer's request. Fortunately — as with many other aspects of online commerce — Magento has the features and tools in place to create a solid, efficient checkout experience.

Understanding the checkout and payment process

Since most people shopping online today have made at least one e-commerce purchase on a website, the general process of completing an order is fairly well established, although the exact steps will vary somewhat:

1. The customer reviews their shopping cart, confirming the items they have decided to purchase.
2. The customer enters their shipping destination information.
3. The customer chooses a shipping method based on cost, method and time of delivery.
4. The customer enters their payment information.
5. The customer reviews their order and confirms their intent to purchase.
6. The system (Magento, in our case) queries a payment processor for approval.
7. The order is completed and ready for processing.

Of course, as we'll explore in this article, there is much more detail related to this process. As online merchants, you want your customers to have a thorough, yet easy, purchasing experience, and you want a valid order that can be fulfilled without complications. To achieve both ends, you have to prepare your Magento store to accurately process orders. So, let's jump in.

Payment methods

When a customer places an order on your Magento store, you'll naturally want to provide a means of capturing payment, whether it's immediate (credit card, PayPal, and so on) or delayed (COD, check, money order, credit). The payment methods you choose to provide are, of course, up to you, but you'll want to provide methods that:

- Reduce your risk of not getting paid
- Provide convenience to your customers while fulfilling their payment expectations

Consumers expect to pay by credit card or through a third-party service such as PayPal. Wholesale buyers may expect to purchase using a purchase order or by sending you a check before shipment. As with any business, you have to decide what will best benefit both you and your buyers.

How payment gateways work

If you're new to online payments as a merchant, it's helpful to have an understanding of how payments are approved and captured in e-commerce.
For this explanation, we're focusing on those payment gateways that allow you to accept credit and debit cards in your store. While PayPal Express and Standard work in a similar fashion, the three gateways that are included in the default Magento installation — PayPal Payments, Braintree and Authorize.net — process credit and debit cards similarly:

1. Your customer enters their card information on your website during checkout.
2. When the order is submitted, Magento sends a request to the gateway (PayPal Payments, Braintree or Authorize.net) for authorization of the card.
3. The gateway submits the card information and order amount to a clearinghouse service that determines if the card is valid and the order amount does not exceed the credit limit of the cardholder.
4. A success or failure code is returned to the gateway and on to the Magento store.
5. If the intent is to capture the funds at time of purchase, the gateway will queue the capture into a batch for processing later in the day and notify Magento that the funds are "captured".
6. A successful transaction will commit the order in Magento; a failure will result in a message to the purchaser.

Other payment methods, such as PayPal Standard and PayPal Express, take the customer to the payment provider's website to complete the payment portion of the transaction. Once the payment is completed, the customer is returned to your Magento store front. When properly configured, integrated payment gateways will update Magento orders as they are authorized and/or captured. This automation means you spend less time managing orders and more time fulfilling shipments and satisfying your customers!

PCI compliance

The protection of your customers' payment information is extremely important. Not only would a breach of security cause damage to your customers' credit and financial accounts, but the publicity of such a breach could be devastating to your business. Merchant account providers will require that your store meet stringent guidelines for PCI compliance, a set of security requirements called the Payment Card Industry Data Security Standard (PCI DSS). Your ability to be PCI compliant is based on the integrity of your hosting environment and on the methods by which you allow customers to enter credit card information on your site.

Magento 2 no longer offers a Stored Credit Card payment method. It is highly unlikely that you could — or would want to — provide a server configuration secure enough to meet PCI DSS requirements for storing credit card information. You probably don't want the liability exposure, either.

You can, however, provide SSL encryption that could satisfy PCI compliance, as long as the credit card information is encrypted before being sent to your server, and then from your server to the credit card processor. As long as you're not storing the customer's credit card information on your server, you can meet PCI compliance as long as your hosting provider can assure compliance for server and database security. Even with SSL encryption, not all hosting environments will pass PCI DSS standards. It's vital that you work with a hosting company that has real Magento experience and can document proof of PCI compliance.

Therefore, you should decide whether to provide onsite or offsite credit card payments. In other words, do you want to take payment information within your Magento checkout page, or redirect the user to a payment service, such as PayPal, to complete their transaction? There are pros and cons to each method.
Onsite transactions may be perceived as less secure, and you do have to prove PCI compliance to your merchant account provider on an ongoing basis. However, onsite transactions mean that the customer can complete their transaction without leaving your website. This helps to preserve your brand experience for your customers. Fortunately, Magento is versatile enough to allow you to provide both options to your customers. Personally, we feel that offering multiple payment methods means you're more likely to complete a sale, while also showing your customers that you want to provide the most convenience in purchasing.

Let's now review the various payment methods offered by default in Magento 2. Magento 2 comes with a host of the most popular and common payment methods. However, you should review other possibilities, such as Amazon Payments, Stripe and Moneybookers, depending on your target market. We anticipate that developers will be offering add-ons for these and other payment methods.

Note that as you change the Merchant Location at the top of the Payment Methods panel, the payment methods available to you may change.

PayPal all-in-one payment solutions

While PayPal is commonly known for its quick and easy PayPal Express buttons — the ubiquitous yellow buttons you see throughout the web — PayPal can provide you with credit/debit card solutions that allow customers to use their cards without needing a PayPal account. To your customer, the checkout appears no different than a normal credit card checkout process. The big difference is that you have to set up a business account with PayPal before you can begin accepting non-PayPal account payments.

Proceeds go almost immediately into your PayPal account (you have to have a PayPal account), and your customers can pay by credit/debit card or with their own PayPal account. With the all-in-one solution, PayPal approves your application for a merchant account and allows you to accept all popular cards, including American Express, at a flat 2.9% rate, plus $0.30 per transaction. PayPal payments incur normal per-transaction PayPal charges. We like this solution as it keeps all your online receipts in one account, while also giving you fast access to your sales income. PayPal also provides a debit card for its merchants that can earn back 1% on purchases. We use our PayPal debit card for all kinds of business purchases and receive a nice little cash-back dividend each month.

PayPal provides two ways to incorporate credit card payment capture on your website:

- PayPal Payments Advanced inserts a form on your site that is actually hosted on PayPal's highly secure servers. The form appears as part of your store, but you don't have any PCI compliance concerns.
- PayPal Payments Pro allows you to obtain payment information using the normal Magento form, then submit it to PayPal for approval.

The difference to your customer is that with Advanced, there is a slight delay while the credit card form is inserted into the checkout page. You may also have some limitations in terms of styling.

PayPal Standard, also a part of the all-in-one solution, takes your customer to a PayPal site for payment. Unlike PayPal Express, however, you can style this page to better reflect your brand image. Plus, customers do not have to have a PayPal account in order to use this checkout method.
PayPal payment gateways

If you already have a merchant account for collecting online payments, you can still utilize the integration of PayPal and Magento by setting up a PayPal business account that is linked to your merchant account. Instead of paying PayPal a percentage of each transaction — you would pay this to your merchant account provider — you simply pay a small per-transaction fee.

PayPal Express

Offering PayPal Express is as easy as having a PayPal account. It does require some configuration of API credentials, but it provides the simplest means of offering payment services without setting up a merchant account. PayPal Express will add "Buy Now" buttons to your product pages and the cart page of your store, giving shoppers a quick and immediate way to check out using their PayPal account.

Braintree

PayPal recently acquired Braintree, a payment services company that adds additional services for merchants. While many of their offerings appear to overlap PayPal's, Braintree brings additional features to the marketplace, such as Bitcoin, Venmo, Android Pay and Apple Pay payment methods, recurring billing and fraud protection. Like PayPal Payments, Braintree charges 2.9% + $0.30 per transaction.

A word about merchant fees

After operating our own e-commerce businesses for many years, we have used many different merchant accounts and gateways. At first glance, 2.9% — offered by PayPal, Braintree and Stripe — appears to be an expensive percentage. If you've been solicited by merchant account providers, you no doubt have been quoted rates as low as 1.7%. What is not often disclosed is that this rate only applies to basic cards that do not carry miles or other premiums. Rates for most cards you accept can be quite a bit higher. American Express usually charges more than 3% on transactions. Once you factor in gateway costs, reporting, monthly account costs and so on, you may find, as we did, that total merchant costs using a traditional merchant account average over 3.3%!

One cost you may not think to factor in is the expense of setup and integration. PayPal and Braintree have worked hard to create easy integrations with Magento (Stripe is not yet available for Magento 2 as of this writing).

Check / Money Order

If you have customers from whom you will accept payment by check and/or money order, you can enable this payment method. Be sure to enter all the information fields, especially Make Check Payable to and Send Check to. You will most likely want to keep the New Order Status as Pending, which means the order is not ready for fulfillment until you receive payment and update the order as paid. As with any payment method, be sure to edit the Title of the method to reflect how you wish to communicate it to your customers. If you only wish to accept money orders, for instance, you might change the Title to Money Orders (sorry, no checks).

Bank transfer payment

As with Check / Money Order, you can allow customers to wire money to your account by providing information to those customers who choose this method.

Cash on Delivery payment

Likewise, you can offer COD payments. We still see this method being made available on wholesale shipments, but very rarely on B2C (business-to-consumer) sales. COD shipments usually cost more, so you will need to accommodate this added fee in your pricing or shipping methods. At present, there is no ability to add a COD fee using this payment method panel.
Zero Subtotal Checkout

If your customer, through discounts, credits or free items, owes nothing at checkout, enabling this method will cause Magento to hide payment methods during checkout. The content of the Title field will be displayed in these cases.

Purchase order

In B2B (Business-to-Business) sales, it's quite common to accept purchase orders (POs) from customers with approved credit. If you enable this payment method, an additional field is presented to customers for entering their PO number when ordering.

Authorize.net direct post

Authorize.net, perhaps the largest payment gateway provider in the USA, provides an integrated payment capture mechanism that gives your customers the convenience of entering credit/debit card information on your site, while the actual form submission bypasses your server and goes directly to Authorize.net. This mechanism, as with PayPal Payments Advanced, lessens your responsibility for PCI compliance, as the data is communicated directly between your customer and Authorize.net instead of passing through the Magento programming. In Magento 1.x, the regular Authorize.net gateway (AIM) was one of several default payment methods. We're not certain it will be added as a default in Magento 2, although we would imagine someone will build an extension. Regardless, we think Direct Post is a wonderful way to use Authorize.net and meet your PCI compliance obligations.

Shipping methods

Once you get paid for a sale, you need to fulfill the order, and that means you have to ship the items purchased. How you ship products is largely a function of what shipping methods you make available to your customers. Shipping is one of the most complex aspects of e-commerce, and one where you can lose money if you're not careful. As you work through your shipping configurations, it's important to keep the following in mind:

- What you charge your customers for shipping does not have to be exactly what you're charged by your carriers. Just as you can offer free shipping, you can also charge flat rates based on weight or quantity, or add a surcharge to live rates.
- By default, Magento does not provide you with highly sophisticated shipping rate calculations, especially when it comes to dimensional shipping. Consider shipping rate calculations as estimates only, and consult with whomever is actually doing your shipping to determine whether any rate adjustments should be made to accommodate dimensional shipping.

Dimensional shipping refers to a recent change by UPS, FedEx and others to charge you the greater of two rates: the cost based on actual weight, or the cost based on a formula that determines the equivalent weight of a package from its size: (Length x Width x Height) ÷ 166 (for US domestic shipments; other factors apply for other countries and exports). Therefore, if you have a large package that doesn't weigh much, the live rate quoted in Magento might not reflect your actual cost once the dimensional weight is calculated. If your packages may be large and lightweight, consult your carrier representative or shipping fulfillment partner for guidance (a short worked example of the formula appears at the end of this section).

If your shipping calculations need more sophistication than what is provided natively in Magento 2, consider an add-on. However, remember that what you charge your customers does not have to be what you pay. For that reason, and to keep it simple for your customers, consider offering Table Rates (as described later).
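The dimensional weight rule quoted above is easy to sanity-check with a few lines of code. This is only an illustration of the formula in the text (the US domestic divisor of 166); carriers publish their own divisors and rounding rules, and the sample dimensions are made up.

def billable_weight(length_in, width_in, height_in, actual_weight_lbs, divisor=166.0):
    """Return the weight the carrier bills: the greater of actual and dimensional weight."""
    dimensional_weight = (length_in * width_in * height_in) / divisor
    return max(actual_weight_lbs, dimensional_weight)

# A large but light box: 24 x 18 x 12 inches, only 4 lbs on the scale.
print(billable_weight(24, 18, 12, 4))  # roughly 31.2 lbs billable, not 4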
Each method you choose will be displayed to your customers if their cart and shipping destination match the conditions of the method. Take care not to confuse your customers with too many choices: simpler is better. Keeping these insights in mind, let's explore the various shipping methods available by default in Magento 2. Before we go over the shipping methods themselves, let's review some basic concepts that apply to most, if not all, of them.

Origin

Where you ship your products from will determine shipping rates, especially for carrier rates (e.g. UPS, FedEx). To set your origin, go to Stores | Configuration | Sales | Shipping Settings and expand the Origin panel. At the very least, enter the Country, Region/State and ZIP/Postal Code fields. The others are optional for rate calculation purposes.

At the bottom of this panel is the choice to Apply custom Shipping Policy. If enabled, a field will appear where you can enter text about your overall shipping policy. For instance, you may want to enter: "Orders placed by 12:00p CT will be processed for shipping on the same day. Applies only to orders placed Monday-Friday, excluding shipping holidays."

Handling fee

You can add an invisible handling fee to all shipping rate calculations; invisible in that it does not appear as a separate line item charge to your customers. To add a handling fee to a shipping method:

- Choose whether you wish to add a fixed amount or a percentage of the shipping cost
- If you choose to add a percentage, enter the amount as a decimal number instead of a percentage (example: 0.06 instead of 6%); see the short illustration after the Flat Rate section below

Allowed countries

As you configure your shipping methods, don't forget to designate the countries you will ship to. If you only ship to the US and Canada, for instance, be sure to have only those countries selected. Otherwise, you'll have customers from other countries placing orders you will have to cancel and refund.

Method not available

In some cases, the method you configured may not be applicable to a customer based on destination, type of product, weight or any number of other factors. For these instances, you can choose to:

- Show the method (e.g. UPS, USPS, DHL, etc.), but with an error message that the method is not applicable
- Not show the method at all

Depending on your shipping destinations and target customers, you may want to show an error message just so the customer knows why no shipping solution is being displayed. If you don't show an error message and the customer doesn't qualify for any shipping method, the customer will be confused.

Free shipping

There are several ways to offer free shipping to your customers. If you want to display a Free Shipping option to all customers whose carts meet a minimum order amount (not including taxes or shipping), enable this panel. However, you may want to be more judicious in how and when you offer free shipping. Other alternatives include:

- Creating Shopping Cart Promotions
- Including a free shipping method in your Table Rates (see later in this section)
- Designating a specific free shipping method and minimum qualifying amount within a carrier configuration (such as UPS and FedEx)

If you choose to use this panel, note that it will apply to all orders. Therefore, if you want to be more selective, consider one of the above methods.

Flat Rate

As with Free Shipping, above, the Flat Rate panel allows you to charge a single flat rate for all orders regardless of weight or destination. You can apply the rate on a per-item or per-order basis as well.
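To make the fixed-versus-percentage handling fee distinction concrete, here is a tiny illustration. The 0.06 value is simply the decimal form of the 6% example mentioned in the Handling fee section above, the carrier rate is made up, and Magento applies the fee through its own rate calculation; this only shows the arithmetic.

def rate_with_handling(carrier_rate, handling, is_percentage):
    """Add a handling fee as described above: a fixed amount or a percentage of the shipping cost."""
    if is_percentage:
        return carrier_rate * (1 + handling)  # handling entered as a decimal, e.g. 0.06 for 6%
    return carrier_rate + handling            # handling entered as a fixed amount

print(rate_with_handling(12.50, 0.06, True))   # 13.25 with a 6% handling fee
print(rate_with_handling(12.50, 2.00, False))  # 14.50 with a $2.00 fixed fee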
Table Rates

While using live carrier rates can provide more accurate shipping quotes for your customers, you may find it more convenient to offer a series of rates at certain break points. For example, you might only need something as simple as the following for any domestic destination:

- 0-5 lbs, $5.99
- 6-10 lbs, $8.99
- 11+ lbs, $10.99

Let's assume you're a US-based shipper. While these rates will work for you when shipping to any of the contiguous 48 states, you need to charge more for shipments to Alaska and Hawaii. For our example, let's assume tiered pricing of $7.99, $11.99 and $14.99 at the same weight breaks. All of these conditions can be handled using the Table Rates shipping method. Based on our example, we would first start by creating a spreadsheet (in Excel or Numbers) similar to the following:

Country   Region/State   Zip/Postal Code   Weight (and above)   Shipping Price
USA       *              *                 0                    5.99
USA       *              *                 6                    8.99
USA       *              *                 11                   10.99
USA       AK             *                 0                    7.99
USA       AK             *                 6                    11.99
USA       AK             *                 11                   14.99
USA       HI             *                 0                    7.99
USA       HI             *                 6                    11.99
USA       HI             *                 11                   14.99

Let's review the columns in this chart:

- Country. Enter the 3-character country code (for a list of valid codes, see http://goo.gl/6A1woj).
- Region/State. Enter the 2-character code for any state or province.
- Zip/Postal Code. Enter any specific postal codes for which you wish the rate to apply.
- Weight (and above). Enter the minimum applicable weight for the range. The assigned rate will apply until the combined weight of the cart products equals a higher weight tier.
- Shipping Price. Enter the shipping charge you wish to provide to the customer. Do not include the currency prefix (example: "$" or "€").

Now, let's discuss the asterisk (*) and how to limit the scope of your rates. As you can see in the chart, we have only indicated rates for US destinations; there are no rows for any other countries. We could easily add rates for all other countries simply by adding rows with an asterisk in the first column. By adding those rows, we're telling Magento to use the US rates if the customer's ship-to address is in the US, and to use the other rates for all other country destinations. Likewise for the states column: Magento will first look for matches for any state codes listed. If it can't find any, it will then look for any rates with an asterisk. If no asterisk is present for a qualifying weight, then no applicable rate will be provided to the customer. The asterisk in the Zip/Postal Code column means that the rates apply to all postal codes for all states.

To get a sample file with which to configure your rates, you can set your configuration scope to one of your Websites (Furniture or Sportswear in our examples) and click Export CSV in the Table Rates panel.

Quantity and price based rates

In the preceding example, we used the weight of the items in the cart to determine shipping rates. You can also configure table rates to use calculations based on the number of items in the cart or the total price of all items (less taxes and shipping). To set up your chart, simply rename the fourth column "Quantity (and above)" or "Subtotal (and above)".

Save your rate table

To upload your table rates, you'll need to save/export your spreadsheet as a CSV file. You can name it whatever you like. Save it to your computer where you can find it for the next steps.

Table rate settings

Before you upload your new rates, you should first set your Table Rates configurations. To do so, you can set your default settings at the Default configuration scope.
However, to upload your CSV file, you will need to switch your Store View to the appropriate Website scope. When changing to a Website scope, you will see the Export CSV button and the ability to upload your rate table file. You'll note that all other settings may have Use Default checked. You can, of course, uncheck this box beside any field and adjust the settings according to your preferences. Let's review the unique fields in this panel:

- Enabled. Set to "Yes" to enable table rates.
- Title. Enter the name you wish displayed to customers when they're presented with a table rate-based shipping charge in the checkout process.
- Method Name. This name is presented to the customer in the shopping cart. You should probably change the default "Table Rate" to something more descriptive, as this term is likely irrelevant to customers. We have used names such as "Standard Ground," "Economy," or "Saver." The Title should probably be the same as well, so that the customer has a visual confirmation of their shipping choice during checkout.
- Condition. This allows you to choose the calculation method you want to use. Your choices, as we described earlier, are "Weight vs. Destination," "Price vs. Destination," and "# of items vs. Destination."
- Include Virtual Products in Price Calculation. Since virtual products have no weight, this has no effect on rate calculations for weight-based rates. However, it will affect rate calculations for price or quantity-based rates.

Once you have your settings, click Save Config.

Upload Rate Table

Once you have saved your settings, you can now click the button next to Import and upload your rate table. Be sure to test your rates to confirm that you have properly constructed your rate table.

Carrier Methods

The remaining shipping methods involve configuring UPS, USPS, FedEx and/or DHL to provide "live" rate calculations. UPS is the only one that is set to query for live rates without the need for you to have an account with the carrier. This is both good and bad. It's good in that you only have to enable the shipping method to have it begin querying rates for your customers. On the flip side, the rates that are returned are not negotiated rates. Negotiated rates are discounted rates you may have been offered based on your shipping volume.

FedEx, USPS and DHL require account-specific information in order to activate. This connection with your account should provide rates based on any discounts you have established with your carrier. If you wish to use negotiated rates for UPS, you may have to find a Magento add-on that accommodates this, or have your developer extend your Magento installation to make a modified rate query. If you have some history with shipping, you should negotiate rates with the carriers. We have found most are willing to offer some discount from "published rates."

Shipping integrations

Unless you have your own sophisticated warehouse operation, it may be wise to partner with a fulfillment provider that can not only store, pick, pack and ship your orders, but also offer deep discounts on shipping rates due to their large volumes. Amazon FBA (Fulfillment By Amazon) is a very popular solution; shipping is a low flat rate based on weight (http://goo.gl/UKjg7). ShipWire is another fulfillment provider that is well integrated with Magento. In fact, their integration can provide real-time rate quotes for your customers based on the products selected, warehouse availability and destination (http://www.ShipWire.com).
We have not heard if they have updated their integration for Magento 2 yet, but we suspect they will.

Summary

Selling is the primary purpose of building an online store. As you've seen in this article, Magento 2 arms you with a very rich array of features to help you give your customers the ability to purchase using a variety of payment methods. You're able to customize your shipping options and manage complex tax rules. All of this combines to make it easy for your customers to complete their online purchases.

Resources for Article:

Further resources on this subject:
- Social Media and Magento [article]
- Creating a Responsive Magento Theme with Bootstrap 3 [article]
- Magento 2 – the New E-commerce Era [article]

Redis Cluster Client Development

Zhe Lin
09 May 2016
5 min read
In this post, I'd like to discuss in a little more depth how to write a client for a Redis Cluster. The assumption here is that readers have a working knowledge of the Redis protocol, so we will focus on how to deal with data sharding and resharding in a Redis Cluster. As is known, each Redis instance in a cluster serves a number of non-overlapping data slots. This means two basic things:

- There's a way to know how the cluster shards its dataset, which is the CLUSTER NODES command.
- If you send a data command such as GETSET to a Redis instance that doesn't contain the slot of the command key, you will get a MOVED error, so you need to keep the sharding information up to date.

A CLUSTER NODES command sent to any Redis instance in a cluster will bring you equivalent sharding and replication information and can help you know who serves what. For example, the result could look like this:

33e5b1010d56d328370a993cf8420d70b3052d2c 127.0.0.1:7004 slave 71568798b4f1edeb0577614e072b492173b63808 0 1444704850437 2 connected
71568798b4f1edeb0577614e072b492173b63808 127.0.0.1:7002 master - 0 1444704848935 2 connected 3000-8499
f34b1ee569f3b856a80633a0b68892a58bfc87d2 127.0.0.1:7001 slave 1141f524b3ac82bb47bb101e0f8ae75ed56cdf54 0 1444704850938 1 connected
1141f524b3ac82bb47bb101e0f8ae75ed56cdf54 127.0.0.1:7003 master - 0 1444704849436 1 connected 0-2999 8500-10499
026f1e9b8c205de0a3645fd8a5eae3fbd0ca8639 127.0.0.1:7005 master - 0 1444704850838 3 connected 10500-16383
79ef081c431bbc7edc305a9611278be0df8fcd04 127.0.0.1:7000 myself,slave 026f1e9b8c205de0a3645fd8a5eae3fbd0ca8639 0 0 1 connected

By simply splitting each line on the space character, you get at least eight items per line (a short parsing sketch appears at the end of this section). The first four items are the node ID, address, flags, and master node ID. A node with the myself flag is the one receiving the CLUSTER NODES command. If a node has the slave flag, its master node ID, which won't be a dash, indicates which master it replicates. A node with a fail, fail?, or handshake flag is not connectible from the others and will probably fail to respond, so you can just ignore it.

Now let's look at the items after the first eight, which indicate the slots held by the node. The slot ranges are closed intervals; for example, 3000-8499 means 3000, 3001, ..., 8498, 8499, so the node at 127.0.0.1:7002 serves 5,500 slots. The ranges can also be split up, so the slot ranges held by 127.0.0.1:7003 are 0-2999 and 8500-10499.

After parsing the cluster information, you know which master node a data command should go to. If you don't mind possibly getting stale data, you can increase the QPS by sending read commands to a slave. A hint here is that you need to send a READONLY command before you can read data from a slave Redis.

It's a good start if your client library is able to parse static sharding information. But since a Redis Cluster may reshard, you also need to know what happens when Redis instances join or quit the cluster. For example, suppose a Redis instance bound to 127.0.0.1:8000 has joined the cluster and slot #0 is migrating to it from 127.0.0.1:7003. Your client probably has no idea about that and sends a command such as GET h-893 (h-893 is in slot #0) to 127.0.0.1:7003 as usual. The result could then be one of the following:

1. A normal response, if the slot is in the migrating state and the key hits the cache at 127.0.0.1:7003.
2. An -ASK [ADDRESS] error, if the slot is in the migrating state and the key doesn't hit the cache (non-existent or just migrated).
3. A -MOVED [SLOT#] [ADDRESS] error, if the slot has been completely migrated to the new instance.
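As mentioned above, a client can build its slot map just by splitting the CLUSTER NODES reply. The following is a minimal sketch of that idea in Python; it only handles the fields described in this post (it skips failed or handshaking nodes and any slot entries in square brackets that belong to an ongoing migration), and the function and variable names are mine, not part of any particular client library.

def parse_cluster_nodes(raw_reply):
    """Return {address: set_of_slots} for the master nodes in a CLUSTER NODES reply."""
    slot_map = {}
    for line in raw_reply.strip().splitlines():
        items = line.split(' ')
        # items[0..3]: node id, address, flags, master node id
        address, flags = items[1], items[2]
        if 'fail' in flags or 'handshake' in flags:
            continue  # not connectible, just ignore it
        if 'master' not in flags:
            continue  # slaves do not own slot ranges
        slots = set()
        for item in items[8:]:
            if item.startswith('['):
                continue  # slot in the middle of a migration, skipped in this sketch
            if '-' in item:
                start, end = item.split('-')
                slots.update(range(int(start), int(end) + 1))  # closed interval
            else:
                slots.add(int(item))
        slot_map[address] = slots
    return slot_map

# Example usage, where reply_text holds the raw reply shown above:
# slot_map = parse_cluster_nodes(reply_text)
# for address, slots in slot_map.items():
#     print("{0} serves {1} slots".format(address, len(slots)))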
Returning to the three possible results above: in case #2, you may need to retry the command, following the address given in the ASK error, and when you receive a MOVED error you need to update the sharding information. You can simply redirect the command to the new address according to what the MOVED error told you, or update the complete slots mapping.

When a Redis instance quits the cluster and becomes a free node, it won't reset your TCP connections, but it will always reply with a -CLUSTERDOWN error for any data command. In this case, don't panic; try updating the sharding information from the other Redis instances (which means your client has to remember all the instances in the cluster for this situation). But if you think it over, you may realize that there's a chance the Redis instance joins another cluster after it quits the old one, and happens to serve the same slots it used to. In that case your client may appear to work normally while the data actually goes somewhere else. So your client ought to provide a way to trigger a sharding information update on demand.

Now that we have discussed the normal cases, what happens when a Redis instance gets killed? This won't be so complicated if the slaves fail over their disconnected masters and the cluster state eventually turns to OK (otherwise, ask your Redis administrator to fix it immediately). Your client will encounter an I/O exception, such as a TCP connection reset, when the Redis server is down or an error occurs while writing to the socket. Either way, it's time to update the sharding information.

These are the things that a client for a Redis Cluster should cover. Hopefully, this helps you if you decide to write a cluster client, or provides you with valuable information if you want to learn more details about your client lib (provided you are already using a Redis Cluster).

About the author

Zhe Lin is a system engineer at the IT Developing Center at Hunantv.com. He is in charge of developing and maintaining Redis-related toolkits and delivering in-company training. Major works include:

* [redis-trib.py] : Redis Cluster creating and resharding toolkit in Python2
* [redis-cerberus] : Redis Cluster proxy
* [redis-ctl] : Redis management tool with web UI

JIRA Workflows

Packt
05 May 2016
15 min read
In this article by Patrick Li, author of the book Jira 7 Administration Cookbook - Second Edition, we will take a look at workflows, one of the core and most powerful features in JIRA. Workflows control how issues in JIRA move from one stage to another as they are worked on, often passing from one assignee to another. For this reason, workflows can be thought of as the life cycle of issues. In this article, we will cover:

- Setting up different workflows for your project
- Capturing additional information during workflow transitions
- Using common transitions
- Using global transitions
- Restricting the availability of workflow transitions
- Validating user input in workflow transitions
- Performing additional processing after a transition is executed

Unlike many other systems, JIRA allows you to create your own workflows to resemble the work processes you may already have in your organization. This is a good example of how JIRA is able to adapt to your needs without your having to change the way you work. (For more resources related to this topic, see here.)

Setting up different workflows for your project

A workflow is similar to a flowchart, in which issues can go from one state to another by following the direction paths between the states. In JIRA's workflow terminology, the states are called statuses, and the paths are called transitions. We will use these two major components when customizing a workflow. In this recipe, we will create a new, simple workflow from scratch. We will look at how to use existing statuses, create new statuses, and link them together using transitions.

How to do it…

The first step is to create a new skeleton workflow in JIRA, as follows:

1. Log in to JIRA as a JIRA administrator.
2. Navigate to Administration > Issues > Workflows.
3. Click on the Add Workflow button and name the workflow Simple Workflow.
4. Click on the Diagram button to use the workflow designer or diagram mode.

The following screenshot explains some of the key elements of the workflow designer. As of now, we have created a new, inactive workflow. The next step is to add various statuses for the issues to go through. JIRA comes with a number of existing statuses, such as In Progress and Resolved, for us to use:

1. Click on the Add status button.
2. Select the In Progress status from the list and click on Add. You can type the status name into the field, and JIRA will automatically find the status for you.
3. Repeat the steps to add the Closed status.

Once you add the statuses to the workflow, you can drag them around to reposition them on the canvas. We can also create new statuses, as follows:

1. Click on the Add status button.
2. Name the new status Frozen and click on Add. JIRA will let you know if the status you are entering is new by showing the text (new status) next to the status name.

Now that we have added the statuses, we need to link them using transitions:

1. Select the originating status, which in this example is Open.
2. Click on the small circle around the OPEN status and drag your cursor onto the IN PROGRESS status. This will prompt you to provide details for the new transition, as shown in the following screenshot.
3. Name the new transition Start Progress and select the None option for the screen.
4. Repeat the steps to create a transition called Close between the IN PROGRESS and CLOSED statuses.

You should finish with a workflow that looks similar to the following screenshot. At this point, the workflow is inactive, which means that it is not used by a project, and you can edit it without any restrictions.
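If it helps to picture the result, the Simple Workflow we just built is essentially a small state machine. The sketch below is purely illustrative (JIRA stores and evaluates workflows internally; nothing here is JIRA code or its API); it simply restates which transitions leave which statuses in this recipe.

# The statuses and transitions created in the Simple Workflow recipe.
TRANSITIONS = {
    "Open": {"Start Progress": "In Progress"},
    "In Progress": {"Close": "Closed"},
    "Closed": {},
    "Frozen": {},  # the Frozen status exists, but no transitions point to it yet
}

def available_transitions(status):
    """Names of the transitions an issue in the given status can take."""
    return sorted(TRANSITIONS[status])

def execute(status, transition):
    """Follow a transition and return the destination status."""
    return TRANSITIONS[status][transition]

print(available_transitions("Open"))      # ['Start Progress']
print(execute("Open", "Start Progress"))  # In Progress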
Workflows are applied on a project and issue-type basis. Perform the following steps to apply the new workflow to a project:

1. Select the project to apply the workflow to.
2. Click on the Administration tab to go to the project administration page.
3. Select Workflows from the left-hand side of the page.
4. Click on Add Existing from the Add Workflow menu. Then, select the new Simple Workflow from the dialog and click on Next.
5. Choose the issue types (for example, Bug) to apply the workflow to and click on Finish.

After we apply the workflow to a project, the workflow is placed in the active state. So, if we now create a new issue of the selected issue type in the target project, our new Simple Workflow will be used.

Capturing additional information during workflow transitions

When users execute a workflow transition, we have the option to display an intermediate workflow screen. This is a very useful way to collect additional information from the user. For example, the default JIRA workflow displays a screen for users to select the Resolution value when the issue is resolved. Issues with resolution values are considered completed. You should only add the Resolution field to workflow screens that represent the closing of an issue.

Getting ready

We need to have a workflow to configure, such as Simple Workflow, which was created in the previous recipe. We also need to have screens to display; JIRA's out-of-the-box Workflow screen and Resolve Issue screen will suffice, but if you create your own screens, those can also be used.

How to do it…

Perform the following steps to add a screen to a workflow transition:

1. Select the workflow to update, such as our Simple Workflow.
2. Click on the Edit button if the workflow is active. This will create a draft workflow for us to work on.
3. Select the Start Progress transition and click on the Edit link from the panel on the right-hand side.
4. Select the Workflow screen from the Screen drop-down menu and click on Save.
5. Repeat Step 3 and Step 4 to add the Resolve Issue screen to the Close transition.

If you are working with a draft workflow, you must click on the Publish Draft button to apply your changes to the live workflow. If you do not see your changes reflected, it is most likely that you forgot to publish your draft workflow.

Using common transitions

Often, you will have transitions that need to be made available from several different statuses in a workflow, such as the Resolve and Close transitions. In other words, these are transitions that have a common destination status but many different originating statuses. To help you simplify the process of creating these transitions, JIRA lets you reuse an existing transition as a common transition if it has the same destination status. Common transitions are transitions that have the same destination status but different originating statuses. A common transition has the additional advantage of ensuring that transition screens and other relevant configurations, such as validators, stay consistent. Otherwise, you would have to constantly check the various transitions every time you make a change to one of them.

How to do it…

Perform the following steps to create and use common transitions in your workflow:

1. Select the workflow and click on the Edit link to create a draft.
2. Select the Diagram mode.
3. Create a transition between two steps (for example, Open and Closed).
4. Create another transition from a different step to the same destination step and click on the Reuse a transition tab.
5. Next, select the transition created in Step 3 from the transition to reuse drop-down menu and click on Add.
6. Click on Publish Draft to apply the change.

See also

Refer to the Using global transitions recipe.

Using global transitions

While a common transition is a great way to share transitions in a workflow and reduce the amount of management work that would otherwise be required, it has the following two limitations:

- Currently, it is only supported in the classic diagram mode (if running on older JIRA versions)
- You still have to manually create the transitions between the various steps

As your workflow starts to become complicated, explicitly creating the transitions becomes a tedious job; this is where global transitions come in. A global transition is similar to a common transition in the sense that both have a single destination status. The difference between the two is that a global transition is a single transition that is available from all the statuses in a workflow. In this recipe, we will look at how to use global transitions so that issues can be transitioned to the Frozen status from anywhere in the workflow.

Getting ready

As usual, you need to have a workflow you can edit. As we will demonstrate how global transitions work, you need to have a status called Frozen in your workflow and ensure that there are no transitions linked to it.

How to do it…

Perform the following steps to create and use global transitions in your workflow:

1. Select and edit the workflow that you will be adding the global transition to.
2. Select the Diagram mode.
3. Select the Frozen status. Then, check the Allow all statuses to transition to this one option.
4. Click on Publish Draft to apply the change.

The following screenshot depicts the preceding steps. Once you create a global transition for a status, it will be represented as an All transition, as shown in the following screenshot. After the global transition is added to the Frozen status, you will be able to transition issues to Frozen regardless of their current status. You can only add global transitions in the Diagram mode.

See also

Refer to the Restricting the availability of workflow transitions recipe on how to remove a transition when an issue is already in the Frozen status.

Restricting the availability of workflow transitions

Workflow transitions, by default, are accessible to anyone who has access to the issue. There will be times when you want to restrict access to certain transitions. For example, you might want to restrict access to the Freeze Issue transition for the following reasons:

- The transition should only be available to users in specific groups or project roles
- As the transition is a global one, it is available from all the workflow statuses, but it does not make sense to show the transition when the issue is already in the Frozen status

To restrict the availability of a workflow transition, we can use workflow conditions.

Getting ready

For this recipe, we need to have the JIRA Suite Utilities add-on installed. You can download it from the following link or install it directly from Universal Plugin Manager: https://marketplace.atlassian.com/plugins/com.googlecode.jira-suite-utilities

How to do it…

Perform the following steps to restrict the availability of the transition using workflow conditions:

1. Select and edit the workflow to configure.
2. Select the Diagram mode.
3. Click on the Frozen global workflow transition.
4. Click on the Conditions link from the panel on the right-hand side.
5. Click on Add condition, select the Value Field condition (provided by JIRA Suite Utilities) from the list, and click on Add.
6. Configure the condition with the following parameters:
   - The Status field for Field
   - Not equal != for Condition
   - Frozen for Value
   - String for Comparison Type
   This means that the transition should only be shown if the issue's Status field value is not Frozen.
7. Click on the Add button to complete the condition setup.

At this point, we have added the condition that makes sure the Freeze Issue transition is not shown when the issue is already in the Frozen status. The next step is to add another condition so that the transition is only available to users in the Developer role:

8. Click on Add condition again and select the User in the Project Role condition. Then, select the Developer project role and click on Add.
9. Click on Publish Draft to apply the change.

After you apply the workflow conditions, the Frozen transition will no longer be available if the issue is already in the Frozen status and/or the current user is not in the Developer project role.

There's more…

Using the Value Field condition (which comes with the JIRA Suite Utilities add-on) is one of the many ways in which we can restrict the availability of a transition based on the issue's previous status. There is another add-on called JIRA Misc Workflow Extensions (a paid add-on), which comes with a Previous Status condition for this very use case. You can download it from https://marketplace.atlassian.com/plugins/com.innovalog.jmwe.jira-misc-workflow-extensions.

When you have more than one workflow condition applied to the transition, as in our example, the default behavior is that all the conditions must pass for the transition to be available. You can change this so that only one condition needs to pass by changing the condition group logic from All of the following conditions to Any of the following conditions, as shown in the following screenshot.

Validating user input in workflow transitions

For workflow transitions that have transition screens, you can add validation logic to make sure that what the users enter is what you are expecting. This is a great way to ensure data integrity, and we can do this with workflow validators. In this recipe, we will add a validator to perform a date comparison between a custom field and the issue's create date, so that the date value selected for the custom field must be after the issue's create date.

Getting ready

For this recipe, we need to have the JIRA Suite Utilities add-on installed. You can download it from the following link or install it directly using Universal Plugin Manager: https://marketplace.atlassian.com/plugins/com.googlecode.jira-suite-utilities

As we will also do a date comparison, we need to create a new date custom field called Start Date and add it to the Workflow screen.

How to do it…

Perform the following steps to add validation rules to a workflow transition:

1. Select and edit the workflow to configure.
2. Select the Diagram mode.
3. Select the Start Progress transition and click on the Validators link on the right-hand side.
4. Click on the Add validator link and select Date Compare from the list.
5. Configure the validator with the following parameters:
   - The Start Date custom field for This date
   - Greater than > for Condition
   - Created for Compare with
6. Click on Add to add the validator. Then, click on Publish Draft to apply the change.
After we add the validator, if we try to select a date that is before the issue's create date, JIRA will display an error message and stop the transition from going through, as shown in the following screenshot.

How it works…

Validators are run before the workflow transition is executed. This way, validators can intercept and prevent a transition from going through if one or more of the validation rules fail. If you have more than one validator, all of them must pass for the transition to go through.

Performing additional processing after a transition is executed

JIRA allows you to perform additional tasks as part of a workflow transition through the use of post functions. JIRA makes heavy use of post functions internally; for example, in the out-of-the-box workflow, when you reopen an issue, the resolution field value is cleared automatically. In this recipe, we will look at how to add post functions to a workflow transition. We will add a post function to automatically clear out the value stored in the Reason for Freezing custom field when we take an issue out of the Frozen status.

Getting ready

By default, JIRA comes with a post function that can change the values of standard issue fields, but as Reason for Freezing is a custom field, we need to have the JIRA Suite Utilities add-on installed. You can download this from the following link or install it directly using Universal Plugin Manager: https://marketplace.atlassian.com/plugins/com.googlecode.jira-suite-utilities

How to do it…

Perform the following steps to add processing logic after a workflow transition is executed:

1. Select and edit the workflow to configure.
2. Select the Diagram mode.
3. Click on the Frozen global workflow transition.
4. Next, click on the Post functions link on the right-hand side.
5. Click on Add post function, select the Clear Field Value post function from the list, and click on Add.
6. Select the Reason for Freezing field from Field and click on the Add button.
7. Now, click on Publish Draft to apply the change.

With the post function in place, after you execute the transition, the Reason for Freezing field will be cleared out. You can also see in the issue's change history that, as part of the transition execution in which the Status field changes from Frozen to Open, the change to the Reason for Freezing field is recorded as well.

How it works…

Post functions are run after the transition is executed. When you add a new post function, you might notice that the transition already has a number of post functions pre-added; this is shown in the screenshot that follows. These are system post functions that carry out important internal tasks, such as keeping the search index up to date. The order of these post functions is important. Always add your own post functions at the top of the list. For example, any changes to issue field values, such as the one we just added, should always happen before the Reindex post function, so that by the time the transition is completed, all the field indexes are up to date and ready to be searched.

Summary

In this article, you learned not only how to create workflows with the new workflow designer, but also how to use workflow components, such as conditions and validators, to add additional behavior to your workflows. We also looked at many different add-ons that are available to expand the possibilities of what you can do with workflows.
Resources for Article:

Further resources on this subject:
- JIRA Agile for Scrum [article]
- JIRA – an Overview [article]
- Gadgets in JIRA [article]

Understanding Drivers

Packt
04 May 2016
7 min read
In this article by Jeff Stokes and Manuel Singer, authors of the book Mastering the Microsoft Deployment Toolkit 2013, we will discuss how to utilize the Microsoft Deployment Toolkit (MDT) to turn the complex world of device drivers into a much more manageable experience. We will focus on how drivers get installed via MDT, how to specifically control the drivers that get installed, and general best practices around proper driver management. We will cover the following topics in this article:

- Understanding offline servicing
- The MDT method of driver detection and injection

(For more resources related to this topic, see here.)

Understanding offline servicing

Those of us who created images for the deployment of Windows XP were often met with the enormous challenge of dealing with drivers for many different models of hardware. We were already forced to create separate images for different hardware abstraction layer (HAL) families. Additionally, in order to deal with different hardware models within the same HAL family, the standard practice was to have a folder called C:\Drivers, which contained a copy of every possible driver that could be required by this image for all of the different hardware models it would be installed to. There was an OemPnPDriversPath entry in the registry that individually listed each of the driver paths (subfolders under the C:\Drivers directory) for the Windows Plug and Play process to locate and install the driver.

As you can imagine, this was not a very efficient way to manage drivers. One reason is that every driver for every machine was staged in the image, causing the image size to grow. Another is that we were relying on Plug and Play to figure out the right driver to install based on a driver-ranking process, which gives us less control over the driver that actually gets installed.

Fast forward to Windows Vista and current versions of Windows, and we can now utilize the magic of offline servicing to inject drivers into our Windows Imaging Format (WIM) image as it is getting deployed. With this in mind, consider the concept of having your customized Windows image created through your reference image build process, but containing no drivers. Now, when we deploy this image, we can utilize a process to detect all the hardware in the target machine and then grab only the correct drivers that we need for this particular machine. Then, we can utilize Deployment Image Servicing and Management (DISM) to inject them into our WIM before the WIM actually gets installed, therefore making the drivers available to be installed as Windows is installed on this machine (a sample DISM command appears at the end of this section). MDT is doing just that.

The MDT method of driver detection and injection

When we boot a target machine via our Lite Touch media, one of the initial task sequence steps will enumerate (via PnpEnum) all the PNP IDs for every device in the machine. Then, as part of the inject drivers task sequence step, we will search all of our Out-of-Box driver INF files to find the matching driver, and MDT will utilize DISM to inject these drivers offline into the WIM. Note that, by default, we will be searching our entire Out-of-Box repository and letting PnP figure things out. We can force MDT to only choose from drivers that we specify, therefore gaining strict control over which drivers actually get installed. The preceding scenario shows that this whole process hinges on the fact that we are searching through driver INF files for matching PNP IDs in order to correctly detect and install the correct driver.
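For a feel of what MDT is doing on your behalf, the commands below show the manual equivalent of offline driver injection with DISM: mount a WIM, add every driver found under a folder, and commit the change. The paths are made-up examples, and during a Lite Touch deployment MDT runs this step for you as part of the task sequence, so you would normally only run something like this by hand for testing.

Dism /Mount-Wim /WimFile:D:\Images\install.wim /Index:1 /MountDir:C:\Mount
Dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers\SomeModel /Recurse
Dism /Unmount-Wim /MountDir:C:\Mount /Commit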
This brings up a concern: what if the driver does not come with an INF file, but instead has to be installed via an EXE program? In this scenario, we cannot utilize the driver injection process. Instead, we would treat that driver as an application in MDT, meaning we would add a new application, using the EXE program as the source files, specifying the command-line syntax to launch the driver install program silently, and then adding this application as a task sequence step. I will later demonstrate how to utilize conditional statements in your task sequence to only install that driver program on the model it applies to, therefore keeping our task sequence flexible enough to install correctly on any hardware.

Populating the Out-of-Box Drivers node of MDT

The first step is to visit the OEM manufacturer's website and download all the device drivers for each model of machine that we will be deploying to. Note that many OEMs now offer a deployment-specific download, or CAB file, that has all the drivers for a particular model compressed into one single CAB file. This benefits you, as you will not have to go through the hassle of downloading and extracting each individual driver for each device separately (NIC, video, audio, and so on). Once you download the necessary drivers, store them in a folder for each specific model, as you will need to extract the drivers within your folder before importing them into MDT.

Next, we want to create a folder structure under the Out-of-Box Drivers node in MDT to organize our drivers. This will not only allow easy manageability of drivers as new drivers are released by the OEM; if we also name the folders to match the model names exactly, we can later introduce logic to limit our PnP search to the exact folder that contains the correct drivers for our particular hardware model. As we will have different drivers for x86 and x64, as well as for different operating systems, a general best practice is to use these as the first level of your folder structure. Perform the following steps to populate the node in MDT:

1. In order to create the folder structure, simply click on Out-Of-Box-Drivers and choose New Folder, as shown in the following screenshot.
2. Next, create a folder for each model that we will be deploying to. In order to ensure that you are using the correct model name, you can use a WMI query (for example, Select Model from Win32_ComputerSystem) to see what the hardware returns as the model name.
3. Once you have your folder structure created, you are ready to import the drivers. Right-click on the model folder and choose Import Drivers. Point the driver source directory to the folder where you have downloaded and extracted the OEM drivers.

There is a checkbox stating Import drivers even if they are duplicates of an existing driver. This is because MDT utilizes single-instance storage technology to store the drivers in the actual deployment share. If you import multiple copies of a driver to different folders, MDT only stores one copy of the file in the actual filesystem by default, and the folder structure you see within the MDT Workbench points the duplicates to the same file in order to not waste space. As new drivers are released by the OEM, you can simply replace the drivers by going to the particular folder for this model, removing the old drivers, and importing the new drivers.
Then, the next time you install your WIM on this model, the new drivers will be used, and you won't have to make any modifications or updates to your WIM.

Summary

In this article, we covered offline servicing, the MDT method of driver detection and injection, and how to populate the Out-of-Box Drivers node of MDT. For more information related to MDT, refer to the following book by Packt Publishing: Mastering the Microsoft Deployment Toolkit 2013: https://www.packtpub.com/hardware-and-creative/mastering-microsoft-deployment-toolkit-2013

Resources for Article:

Further resources on this subject:
- The Configuration Manager Troubleshooting Toolkit [article]
- Social-Engineer Toolkit [article]
- Working with Entities in Google Web Toolkit 2 [article]

Working with the Cloud

Packt
02 May 2016
33 min read
In this article by Gastón C. Hillar, author of the book Internet of Things with Python, we will take advantage of many cloud services to publish and visualize data collected from sensors and to establish bi-directional communications between Internet-connected things. (For more resources related to this topic, see here.)

Sending and receiving data in real-time through the Internet with PubNub

We want to send and receive data in real-time through the Internet, and a RESTful API is not the most appropriate option for this. Instead, we will work with a publish/subscribe model based on a protocol that is lighter than the HTTP protocol. Specifically, we will use a service based on the MQ Telemetry Transport (MQTT) protocol. The MQTT protocol is a machine-to-machine (M2M) connectivity protocol. MQTT is a lightweight messaging protocol that runs on top of the TCP/IP protocol and works with a publish-subscribe mechanism. Any device can subscribe to a specific channel (also known as a topic) and receive all the messages published to this channel. In addition, the device can publish messages to this or any other channel. The protocol is becoming very popular in IoT and M2M projects. You can read more about the MQTT protocol on the following webpage: http://mqtt.org.

PubNub provides many cloud-based services, and one of them allows us to easily stream data and signal any device in real-time, working with the MQTT protocol under the hood. We will take advantage of this PubNub service to send and receive data in real-time through the Internet and make it easy to control our Intel Galileo Gen 2 board through the Internet. As PubNub provides a Python API with high-quality documentation and examples, it is extremely easy to use the service in Python. PubNub defines itself as the global data stream network for IoT, Mobile and Web applications. You can read more about PubNub on its webpage: http://www.pubnub.com. In our example, we will take advantage of the free services offered by PubNub, and we won't use some advanced features and additional services that might empower our IoT project's connectivity requirements but also require a paid subscription.

PubNub requires us to sign up and create an account with a valid e-mail and a password before we can create an application within PubNub that allows us to start using their free services. We aren't required to enter any credit card or payment information. If you already have an account at PubNub, you can skip the next step. Once you have created your account, PubNub will redirect you to the admin portal that lists your PubNub applications. It is necessary to generate your PubNub publish and subscribe keys in order to send and receive messages in the network. A new pane will represent the application in the admin portal. The following screenshot shows the Temperature Control application pane in the PubNub admin portal.

Click on the Temperature Control pane, and PubNub will display the Demo Keyset pane that has been automatically generated for the application. Click on this pane, and PubNub will display the publish, subscribe and secret keys. We must copy and paste each of the keys to use them in our code that will publish messages and subscribe to them. The following screenshot shows the prefixes for the keys; the remaining characters have been erased in the image. In order to copy the secret key, you must click on the eye icon at the right-hand side of the key, and PubNub will make all the characters visible.
Now, we will use the pip installer to install PubNub Python SDK 3.7.6. We just need to run the following command in the SSH terminal to install the package. Notice that it can take a few minutes to complete the installation.

pip install pubnub

The last lines of the output will indicate that the pubnub package has been successfully installed. Don't worry about the error messages related to building wheels or the insecure platform warning.

Downloading pubnub-3.7.6.tar.gz
Collecting pycrypto>=2.6.1 (from pubnub)
  Downloading pycrypto-2.6.1.tar.gz (446kB)
    100% |################################| 446kB 25kB/s
Requirement already satisfied (use --upgrade to upgrade): requests>=2.4.0 in /usr/lib/python2.7/site-packages (from pubnub)
Installing collected packages: pycrypto, pubnub
  Running setup.py install for pycrypto
  Running setup.py install for pubnub
Successfully installed pubnub-3.7.6 pycrypto-2.6.1

We will use the iot_python_chapter_08_03.py code (you will find this in the code bundle) as a baseline to add new features that will allow us to perform the following actions with PubNub messages sent to a specific channel from any device that has a Web browser:

- Rotate the servo's shaft to display a temperature value, in degrees Fahrenheit, received as part of the message
- Display a line of text, received as part of the message, at the bottom of the OLED matrix

We will use the recently installed pubnub module to subscribe to a specific channel and run code when we receive messages in the channel. We will create a MessageChannel class to represent the communications channel, configure the PubNub subscription, and declare the code for the callbacks that are going to be executed when certain events are fired. The code file for the sample is iot_python_chapter_09_02.py (you will find this in the code bundle). Remember that we use the code file iot_python_chapter_08_03.py (you will find this in the code bundle) as a baseline, and therefore, we will add the class to the existing code in this file and create a new Python file. Don't forget to replace the strings assigned to the publish_key and subscribe_key local variables in the __init__ method with the values you have retrieved from the previously explained PubNub key generation process.
import time
from pubnub import Pubnub


class MessageChannel:
    command_key = "command"

    def __init__(self, channel, temperature_servo, oled):
        self.temperature_servo = temperature_servo
        self.oled = oled
        self.channel = channel
        # Publish key is the one that usually starts with the "pub-c-" prefix
        # Do not forget to replace the string with your publish key
        publish_key = "pub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        # Subscribe key is the one that usually starts with the "sub-c" prefix
        # Do not forget to replace the string with your subscribe key
        subscribe_key = "sub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        self.pubnub = Pubnub(publish_key=publish_key,
                             subscribe_key=subscribe_key)
        self.pubnub.subscribe(channels=self.channel,
                              callback=self.callback,
                              error=self.error,
                              connect=self.connect,
                              reconnect=self.reconnect,
                              disconnect=self.disconnect)

    def callback(self, message, channel):
        if channel == self.channel:
            if self.__class__.command_key in message:
                if message[self.__class__.command_key] == \
                        "print_temperature_fahrenheit":
                    self.temperature_servo.print_temperature(
                        message["temperature_fahrenheit"])
                elif message[self.__class__.command_key] == \
                        "print_information_message":
                    self.oled.print_line(11, message["text"])
            print("I've received the following message: {0}".format(message))

    def error(self, message):
        print("Error: " + str(message))

    def connect(self, message):
        print("Connected to the {0} channel".format(self.channel))
        print(self.pubnub.publish(
            channel=self.channel,
            message="Listening to messages in the Intel Galileo Gen 2 board"))

    def reconnect(self, message):
        print("Reconnected to the {0} channel".format(self.channel))

    def disconnect(self, message):
        print("Disconnected from the {0} channel".format(self.channel))
Under the hoods, the subscribe method makes the client create an open TCP socket to the PubNub network that includes an MQTT broker and starts listening to messages on the specified channel. The call to this method specifies many methods declared in the MessageChannel class for the following named arguments: callback: Specifies the function that will be called when there is a new message received from the channel error: Specifies the function that will be called on an error event connect: Specifies the function that will be called when a successful connection is established with the PubNub cloud reconnect: Specifies the function that will be called when a successful re-connection is completed with the PubNub cloud disconnect: Specifies the function that will be called when the client disconnects from the PubNub cloud This way, whenever one of the previously enumerated events occur, the specified method will be executed. The callback method receives two arguments: message and channel. First, the method checks whether the received channel matches the value in the channel attribute. In this case, whenever the callback method is executed, the value in the channel argument will always match the value in the channel attribute because we just subscribed to one channel. However, in case we subscribe to more than one channel, is is always necessary to check the channel in which the message was sent and in which we are receiving the message. Then, the code checks whether the command_key class attribute is included in the message dictionary. If the expression evaluates to True, it means that the message includes a command that we have to process. However, before we can process the command, we have to check which is the command, and therefore, it is necessary to retrieve the value associated with the key equivalent to the command_key class attribute. The code is capable of running code when the value is any of the following two commands: print_temperature_fahrenheit: The command must specify the temperature value expressed in degrees Fahrenheit in the value of the temperature_fahrenheit key. The code calls the self.temperature_servo.print_temperature method with the temperature value retrieved from the dictionary as an argument. This way, the code moves the servo's shaft-based on the specified temperature value in the message that includes the command. print_information_message: The command must specify the line of text that has to be displayed at the bottom of the OLED matrix in the value of the print_information_message key. The code calls the self.oled.print_line method with 11 and the text value retrieved from the dictionary as arguments. This way, the code displays the text received in the message that includes the command at the bottom of the OLED matrix. No matter whether the message included a valid command or not, the method prints the raw message that it received in the console output. The connect method prints a message indicating that a connection has been established with the channel. Then, the method prints the results of calling the self.pubnub.publish method that publishes a message in the channel name saved in self.channel with the following message: "Listening to messages in the Intel Galileo Gen 2 board". In this case, the call to this method runs with a synchronous execution. We will work with asynchronous execution for this method in our next example. 
Because we are already subscribed to this channel, we will receive the message published by the connect method, and the callback method will be executed with this message as an argument. However, as the message doesn't include the key that identifies a command, the code in the callback method will just display the received message and won't process any of the previously analyzed commands. The other methods declared in the MessageChannel class just display information in the console output about the event that has occurred.

Now, we will use the previously coded MessageChannel class to create a new version of the __main__ method that uses the PubNub cloud to receive and process commands. The new version doesn't rotate the servo's shaft when the ambient temperature changes; instead, it does so when it receives the appropriate command from any device connected to the PubNub cloud. The following lines show the new version of the __main__ method. The code file for the sample is iot_python_chapter_09_02.py (you will find this in the code bundle).

if __name__ == "__main__":
    temperature_and_humidity_sensor = TemperatureAndHumiditySensor(0)
    oled = TemperatureAndHumidityOled(0)
    temperature_servo = TemperatureServo(3)
    message_channel = MessageChannel("temperature", temperature_servo, oled)
    while True:
        temperature_and_humidity_sensor.measure_temperature_and_humidity()
        oled.print_temperature(
            temperature_and_humidity_sensor.temperature_fahrenheit,
            temperature_and_humidity_sensor.temperature_celsius)
        oled.print_humidity(
            temperature_and_humidity_sensor.humidity)
        print("Ambient temperature in degrees Celsius: {0}".format(
            temperature_and_humidity_sensor.temperature_celsius))
        print("Ambient temperature in degrees Fahrenheit: {0}".format(
            temperature_and_humidity_sensor.temperature_fahrenheit))
        print("Ambient humidity: {0}".format(
            temperature_and_humidity_sensor.humidity))
        # Sleep 10 seconds (10000 milliseconds)
        time.sleep(10)

The new line creates an instance of the previously coded MessageChannel class with "temperature", temperature_servo and oled as the arguments. The constructor subscribes to the temperature channel in the PubNub cloud, and therefore, we must send messages to this channel to deliver the commands that the code will process with an asynchronous execution. The loop reads the values from the sensor and prints them to the console as in the previous version of the code, so we have code running in the loop while we are also listening to the messages on the temperature channel in the PubNub cloud. We will start the example later because we want to subscribe to the channel in the PubNub debug console before we run the code on the board.

Publishing messages with commands through the PubNub cloud

Now, we will take advantage of the PubNub console to send messages with commands to the temperature channel and make the Python code running on the board process these commands. In case you have logged out of PubNub, log in again and click on the Temperature Control pane in the Admin Portal. PubNub will display the Demo Keyset pane. Click on the Demo Keyset pane and PubNub will display the publish, subscribe and secret keys. This way, we select the keyset that we want to use for our PubNub application. Click on Debug Console on the sidebar located on the left-hand side of the screen. PubNub will create a client for a default channel and subscribe to this channel using the keyset we selected in the previous step.
We want to subscribe to the temperature channel, and therefore, enter temperature in the Default Channel textbox within the pane that includes the Add client button at the bottom. Then, click on Add client and PubNub will add a new pane with a random client name as the title and the channel name, temperature, on the second line. PubNub makes the client subscribe to this channel, so we will be able to receive messages published to this channel and send messages to it. The following picture shows the pane for the generated client named Client-ot7pi, subscribed to the temperature channel. Notice that the client name will be different when you follow the explained steps. The client pane displays the output generated when PubNub subscribed the client to the channel. PubNub returns a formatted response for each command. In this case, it indicates that the status is equal to Subscribed and the channel name is temperature.

[1,"Subscribed","temperature"]

Now, it is time to start running the example on the Intel Galileo Gen 2 board. The following line will start the example in the SSH console.

python iot_python_chapter_09_02.py

After you run the example, go to the Web browser in which you are working with the PubNub debug console. You will see the following message listed in the previously created client:

"Listening to messages in the Intel Galileo Gen 2 board"

The Python code running on the board published this message; specifically, the connect method in the MessageChannel class sent it after the application established a connection with the PubNub cloud. The following picture shows the message listed in the previously created client. Notice that the icon at the left-hand side of the text indicates it is a message. The other message was a debug message with the results of subscribing to the channel. At the bottom of the client pane, you will see the following text and the SEND button at the right-hand side:

{"text":"Enter Message Here"}

Now, we will replace that text with a message. Enter the following JSON code and click SEND:

{"command":"print_temperature_fahrenheit", "temperature_fahrenheit": 50}

The text editor where you enter the message has some issues in certain browsers. Thus, it is convenient to write the JSON code in your favorite text editor, copy it, and then paste it to replace the text that is included by default in the message to be sent. After you click SEND, the following lines will appear in the client log. The first line is a debug message with the results of publishing the message and indicates that the message has been sent. The formatted response includes a number (1 message), the status (Sent) and a time token. The second line is the message that arrives on the channel; because we are subscribed to the temperature channel, we also receive the message we sent.

[1,"Sent","14594756860875537"]
{
  "command": "print_temperature_fahrenheit",
  "temperature_fahrenheit": 50
}

The following picture shows the messages and debug messages log for the PubNub client after we clicked the SEND button. After you publish the previous message, you will see the following output in the SSH console for the Intel Galileo Gen 2 board. You will notice the servo's shaft rotates to 50 degrees.
I've received the following message: {u'command': u'print_temperature_fahrenheit', u'temperature_fahrenheit': 50}

Now, enter the following JSON code and click SEND:

{"command":"print_information_message", "text": "Client ready"}

After you click SEND, the following lines will appear in the client log. The first line is a debug message with the previously explained formatted response that indicates the message has been sent. The second line is the message that arrives on the channel; because we are subscribed to the temperature channel, we also receive the message we sent.

[1,"Sent","14594794434885921"]
{
  "command": "print_information_message",
  "text": "Client ready"
}

The following picture shows the messages and debug messages log for the PubNub client after we clicked the SEND button. After you publish the previous message, you will see the following output in the SSH console for the Intel Galileo Gen 2 board, and the following text displayed at the bottom of the OLED matrix: Client ready.

I've received the following message: {u'text': u'Client ready', u'command': u'print_information_message'}

When we published the two messages with the commands, we noticed a problem: we don't know whether the command was processed by the code running on the IoT device, that is, on the Intel Galileo Gen 2 board. We know that the board started listening to messages on the temperature channel, but we don't receive any kind of response from the IoT device after a command has been processed.

Working with bi-directional communications

We can easily add a few lines of code to publish a message to the same channel in which we are receiving messages to indicate that a command has been successfully processed. We will use our previous example, iot_python_chapter_09_02.py (you will find this in the code bundle), as a baseline and create a new version of the MessageChannel class. Don't forget to replace the strings assigned to the publish_key and subscribe_key local variables in the __init__ method with the values you have retrieved from the previously explained PubNub key generation process. The following lines show the new version of the MessageChannel class that publishes a message after a command has been successfully processed. The code file for the sample is iot_python_chapter_09_03.py (you will find this in the code bundle).
import time
from pubnub import Pubnub


class MessageChannel:
    command_key = "command"
    successfully_processed_command_key = "successfully_processed_command"

    def __init__(self, channel, temperature_servo, oled):
        self.temperature_servo = temperature_servo
        self.oled = oled
        self.channel = channel
        # Do not forget to replace the string with your publish key
        publish_key = "pub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        # Subscribe key is the one that usually starts with the "sub-c" prefix
        # Do not forget to replace the string with your subscribe key
        subscribe_key = "sub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        self.pubnub = Pubnub(publish_key=publish_key,
                             subscribe_key=subscribe_key)
        self.pubnub.subscribe(channels=self.channel,
                              callback=self.callback,
                              error=self.error,
                              connect=self.connect,
                              reconnect=self.reconnect,
                              disconnect=self.disconnect)

    def callback_response_message(self, message):
        print("I've received the following response from PubNub cloud: {0}".format(message))

    def error_response_message(self, message):
        print("There was an error when working with the PubNub cloud: {0}".format(message))

    def publish_response_message(self, message):
        response_message = {
            self.__class__.successfully_processed_command_key:
                message[self.__class__.command_key]}
        self.pubnub.publish(
            channel=self.channel,
            message=response_message,
            callback=self.callback_response_message,
            error=self.error_response_message)

    def callback(self, message, channel):
        if channel == self.channel:
            print("I've received the following message: {0}".format(message))
            if self.__class__.command_key in message:
                if message[self.__class__.command_key] == "print_temperature_fahrenheit":
                    self.temperature_servo.print_temperature(
                        message["temperature_fahrenheit"])
                    self.publish_response_message(message)
                elif message[self.__class__.command_key] == "print_information_message":
                    self.oled.print_line(11, message["text"])
                    self.publish_response_message(message)

    def error(self, message):
        print("Error: " + str(message))

    def connect(self, message):
        print("Connected to the {0} channel".format(self.channel))
        print(self.pubnub.publish(
            channel=self.channel,
            message="Listening to messages in the Intel Galileo Gen 2 board"))

    def reconnect(self, message):
        print("Reconnected to the {0} channel".format(self.channel))

    def disconnect(self, message):
        print("Disconnected from the {0} channel".format(self.channel))

The new version of the MessageChannel class introduces the following changes. First, the code declares the successfully_processed_command_key class attribute, which defines the key string that the code uses to mark a successfully processed command in a response message published to the channel. Whenever we publish a message that includes this key, we know that the value associated with it in the dictionary indicates the command that the board has successfully processed. The code declares the following three new methods:

- callback_response_message: This method will be used as the callback that is executed when a successfully processed command response message is published to the channel. The method just prints the formatted response that PubNub returns when a message has been successfully published in the channel. In this case, the message argument doesn't hold the original message that has been published; it holds the formatted response. We use message for the argument name to keep consistency with the PubNub API.
- error_response_message: This method will be used as the callback that is executed when an error occurs while trying to publish a successfully processed command response message to the channel. The method just prints the error message that PubNub returns when a message hasn't been successfully published in the channel.
- publish_response_message: This method receives the message with the command that was successfully processed in the message argument. The code creates a response_message dictionary with the successfully_processed_command_key class attribute as the key and the value of the key specified in the command_key class attribute of the message dictionary as the value. Then, the code calls the self.pubnub.publish method to publish the response_message dictionary to the channel saved in the channel attribute. The call to this method specifies self.callback_response_message as the callback to be executed when the message is successfully published and self.error_response_message as the callback to be executed when an error occurs during the publishing process. When we specify a callback, the publish method works with an asynchronous execution, and therefore, the execution is non-blocking. The publication of the message and the specified callbacks will run in a different thread.

Now, the callback method defined in the MessageChannel class adds a call to the publish_response_message method with the message that included the successfully processed command (message) as an argument. As previously explained, the publish_response_message method is non-blocking and returns immediately while the successfully processed command message is published in another thread.

Now, it is time to start running the example on the Intel Galileo Gen 2 board. The following line will start the example in the SSH console.

python iot_python_chapter_09_03.py

After you run the example, go to the Web browser in which you are working with the PubNub debug console. You will see the following message listed in the previously created client:

"Listening to messages in the Intel Galileo Gen 2 board"

Enter the following JSON code and click SEND:

{"command":"print_temperature_fahrenheit", "temperature_fahrenheit": 90}

After you click SEND, the following lines will appear in the client log. The last message, published by the board to the channel, indicates that the print_temperature_fahrenheit command has been successfully processed.

[1,"Sent","14595406989121047"]
{
  "command": "print_temperature_fahrenheit",
  "temperature_fahrenheit": 90
}
{
  "successfully_processed_command": "print_temperature_fahrenheit"
}

The following picture shows the messages and debug messages log for the PubNub client after we clicked the SEND button. After you publish the previous message, you will see the following output in the SSH console for the Intel Galileo Gen 2 board. You will notice the servo's shaft rotates to 90 degrees. The board also receives the successfully processed command message because it is subscribed to the channel in which the message has been published.
I've received the following message: {u'command': u'print_temperature_fahrenheit', u'temperature_fahrenheit': 90}
I've received the following response from PubNub cloud: [1, u'Sent', u'14595422426124592']
I've received the following message: {u'successfully_processed_command': u'print_temperature_fahrenheit'}

Now, enter the following JSON code and click SEND:

{"command":"print_information_message", "text": "2nd message"}

After you click SEND, the following lines will appear in the client log. The last message, published by the board to the channel, indicates that the print_information_message command has been successfully processed.

[1,"Sent","14595434708640961"]
{
  "command": "print_information_message",
  "text": "2nd message"
}
{
  "successfully_processed_command": "print_information_message"
}

The following picture shows the messages and debug messages log for the PubNub client after we clicked the SEND button. After you publish the previous message, you will see the following output in the SSH console for the Intel Galileo Gen 2 board and the following text displayed at the bottom of the OLED matrix: 2nd message. The board also receives the successfully processed command message because it is subscribed to the channel in which the message has been published.

I've received the following message: {u'text': u'2nd message', u'command': u'print_information_message'}
2nd message
I've received the following response from PubNub cloud: [1, u'Sent', u'14595434710438777']
I've received the following message: {u'successfully_processed_command': u'print_information_message'}

We can work with the different SDKs provided by PubNub to subscribe and publish to a channel. We can also make different IoT devices talk to each other by publishing messages to channels and processing them. In this case, we just created a few commands, and we didn't add detailed information about the device that has to process the command or the device that generated a specific message. A more complex API would require commands that include more information and security.

Publishing messages to the cloud with a Python PubNub client

So far, we have been using the PubNub debug console to publish messages to the temperature channel and make the Python code running on the Intel Galileo Gen 2 board process them. Now, we are going to code a Python client that publishes messages to the temperature channel. This way, we will be able to design applications that talk to IoT devices with Python code on both the publisher and the subscriber devices. We can run the Python client on another Intel Galileo Gen 2 board or on any device that has Python 2.7.x installed; in addition, the code will run with Python 3.x. For example, we can run the Python client on our computer. We just need to make sure that the device has the pubnub module installed, the same module we previously installed with pip for the Python version running on the board's Yocto Linux. We will create a Client class to represent a PubNub client, configure the PubNub subscription, make it easy to publish a message with a command and the required values for the command, and declare the code for the callbacks that are going to be executed when certain events fire. The code file for the sample is iot_python_chapter_09_04.py (you will find this in the code bundle). Don't forget to replace the strings assigned to the publish_key and subscribe_key local variables in the __init__ method with the values you have retrieved from the previously explained PubNub key generation process.
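If you decide to run the client on your computer, installing the module there might look like the following command; the version specifier is an assumption on my part, chosen to stay on the 3.x SDK that exposes the Pubnub class used in these examples:

pip install "pubnub>=3.7,<4"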
The following lines show the code for the Client class:

import time
from pubnub import Pubnub


class Client:
    command_key = "command"

    def __init__(self, channel):
        self.channel = channel
        # Publish key is the one that usually starts with the "pub-c-" prefix
        publish_key = "pub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        # Subscribe key is the one that usually starts with the "sub-c" prefix
        # Do not forget to replace the string with your subscribe key
        subscribe_key = "sub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        self.pubnub = Pubnub(publish_key=publish_key,
                             subscribe_key=subscribe_key)
        self.pubnub.subscribe(channels=self.channel,
                              callback=self.callback,
                              error=self.error,
                              connect=self.connect,
                              reconnect=self.reconnect,
                              disconnect=self.disconnect)

    def callback_command_message(self, message):
        print("I've received the following response from PubNub cloud: {0}".format(message))

    def error_command_message(self, message):
        print("There was an error when working with the PubNub cloud: {0}".format(message))

    def publish_command(self, command_name, key, value):
        command_message = {
            self.__class__.command_key: command_name,
            key: value}
        self.pubnub.publish(
            channel=self.channel,
            message=command_message,
            callback=self.callback_command_message,
            error=self.error_command_message)

    def callback(self, message, channel):
        if channel == self.channel:
            print("I've received the following message: {0}".format(message))

    def error(self, message):
        print("Error: " + str(message))

    def connect(self, message):
        print("Connected to the {0} channel".format(self.channel))
        print(self.pubnub.publish(
            channel=self.channel,
            message="Listening to messages in the PubNub Python Client"))

    def reconnect(self, message):
        print("Reconnected to the {0} channel".format(self.channel))

    def disconnect(self, message):
        print("Disconnected from the {0} channel".format(self.channel))

The Client class declares the command_key class attribute, which defines the key string that the code understands as a command in the messages. Our main goal is to build and publish command messages to a specified channel. We have to specify the PubNub channel name in the required channel argument. The constructor, that is, the __init__ method, saves the received argument in an attribute with the same name. We will be both a subscriber and a publisher for this channel. Then, the constructor declares two local variables: publish_key and subscribe_key. These local variables save the publish and subscribe keys we generated with the PubNub Admin Portal. Then, the code creates a new Pubnub instance with publish_key and subscribe_key as the arguments, and saves the reference to the new instance in the pubnub attribute. Finally, the code calls the subscribe method of the new instance to subscribe to data on the channel saved in the channel attribute. The call to this method specifies the methods declared in the Client class, as we did in our previous examples.

The publish_command method receives a command name, and the key and the value that provide the necessary information to execute the command, in the command_name, key and value required arguments. In this case, we don't target the command at a specific IoT device, so all the devices that subscribe to the channel and run the code from our previous example will process the commands that we publish. We can use the code as a baseline for more complex examples in which we generate commands that target specific IoT devices. Obviously, it is also necessary to improve the security.
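As a rough illustration of that idea, a command message could carry an extra key naming the device it is addressed to, and the board-side callback could skip commands meant for other devices. This is only a sketch: the target_device key and the BOARD_ID value are my own assumptions and are not part of the example code.

# Hypothetical sketch: addressing a command to one specific board.
BOARD_ID = "galileo-gen2-01"  # assumed identifier configured on each board

targeted_command = {
    "command": "print_information_message",
    "text": "Hello from the client",
    "target_device": BOARD_ID}  # hypothetical key, ignored by the example code

def is_addressed_to_this_board(message):
    # Process the command only when it names this board or specifies no target.
    return message.get("target_device", BOARD_ID) == BOARD_ID

Let's get back to the publish_command method as it is coded in the example.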
The method creates a dictionary and saves it in the command_message local variable. The command_key class attribute is the first key of the dictionary, and the command_name received as an argument is its value; together they compose the first key-value pair. Then, the code calls the self.pubnub.publish method to publish the command_message dictionary to the channel saved in the channel attribute. The call to this method specifies self.callback_command_message as the callback to be executed when the message is published successfully and self.error_command_message as the callback to be executed when an error occurs during the publishing process. As in our previous example, when we specify a callback, the publish method works with an asynchronous execution.

Now, we will use the previously coded Client class to write a __main__ method that uses the PubNub cloud to publish two commands that our board will process. The following lines show the code for the __main__ method. The code file for the sample is iot_python_chapter_09_04.py (you will find this in the code bundle).

if __name__ == "__main__":
    client = Client("temperature")
    client.publish_command(
        "print_temperature_fahrenheit",
        "temperature_fahrenheit",
        45)
    client.publish_command(
        "print_information_message",
        "text",
        "Python IoT")
    # Sleep 60 seconds (60000 milliseconds)
    time.sleep(60)

The code in the __main__ method is very easy to understand. The code creates an instance of the Client class with "temperature" as an argument to become both a subscriber and a publisher for this channel in the PubNub cloud, and saves the new instance in the client local variable. The code calls the publish_command method with the necessary arguments to build and publish the print_temperature_fahrenheit command with a temperature value of 45. The method publishes the command with an asynchronous execution. Then, the code calls the publish_command method again with the necessary arguments to build and publish the print_information_message command with a text value of "Python IoT". The method publishes the second command with an asynchronous execution. Finally, the code sleeps for 1 minute (60 seconds) to make it possible for the asynchronous executions to publish the commands successfully. The different callbacks defined in the Client class will be executed as the different events fire. As we are also subscribed to the channel, we will also receive the messages we publish on the temperature channel.

Keep the Python code we executed in our previous example running on the board, because we want the board to process our commands. In addition, keep the Web browser in which you are working with the PubNub debug console open, because we also want to see all the messages in the log. The following line will start the example for the Python client on any computer or device that you want to use as a client. It is also possible to run the code in another SSH terminal in case you want to use the same board as a client.

python iot_python_chapter_09_04.py

After you run the example, you will see the following output in the Python console that runs the client, that is, the iot_python_chapter_09_04.py Python script (you will find this in the code bundle).
Connected to the temperature channel
I've received the following response from PubNub cloud: [1, u'Sent', u'14596508980494876']
I've received the following response from PubNub cloud: [1, u'Sent', u'14596508980505581']
[1, u'Sent', u'14596508982165140']
I've received the following message: {u'text': u'Python IoT', u'command': u'print_information_message'}
I've received the following message: {u'command': u'print_temperature_fahrenheit', u'temperature_fahrenheit': 45}
I've received the following message: Listening to messages in the PubNub Python Client
I've received the following message: {u'successfully_processed_command': u'print_information_message'}
I've received the following message: {u'successfully_processed_command': u'print_temperature_fahrenheit'}

The code used the PubNub Python SDK to build and publish the following two command messages on the temperature channel:

{"command":"print_temperature_fahrenheit", "temperature_fahrenheit": 45}
{"command":"print_information_message", "text": "Python IoT"}

As we are also subscribed to the temperature channel, we receive the messages we sent with an asynchronous execution. Then, we receive the successfully processed command messages for the two command messages: the board has processed the commands and published the responses to the temperature channel. After you run the example, go to the Web browser in which you are working with the PubNub debug console. You will see the following messages listed in the previously created client:

[1,"Subscribed","temperature"]
"Listening to messages in the Intel Galileo Gen 2 board"
{
  "text": "Python IoT",
  "command": "print_information_message"
}
{
  "command": "print_temperature_fahrenheit",
  "temperature_fahrenheit": 45
}
"Listening to messages in the PubNub Python Client"
{
  "successfully_processed_command": "print_information_message"
}
{
  "successfully_processed_command": "print_temperature_fahrenheit"
}

The following picture shows the last messages displayed in the log for the PubNub client after we ran the previous example. You will see the following text displayed at the bottom of the OLED matrix: Python IoT. In addition, the servo's shaft will rotate to 45 degrees.

We can use the PubNub SDKs available in different programming languages to create applications and apps that publish and receive messages in the PubNub cloud and interact with IoT devices. In this case, we worked with the Python SDK to create a client that publishes commands. It is possible to create mobile apps that publish commands and, therefore, easily build an app that interacts with our IoT device.

Summary

In this article, we combined many cloud-based services that allowed us to easily publish data collected from sensors and visualize it in a Web-based dashboard. We realized that there is always a Python API, and therefore, it is easy to write Python code that interacts with popular cloud-based services.