
How-To Tutorials


The Parallel Universe of R

Packt
12 May 2016
10 min read
This article by Simon Chapple, author of the book Mastering Parallel Programming with R, helps us understand the intricacies of parallel computing. Here, we'll take a look into Delores' Crystal Ball at what the future holds for massively parallel computation that will likely have a significant impact on the world of R programming, particularly when applied to big data. (For more resources related to this topic, see here.)

Three steps to successful parallelization

The following three-step distilled guidance is intended to help you decide what form of parallelism might be best suited for your particular algorithm/problem and summarizes what you learned throughout this article. Necessarily, it applies a level of generalization, so approach these guidelines with due consideration:

1. Determine the type of parallelism that may best apply to your algorithm. Is the problem you are solving more computationally bound or data bound? If the former, then your problem may be amenable to GPUs; if the latter, then your problem may be more amenable to cluster-based computing; and if your problem requires a complex processing chain, then consider using the Spark framework. Can you divide the problem data/space to achieve a balanced workload across all processes, or do you need to employ an adaptive load-balancing scheme—for example, a task farm-based approach? Does your problem/algorithm naturally divide spatially? If so, consider whether a grid-based parallel approach can be used. Perhaps your problem is on an epic scale? If so, maybe you can develop your message passing-based code and run it on a supercomputer. Is there an implied sequential dependency between tasks; that is, do processes need to cooperate and share data during their computation, or can each separate divided task be executed entirely independently of one another? A large proportion of parallel algorithms typically have a work distribution phase, a parallel computation phase, and a result aggregation phase. To reduce the overhead of the startup and close-down phases, consider whether a tree-based approach to work distribution and result aggregation may be appropriate in your case.

2. Ensure that the basis of the compute in your algorithm has an optimal implementation. Profile your code in serial to determine whether there are any bottlenecks, and target these for improvement. Is there an existing parallel implementation similar to your algorithm that you can use directly or adopt? Review the CRAN Task View: High-Performance and Parallel Computing with R at https://cran.r-project.org/web/views/HighPerformanceComputing.html; in particular, take a look at the subsection entitled Parallel Computing: Applications, a snapshot of which at the time of writing can be seen in the following figure:

Figure 1: CRAN provides various parallelized packages you can use in your own program.

3. Test and evaluate the parallel efficiency of your implementation. Use the P-estimated form of Amdahl's Law to predict the level of scalability you can achieve (a sketch of the formula follows this list). Test your algorithm at varying amounts of parallelism, particularly odd numbers that trigger edge-case behaviors. Don't forget to run with just a single process. Running with more processes than processors can help trigger lurking deadlock/race conditions (this is most applicable to message passing-based implementations). Where possible, to reduce overhead, ensure that your method of deployment/initialization places the data being consumed locally to each parallel process.
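As a quick reference (this formula sketch is mine, not the article's, so treat the exact notation as an assumption), Amdahl's Law relates the speedup S obtained with N processors to the fraction P of the program that can be parallelized, and rearranging it lets you estimate P from a speedup you have actually measured at a given N:

$$S(N) = \frac{1}{(1 - P) + \frac{P}{N}}, \qquad P_{\text{estimated}} = \frac{\frac{1}{S} - 1}{\frac{1}{N} - 1}$$

Feeding the estimated P back into the first expression for larger N gives a rough prediction of how much further your implementation is likely to scale.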
What does the future hold?

Obviously, this final section is at a considerable risk of "crystal ball gazing" and getting it wrong. However, there are a number of clear directions in which we can see how both hardware and software will develop, which makes it clear that parallel programming will play an ever more important and increasing role in our computational future. Besides, it has now become critical for us to be able to process vast amounts of information within a short window of time in order to ensure our own individual and collective safety. For example, we are experiencing an increased momentum towards significant climate change and extreme weather events and will therefore require increasingly accurate weather prediction to help us deal with this; this will only be possible with highly efficient parallel algorithms.

In order to gaze into the future, we need to look back at the past. The hardware technology available to parallel computing has evolved at a phenomenal pace through the years. The levels of performance that can be achieved today by single-chip designs are truly staggering in terms of recent history.

The history of HPC

For an excellent infographic review of the development of computing performance, I would urge you to visit the following web page: http://pages.experts-exchange.com/processing-power-compared/

This beautifully illustrates how, for example, an iPhone 4 released in 2010 has near-equivalent performance to the Cray 2 supercomputer from 1985 of around 1.5 gigaflops, and the Apple Watch released in 2015 has around twice the performance of the iPhone 4 and Cray 2!

While chip manufacturers have managed to maintain the famous Moore's law that predicts transistor count doubling every two years, we are now at 14 nanometers (nm) in chip production, giving us around 100 complex processing cores in a single chip. In July 2015, IBM announced a prototype chip at 7 nm (1/10,000th the width of a human hair). Some scientists suggest that quantum tunneling effects will start to impact at 5 nm (which Intel expects to bring to the market by 2020), although a number of research groups have shown individual transistor construction as small as 1 nm in the lab using materials such as graphene. What all of this suggests is that the placement of 1,000 independent high-performance computational cores, together with sufficient amounts of high-speed cache memory, inside a single-chip package comparable to the size of today's chips could potentially be possible within the next 10 years.

NVIDIA and Intel are arguably at the forefront of dedicated HPC chip development with their respective offerings used in the world's fastest supercomputers, which can also be embedded in your desktop computer. NVIDIA produces Tesla, the K80 GPU-based accelerator, which now peaks at 1.87 teraflops double precision and 5.6 teraflops single precision, utilizing 4,992 cores (dual processor) and 24 GB of on-board RAM. Intel produces Xeon Phi, the collective family brand name for its Many Integrated Core (MIC) architecture; the new Knights Landing is expected to peak at 3 teraflops double precision and 6 teraflops single precision, utilizing 72 cores (single processor) and 16 GB of highly integrated on-chip fast memory when it is released, likely in fall 2015.
The successors to these chips, namely NVIDIA's Volta and Intel's Knights Hill, will be the foundation for the next generation of American $200 million supercomputers in 2018, delivering around 150 to 300 petaflops of peak performance (around 150 million iPhone 4s), as compared to China's Tianhe-2, ranked as the fastest supercomputer in the world in 2015, with a peak performance of around 50 petaflops from 3.1 million cores.

At the other extreme, within the somewhat smaller and less expensive world of mobile devices, most currently use between two and four cores, though mixed multicore capability such as ARM's big.LITTLE octacore makes eight cores available. However, this is already on the increase with, for example, MediaTek's new SoC MT6797, which has 10 main processing cores split into a pair and two groups of four cores with different clock speeds and power requirements to serve as the basis for the next generation of mobile phones. Top-end mobile devices, therefore, exhibit a rich heterogeneous architecture with mixed-power cores, separate sensor chips, GPUs, and Digital Signal Processors (DSP) to direct different aspects of workload to the most power-efficient component. Mobile phones increasingly act as the communications hub and signal-processing gateway for a plethora of additional devices, such as biometric wearables and the rapidly expanding number of ultra-low-power Internet of Things (IoT) sensing devices smartening all aspects of our local environment. While we are a little way off from running R itself natively on mobile devices, the time will come when we seek to harness the distributed computing power of all our mobile devices. In 2014 alone, around 1.25 billion smartphones were sold. That's a lot of crowd-sourced compute power, and it potentially far outstrips any dedicated supercomputer on the planet, either existing or planned.

The software that enables us to utilize parallel systems, which, as we noted, are increasingly heterogeneous, continues to evolve. In this book, we have examined how you can utilize OpenCL from R to gain access to both the GPU and CPU, making it possible to perform mixed computation across both components, exploiting the particular strengths of each for certain types of processing. Indeed, another related initiative, Heterogeneous System Architecture (HSA), which enables even lower-level access to the spectrum of processor capabilities, may well gain traction over the coming years and help promote the uptake of OpenCL and its counterparts.

HSA Foundation

The HSA Foundation was founded by a cross-industry group led by AMD, ARM, Imagination, MediaTek, Qualcomm, Samsung, and Texas Instruments. Its stated goal is to help support the creation of applications that seamlessly blend scalar processing on the CPU, parallel processing on the GPU, and optimized processing on the DSP via high-bandwidth shared memory access, enabling greater application performance at low power consumption. To enable this, the HSA Foundation is defining key interfaces for parallel computation using CPUs, GPUs, DSPs, and other programmable and fixed-function devices, thus supporting a diverse set of high-level programming languages and creating the next generation in general-purpose computing.
You can find the recently released version 1.0 of the HSA specification at the following link: http://www.hsafoundation.com/html/HSA_Library.htm

Hybrid parallelism

As a final wrapping up, I thought I would show how you can overcome some of the inherent single-threaded nature of R even further and demonstrate a hybrid approach to parallelism that combines two of the different techniques we covered previously within a single R program. We've also discussed how heterogeneous computing is potentially the way of the future. This example refers to the code we would develop to utilize MPI through pbdMPI together with ROpenCL to enable us to exploit both the CPU and GPU simultaneously. While this is a slightly contrived example and both devices compute the same dist() function, the intention is to show you just how far you can take things with R to get the most out of all your available compute resource. Basically, all we need to do is to top and tail our implementation of the dist() function in OpenCL with the appropriate pbdMPI initialization and termination and run the script with mpiexec on two processes, as follows:

# Initialise both ROpenCL and pbdMPI
require(ROpenCL)
library(pbdMPI, quietly = TRUE)
init()

# Select device based on my MPI rank
r <- comm.rank()
if (r == 0) { # use gpu
  device <- 1
} else {      # use cpu
  device <- 2
}
...
# Execute the OpenCL dist() function on my assigned device
comm.print(sprintf("%d executing on device %s", r, getDeviceType(deviceID)),
           all.rank = TRUE)
res <- teval(openclDist(kernel))
comm.print(sprintf("%d done in %f secs", r, res$Duration), all.rank = TRUE)
finalize()

This is simple and very effective!

Summary

In this article, we looked into Delores' Crystal Ball and saw the prospects for the combination of heterogeneous compute hardware that is here today and that will expand in capability even further in the future, not only in our supercomputers and laptops but also in our personal devices. Parallelism is the only way these systems can be utilized effectively. As the volume of new quantified-self and environmentally derived data increases and the number of cores in our compute architectures continues to rise, so does the importance of being able to write parallel programs to make use of it all—job security for parallel programmers looks good for many years to come!

Resources for Article:

Further resources on this subject:
Multiplying Performance with Parallel Computing [article]
Training and Visualizing a neural network with R [article]
Big Data Analysis (R and Hadoop) [article]


Make Your Presentation with IPython

Oli Huggins
11 May 2016
5 min read
The IPython ecosystem is very powerful, integrating many functionalities, and it has become very popular. Some people use it as a full IDE environment; for example, Rodeo was coded based on IPython. Some use IPython as a full platform, including coding presentations and even showing them. IPython is able to combine code, LaTeX, markdown, and HTML excerpts; display videos; and include graphics and interactive JavaScript objects. Here we will focus on the method of converting IPython notebooks to single HTML files, which can be displayed in any of the several modern browsers (for example, Chrome and Firefox).

IPython notebooks integrate an amazing tool called Reveal.js. Using nbconvert (which brings the possibility to export .ipynb to other formats such as markdown, latex, pdf, and also slides), the slides saved with IPython generate a Reveal.js-powered HTML slide show.

Requirements

Generally a single pip install should do the trick:

pip install ipython['all']

If you already have the IPython notebook installed, you can also upgrade it:

pip install --upgrade ipython['all']

It is also necessary to have a copy of Reveal.js inside the same directory where you will save the converted IPython notebook, considering that you will present your slides without a running IPython session.

Using Notebooks

Start running your IPython notebook with the ipython notebook command in your console, and it will display a session of IPython in your default browser. To start coding your presentation, in the top-right corner you will find a cascade menu called New. Just select Notebook and it will display a new file, which can be saved with your preferred filename via File > Rename.... Now, you have to select what type of content you will create. For this usage, you must select View > Cell toolbar > Slideshow or simply click on the CellToolbar button and select Slideshow. This will display in the top-right corner of each cell a Slide Type menu, which can be chosen between:

Slide: Creates a new slide
Sub-Slide: Creates a sub-slide, which will be displayed under the main slide
Fragment: This will not change the slide but will insert each cell as a fragment
Skip: It will not be displayed
Notes: Yes, with Reveal.js you can write your notes and use them when presenting

The content of each cell may be changed in the upper menu, where the default is code, but you can select between Markdown content, Raw NBConvert, and Heading. You have already started your presentation process, where you can insert code, figures, plots, videos, and much more. You can also include math equations using LaTeX coding, by simply changing the cell type to Markdown and typing, for example:

$$c = \sqrt{a^2 + b^2}$$

To include images, you must code this:

from IPython.display import Image

And choose Skip in the Slide Type menu. After this, you can select your disk image:

Image('my_figure.png')
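The same display machinery covers the other media types mentioned earlier, such as videos and HTML excerpts. The following is only a rough sketch of how that might look in a code cell; the video ID and the HTML snippet are placeholders for illustration, not values from this article:

# Hypothetical example cell: embed a video and an HTML excerpt in a slide.
# The YouTube ID and the HTML content below are placeholders.
from IPython.display import YouTubeVideo, HTML, display

# Embed a YouTube video by its ID
display(YouTubeVideo("abc123xyz00", width=640, height=360))

# Render an arbitrary HTML fragment inside the notebook/slide
display(HTML("<h3>Results</h3><p>Any HTML excerpt can be shown in a cell.</p>"))

As with the Image import above, a helper cell like this can have its Slide Type set to Skip if you do not want it to appear in the final slide show.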
Changing the Behavior and Customizing Your Presentation

By default, IPython will display your entry cell and its output, but you can also change this behavior. Include a cell at the top of your notebook, change the cell type to Raw NBConvert, include the following CSS code, and change Slide Type to Skip:

<style type="text/css">
.input_prompt, .input_area, .output_prompt {
    display:none !important;
}
</style>

You can customize whatever you want with CSS code, changing the default parameters set by Reveal.js. For example, to change the header font to Times, you just have to include the following snippet in the same first cell:

<style type="text/css">
.reveal h1, .reveal h2 {
    font-family:times
}
</style>

All of your CSS code may be included in a separate CSS file called custom.css and saved in the same directory where your HTML presentation will stand.

Generating Your Presentation

After all of your efforts in creating your presentation with Notebook, it is time to generate your HTML file. Using nbconvert, it is possible to set a lot of parameters (see ipython nbconvert --help to see them all).

ipython nbconvert presentation.ipynb --to slides

This command will generate a file called presentation.slides.html, which can be displayed with a browser. If you want to directly run your IPython notebook, you may use this:

ipython nbconvert presentation.ipynb --to slides --post serve

An interesting note about presenting your .html file is that you can use the Reveal.js shortcuts; for example, the key S will open another browser window displaying the elapsed time and a preview of the current slide, the next slide, and the notes associated with that specific slide. The key B turns your screen black, in case you want to avoid distracting people. If you press Esc or O, you will escape from full-screen mode and toggle the overview. Much more can be done by using Reveal.js together with IPython notebooks. For example, it is actually possible to live-run code inside your presentation using the extension RISE, which simulates a live IPython session.

Conclusion

The usage of IPython as a slide show platform is a way to show code, results, and other formatted math text very directly. Otherwise, if you want a fancier presentation, maybe this approach will push you to further customize the CSS code (it is also possible to use external themes, found on the Web, to generate the HTML files with nbconvert). Have a great time using IPython while coding your presentation and sharing your .ipynb files with the public via the IPython Slides Viewer.

About the author

Arnaldo D'Amaral Pereira Granja Russo has a PhD in Biological Oceanography and is a researcher at the Instituto Ambiental Boto Flipper (institutobotoflipper.org). While not teaching or programming he is surfing, climbing or paragliding. You can find him at www.ciclotux.org or on GitHub here.


Playing Tic-Tac-Toe against an AI

Packt
11 May 2016
30 min read
In this article by Ivo Gabe de Wolff, author of the book TypeScript Blueprints, we will build a game in which the computer will play well. The game is called Tic-Tac-Toe. The game is played by two players on a grid, usually three by three. The players try to place their symbols three in a row (horizontal, vertical or diagonal). The first player can place crosses, the second player places circles. If the board is full, and no one has three symbols in a row, it is a draw. (For more resources related to this topic, see here.)

The game is usually played on a three by three grid and the target is to have three symbols in a row. To make the application more interesting, we will make the dimension and the row length variable. We will not create a graphical interface for this application. We will only build the game mechanics and the artificial intelligence (AI). An AI is a player controlled by the computer. If implemented correctly, the computer should never lose on a standard three by three grid. When the computer plays against the computer, it will result in a draw. We will also write various unit tests for the application.

We will build the game as a command line application. That means you can play the game in a terminal. You can interact with the game only with text input.

It's player one's turn!
Choose one out of these options:

1
X|X| 
-+-+-
 |O| 
-+-+-
 | | 

2
X| |X
-+-+-
 |O| 
-+-+-
 | | 

3
X| | 
-+-+-
X|O| 
-+-+-
 | | 

4
X| | 
-+-+-
 |O|X
-+-+-
 | | 

5
X| | 
-+-+-
 |O| 
-+-+-
X| | 

6
X| | 
-+-+-
 |O| 
-+-+-
 |X| 

7
X| | 
-+-+-
 |O| 
-+-+-
 | |X

Creating the project structure

We will locate the source files in lib and the tests in lib/test. We use gulp to compile the project and AVA to run tests. We can install the dependencies of our project with NPM:

npm init -y
npm install ava gulp gulp-typescript --save-dev

In gulpfile.js, we configure gulp to compile our TypeScript files.

var gulp = require("gulp");
var ts = require("gulp-typescript");

var tsProject = ts.createProject("./lib/tsconfig.json");

gulp.task("default", function() {
  return tsProject.src()
    .pipe(ts(tsProject))
    .pipe(gulp.dest("dist"));
});

Configure TypeScript

We can download type definitions for NodeJS with NPM:

npm install @types/node --save-dev

We must exclude browser files in TypeScript. In lib/tsconfig.json, we add the configuration for TypeScript:

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs"
  }
}

For applications that run in the browser, you will probably want to target ES5, since ES6 is not supported in all browsers. However, this application will only be executed in NodeJS, so we do not have such limitations. You have to use NodeJS 6 or later for ES6 support.

Adding utility functions

Since we will work a lot with arrays, we can use some utility functions. First, we create a function that flattens a two dimensional array into a one dimensional array.

export function flatten<U>(array: U[][]) {
  return (<U[]>[]).concat(...array);
}

Next, we create a function that replaces a single element of an array with a specified value. We will use functional programming in this article again, so we must use immutable data structures. We can use map for this, since this function provides both the element and the index to the callback. With this index, we can determine whether that element should be replaced.

export function arrayModify<U>(array: U[], index: number, newValue: U) {
  return array.map((oldValue, currentIndex) => currentIndex === index ?
    newValue : oldValue);
}

We also create a function that returns a random integer under a certain upper bound.

export function randomInt(max: number) {
  return Math.floor(Math.random() * max);
}

We will use these functions in the next sections.

Creating the models

In lib/model.ts, we will create the model for our game. The model should contain the game state. We start with the player. The game is played by two players. Each field of the grid contains the symbol of a player or no symbol. We will model the grid as a two dimensional array, where each field can contain a player.

export type Grid = Player[][];

A player is either Player1, Player2 or no player.

export enum Player {
  Player1 = 1,
  Player2 = -1,
  None = 0
}

We have given these members values so we can easily get the opponent of a player.

export function getOpponent(player: Player): Player {
  return -player;
}

We create a type to represent an index of the grid. Since the grid is two dimensional, such an index requires two values.

export type Index = [number, number];

We can use this type to create two functions that get or update one field of the grid. We use functional programming in this article, so we will not modify the grid. Instead, we return a new grid with one field changed.

export function get(grid: Grid, [rowIndex, columnIndex]: Index) {
  const row = grid[rowIndex];
  if (!row) return undefined;
  return row[columnIndex];
}
export function set(grid: Grid, [row, column]: Index, value: Player) {
  return arrayModify(grid, row,
    arrayModify(grid[row], column, value)
  );
}

Showing the grid

To show the game to the user, we must convert a grid to a string. First, we will create a function that converts a player to a string, then a function that uses the previous function to show a row, and finally a function that uses these functions to show the complete grid. The string representation of a grid should have lines between the fields. We create these lines with standard characters (+, -, and |). This gives the following result:

X|X|O
-+-+-
 |O| 
-+-+-
X| | 

To convert a player to a string, we must get his symbol. For Player1, that is a cross and for Player2, a circle. If a field of the grid contains no symbol, we return a space to keep the grid aligned.

function showPlayer(player: Player) {
  switch (player) {
    case Player.Player1:
      return "X";
    case Player.Player2:
      return "O";
    default:
      return " ";
  }
}

We can use this function to show the tokens of all fields in a row. We add a separator between these fields.

function showRow(row: Player[]) {
  return row.map(showPlayer).reduce((previous, current) => previous + "|" + current);
}

Since we must do the same later on, but with a different separator, we create a small helper function that does this concatenation based on a separator.

const concat = (separator: string) => (left: string, right: string) => left + separator + right;

This function requires the separator and returns a function that can be passed to reduce. We can now use this function in showRow.

function showRow(row: Player[]) {
  return row.map(showPlayer).reduce(concat("|"));
}

We can also use this helper function to show the entire grid. First we must compose the separator, which is almost the same as showing a single row. Next, we can show the grid with this separator.

export function showGrid(grid: Grid) {
  const separator = "\n" + grid[0].map(() => "-").reduce(concat("+")) + "\n";
  return grid.map(showRow).reduce(concat(separator));
}

Creating operations on the grid

We will now create some functions that do operations on the grid.
These functions check whether the board is full, whether someone has won, and what options a player has. We can check whether the board is full by looking at all fields. If no field exists that has no symbol on it, the board is full, as every field has a symbol.

export function isFull(grid: Grid) {
  for (const row of grid) {
    for (const field of row) {
      if (field === Player.None) return false;
    }
  }
  return true;
}

To check whether a user has won, we must get a list of all horizontal, vertical and diagonal rows. For each row, we can check whether a row consists of a certain amount of the same symbols on a row. We store the grid as an array of the horizontal rows, so we can easily get those rows. We can also get the vertical rows relatively easily.

function allRows(grid: Grid) {
  return [
    ...grid,
    ...grid[0].map((field, index) => getVertical(index)),
    ...
  ];

  function getVertical(index: number) {
    return grid.map(row => row[index]);
  }
}

Getting a diagonal row requires some more work. We create a helper function that will walk on the grid from a start point, in a certain direction. We distinguish two different kinds of diagonals: a diagonal that goes to the lower-right and a diagonal that goes to the lower-left. For a standard three by three game, only two diagonals exist. However, a larger grid may have more diagonals. If the grid is 5 by 5, and the users should get three in a row, ten diagonals with a length of at least three exist:

0, 0 to 4, 4
0, 1 to 3, 4
0, 2 to 2, 4
1, 0 to 4, 3
2, 0 to 4, 2
4, 0 to 0, 4
3, 0 to 0, 3
2, 0 to 0, 2
4, 1 to 1, 4
4, 2 to 2, 4

The diagonals that go toward the lower-right start at the first column or at the first horizontal row. Other diagonals start at the last column or at the first horizontal row. In this function, we will just return all diagonals, even if they only have one element, since that is easy to implement. We implement this with a function that walks the grid to find the diagonal. That function requires a start position and a step function. The step function increments the position for a specific direction.

function allRows(grid: Grid) {
  return [
    ...grid,
    ...grid[0].map((field, index) => getVertical(index)),
    ...grid.map((row, index) => getDiagonal([index, 0], stepDownRight)),
    ...grid[0].slice(1).map((field, index) => getDiagonal([0, index + 1], stepDownRight)),
    ...grid.map((row, index) => getDiagonal([index, grid[0].length - 1], stepDownLeft)),
    ...grid[0].slice(1).map((field, index) => getDiagonal([0, index], stepDownLeft))
  ];

  function getVertical(index: number) {
    return grid.map(row => row[index]);
  }

  function getDiagonal(start: Index, step: (index: Index) => Index) {
    const row: Player[] = [];
    for (let index = start; get(grid, index) !== undefined; index = step(index)) {
      row.push(get(grid, index));
    }
    return row;
  }
  function stepDownRight([i, j]: Index): Index {
    return [i + 1, j + 1];
  }
  function stepDownLeft([i, j]: Index): Index {
    return [i + 1, j - 1];
  }
  function stepUpRight([i, j]: Index): Index {
    return [i - 1, j + 1];
  }
}

To check whether a row has a certain amount of the same elements on a row, we will create a function with some nice looking functional programming. The function requires the array, the player, and the index at which the checking starts. That index will usually be zero, but during recursion we can set it to a different value. originalLength contains the original length that a sequence should have. The last parameter, length, will have the same value in most cases, but in recursion we will change the value.
We start with some base cases. Every row contains a sequence of zero symbols, so we can always return true in such a case.

function isWinningRow(row: Player[], player: Player, index: number, originalLength: number, length: number): boolean {
  if (length === 0) {
    return true;
  }

If the row does not contain enough elements to form a sequence, the row will not have such a sequence and we can return false.

  if (index + length > row.length) {
    return false;
  }

For other cases, we use recursion. If the current element contains a symbol of the provided player, this row forms a sequence if the next length - 1 fields contain the same symbol.

  if (row[index] === player) {
    return isWinningRow(row, player, index + 1, originalLength, length - 1);
  }

Otherwise, the row should contain a sequence of the original length in some other position.

  return isWinningRow(row, player, index + 1, originalLength, originalLength);
}

If the grid is large enough, a row could contain a long enough sequence after a sequence that was too short. For instance, XXOXXX contains a sequence of length three. This function handles these rows correctly with the parameters originalLength and length.

Finally, we must create a function that returns all possible moves that a player can make. To implement this function, we must first find all indices. We filter these indices to indices that reference an empty field. For each of these indices, we change the value of the grid into the specified player. This results in a list of options for the player.

export function getOptions(grid: Grid, player: Player) {
  const rowIndices = grid.map((row, index) => index);
  const columnIndices = grid[0].map((column, index) => index);

  const allFields = flatten(rowIndices.map(
    row => columnIndices.map(column => <Index>[row, column])
  ));

  return allFields
    .filter(index => get(grid, index) === Player.None)
    .map(index => set(grid, index, player));
}

The AI will use this to choose the best option and a human player will get a menu with these options.

Creating the grid

Before the game can be started, we must create an empty grid. We will write a function that creates an empty grid with the specified size.

export function createGrid(width: number, height: number) {
  const grid: Grid = [];
  for (let i = 0; i < height; i++) {
    grid[i] = [];
    for (let j = 0; j < width; j++) {
      grid[i][j] = Player.None;
    }
  }
  return grid;
}

In the next section, we will add some tests for the functions that we have written. These functions work on the grid, so it will be useful to have a function that can parse a grid based on a string. We will separate the rows of a grid with a semicolon. Each row contains tokens for each field. For instance, "XXO; O ;X  " results in this grid:

X|X|O
-+-+-
 |O| 
-+-+-
X| | 

We can implement this by splitting the string into an array of lines. For each line, we split the line into an array of characters. We map these characters to a Player value.

export function parseGrid(input: string) {
  const lines = input.split(";");
  return lines.map(parseLine);

  function parseLine(line: string) {
    return line.split("").map(parsePlayer);
  }
  function parsePlayer(character: string) {
    switch (character) {
      case "X":
        return Player.Player1;
      case "O":
        return Player.Player2;
      default:
        return Player.None;
    }
  }
}

In the next section we will use this function to write some tests.

Adding tests

We will use AVA to write tests for our application. Since the functions do not have side effects, we can easily test them. In lib/test/winner.ts, we test the findWinner function.
First, we check whether the function recognizes the winner in some simple cases.

import test from "ava";
import { Player, parseGrid, findWinner } from "../model";

test("player winner", t => {
  t.is(findWinner(parseGrid("   ;XXX;   "), 3), Player.Player1);
  t.is(findWinner(parseGrid("   ;OOO;   "), 3), Player.Player2);
  t.is(findWinner(parseGrid("   ;   ;   "), 3), Player.None);
});

We can also test all possible three-in-a-row positions in the three by three grid. With this test, we can find out whether horizontal, vertical, and diagonal rows are checked correctly.

test("3x3 winner", t => {
  const grids = [
    "XXX;   ;   ",
    "   ;XXX;   ",
    "   ;   ;XXX",
    "X  ;X  ;X  ",
    " X ; X ; X ",
    "  X;  X;  X",
    "X  ; X ;  X",
    "  X; X ;X  "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 3), Player.Player1);
  }
});

We must also test that the function does not claim that someone won too often. In the next test, we validate that the function does not return a winner for grids that do not have a winner.

test("3x3 no winner", t => {
  const grids = [
    "XXO;OXX;XOO",
    "   ;   ;   ",
    "XXO;   ;OOX",
    "X  ;X  ; X "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 3), Player.None);
  }
});

Since the game also supports other dimensions, we should check these too. We check that all diagonals of a four by three grid are checked correctly, where the length of a sequence should be two.

test("4x3 winner", t => {
  const grids = [
    "X   ; X  ;    ",
    " X  ;  X ;    ",
    "  X ;   X;    ",
    "    ;X   ; X  ",
    "  X ;   X;    ",
    " X  ;  X ;    ",
    "X   ; X  ;    ",
    "    ;   X;  X "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 2), Player.Player1);
  }
});

You can of course add more test grids yourself. Add tests before you fix a bug. These tests should show the wrong behavior related to the bug. When you have fixed the bug, these tests should pass. This prevents the bug from returning in a future version.

Random testing

Instead of running the test on some predefined set of test cases, you can also write tests that run on random data. You cannot compare the output of a function directly with an expected value, but you can check some properties of it. For instance, getOptions should return an empty list if and only if the board is full. We can use this property to test getOptions and isFull. First, we create a function that randomly chooses a player. To increase the chance of a full grid, we add some extra weight to the players compared to an empty field.

import test from "ava";
import { createGrid, Player, isFull, getOptions } from "../model";
import { randomInt } from "../utils";

function randomPlayer() {
  switch (randomInt(4)) {
    case 0:
    case 1:
      return Player.Player1;
    case 2:
    case 3:
      return Player.Player2;
    default:
      return Player.None;
  }
}

We create 10000 random grids with this function. The dimensions and the fields are chosen randomly.

test("get-options", t => {
  for (let i = 0; i < 10000; i++) {
    const grid = createGrid(randomInt(10) + 1, randomInt(10) + 1)
      .map(row => row.map(randomPlayer));

Next, we check whether the property that we describe holds for this grid.

    const options = getOptions(grid, Player.Player1);
    t.is(isFull(grid), options.length === 0);

We also check that the function does not give the same option twice.

    for (let i = 1; i < options.length; i++) {
      for (let j = 0; j < i; j++) {
        t.notSame(options[i], options[j]);
      }
    }
  }
});

Depending on how critical a function is, you can add more tests.
In this case, you could check that only one field is modified in an option or that only an empty field can be modified in an option.

Now you can run the tests using gulp && ava dist/test. You can add this to your package.json file. In the scripts section, you can add commands that you want to run. With npm run xxx, you can run task xxx. npm test was added as shorthand for npm run test, since the test command is often used.

{
  "name": "article-7",
  "version": "1.0.0",
  "scripts": {
    "test": "gulp && ava dist/test"
  },
  ...

Implementing the AI using Minimax

We create an AI based on Minimax. The computer cannot know what his opponent will do in the next steps, but he can check what he can do in the worst case. The minimum outcome of these worst cases is maximized by this algorithm. This behavior has given Minimax its name.

To learn how Minimax works, we will take a look at the value or score of a grid. If the game is finished, we can easily define its value: if you won, the value is 1; if you lost, -1; and if it is a draw, 0. Thus, for player 1 the next grid has value 1 and for player 2 the value is -1.

X|X|X
-+-+-
O|O| 
-+-+-
X|O| 

We will also define the value of a grid for a game that has not been finished. We take a look at the following grid:

X| |X
-+-+-
O|O| 
-+-+-
O|X| 

It is player 1's turn. He can place his stone on the top row, and he would win, resulting in a value of 1. He can also choose to lay his stone on the second row. Then the game will result in a draw, if player 2 is not dumb, with score 0. If he chooses to place the stone on the last row, player 2 can win, resulting in -1. We assume that player 1 is smart and that he will go for the first option. Thus, we could say that the value of this unfinished game is 1.

We will now formalize this. In the previous paragraph, we have summed up all options for the player. For each option, we have calculated the minimum value that the player could get if he would choose that option. From these options, we have chosen the maximum value. Minimax chooses the option with the highest value of all options.
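Written as a formula (this notation is mine, not the book's), the value of a grid g for player p is the terminal score when the game is over, and otherwise the best of the options, where each option is worth the negative of its value for the opponent:

$$\operatorname{value}(g, p) = \begin{cases} +1 & \text{if } p \text{ has won} \\ -1 & \text{if the opponent has won} \\ 0 & \text{if the grid is full} \\ \max_{o \in \operatorname{options}(g, p)} \bigl(-\operatorname{value}(o, \operatorname{opponent}(p))\bigr) & \text{otherwise} \end{cases}$$

Negating the opponent's value is what lets the implementation below get away with a single maximum: maximizing the negated opponent value is the same as minimizing the opponent's best outcome.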
Implementing Minimax in TypeScript

As you can see, the definition of Minimax looks like you can implement it with recursion. We create a function that returns both the best option and the value of the game. A function can only return a single value, but multiple values can be combined into a single value in a tuple, which is an array with these values. First, we handle the base cases. If the game is finished, the player has no options and the value can be calculated directly.

import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";

export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] {
  const winner = findWinner(grid, rowLength);
  if (winner === player) {
    return [undefined, 1];
  } else if (winner !== Player.None) {
    return [undefined, -1];
  } else if (isFull(grid)) {
    return [undefined, 0];

Otherwise, we list all options. For all options, we calculate the value. The value of an option is the same as the opposite of the value of the option for the opponent. Finally, we choose the option with the best value.

  } else {
    let options = getOptions(grid, player);
    const opponent = getOpponent(player);
    return options.map<[Grid, number]>(
      option => [option, -(minimax(option, rowLength, opponent)[1])]
    ).reduce(
      (previous, current) => previous[1] < current[1] ? current : previous
    );
  }
}

When you use tuple types, you should explicitly add a type definition for it. Since tuples are arrays too, an array type is automatically inferred. When you add the tuple as the return type, expressions after the return keyword will be inferred as these tuples. For options.map, you can mention the tuple type as a type argument or by specifying it in the callback function (options.map((option): [Grid, number] => ...);).

You can easily see that such an AI can also be used for other kinds of games. Actually, the minimax function has no direct reference to Tic-Tac-Toe; only findWinner, isFull and getOptions are related to Tic-Tac-Toe.

Optimizing the algorithm

The Minimax algorithm can be slow. Choosing the first move, especially, takes a long time since the algorithm tries all ways of playing the game. We will use two techniques to speed up the algorithm. First, we can use the symmetry of the game. When the board is empty, it does not matter whether you place a stone in the upper-left corner or the lower-right corner. Rotating the grid around the center 180 degrees gives an equivalent board. Thus, we only need to take a look at half the options when the board is empty. Secondly, we can stop searching for options if we found an option with value 1. Such an option is already the best thing to do. Implementing these techniques gives the following function:

import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";

export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] {
  const winner = findWinner(grid, rowLength);
  if (winner === player) {
    return [undefined, 1];
  } else if (winner !== Player.None) {
    return [undefined, -1];
  } else if (isFull(grid)) {
    return [undefined, 0];
  } else {
    let options = getOptions(grid, player);
    const gridSize = grid.length * grid[0].length;
    if (options.length === gridSize) {
      options = options.slice(0, Math.ceil(gridSize / 2));
    }
    const opponent = getOpponent(player);
    let best: [Grid, number];
    for (const option of options) {
      const current: [Grid, number] = [option, -(minimax(option, rowLength, opponent)[1])];
      if (current[1] === 1) {
        return current;
      } else if (best === undefined || current[1] > best[1]) {
        best = current;
      }
    }
    return best;
  }
}

This will speed up the AI. In the next sections we will implement the interface for the game and we will write some tests for the AI.

Creating the interface

NodeJS can be used to create servers. You can also create tools with a command line interface (CLI). For instance, gulp, NPM and typings are command line interfaces built with NodeJS. We will use NodeJS to create the interface for our game.

Handling interaction

The interaction from the user can only happen by text input in the terminal. When the game starts, it will ask the user some questions about the configuration: width, height, row length for a sequence, and the player(s) that are played by the computer. The highlighted lines are the input of the user.

Tic-Tac-Toe

Width
3
Height
3
Row length
2
Who controls player 1?
1    You
2    Computer
1
Who controls player 2?
1    You
2    Computer
1

During the game, the game asks the user which of the possible options he wants to do. All possible moves are shown on the screen, with an index. The user can type the index of the option he wants.

X| | 
-+-+-
O|O| 
-+-+-
 |X| 

It's player one's turn!
Choose one out of these options:

1
X|X| 
-+-+-
O|O| 
-+-+-
 |X| 

2
X| |X
-+-+-
O|O| 
-+-+-
 |X| 

3
X| | 
-+-+-
O|O|X
-+-+-
 |X| 

4
X| | 
-+-+-
O|O| 
-+-+-
X|X| 

5
X| | 
-+-+-
O|O| 
-+-+-
 |X|X

A NodeJS application has three standard streams to interact with the user. Standard input, stdin, is used to receive input from the user.
Standard output, stdout, is used to show text to the user. Standard error, stderr, is used to show error messages to the user. You can access these streams with process.stdin, process.stdout and process.stderr. You have probably already used console.log to write text to the console. This function writes the text to stdout. We will use console.log to write text to stdout and we will not use stderr.

We will create a helper function that reads a line from stdin. This is an asynchronous task: the function starts listening and resolves when the user hits enter. In lib/cli.ts, we start by importing the types and functions that we have written.

import { Grid, Player, getOptions, getOpponent, showGrid, findWinner, isFull, createGrid } from "./model";
import { minimax } from "./ai";

We can listen to input from stdin using the data event. The process sends either the string or a buffer, an efficient way to store binary data in memory. With once, the callback will only be fired once. If you want to listen to all occurrences of the event, you can use on.

function readLine() {
  return new Promise<string>(resolve => {
    process.stdin.once("data", (data: string | Buffer) => resolve(data.toString()));
  });
}

We can easily use readLine in async functions. For instance, we can now create a function that reads, parses and validates a line. We can use this to read the input of the user, parse it to a number, and finally check that the number is within a certain range. This function will return the value if it passes the validator. Otherwise it shows a message and retries.

async function readAndValidate<U>(message: string, parse: (data: string) => U, validate: (value: U) => boolean): Promise<U> {
  const data = await readLine();
  const value = parse(data);
  if (validate(value)) {
    return value;
  } else {
    console.log(message);
    return readAndValidate(message, parse, validate);
  }
}

We can use this function to show a question where the user has various options. The user should type the index of his answer. This function validates that the index is within bounds. We will show indices starting at 1 to the user, so we must carefully handle these.

async function choose(question: string, options: string[]) {
  console.log(question);
  for (let i = 0; i < options.length; i++) {
    console.log((i + 1) + "\t" + options[i].replace(/\n/g, "\n\t"));
    console.log();
  }
  return await readAndValidate(
    `Enter a number between 1 and ${ options.length }`,
    parseInt,
    index => index >= 1 && index <= options.length
  ) - 1;
}

Creating players

A player could either be a human or the computer. We create a type that can contain both kinds of players.

type PlayerController = (grid: Grid) => Grid | Promise<Grid>;

Next we create a function that creates such a player. For a user, we must first know whether he is the first or the second player. Then we return an async function that asks the player which move he wants to make.

const getUserPlayer = (player: Player) => async (grid: Grid) => {
  const options = getOptions(grid, player);
  const index = await choose("Choose one out of these options:", options.map(showGrid));
  return options[index];
};

For the AI player, we must know the player index and the length of a sequence. We use these variables and the grid of the game to run the Minimax algorithm.

const getAIPlayer = (player: Player, rowLength: number) => (grid: Grid) =>
  minimax(grid, rowLength, player)[0];

Now we can create a function that asks the user whether a player should be played by the user or the computer.
async function getPlayer(index: number, player: Player, rowLength: number): Promise<PlayerController> {
  switch (await choose(`Who controls player ${ index }?`, ["You", "Computer"])) {
    case 0:
      return getUserPlayer(player);
    default:
      return getAIPlayer(player, rowLength);
  }
}

We combine these functions in a function that handles the whole game. First, we must ask the user to provide the width, height and length of a sequence.

export async function game() {
  console.log("Tic-Tac-Toe");
  console.log();
  console.log("Width");
  const width = await readAndValidate("Enter an integer", parseInt, isFinite);
  console.log("Height");
  const height = await readAndValidate("Enter an integer", parseInt, isFinite);
  console.log("Row length");
  const rowLength = await readAndValidate("Enter an integer", parseInt, isFinite);

We ask the user which players should be controlled by the computer.

  const player1 = await getPlayer(1, Player.Player1, rowLength);
  const player2 = await getPlayer(2, Player.Player2, rowLength);

The user can now play the game. We do not use a loop, but we use recursion to give the players their turns.

  return play(createGrid(width, height), Player.Player1);

  async function play(grid: Grid, player: Player): Promise<[Grid, Player]> {

In every step, we show the grid. If the game is finished, we show which player has won.

    console.log();
    console.log(showGrid(grid));
    console.log();

    const winner = findWinner(grid, rowLength);
    if (winner === Player.Player1) {
      console.log("Player 1 has won!");
      return <[Grid, Player]>[grid, winner];
    } else if (winner === Player.Player2) {
      console.log("Player 2 has won!");
      return <[Grid, Player]>[grid, winner];
    } else if (isFull(grid)) {
      console.log("It's a draw!");
      return <[Grid, Player]>[grid, Player.None];
    }

If the game is not finished, we ask the current player or the computer which move he wants to make.

    console.log(`It's player ${ player === Player.Player1 ? "one's" : "two's" } turn!`);

    const current = player === Player.Player1 ? player1 : player2;
    return play(await current(grid), getOpponent(player));
  }
}

In lib/index.ts, we can start the game. When the game is finished, we must manually exit the process.

import { game } from "./cli";

game().then(() => process.exit());

We can compile and run this in a terminal:

gulp && node --harmony_destructuring dist

At the time of writing, NodeJS requires the --harmony_destructuring flag to allow destructuring, like [x, y] = z. In future versions of NodeJS, this flag will be removed and you can run it without it.

Testing the AI

We will add some tests to check that the AI works properly. For a standard three by three game, the AI should never lose. That means when an AI plays against an AI, it should result in a draw. We can add a test for this. In lib/test/ai.ts, we import AVA and our own definitions.

import test from "ava";
import { createGrid, Grid, findWinner, isFull, getOptions, Player } from "../model";
import { minimax } from "../ai";
import { randomInt } from "../utils";

We create a function that simulates the whole gameplay.

type PlayerController = (grid: Grid) => Grid;
function run(grid: Grid, a: PlayerController, b: PlayerController): Player {
  const winner = findWinner(grid, 3);
  if (winner !== Player.None) return winner;
  if (isFull(grid)) return Player.None;
  return run(a(grid), b, a);
}

We write a function that executes a move for the AI.
const aiPlayer = (player: Player) => (grid: Grid) =>
  minimax(grid, 3, player)[0];

Now we create the test that validates that a game where the AI plays against the AI results in a draw.

test("AI vs AI", t => {
  const result = run(createGrid(3, 3), aiPlayer(Player.Player1), aiPlayer(Player.Player2));
  t.is(result, Player.None);
});

Testing with a random player

We can also test what happens when the AI plays against a random player or when a random player plays against the AI. The AI should win or it should result in a draw. We run these multiple times, which is what you should always do when you use randomization in your tests. We create a function that creates the random player.

const randomPlayer = (player: Player) => (grid: Grid) => {
  const options = getOptions(grid, player);
  return options[randomInt(options.length)];
};

We write the two tests that both run 20 games with a random player and an AI.

test("random vs AI", t => {
  for (let i = 0; i < 20; i++) {
    const result = run(createGrid(3, 3), randomPlayer(Player.Player1), aiPlayer(Player.Player2));
    t.not(result, Player.Player1);
  }
});

test("AI vs random", t => {
  for (let i = 0; i < 20; i++) {
    const result = run(createGrid(3, 3), aiPlayer(Player.Player1), randomPlayer(Player.Player2));
    t.not(result, Player.Player2);
  }
});

We have written different kinds of tests:

Tests that check the exact results of a single function
Tests that check a certain property of the results of a function
Tests that check a big component

Always start writing tests for small components. If the AI tests should fail, that could be caused by a mistake in findWinner, isFull or getOptions, so it is hard to find the location of the error. Only testing small components is not enough; bigger tests, such as the AI tests, are closer to what the user will do. Bigger tests are harder to create, especially when you want to test the user interface. You must also not forget that tests cannot guarantee that your code runs correctly; they just guarantee that your test cases work correctly.

Summary

In this article, we have written an AI for Tic-Tac-Toe. With the command line interface, you can play this game against the AI or another human. You can also see how the AI plays against the AI. We have written various tests for the application. You have learned how Minimax works for turn-based games. You can apply this to other turn-based games as well. If you want to know more about strategies for such games, you can take a look at game theory, the mathematical study of these games.

Resources for Article:

Further resources on this subject:
Basic Website using Node.js and MySQL database [article]
Data Science with R [article]
Web Typography [article]


Testing Your Application with cljs.test

Packt
11 May 2016
13 min read
In this article written by David Jarvis, Rafik Naccache, and Allen Rohner, authors of the book Learning ClojureScript, we'll take a look at how to configure our ClojureScript application or library for testing. As usual, we'll start by creating a new project for us to play around with:

$ lein new figwheel testing

(For more resources related to this topic, see here.)

We'll be playing around in a test directory. Most JVM Clojure projects will have one already, but since the default Figwheel template doesn't include a test directory, let's make one first (following the same convention used with source directories, that is, instead of src/$PROJECT_NAME we'll create test/$PROJECT_NAME):

$ mkdir -p test/testing

We'll now want to make sure that Figwheel knows that it has to watch the test directory for file modifications. To do that, we will edit the dev build in our project.clj project's :cljsbuild map so that its :source-paths vector includes both src and test. Your new dev build configuration should look like the following:

{:id "dev"
 :source-paths ["src" "test"]
 ;; If no code is to be run, set :figwheel true for continued automagical reloading
 :figwheel {:on-jsload "testing.core/on-js-reload"}
 :compiler {:main testing.core
            :asset-path "js/compiled/out"
            :output-to "resources/public/js/compiled/testing.js"
            :output-dir "resources/public/js/compiled/out"
            :source-map-timestamp true}}

Next, we'll get the old Figwheel REPL going so that we can have our ever familiar hot reloading:

$ cd testing
$ rlwrap lein figwheel

Don't forget to navigate a browser window to http://localhost:3449/ to get the browser REPL to connect.

Now, let's create a new core_test.cljs file in the test/testing directory. By convention, most libraries and applications in Clojure and ClojureScript have test files that correspond to source files with the suffix _test. In this project, this means that test/testing/core_test.cljs is intended to contain the tests for src/testing/core.cljs. Let's get started by just running tests on a single file. Inside core_test.cljs, let's add the following code:

(ns testing.core-test
  (:require [cljs.test :refer-macros [deftest is]]))

(deftest i-should-fail
  (is (= 1 0)))

(deftest i-should-succeed
  (is (= 1 1)))

This code first requires two of the most important cljs.test macros, and then gives us two simple examples of what a failed test and a successful test should look like. At this point, we can run our tests from the Figwheel REPL:

cljs.user=> (require 'testing.core-test)
;; => nil
cljs.user=> (cljs.test/run-tests 'testing.core-test)

Testing testing.core-test

FAIL in (i-should-fail) (cljs/test.js?zx=icyx7aqatbda:430:14)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 2 tests containing 2 assertions.
1 failures, 0 errors.
;; => nil

At this point, what we've got is tolerable, but it's not really practical in terms of being able to test a larger application. We don't want to have to test our application in the REPL and pass in our test namespaces one by one. The current idiomatic solution for this in ClojureScript is to write a separate test runner that is responsible for importing your test namespaces and then running all of your tests. Let's take a look at what this looks like. Let's start by creating another test namespace.
Let's call this one app_test.cljs, and we'll put the following in it: (ns testing.app-test (:require [cljs.test :refer-macros [deftest is]])) (deftest another-successful-test (is (= 4 (count "test")))) We will not do anything remarkable here; it's just another test namespace with a single test that should pass by itself. Let's quickly make sure that's the case at the REPL: cljs.user=> (require 'testing.app-test) nil cljs.user=> (cljs.test/run-tests 'testing.app-test) Testing testing.app-test Ran 1 tests containing 1 assertions. 0 failures, 0 errors. ;; => nil Perfect. Now, let's write a test runner. Let's open a new file that we'll simply call test_runner.cljs, and let's include the following: (ns testing.test-runner (:require [cljs.test :refer-macros [run-tests]] [testing.app-test] [testing.core-test])) ;; This isn't strictly necessary, but is a good idea depending ;; upon your application's ultimate runtime engine. (enable-console-print!) (defn run-all-tests [] (run-tests 'testing.app-test 'testing.core-test)) Again, nothing surprising. We're just making a single function for us that runs all of our tests. This is handy for us at the REPL: cljs.user=> (testing.test-runner/run-all-tests) Testing testing.app-test Testing testing.core-test FAIL in (i-should-fail) (cljs/test.js?zx=icyx7aqatbda:430:14) expected: (= 1 0) actual: (not (= 1 0)) Ran 3 tests containing 3 assertions. 1 failures, 0 errors. ;; => nil Ultimately, however, we want something we can run at the command line so that we can use it in a continuous integration environment. There are a number of ways we can go about configuring this directly, but if we're clever, we can let someone else do the heavy lifting for us. Enter doo, the handy ClojureScript testing plugin for Leiningen. Using doo for easier testing configuration doo is a library and Leiningen plugin for running cljs.test in many different JavaScript environments. It makes it easy to test your ClojureScript regardless of whether you're writing for the browser or for the server, and it also includes file watching capabilities such as Figwheel so that you can automatically rerun tests on file changes. The doo project page can be found at https://github.com/bensu/doo. To configure our project to use doo, first we need to add it to the list of plugins in our project.clj file. Modify the :plugins key so that it looks like the following: :plugins [[lein-figwheel "0.5.2"] [lein-doo "0.1.6"] [lein-cljsbuild "1.1.3" :exclusions [[org.clojure/clojure]]]] Next, we will add a new cljsbuild build configuration for our test runner. Add the following build map after the dev build map on which we've been working with until now: {:id "test" :source-paths ["src" "test"] :compiler {:main testing.test-runner :output-to "resources/public/js/compiled/testing_test.js" :optimizations :none}} This configuration tells Cljsbuild to use both our src and test directories, just like our dev profile. It adds some different configuration elements to the compiler options, however. First, we're not using testing.core as our main namespace anymore—instead, we'll use our test runner's namespace, testing.test-runner. We will also change the output JavaScript file to a different location from our compiled application code. Lastly, we will make sure that we pass in :optimizations :none so that the compiler runs quickly and doesn't have to do any magic to look things up. 
Note that our currently running Figwheel process won't know about the fact that we've added lein-doo to our list of plugins or that we've added a new build configuration. If you want to make Figwheel aware of doo in a way that'll allow them to play nicely together, you should also add doo as a dependency to your project. Once you've done that, exit the Figwheel process and restart it after you've saved the changes to project.clj. Lastly, we need to modify our test runner namespace so that it's compatible with doo. To do this, open test_runner.cljs and change it to the following: (ns testing.test-runner (:require [doo.runner :refer-macros [doo-tests]] [testing.app-test] [testing.core-test])) ;; This isn't strictly necessary, but is a good idea depending ;; upon your application's ultimate runtime engine. (enable-console-print!) (doo-tests 'testing.app-test 'testing.core-test) This shouldn't look too different from our original test runner—we're just importing from doo.runner rather than cljs.test and using doo-tests instead of a custom runner function. The doo-tests runner works very similarly to cljs.test/run-tests, but it places hooks around the tests to know when to start them and finish them. We're also putting this at the top-level of our namespace rather than wrapping it in a particular function. The last thing we're going to need to do is to install a JavaScript runtime that we can use to execute our tests with. Up until now, we've been using the browser via Figwheel, but ideally, we want to be able to run our tests in a headless environment as well. For this purpose. we recommend installing PhantomJS (though other execution environments are also fine). If you're on OS X and have Homebrew installed (http://www.brew.sh), installing PhantomJS is as simple as typing brew install phantomjs. If you're not on OS X or don't have Homebrew, you can find instructions on how to install PhantomJS on the project's website at http://phantomjs.org/. The key thing is that the following should work: $ phantomjs -v 2.0.0 Once you've got PhantomJS installed, you can now invoke your test runner from the command line with the following: $ lein doo phantom test once ;; ====================================================================== ;; Testing with Phantom: Testing testing.app-test Testing testing.core-test FAIL in (i-should-fail) (:) expected: (= 1 0) actual: (not (= 1 0)) Ran 3 tests containing 3 assertions. 1 failures, 0 errors. Subprocess failed Let's break down this command. The first part, lein doo, just tells Leiningen to invoke the doo plugin. Next, we have phantom, which tells doo to use PhantomJS as its running environment. The doo plugin supports a number of other environments, including Chrome, Firefox, Internet Explorer, Safari, Opera, SlimerJS, NodeJS, Rhino, and Nashorn. Be aware that if you're interested in running doo on one of these other environments, you may have to configure and install additional software. For instance, if you want to run tests on Chrome, you'll need to install Karma as well as the appropriate Karma npm modules to enable Chrome interaction. Next we have test, which refers to the cljsbuild build ID we set up earlier. Lastly, we have once, which tells doo to just run tests and not to set up a filesystem watcher. If, instead, we wanted doo to watch the filesystem and rerun tests on any changes, we would just use lein doo phantom test. Testing fixtures The cljs.test project has support for adding fixtures to your tests that can run before and after your tests. 
Test fixtures are useful for establishing isolated states between tests—for instance, you can use tests to set up a specific database state before each test and to tear it down afterward. You can add them to your ClojureScript tests by declaring them with the use-fixtures macro within the testing namespace you want fixtures applied to. Let's see what this looks like in practice by changing one of our existing tests and adding some fixtures to it. Modify app-test.cljs to the following: (ns testing.app-test (:require [cljs.test :refer-macros [deftest is use-fixtures]])) ;; Run these fixtures for each test. ;; We could also use :once instead of :each in order to run ;; fixtures once for the entire namespace instead of once for ;; each individual test. (use-fixtures :each {:before (fn [] (println "Setting up tests...")) :after (fn [] (println "Tearing down tests..."))}) (deftest another-successful-test ;; Give us an idea of when this test actually executes. (println "Running a test...") (is (= 4 (count "test")))) Here, we've added a call to use-fixtures that prints to the console before and after running the test, and we've added a println call to the test itself so that we know when it executes. Now when we run this test, we get the following: $ lein doo phantom test once ;; ====================================================================== ;; Testing with Phantom: Testing testing.app-test Setting up tests... Running a test... Tearing down tests... Testing testing.core-test FAIL in (i-should-fail) (:) expected: (= 1 0) actual: (not (= 1 0)) Ran 3 tests containing 3 assertions. 1 failures, 0 errors. Subprocess failed Note that our fixtures get called in the order we expect them to. Asynchronous testing Due to the fact that client-side code is frequently asynchronous and JavaScript is single threaded, we need to have a way to support asynchronous tests. To do this, we can use the async macro from cljs.test. Let's take a look at an example using an asynchronous HTTP GET request. First, let's modify our project.clj file to add cljs-ajax to our dependencies. Our dependencies project key should now look something like this: :dependencies [[org.clojure/clojure "1.8.0"] [org.clojure/clojurescript "1.7.228"] [cljs-ajax "0.5.4"] [org.clojure/core.async "0.2.374" :exclusions [org.clojure/tools.reader]]] Next, let's create a new async_test.cljs file in our test.testing directory. Inside it, we will add the following code: (ns testing.async-test (:require [ajax.core :refer [GET]] [cljs.test :refer-macros [deftest is async]])) (deftest test-async (GET "http://www.google.com" ;; will always fail from PhantomJS because ;; `Access-Control-Allow-Origin` won't allow ;; our headless browser to make requests to Google. {:error-handler (fn [res] (is (= (:status-text res) "Request failed.")) (println "Test finished!"))})) Note that we're not using async in our test at the moment. Let's try running this test with doo (don't forget that you have to add testing.async-test to test_runner.cljs!): $ lein doo phantom test once ... Testing testing.async-test ... Ran 4 tests containing 3 assertions. 1 failures, 0 errors. Subprocess failed Now, our test here passes, but note that the println async code never fires, and our additional assertion doesn't get called (looking back at our previous examples, since we've added a new is assertion we should expect to see four assertions in the final summary)! 
If we actually want our test to appropriately validate the error-handler callback within the context of the test, we need to wrap it in an async block. Doing so gives us a test that looks like the following: (deftest test-async (async done (GET "http://www.google.com" ;; will always fail from PhantomJS because ;; `Access-Control-Allow-Origin` won't allow ;; our headless browser to make requests to Google. {:error-handler (fn [res] (is (= (:status-text res) "Request failed.")) (println "Test finished!") (done))}))) Now, let's try to run our tests again: $ lein doo phantom test once ... Testing testing.async-test Test finished! ... Ran 4 tests containing 4 assertions. 1 failures, 0 errors. Subprocess failed Awesome! Note that this time we see the printed statement from our callback, and we can see that cljs.test properly ran all four of our assertions. Asynchronous fixtures One final "gotcha" on testing—the fixtures we talked about earlier in this article do not handle asynchronous code automatically. This means that if you have a :before fixture that executes asynchronous logic, your test can begin running before your fixture has completed! In order to get around this, all you need to do is to wrap your :before fixture in an async block, just like with asynchronous tests. Consider the following for instance: (use-fixtures :once {:before #(async done ... (done)) :after #(do ...)}) Summary This concludes our section on cljs.test. Testing, whether in ClojureScript or any other language, is a critical software engineering best practice to ensure that your application behaves the way you expect it to and to protect you and your fellow developers from accidentally introducing bugs to your application. With cljs.test and doo, you have the power and flexibility to test your ClojureScript application with multiple browsers and JavaScript environments and to integrate your tests into a larger continuous testing framework. Resources for Article: Further resources on this subject: Clojure for Domain-specific Languages - Design Concepts with Clojure [article] Visualizing my Social Graph with d3.js [article] Improving Performance with Parallel Programming [article]


Exploring Performance Issues in Node.js/Express Applications

Packt
11 May 2016
16 min read
Node.js is an exciting new platform for developing web applications, application servers, any sort of network server or client, and general purpose programming. It is designed for extreme scalability in networked applications through an ingenious combination of server-side JavaScript, asynchronous I/O, asynchronous programming, built around JavaScript anonymous functions, and a single execution thread event-driven architecture. Companies—large and small—are adopting Node.js, for example, PayPal is one of the companies converting its application stack over to Node.js. An up-and-coming leading application model, the MEAN stack, combines MongoDB (or MySQL) with Express, AngularJS and, of course, Node.js. A look through current job listings demonstrates how important the MEAN stack and Node.js in general have become. It's claimed that Node.js is a lean, low-overhead, software platform. The excellent performance is supposedly because Node.js eschews the accepted wisdom of more traditional platforms, such as JavaEE and its complexity. Instead of relying on a thread-oriented architecture to fill the CPU cores of the server, Node.js has a simple single-threaded architecture, avoiding the overhead and complexity of threads. Using threads to implement concurrency often comes with admonitions like these: expensive and error-prone, the error-prone synchronization primitives of Java, or designing concurrent software can be complex and error prone. The complexity comes from the access to shared variables and various strategies to avoid deadlock and competition between threads. The synchronization primitives of Java are an example of such a strategy, and obviously many programmers find them hard to use. There's the tendency to create frameworks, such as java.util.concurrent, to tame the complexity of threaded concurrency, but some might argue that papering over complexity does not make things simpler. Adopting Node.js is not a magic wand that will instantly make performance problems disappear forever. The development team must approach this intelligently, or else, you'll end up with one core on the server running flat out and the other cores twiddling their thumbs. Your manager will want to know how you're going to fully utilize the server hardware. And, because Node.js is single-threaded, your code must return from event handlers quickly, or else, your application will be frequently blocked and will provide poor performance. Your manager will want to know how you'll deliver the promised high transaction rate. In this article by David Herron, author of the book Node.JS Web Development - Third Edition, we will explore this issue. We'll write a program with an artificially heavy computational load. The naive Fibonacci function we'll use is elegantly simple, but is extremely recursive and can take a long time to compute. (For more resources related to this topic, see here.) Node.js installation Before launching into writing code, we need to install Node.js on our laptop. Proceed to the Node.js downloads page by going to http://nodejs.org/ and clicking on the downloads link. It's preferable if you can install Node.js from the package management system for your computer. While the Downloads page offers precompiled binary Node.js packages for popular computer systems (Windows, Mac OS X, Linux, and so on), installing from the package management system makes it easier to update the install later. The Downloads page has a link to instructions for using package management systems to install Node.js. 
Once you've installed Node.js, you can quickly test it by running a couple of commands:

$ node --help

This prints out helpful information about using the Node.js command-line tool.

$ npm help

npm is the default package management system for Node.js, and is automatically installed along with Node.js. It lets us download Node.js packages from over the Internet, using them as the building blocks for our applications.

Next, let's create a directory in which to develop an Express application that calculates Fibonacci numbers:

$ mkdir fibonacci
$ cd fibonacci
$ npm install express-generator@4.x
$ ./node_modules/.bin/express . --ejs
$ npm install

The application will be written against the current Express version, version 4.x. Specifying the version number this way ensures compatibility. The express command generated a starting application for us. You can inspect the package.json file to see what will be installed, and the last command installs those packages. What we'll have in front of us is a minimal Express application.

Our first stop is not to create an Express application, but to gather some basic data about computation-dominant code in Node.js.

Heavy-weight computation

Let's start the exploration by creating a Node.js module named math.js, containing:

var fibonacci = exports.fibonacci = function(n) {
  if (n === 1) return 1;
  else if (n === 2) return 1;
  else return fibonacci(n-1) + fibonacci(n-2);
}

Then, create another file named fibotimes.js containing this:

var math = require('./math');
var util = require('util');

for (var num = 1; num < 80; num++) {
  util.log('Fibonacci for '+ num +' = '+ math.fibonacci(num));
}

Running this script produces the following output:

$ node fibotimes.js
31 Jan 14:41:28 - Fibonacci for 1 = 1
31 Jan 14:41:28 - Fibonacci for 2 = 1
31 Jan 14:41:28 - Fibonacci for 3 = 2
31 Jan 14:41:28 - Fibonacci for 4 = 3
31 Jan 14:41:28 - Fibonacci for 5 = 5
31 Jan 14:41:28 - Fibonacci for 6 = 8
31 Jan 14:41:28 - Fibonacci for 7 = 13
…
31 Jan 14:42:27 - Fibonacci for 38 = 39088169
31 Jan 14:42:28 - Fibonacci for 39 = 63245986
31 Jan 14:42:31 - Fibonacci for 40 = 102334155
31 Jan 14:42:34 - Fibonacci for 41 = 165580141
31 Jan 14:42:40 - Fibonacci for 42 = 267914296
31 Jan 14:42:50 - Fibonacci for 43 = 433494437
31 Jan 14:43:06 - Fibonacci for 44 = 701408733

This quickly calculates the first 40 or so members of the Fibonacci sequence. After the 40th member, it starts taking a couple of seconds per result and quickly degrades from there. It is untenable to execute code of this sort on a single-threaded system that relies on a quick return to the event loop. That's an important point because the Node.js design requires that event handlers quickly return to the event loop. The single-threaded event loop does everything in Node.js, and event handlers that return quickly to the event loop keep it humming. A correctly written application can sustain a tremendous request throughput, but a badly written application can prevent Node.js from fulfilling that promise.

This Fibonacci function demonstrates algorithms that churn away at their calculation without ever letting Node.js process the event loop. Calculating fibonacci(44) requires 16 seconds of calculation, which is an eternity for a modern web service. With any server that's bogged down like this, not processing events, the perceived performance is zilch. Your manager will be rightfully angry. This is a completely artificial example, because it's trivial to refactor the Fibonacci calculation for excellent performance (one such refactoring is sketched below).
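As a quick illustration of just how artificial the problem is, here is one possible refactoring based on memoization. This sketch is not part of the book's project (the article introduces its own iterative rewrite later); it simply caches each result so every value is computed only once:

// memofib.js - a hypothetical stand-alone sketch, not used elsewhere in this article
var memo = [0, 1, 1];

function fibonacciMemo(n) {
  // Compute and cache the value the first time it is requested.
  if (memo[n] === undefined) {
    memo[n] = fibonacciMemo(n - 1) + fibonacciMemo(n - 2);
  }
  return memo[n];
}

console.log(fibonacciMemo(44)); // prints 701408733 immediately instead of after ~16 seconds

For the rest of this article, however, we deliberately stay with the naive recursive version.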
This is a stand-in for any algorithm that might monopolize the event loop. There are two general ways in Node.js to solve this problem:

- Algorithmic refactoring: Perhaps, like the Fibonacci function we chose, one of your algorithms is suboptimal and can be rewritten to be faster. Or, if not faster, it can be split into callbacks dispatched through the event loop. We'll look at one such method in a moment.
- Creating a backend service: Can you imagine a backend server dedicated to calculating Fibonacci numbers? Okay, maybe not, but it's quite common to implement backend servers to offload work from frontend servers, and we will implement a backend Fibonacci server at the end of this article.

But first, we need to set up an Express application that demonstrates the impact on the event loop.

An Express app to calculate Fibonacci numbers

To see the impact of a computation-heavy application on Node.js performance, let's write a simple Express application to do Fibonacci calculations. Express is a key Node.js technology, so this will also give you a little exposure to writing an Express application. We've already created the blank application, so let's make a couple of small changes so that it uses our Fibonacci algorithm. Edit views/index.ejs to have this code:

<!DOCTYPE html>
<html>
  <head>
    <title><%= title %></title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1><%= title %></h1>
    <% if (typeof fiboval !== "undefined") { %>
      <p>Fibonacci for <%= fibonum %> is <%= fiboval %></p>
      <hr/>
    <% } %>
    <p>Enter a number to see its Fibonacci number</p>
    <form name='fibonacci' action='/' method='get'>
      <input type='text' name='fibonum' />
      <input type='submit' value='Submit' />
    </form>
  </body>
</html>

This simple template sets up an HTML form where we can enter a number. This number designates the desired member of the Fibonacci sequence to calculate. This is written for the EJS template engine. You can see that <%= variable %> substitutes the named variable into the output, and JavaScript code is written in the template by enclosing it within <% %> delimiters. We use that to optionally print out the requested Fibonacci value if one is available.

Next, change routes/index.js to the following:

var express = require('express');
var router = express.Router();
var math = require('../math');

router.get('/', function(req, res, next) {
  if (req.query.fibonum) {
    res.render('index', {
      title: "Fibonacci Calculator",
      fibonum: req.query.fibonum,
      fiboval: math.fibonacci(req.query.fibonum)
    });
  } else {
    res.render('index', {
      title: "Fibonacci Calculator",
      fiboval: undefined
    });
  }
});

module.exports = router;

This router definition handles the home page for the Fibonacci calculator. The router.get function means this route handles HTTP GET operations on the / URL. If the req.query.fibonum value is set, that means the URL had a ?fibonum=# value, which would be produced by the form in index.ejs. If that's the case, the fiboval value is calculated by calling math.fibonacci, the function we showed earlier. Because we are using that function, we can safely predict the performance problems we'll run into when larger Fibonacci values are requested. On the res.render calls, the second argument is an object defining variables that will be made available to the index.ejs template. Notice how the two res.render calls differ in the values passed to the template, and how the template will differ as a result.

There are no changes required in app.js. You can study that file, and bin/www, if you're curious how Express applications work.
In the meantime, you run it simply:

$ npm start

> fibonacci@0.0.0 start /Users/david/fibonacci
> node ./bin/www

You can now view the application in the browser at http://localhost:3000. For small Fibonacci values, the result will return quickly. As implied by the timing results we looked at earlier, at around the 40th Fibonacci number, it'll take a few seconds to calculate the result. The 50th Fibonacci number will take 20 minutes or so.

That's enough time to run a little experiment. Open two browser windows onto http://localhost:3000. You'll see the Fibonacci calculator in each window. In one, request the value for 45 or more. In the other, enter 10, which we know would normally return almost immediately. Instead, the second window won't respond until the first one finishes. Unless, that is, your browser times out and throws an error. What's happening is that the Node.js event loop is blocked from processing events because the Fibonacci algorithm is running and never yields to the event loop. As soon as the Fibonacci calculation finishes, the event loop starts being processed again. It then receives and processes the request made from the second window.

Algorithmic refactoring

The problem here is an application that stops processing events. We might solve the problem by ensuring events are handled while still performing calculations. In other words, let's look at algorithmic refactoring. To prove that we have an artificial problem on our hands, add this function to math.js:

var fibonacciLoop = exports.fibonacciLoop = function(n) {
  var fibos = [];
  fibos[0] = 0;
  fibos[1] = 1;
  fibos[2] = 1;
  for (var i = 3; i <= n; i++) {
    fibos[i] = fibos[i-2] + fibos[i-1];
  }
  return fibos[n];
}

Change fibotimes.js to call this function, and the Fibonacci values will fly by so fast your head will spin.

Some algorithms aren't as simple to optimize as this one. For such a case, it is possible to divide the calculation into chunks and then dispatch the computation of those chunks through the event loop. Consider the following code:

var fibonacciAsync = exports.fibonacciAsync = function(n, done) {
  if (n === 0) done(undefined, 0);
  else if (n === 1 || n === 2) done(undefined, 1);
  else {
    setImmediate(function() {
      fibonacciAsync(n-1, function(err, val1) {
        if (err) done(err);
        else setImmediate(function() {
          fibonacciAsync(n-2, function(err, val2) {
            if (err) done(err);
            else done(undefined, val1+val2);
          });
        });
      });
    });
  }
};

This converts fibonacci from a synchronous function into an asynchronous one with a callback. By using setImmediate, each stage of the calculation is managed through Node.js's event loop, and the server can easily handle other requests while churning away on a calculation. It does nothing to reduce the computation required; this is still the silly, inefficient Fibonacci algorithm. All we've done is spread the computation through the event loop.

To use this new Fibonacci function, we need to change the route handler in routes/index.js to the following:

router.get('/', function(req, res, next) {
  if (req.query.fibonum) {
    math.fibonacciAsync(req.query.fibonum, function(err, fiboval) {
      res.render('index', {
        title: "Fibonacci Calculator",
        fibonum: req.query.fibonum,
        fiboval: fiboval
      });
    });
  } else {
    res.render('index', {
      title: "Fibonacci Calculator",
      fiboval: undefined
    });
  }
});

This makes an asynchronous call to fibonacciAsync, and when the calculation finishes, the result is sent to the browser.
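If you'd like to see the blocking (and the improvement) in numbers rather than in a frozen browser tab, a tiny lag probe can help. This is an optional sketch, not part of the book's code; paste it near the top of app.js while experimenting and watch the server's console:

// Optional event-loop lag probe (remove after experimenting).
var lastTick = Date.now();
setInterval(function() {
  // How late did this 1-second timer fire? Anything beyond a few ms means the loop was busy.
  var lag = Date.now() - lastTick - 1000;
  if (lag > 50) {
    console.log('event loop was blocked for roughly ' + lag + ' ms');
  }
  lastTick = Date.now();
}, 1000);

With the synchronous math.fibonacci in place, you will see multi-second gaps reported while a large value is being computed; after switching the route to fibonacciAsync, the reported lag drops to almost nothing.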
With this change, the server no longer freezes when calculating a large Fibonacci number. The calculation, of course, still takes a long time, because fibonacciAsync is still an inefficient algorithm. At least, other users of the application aren't blocked, because it regularly yields to the event loop. Repeat the same test used earlier. Open two or more browser windows to the Fibonacci calculator, make a large request in one window, and the requests in the other window will be promptly answered.

Creating a backend REST service

The next way to mitigate computationally intensive code is to push the calculation to a backend process. To do that, we'll request computations from a backend Fibonacci server. While Express has a powerful templating system, making it suitable for delivering HTML web pages to browsers, it can also be used to implement a simple REST service. Express supports parameterized URLs in route definitions, so it can easily receive REST API arguments, and Express makes it easy to return data encoded in JSON.

Create a file named fiboserver.js containing this code:

var math = require('./math');
var express = require('express');
var logger = require('morgan');
var util = require('util');
var app = express();

app.use(logger('dev'));

app.get('/fibonacci/:n', function(req, res, next) {
  math.fibonacciAsync(Math.floor(req.params.n), function(err, val) {
    if (err) next('FIBO SERVER ERROR ' + err);
    else {
      util.log(req.params.n +': '+ val);
      res.send({ n: req.params.n, result: val });
    }
  });
});

app.listen(3333);

This is a stripped-down Express application that gets right to the point of providing a Fibonacci calculation service. The one route it supports does the Fibonacci computation using the same fibonacciAsync function used earlier. The res.send function is a flexible way to send data responses. As used here, it automatically detects the object, formats it as JSON text, and sends it with the correct content-type.

Then, in package.json, add this to the scripts section:

"server": "node ./fiboserver"

Now, let's run it:

$ npm run server

> fibonacci@0.0.0 server /Users/david/fibonacci
> node ./fiboserver

Then, in a separate command window, use curl to request values from this service:

$ curl -f http://localhost:3333/fibonacci/10
{"n":"10","result":55}

Over in the window where the service is running, we'll see a log of GET requests and how long each took to process. It's easy to create a small Node.js script to directly call this REST service (one possible version is sketched a little further below). But let's instead move directly to changing our Fibonacci calculator application to do so. Make this change to routes/index.js:

router.get('/', function(req, res, next) {
  if (req.query.fibonum) {
    var httpreq = require('http').request({
      method: 'GET',
      host: "localhost",
      port: 3333,
      path: "/fibonacci/"+Math.floor(req.query.fibonum)
    }, function(httpresp) {
      httpresp.on('data', function(chunk) {
        var data = JSON.parse(chunk);
        res.render('index', {
          title: "Fibonacci Calculator",
          fibonum: req.query.fibonum,
          fiboval: data.result
        });
      });
      httpresp.on('error', function(err) { next(err); });
    });
    httpreq.on('error', function(err) { next(err); });
    httpreq.end();
  } else {
    res.render('index', {
      title: "Fibonacci Calculator",
      fiboval: undefined
    });
  }
});

Running the Fibonacci Calculator service now requires starting both processes. In one command window, we run:

$ npm run server

And in the other command window:

$ npm start

In the browser, we visit http://localhost:3000 and see what looks like the same application, because no changes were made to views/index.ejs.
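For completeness, here is a sketch of the kind of small client script mentioned above. The file name fiboclient.js is made up and the script isn't part of the book's project; it simply calls the backend service directly (start it first with npm run server):

// fiboclient.js - hypothetical helper script for exercising the backend service
var http = require('http');

// Take the Fibonacci index from the command line, defaulting to 30.
var n = process.argv[2] || 30;

http.get('http://localhost:3333/fibonacci/' + n, function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    var data = JSON.parse(body);
    console.log('fibonacci(' + data.n + ') = ' + data.result);
  });
}).on('error', function(err) {
  console.error('request failed: ' + err.message);
});

Run it with node fiboclient.js 40, and the request shows up in the service's log exactly like the ones made through the web application.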
As you make requests in the browser window, the Fibonacci service window prints a log of requests it receives and values it sent. You can, of course, repeat the same experiment as before. Open two browser windows, in one window request a large Fibonacci number, and in the other make smaller requests. You'll see, because the server uses fibonacciAsync, that it's able to respond to every request. Why did we go through this trouble when we could just directly call fibonacciAsync? We can now push the CPU load for this heavy-weight calculation to a separate server. Doing so would preserve CPU capacity on the frontend server, so it can attend to web browsers. The heavy computation can be kept separate, and you could even deploy a cluster of backend servers sitting behind a load balancer evenly distributing requests. Decisions like this are made all the time to create multitier systems. What we've demonstrated is that it's possible to implement simple multitier REST services in a few lines of Node.js and Express. Summary While the Fibonacci algorithm we chose is artificially inefficient, it gave us an opportunity to explore common strategies to mitigate performance problems in Node.js. Optimizing the performance of our systems is as important as correctness, fixing bugs, mobile friendliness, and usability. Inefficient algorithms means having to deploy more hardware to satisfy load requirements, costing more money, and creating a bigger environmental impact. For real-world applications, optimizing away performance problems won't be as easy as it would be for the Fibonacci calculator. We could have just used the fibonacciLoop function, since it provides all the performance we'd need. But we needed to explore more typical approaches to performing heavy-weight calculations in Node.js while still keeping the event loop rolling. The bottom line is that in Node.js the event loop must run. Resources for Article: Further resources on this subject: Learning Node.js for Mobile Application Development [article] Node.js Fundamentals [article] Making a Web Server in Node.js [article]


Quick User Authentication Setup with Django

Packt
10 May 2016
21 min read
In this article by Asad Jibran Ahmed, author of the book Django Project Blueprints, we are going to start with a simple blogging platform in Django. In recent years, Django has emerged as one of the clear leaders in web frameworks. When most people decide to start using a web framework, their searches lead them to either Ruby on Rails or Django. Both are mature, stable, and extensively used. It appears that the decision to use one or the other depends mostly on which programming language you're familiar with. Rubyists go with RoR, and Pythonistas go with Django. In terms of features, both can be used to achieve the same results, although they have different approaches to how things are done.

One of the most popular platforms these days is Medium, widely used by a number of high-profile bloggers. Its popularity stems from its elegant theme and simple-to-use interface. I'll walk you through creating a similar application in Django with a few surprise features that most blogging platforms don't have. This will give you a taste of things to come and show you just how versatile Django can be.

Before starting any software development project, it's a good idea to have a rough roadmap of what we would like to achieve. Here's a list of features that our blogging platform will have:

- Users should be able to register an account and create their blogs
- Users should be able to tweak the settings of their blogs
- There should be a simple interface for users to create and edit blog posts
- Users should be able to share their blog posts on other blogs on the platform

I know this seems like a lot of work, but Django comes with a couple of contrib packages that speed up our work considerably.

(For more resources related to this topic, see here.)

The contrib Packages

The contrib packages are a part of Django that contain some very useful applications that the Django developers decided should be shipped with Django. The included applications provide an impressive set of features, including some that we'll be using in this application:

- Admin: This is a full-featured CMS that can be used to manage the content of a Django site. The Admin application is an important reason for the popularity of Django. We'll use this to provide an interface for site administrators to moderate and manage the data in our application.
- Auth: This provides user registration and authentication without requiring us to do any work. We'll be using this module to allow users to sign up, sign in, and manage their profiles in our application.
- Sites: This framework provides utilities that help us run multiple Django sites using the same code base. We'll use this feature to allow each user to have their own blog with content that can be shared between multiple blogs.

There are a lot more goodies in the contrib module. I suggest you take a look at the complete list at https://docs.djangoproject.com/en/stable/ref/contrib/#contrib-packages. I usually end up using at least three of the contrib packages in all my Django projects. They provide often-required features such as user registration and management, and they free you to work on the core parts of your project, providing a solid foundation to build upon.

Setting up our development environment

Let's start by creating the directory structure for our project, setting up the virtual environment, and configuring some basic Django settings that need to be set up in every project. Let's call our blogging platform BlueBlog.

To start a new project, you need to first open up your terminal program.
In Mac OS X, it is the built-in terminal. In Linux, the terminal is named separately for each distribution, but you should not have trouble finding it; try searching your program list for the word terminal and something relevant should show up. In Windows, the terminal program is called the Command Line. You’ll need to start the relevant program depending on your operating system. If you are using the Windows operating system, some things will need to be done differently from what the book shows. If you are using Mac OS X or Linux, the commands shown here should work without any problems. Open the relevant terminal program for your operating system and start by creating the directory structure for our project and cd into the root project directory using the following commands: > mkdir –p blueblog > cd blueblog Next, let’s create the virtual environment, install Django, and start our project: > pyvenv blueblogEnv > source blueblogEnv/bin/activate > pip install django > django-admin.py startproject blueblog src With this out of the way, we’re ready to start developing our blogging platform. Database settings Open up the settings found at $PROJECT_DIR/src/blueblog/settings.py in your favorite editor and make sure that the DATABASES settings variable matches this: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } In order to initialize the database file, run the following commands: > cd src > python manage.py migrate Staticfiles settings The last step in setting up our development environment is configuring the staticfiles contrib application. The staticfiles application provides a number of features that make it easy to manage the static files (css, images, and javascript) of your projects. While our usage will be minimal, you should look at the Django documentation for staticfiles in further detail as it is used quite heavily in most real-world Django projects. You can find the documentation at https://docs.djangoproject.com/en/stable/howto/static-files/. In order to set up the staticfiles application, we have to configure a few settings in the settings.py file. First, make sure that django.contrib.staticfiles is added to INSTALLED_APPS. Django should have done this by default. Next, set STATIC_URL to whatever URL you want your static files to be served from. I usually leave this to the default value, ‘/static/’. This is the URL that Django will put in your templates when you use the static template tag to get the path to a static file. Base template Next, let’s set up a base template that all the other templates in our application will inherit from. I prefer to have templates that are used by more than one application of a project in a directory named templates in the project source folder. To set this up, add os.path.join(BASE_DIR, 'templates') to the DIRS array of the TEMPLATES configuration dictionary in the settings file, and then create a directory named templates in $PROJECT_ROOT/src. Next, using your favorite text editor, create a file named base.html in the new folder with the following content: <html> <head> <title>BlueBlog</title> </head> <body> {% block content %} {% endblock %} </body> </html> Just as Python classes can inherit from other classes, Django templates can also inherit from other templates. Also, just as Python classes can have functions overridden by their subclasses, Django templates can also define blocks that children templates can override. 
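To make that concrete, here is a minimal sketch of what a child template looks like. This particular file isn't one the project asks you to create (the real ones come later), but every template we write will follow the same pattern:

{% extends "base.html" %}

{% block content %}
<h1>Hello from a child template</h1>
<p>Everything outside the content block comes from base.html.</p>
{% endblock %}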
Our base.html template provides one block to inherit templates to override called content. The reason for using template inheritance is code reuse. We should put HTML that we want to be visible on every page of our site, such as headers, footers, copyright notices, meta tags, and so on, in the base template. Then, any template inheriting from it will automatically get all this common HTML included automatically, and we will only need to override the HTML code for the block that we want to customize. You’ll see this principal of creating and overriding blocks in base templates used throughout the projects in this book. User accounts With the database setup out of the way, let’s start creating our application. If you remember, the first thing on our list of features is to allow users to register accounts on our site. As I’ve mentioned before, we’ll be using the auth package from the Django contrib packages to provide user account features. In order to use the auth package, we’ll need to add it our INSTALLED_APPS list in the settings file (found at $PROJECT_ROOT/src/blueblog/settings.py). In the settings file, find the line defining INSTALLED_APPS and make sure that the ‘django.contrib.auth’ string is part of the list. It should be by default but if, for some reason, it’s not there, add it manually. You’ll see that Django has included the auth package and a couple of other contrib applications to the list by default. A new Django project includes these applications by default because almost all Django projects end up using these. If you need to add the auth application to the list, remember to use quotes to surround the application name. We also need to make sure that the MIDDLEWARE_CLASSES list contains django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.auth.middleware.SessionAuthenticationMiddleware. These middleware classes give us access to the logged in user in our views and also make sure that if I change the password for my account, I’m logged out from all other devices that I previously logged on to. As you learn more about the various contrib applications and their purpose, you can start removing any that you know you won’t need in your project. Now, let’s add the URLs, views, and templates that allow the users to register with our application. The user accounts app In order to create the various views, URLs, and templates related to user accounts, we’ll start a new application. To do so, type the following in your command line: > python manage.py startapp accounts This should create a new accounts folder in the src folder. We’ll add code that deals with user accounts in files found in this folder. To let Django know that we want to use this application in our project, add the application name (accounts) to the INSTALLED_APPS setting variable; making sure to surround it with quotes. Account registration The first feature that we will work on is user registration. Let’s start by writing the code for the registration view in accounts/views.py. Make the contents of views.py match what is shown here: from django.contrib.auth.forms import UserCreationForm from django.core.urlresolvers import reverse from django.views.generic import CreateView class UserRegistrationView(CreateView): form_class = UserCreationForm template_name = 'user_registration.html' def get_success_url(self): return reverse('home') I’ll explain what each line of this code is doing in a bit. 
First, I’d like you to get to a state where you can register a new user and see for yourself how the flow works. Next, we’ll create the template for this view. In order to create the template, you first need to create a new folder called templates in the accounts folder. The name of the folder is important as Django automatically searches for templates in folders of that name. To create this folder, just type the following: > mkdir accounts/templates Next, create a new file called user_registration.html in the templates folder and type in the following code: {% extends "base.html" %} {% block content %} <h1>Create New User</h1> <form action="" method="post">{% csrf_token %} {{ form.as_p }} <input type="submit" value="Create Account" /> </form> {% endblock %} Finally, remove the existing code in blueblog/urls.py and replace it with this: from django.conf.urls import include from django.conf.urls import url from django.contrib import admin from django.views.generic import TemplateView from accounts.views import UserRegistrationView urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'^$', TemplateView.as_view(template_name='base.html'), name='home'), url(r'^new-user/$', UserRegistrationView.as_view(), name='user_registration'), ] That’s all the code that we need to get user registration in our project! Let’s do a quick demonstration. Run the development server by typing as follows: > python manage.py runserver In your browser, visit http://127.0.0.1:8000/new-user/ and you’ll see a user registration form. Fill this in and click on submit. You’ll be taken to a blank page on successful registration. If there are some errors, the form will be shown again with the appropriate error messages. Let’s verify that our new account was indeed created in our database. For the next step, we will need to have an administrator account. The Django auth contrib application can assign permissions to user accounts. The user with the highest level of permission is called the super user. The super user account has free reign over the application and can perform any administrator actions. To create a super user account, run this command: > python manage.py createsuperuser As you already have the runserver command running in your terminal, you will need to quit it first by pressing Ctrl + C in the terminal. You can then run the createsuperuser command in the same terminal. After running the createsuperuser command, you’ll need to start the runserver command again to browse the site. If you want to keep the runserver command running and run the createsuperuser command in a new terminal window, you will need to make sure that you activate the virtual environment for this application by running the same source blueblogEnv/bin/activate command that we ran earlier when we created our new project. After you have created the account, visit http://127.0.0.1:8000/admin/ and log in with the admin account. You will see a link titled Users. Click on this, and you should see a list of users registered in our app. It will include the user that you just created. Congrats! In most other frameworks, getting to this point with a working user registration feature would take a lot more effort. Django, with it’s batteries included approach, allows us to do the same with a minimum of effort. Next, I’ll explain what each line of code that you wrote does. 
Generic views Here’s the code for the user registration view again: class UserRegistrationView(CreateView): form_class = UserCreationForm template_name = 'user_registration.html' def get_success_url(self): return reverse('home') Our view is pretty short for something that does such a lot of work. That’s because instead of writing code from scratch to handle all the work, we use one of the most useful features of Django, Generic Views. Generic views are base classes included with Django that provide functionality commonly required by a lot of web apps. The power of generic views comes from the ability to customize them to a great degree with ease. You can read more about Django generic views in the documentation available at https://docs.djangoproject.com/en/1.9/topics/class-based-views/. Here, we’re using the CreateView generic view. This generic view can display ModelForm using a template and, on submission, can either redisplay the page with errors if the form data was invalid or call the save method on the form and redirect the user to a configurable URL. CreateView can be configured in a number of ways. If you want ModelForm to be created automatically from some Django model, just set the model attribute to the model class, and the form will be generated automatically from the fields of the model. If you want to have the form only show certain fields from the model, use the fields attribute to list the fields that you want, exactly like you’d do while creating ModelForm. In our case, instead of having ModelForm generated automatically, we’re providing one of our own, UserCreationForm. We do this by setting the form_class attribute on the view. This form, which is part of the auth contrib app, provides the fields and a save method that can be used to create a new user. You’ll see that this theme of composing solutions from small reusable parts provided by Django is a common practice in Django web app development and, in my opinion, one of the best features of the framework. Finally, we define a get_success_url function that does a simple reverse URL and returns the generated URL. CreateView calls this function to get the URL to redirect the user to when a valid form is submitted and saved successfully. To get something up and running quickly, we left out a real success page and just redirected the user to a blank page. We’ll fix this later. Templates and URLs The template, which extends the base template that we created earlier, simply displays the form passed to it by CreateView using the form.as_p method, which you might have seen in the simple Django projects you may have worked on before. The urls.py file is a bit more interesting. You should be familiar with most of it—the parts where we include the admin site URLs and the one where we assign our view a URL. It’s the usage of TemplateView that I want to explain here. Like CreateView, TemplateView is another generic view provided to us by Django. As the name suggests, this view can render and display a template to the user. It has a number of customization options. The most important one is template_name, which tells it which template to render and display to the user. We could have created another view class that subclassed TemplateView and customized it by setting attributes and overriding functions like we did for our registration view. However, I wanted to show you another method of using a generic view in Django. 
If you only need to customize some basic parameters of a generic view—in this case, we only wanted to set the template_name parameter of the view—you can just pass the values as key=value pairs as function keyword arguments to the as_view method of the class when including it in the urls.py file. Here, we pass the template name that the view renders when the user accesses its URL. As we just needed a placeholder URL to redirect the user to, we simply use the blank base.html template. This technique of customizing generic views by passing key/value pairs only makes sense when you’re interested in customizing very basic attributes, like we do here. In case you want more complicated customizations, I advice you to subclass the view; otherwise, you will quickly get messy code that is difficult to maintain. Login and logout With registration out of the way, let’s write code to provide users with the ability to log in and log out. To start out, the user needs some way to go to the login and registration pages from any page on the site. To do this, we’ll need to add header links to our template. This is the perfect opportunity to demonstrate how template inheritance can lead to much cleaner and less code in our templates. Add the following lines right after the body tag to our base.html file: {% block header %} <ul> <li><a href="">Login</a></li> <li><a href="">Logout</a></li> <li><a href="{% url "user_registration"%}">Register Account</a></li> </ul> {% endblock %} If you open the home page for our site now (at http://127.0.0.1:8000/), you should see that we now have three links on what was previously a blank page. It should look similar to the following screenshot: Click on the Register Account link. You’ll see the registration form we had before and the same three links again. Note how we only added these links to the base.html template. However, as the user registration template extends the base template, it got those links without any effort on our part. This is where template inheritance really shines. You might have noticed that href for the login/logout links is empty. Let’s start with the login part. Login view Let’s define the URL first. In blueblog/urls.py, import the login view from the auth app: from django.contrib.auth.views import login Next, add this to the urlpatterns list: url(r'^login/$', login, {'template_name': 'login.html'}, name='login'), Then, create a new file in accounts/templates called login.html. Put in the following content: {% extends "base.html" %} {% block content %} <h1>Login</h1> <form action="{% url "login" %}" method="post">{% csrf_token %} {{ form.as_p }} <input type="hidden" name="next" value="{{ next }}" /> <input type="submit" value="Submit" /> </form> {% endblock %} Finally, open up blueblog/settings.py file and add the following line to the end of the file: LOGIN_REDIRECT_URL = '/' Let’s go over what we’ve done here. First, notice that instead of creating our own code to handle the login feature, we used the view provided by the auth app. We import it using from django.contrib.auth.views import login. Next, we associate it with the login/ URL. If you remember the user registration part, we passed the template name to the home page view as a keyword parameter in the as_view() function. This approach is used for class-based views. For old-style view functions, we can pass a dictionary to the url function that is passed as keyword arguments to the view. Here, we use the template that we created in login.html. 
If you look at the documentation for the login view (https://docs.djangoproject.com/en/stable/topics/auth/default/#django.contrib.auth.views.login), you’ll see that on successfully logging in, it redirects the user to settings.LOGIN_REDIRECT_URL. By default, this setting has a value of /accounts/profile/. As we don’t have such a URL defined, we change the setting to point to our home page URL instead. Next, let’s define the logout view. Logout view In blueblog/urls.py, import the logout view: from django.contrib.auth.views import logout Add the following to the urlpatterns list: url(r'^logout/$', logout, {'next_page': '/login/'}, name='logout'), That’s it. The logout view doesn’t need a template; it just needs to be configured with a URL to redirect the user to after logging them out. We just redirect the user back to the login page. Navigation links Having added the login/logout view, we need to make the links we added in our navigation menu earlier take the user to those views. Change the list of links that we had in templates/base.html to the following: <ul> {% if request.user.is_authenticated %} <li><a href="{% url "logout" %}">Logout</a></li> {% else %} <li><a href="{% url "login" %}">Login</a></li> <li><a href="{% url "user_registration"%}">Register Account</a></li> {% endif %} </ul> This will show the Login and Register Account links to the user if they aren’t already logged in. If they are logged in, which we check using the request.user.is_authenticated function, they are only shown the Logout link. You can test all of these links yourself and see how little code was needed to make such a major feature of our site work. This is all possible because of the contrib applications that Django provides. Summary In this article we started with a simple blogging platform in Django. We also had a look at setting up the Database, Staticfiles and Base templates. We have also created a user account app with registration and navigation links in it. Resources for Article: Further resources on this subject: Setting up a Complete Django E-commerce store in 30 minutes [article] "D-J-A-N-G-O... The D is silent." - Authentication in Django [article] Test-driven API Development with Django REST Framework [article]

Managing Payment and Shipping with Magento 2

Packt
10 May 2016
24 min read
In this article by Bret Williams, author of the book Learning Magento 2 Administration, we will see how to manage payment gateways, shipping methods, and orders with Magento 2. E-commerce doesn't work unless customers actually purchase a product or service. In order to make that happen on your Magento store, you need to take payments, provide shipping solutions, collect any required taxes, and, of course, process orders. In this article, we're going to:

- Understand the checkout and payment process
- Discuss various payment methods you can offer your customers
- Configure table rate shipping and review other shipping options
- Manage the order process

(For more resources related to this topic, see here.)

It's extremely important that you take care to understand and manage these aspects of your online business, as this involves money: the customer's and yours. No matter how great your products or your pricing, if customers cannot purchase easily, understand your shipping and delivery, or feel in the least hesitant about completing their transaction, your customer leaves and neither they nor you achieve satisfactory results. Once an order is placed, you also have steps to take to process the purchase and make good on your obligation to fulfill your customer's request. Fortunately, as with many other aspects of online commerce, Magento has the features and tools in place to create a solid, efficient checkout experience.

Understanding the checkout and payment process

Since most people shopping online today have made at least one e-commerce purchase on a website, the general process of completing an order is fairly well established, although the exact steps will vary somewhat:

1. Customer reviews their shopping cart, confirming the items they have decided to purchase.
2. Customer enters their shipping destination information.
3. Customer chooses a shipping method based on cost, method, and time of delivery.
4. Customer enters their payment information.
5. Customer reviews their order and confirms their intent to purchase.
6. The system (Magento, in our case) queries a payment processor for approval.
7. The order is completed and ready for processing.

Of course, as we'll explore in this article, there is much more detail related to this process. As online merchants, you want your customers to have a thorough, yet easy, purchasing experience, and you want a valid order that can be fulfilled without complications. To achieve both ends, you have to prepare your Magento store to accurately process orders. So, let's jump in.

Payment methods

When a customer places an order on your Magento store, you'll naturally want to provide a means of capturing payment, whether it's immediate (credit card, PayPal, etc.) or delayed (COD, check, money order, credit). The payment methods you choose to provide, of course, are up to you, but you'll want to provide methods that:

- Reduce your risk of not getting paid.
- Provide convenience to your customers while fulfilling their payment expectations.

Consumers expect to pay by credit card or through a third-party service such as PayPal. Wholesale buyers may expect to purchase using a Purchase Order or by sending you a check before shipment. As with any business, you have to decide what will best benefit both you and your buyers.

How payment gateways work

If you're new to online payments as a merchant, it's helpful to have an understanding of how payments are approved and captured in e-commerce.
For this explanation, we're focusing on those payment gateways that allow you to accept credit and debit cards in your store. While PayPal Express and PayPal Standard work in a similar fashion, the three gateways that are included in the default Magento installation, PayPal Payments, Braintree, and Authorize.net, process credit and debit cards similarly:

1. Your customer enters their card information on your website during checkout.
2. When the order is submitted, Magento sends a request to the gateway (PayPal Payments, Braintree, or Authorize.net) for authorization of the card.
3. The gateway submits the card information and order amount to a clearinghouse service that determines whether the card is valid and the order amount does not exceed the credit limit of the cardholder.
4. A success or failure code is returned to the gateway and on to the Magento store.
5. If the intent is to capture the funds at the time of purchase, the gateway will queue the capture into a batch for processing later in the day and notify Magento that the funds are "captured".
6. A successful transaction will commit the order in Magento, and a failure will result in a message to the purchaser.

Other payment methods, such as PayPal Standard and PayPal Express, take the customer to the payment provider's website to complete the payment portion of the transaction. Once the payment is completed, the customer is returned to your Magento store front. When properly configured, integrated payment gateways will update Magento orders as they are authorized and/or captured. This automation means you spend less time managing orders and more time fulfilling shipments and satisfying your customers!

PCI compliance

The protection of your customer's payment information is extremely important. Not only would a breach of security cause damage to your customer's credit and financial accounts, but the publicity of such a breach could be devastating to your business. Merchant account providers will require that your store meet stringent guidelines for PCI compliance, a set of security requirements called the Payment Card Industry Data Security Standard (PCI DSS). Your ability to be PCI compliant is based on the integrity of your hosting environment and on which methods you allow customers to use to enter credit card information on your site.

Magento 2 no longer offers a Stored Credit Card payment method. It is highly unlikely that you could, or would want to, provide a server configuration secure enough to meet PCI DSS requirements for storing credit card information. You probably don't want the liability exposure, either.

You can, however, provide SSL encryption that could satisfy PCI compliance as long as the credit card information is encrypted before being sent to your server, and then from your server to the credit card processor. As long as you're not storing the customer's credit card information on your server, you can meet PCI compliance as long as your hosting provider can assure compliance for server and database security. Even with SSL encryption, not all hosting environments will pass PCI DSS standards. It's vital that you work with a hosting company that has real Magento experience and can document proof of PCI compliance.

Therefore, you should decide whether to provide onsite or offsite credit card payments. In other words, do you want to take payment information within your Magento checkout page, or redirect the user to a payment service, such as PayPal, to complete their transaction? There are pros and cons of each method.
Onsite transactions may be perceived as less secure, and you do have to prove PCI compliance to your merchant account provider on an ongoing basis. However, onsite transactions mean that the customer can complete their transaction without leaving your website, which helps to preserve your brand experience for your customers. Fortunately, Magento is versatile enough to allow you to provide both options to your customers. Personally, we feel that offering multiple payment methods means you're more likely to complete a sale, while also showing your customers that you want to provide the most convenience in purchasing.

Let's now review the various payment methods offered by default in Magento 2. Magento 2 comes with a host of the most popular and common payment methods. However, you should review other possibilities, such as Amazon Payments, Stripe, and Moneybookers, depending on your target market. We anticipate that developers will be offering add-ons for these and other payment methods. Note that as you change the Merchant Location at the top of the Payment Methods panel, the payment methods available to you may change.

PayPal all-in-one payment solutions

While PayPal is commonly known for its quick and easy PayPal Express buttons, the ubiquitous yellow buttons you see throughout the web, PayPal can provide you with credit/debit card solutions that allow customers to use their cards without needing a PayPal account. To your customer, the checkout appears no different than a normal credit card checkout process. The big difference is that you have to set up a business account with PayPal before you can begin accepting non-PayPal account payments. Proceeds will go almost immediately into your PayPal account (you have to have a PayPal account), but your customers can pay by using a credit/debit card or their own PayPal account.

With the all-in-one solution, PayPal approves your application for a merchant account and allows you to accept all popular cards, including American Express, at a flat 2.9% rate plus $0.30 per transaction. PayPal payments incur normal per-transaction PayPal charges. We like this solution as it keeps all your online receipts in one account, while also giving you fast access to your sales income. PayPal also provides a debit card for its merchants that can earn back 1% on purchases. We use our PayPal debit card for all kinds of business purchases and receive a nice little cash back dividend each month.

PayPal provides two ways to incorporate credit card payment capture on your website:

- PayPal Payments Advanced inserts a form on your site that is actually hosted from PayPal's highly secure servers. The form appears as part of your store, but you don't have any PCI compliance concerns.
- PayPal Payments Pro allows you to obtain payment information using the normal Magento form, then submit it to PayPal for approval.

The difference to your customer is that with Advanced, there is a slight delay while the credit card form is inserted into the checkout page. You may also have some limitations in terms of styling.

PayPal Standard, also a part of the all-in-one solution, takes your customer to a PayPal site for payment. Unlike PayPal Express, however, you can style this page to better reflect your brand image. Plus, customers do not have to have a PayPal account in order to use this checkout method.
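As a quick aside, it's worth seeing how the flat 2.9% plus $0.30 pricing mentioned above plays out in practice. The following short sketch is purely illustrative (it is not Magento or PayPal code) and shows that the fixed $0.30 portion pushes the effective rate well above 2.9% on small orders, which is worth remembering when you compare providers:

    def paypal_all_in_one_fee(order_total, percent=0.029, fixed=0.30):
        """Estimate the fee on a single transaction at the flat 2.9% + $0.30 rate."""
        fee = order_total * percent + fixed
        return fee, fee / order_total  # absolute fee and effective rate

    for total in (10.00, 50.00, 250.00):
        fee, rate = paypal_all_in_one_fee(total)
        print("${0:>7.2f} order -> ${1:.2f} fee ({2:.2%} effective)".format(total, fee, rate))

A $10 order works out to roughly 5.9% in fees, while a $250 order comes in at just over 3%.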
PayPal payment gateways

If you already have a merchant account for collecting online payments, you can still utilize the integration of PayPal and Magento by setting up a PayPal business account that is linked to your merchant account. Instead of paying PayPal a percentage of each transaction (you would pay this to your merchant account provider), you simply pay a small per-transaction fee.

PayPal Express

Offering PayPal Express is as easy as having a PayPal account. It does require some configuration of API credentials, but it provides the simplest means of offering payment services without setting up a merchant account. PayPal Express will add "Buy Now" buttons to your product pages and the cart page of your store, giving shoppers a quick and immediate way to check out using their PayPal account.

Braintree

PayPal recently acquired Braintree, a payment services company that adds additional services for merchants. While many of their offerings appear to overlap PayPal's, Braintree brings additional features to the marketplace, such as Bitcoin, Venmo, Android Pay, and Apple Pay payment methods, recurring billing, and fraud protection. Like PayPal Payments, Braintree charges 2.9% + $0.30 per transaction.

A word about merchant fees

After operating our own e-commerce businesses for many years, we have used many different merchant accounts and gateways. At first glance, 2.9%, as offered by PayPal, Braintree, and Stripe, appears to be an expensive rate. If you've been solicited by merchant account providers, you no doubt have been quoted rates as low as 1.7%. What is not often disclosed is that this rate only applies to basic cards that do not carry miles or other premiums. Rates for most cards you accept can be quite a bit higher. American Express usually charges more than 3% on transactions. Once you factor in gateway costs, reporting, monthly account costs, and so on, you may find, as we did, that total merchant costs using a traditional merchant account average over 3.3%! One cost you may not think to factor in is the expense of set-up and integration. PayPal and Braintree have worked hard to create easy integrations to Magento (Stripe is not yet available for Magento 2 as of this writing).

Check / Money Order

If you have customers from whom you will accept payment by check and/or money order, you can enable this payment method. Be sure to enter all the information fields, especially Make Check Payable to and Send Check to. You will most likely want to keep the New Order Status as Pending, which means the order is not ready for fulfillment until you receive payment and update the order as Paid. As with any payment method, be sure to edit the Title of the method to reflect how you wish to communicate it to your customers. If you only wish to accept money orders, for instance, you might change the Title to Money Orders (sorry, no checks).

Bank transfer payment

As with Check / Money Order, you can allow customers to wire money to your account by providing the relevant information to customers who choose this method.

Cash on Delivery payment

Likewise, you can offer COD payments. We still see this method being made available on wholesale shipments, but very rarely on B2C (business-to-consumer) sales. COD shipments usually cost more, so you will need to accommodate this added fee in your pricing or shipping methods. At present, there is no ability to add a COD fee using this payment method panel.
Zero Subtotal Checkout

If your customer, by use of discounts or credits, or by selecting free items, owes nothing at checkout, enabling this method will cause Magento to hide payment methods during checkout. The content in the Title field will be displayed in these cases.

Purchase order

In B2B (business-to-business) sales, it's quite common to accept purchase orders (POs) from customers with approved credit. If you enable this payment method, an additional field is presented to customers for entering their PO number when ordering.

Authorize.net Direct Post

Authorize.net, perhaps the largest payment gateway provider in the USA, provides an integrated payment capture mechanism that gives your customers the convenience of entering credit/debit card information on your site, but the actual form submission bypasses your server and goes directly to Authorize.net. This mechanism, as with PayPal Payments Advanced, lessens your responsibility for PCI compliance, as the data is communicated directly between your customer and Authorize.net instead of passing through the Magento programming. In Magento 1.x, the regular Authorize.net gateway (AIM) was one of several default payment methods. We're not certain it will be added as a default in Magento 2, although we would imagine someone will build an extension. Regardless, we think Direct Post is a wonderful way to use Authorize.net and meet your PCI compliance obligations.

Shipping methods

Once you get paid for a sale, you need to fulfill the order, and that means you have to ship the items purchased. How you ship products is largely a function of what shipping methods you make available to your customers. Shipping is one of the most complex aspects of e-commerce, and one where you can lose money if you're not careful. As you work through your shipping configurations, it's important to keep the following in mind:

- What you charge your customers for shipping does not have to be exactly what you're charged by your carriers. Just as you can offer free shipping, you can also charge flat rates based on weight or quantity, or add a surcharge to live rates.
- By default, Magento does not provide you with highly sophisticated shipping rate calculations, especially when it comes to dimensional shipping. Consider shipping rate calculations as estimates only. Consult with whoever is actually doing your shipping to determine whether any rate adjustments should be made to accommodate dimensional shipping.
- Dimensional shipping refers to a recent change by UPS, FedEx, and others to charge you the greater of two rates: the cost based on weight, or the cost based on a formula that determines the equivalent weight of a package based on its size: (Length x Width x Height) ÷ 166 (for US domestic shipments; other factors apply for other countries and exports). Therefore, if you have a large package that doesn't weigh much, the live rate quoted in Magento might not reflect your actual cost once the dimensional weight is calculated. If your packages may be large and lightweight, consult your carrier representative or shipping fulfillment partner for guidance (see the short sketch after this list).
- If your shipping calculations need more sophistication than Magento 2 provides natively, consider an add-on. However, remember that what you charge your customers does not have to be what you pay. For that reason, and to keep it simple for your customers, consider offering table rates (described later).
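To make the dimensional-weight point above concrete, here is a minimal sketch of the billable-weight arithmetic. This is our own illustration, not Magento or carrier code; divisors vary by carrier, service, and destination, so treat 166 as an example value only:

    def billable_weight_lbs(actual_lbs, length_in, width_in, height_in, divisor=166.0):
        """Carriers bill the greater of the actual weight and the dimensional
        weight (L x W x H / divisor) for US domestic shipments."""
        dimensional_lbs = (length_in * width_in * height_in) / divisor
        return max(actual_lbs, dimensional_lbs)

    # A large but light box: 24 x 18 x 12 inches, weighing only 4 lbs
    print(billable_weight_lbs(4, 24, 18, 12))  # about 31.2 lbs billable

This is exactly the case where a live rate quoted on weight alone can understate what the carrier will actually charge you.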
Each method you choose will be displayed to your customers if their cart and shipping destination match the conditions of the method. Take care not to confuse your customers with too many choices: simpler is better. Keeping these insights in mind, let's explore the various shipping methods available by default in Magento 2. Before we go over the shipping methods themselves, let's cover some basic concepts that apply to most, if not all, of them.

Origin

Where you ship your products from will determine shipping rates, especially for carrier rates (for example, UPS and FedEx). To set your origin, go to Stores | Configuration | Sales | Shipping Settings and expand the Origin panel. At the very least, enter the Country, Region/State, and ZIP/Postal Code fields. The others are optional for rate calculation purposes. At the bottom of this panel is the choice to Apply custom Shipping Policy. If enabled, a field will appear where you can enter text about your overall shipping policy. For instance, you may want to enter: Orders placed by 12:00p CT will be processed for shipping on the same day. Applies only to orders placed Monday-Friday, excluding shipping holidays.

Handling fee

You can add an invisible handling fee to all shipping rate calculations; invisible in the sense that it does not appear as a separate line item charge to your customers. To add a handling fee to a shipping method:

- Choose whether you wish to add a fixed amount or a percentage of the shipping cost.
- If you choose to add a percentage, enter the amount as a decimal number instead of a percentage (for example, 0.06 instead of 6%).

Allowed countries

As you configure your shipping methods, don't forget to designate which countries you will ship to. If you only ship to the US and Canada, for instance, be sure to have only those countries selected. Otherwise, you'll have customers from other countries placing orders that you will have to cancel and refund.

Method not available

In some cases, the method you configured may not be applicable to a customer based on destination, type of product, weight, or any number of factors. For these instances, you can choose to either:

- Show the method (for example, UPS, USPS, DHL, and so on), but with an error message that the method is not applicable
- Not show the method at all

Depending on your shipping destinations and target customers, you may want to show an error message just so the customer knows why no shipping solution is being displayed. If you don't show any error message and the customer doesn't qualify for any shipping method, the customer will be confused.

Free shipping

There are several ways to offer free shipping to your customers. If you want to display a Free Shipping option to all customers whose carts meet a minimum order amount (not including taxes or shipping), enable this panel. However, you may want to be more judicious in how and when you offer free shipping. Other alternatives include:

- Creating Shopping Cart Promotions
- Including a free shipping method in your table rates (see later in this section)
- Designating a specific free shipping method and minimum qualifying amount within a carrier configuration (such as UPS and FedEx)

If you choose to use this panel, note that it will apply to all orders. Therefore, if you want to be more selective, consider one of the above methods.

Flat Rate

As with Free Shipping above, the Flat Rate panel allows you to charge a single flat rate for all orders, regardless of weight or destination. You can apply the rate on a per-item or per-order basis, as well.
Table Rates

While using live carrier rates can provide more accurate shipping quotes for your customers, you may find it more convenient to offer a series of rates at certain break points. For example, you might only need something as simple as the following for any domestic destination:

- 0-5 lbs: $5.99
- 6-10 lbs: $8.99
- 11+ lbs: $10.99

Let's assume you're a US-based shipper. While these rates will work for you when shipping to any of the contiguous 48 states, you need to charge more for shipments to Alaska and Hawaii. For our example, let's assume tiered pricing of $7.99, $11.99, and $14.99 at the same weight breaks. All of these conditions can be handled using the Table Rates shipping method. Based on our example, we would first start by creating a spreadsheet (in Excel or Numbers) similar to the following:

Country   Region/State   Zip/Postal Code   Weight (and above)   Shipping Price
USA       *              *                 0                    5.99
USA       *              *                 6                    8.99
USA       *              *                 11                   10.99
USA       AK             *                 0                    7.99
USA       AK             *                 6                    11.99
USA       AK             *                 11                   14.99
USA       HI             *                 0                    7.99
USA       HI             *                 6                    11.99
USA       HI             *                 11                   14.99

Let's review the columns in this chart:

- Country: Enter the 3-character country code (for a list of valid codes, see http://goo.gl/6A1woj).
- Region/State: Enter the 2-character code for any state or province.
- Zip/Postal Code: Enter any specific postal codes for which you wish the rate to apply.
- Weight (and above): Enter the minimum applicable weight for the range. The assigned rate applies until the combined weight of the cart products reaches a higher weight tier.
- Shipping Price: Enter the shipping charge you wish to present to the customer. Do not include the currency prefix (for example, "$" or "€").

Now, let's discuss the asterisk (*) and how to limit the scope of your rates. As you can see in the chart, we have only indicated rates for US destinations, because there are no rows for any other countries. We could easily add rates for all other countries simply by adding rows with an asterisk in the first column. By adding those rows, we're telling Magento to use the US rates if the customer's ship-to address is in the US, and to use the other rates for all other country destinations. Likewise for the states column: Magento will first look for matches on any state codes listed. If it can't find any, it will then look for any rates with an asterisk. If no asterisk is present for a qualifying weight, then no applicable rate will be offered to the customer. The asterisk in the Zip/Postal Code column means that the rates apply to all postal codes for all states. To get a sample file with which to configure your rates, you can set your configuration scope to one of your Websites (Furniture or Sportswear in our examples) and click Export CSV in the Table Rates panel.

Quantity and price based rates

In the preceding example, we used the weight of the items in the cart to determine shipping rates. You can also configure table rates to use calculations based on the number of items in the cart or the total price of all items (less taxes and shipping). To set up your chart, simply rename the fourth column "Quantity (and above)" or "Subtotal (and above)."

Save your rate table

To upload your table rates, you'll need to save/export your spreadsheet as a CSV file. You can name it whatever you like. Save it to your computer where you can find it for the next steps.
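The tier-matching behavior described above is worth internalizing before you build larger tables. The following sketch is only an illustration of the selection logic using the example chart; it is not Magento's implementation, and it ignores postal codes for brevity:

    # Rows from the example chart: (country, region, weight_from, price)
    RATES = [
        ("USA", "AK", 11, 14.99), ("USA", "AK", 6, 11.99), ("USA", "AK", 0, 7.99),
        ("USA", "HI", 11, 14.99), ("USA", "HI", 6, 11.99), ("USA", "HI", 0, 7.99),
        ("USA", "*", 11, 10.99), ("USA", "*", 6, 8.99), ("USA", "*", 0, 5.99),
    ]

    def table_rate(country, region, cart_weight):
        """Prefer exact-region rows over '*' rows, then take the highest
        qualifying weight break; return None when nothing matches."""
        candidates = [r for r in RATES
                      if r[0] == country and r[1] in (region, "*") and cart_weight >= r[2]]
        if not candidates:
            return None  # no applicable rate means no table-rate option is shown
        candidates.sort(key=lambda r: (r[1] != "*", r[2]), reverse=True)
        return candidates[0][3]

    print(table_rate("USA", "TX", 7))  # 8.99 -- falls through to the '*' rows
    print(table_rate("USA", "AK", 7))  # 11.99 -- the exact-region row wins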
Table rate settings

Before you upload your new rates, you should first set your Table Rates configurations. You can set your default settings at the Default configuration scope; however, to upload your CSV file, you will need to switch your Store View to the appropriate Website scope. When changing to a Website scope, you will see the Export CSV button and the ability to upload your rate table file. You'll note that all other settings may have Use Default checked. You can, of course, uncheck this box beside any field and adjust the settings according to your preferences. Let's review the unique fields in this panel:

- Enabled: Set to "Yes" to enable table rates.
- Title: Enter the name you wish to display to customers when they're presented with a table rate-based shipping charge in the checkout process.
- Method Name: This name is presented to the customer in the shopping cart. You should probably change the default "Table Rate" to something more descriptive, as this term is likely irrelevant to customers. We have used names such as "Standard Ground," "Economy," or "Saver." The Title should probably be the same, as well, so that the customer has a visual confirmation of their shipping choice during checkout.
- Condition: This allows you to choose the calculation method you want to use. Your choices, as we described earlier, are "Weight vs. Destination," "Price vs. Destination," and "# of items vs. Destination."
- Include Virtual Products in Price Calculation: Since virtual products have no weight, this has no effect on rate calculations for weight-based rates. However, it will affect rate calculations for price- or quantity-based rates.

Once you have your settings, click Save Config.

Upload Rate Table

Once you have saved your settings, you can click the button next to Import and upload your rate table. Be sure to test your rates to confirm that you have properly constructed your rate table.

Carrier Methods

The remaining shipping methods involve configuring UPS, USPS, FedEx, and/or DHL to provide "live" rate calculations. UPS is the only one that can query for live rates without requiring you to have an account with the carrier. This is both good and bad. It's good because you only have to enable the shipping method to have it begin querying rates for your customers. On the flip side, the rates that are returned are not negotiated rates. Negotiated rates are those you may have been offered as discounted rates based on your shipping volume. FedEx, USPS, and DHL require account-specific information in order to activate. This connection with your account should provide rates based on any discounts you have established with your carrier. If you wish to use negotiated rates for UPS, you may have to find a Magento add-on that accommodates them, or have your developer extend your Magento installation to make a modified rate query. If you have some history with shipping, you should negotiate rates with the carriers. We have found most are willing to offer some discount from "published rates."

Shipping integrations

Unless you have your own sophisticated warehouse operation, it may be wise to partner with a fulfillment provider that can not only store, pick, pack, and ship your orders, but also offers deep discounts on shipping rates due to their large volumes. Amazon FBA (Fulfillment By Amazon) is a very popular solution; shipping is a low flat rate based on weight (http://goo.gl/UKjg7). ShipWire is another fulfillment provider that is well integrated with Magento. In fact, their integration can provide real-time rate quotes for your customers based on the products selected, warehouse availability, and destination (http://www.ShipWire.com).
We have not heard whether they have updated their integration for Magento 2 yet, but we suspect they will.

Summary

Selling is the primary purpose of building an online store. As you've seen in this article, Magento 2 arms you with a very rich array of features to help you give your customers the ability to purchase using a variety of payment methods. You're able to customize your shipping options and manage complex tax rules. All of this combines to make it easy for your customers to complete their online purchases.

Resources for Article:

Further resources on this subject:
- Social Media and Magento [article]
- Creating a Responsive Magento Theme with Bootstrap 3 [article]
- Magento 2 – the New E-commerce Era [article]

Redis Cluster Client Development

Zhe Lin
09 May 2016
5 min read
In this post, I'd like to discuss how to write a client for a Redis Cluster in a little more depth. The assumption here is that readers have a working knowledge of the Redis protocol, so we will concentrate on how to deal with data sharding and resharding in a Redis Cluster.

As is known, each Redis instance in a cluster serves a number of non-overlapping data slots. This means two basic things:

- There's a way to know how the cluster shards its dataset, which is the CLUSTER NODES command.
- If you send a data command such as GETSET to a Redis instance that doesn't contain the slot of the command key, you will get a MOVED error, so you need to keep the sharding information up to date.

A CLUSTER NODES command sent to any Redis instance in a cluster will give you equivalent sharding and replication information and can help you know who serves what. For example, the result could look like this:

33e5b1010d56d328370a993cf8420d70b3052d2c 127.0.0.1:7004 slave 71568798b4f1edeb0577614e072b492173b63808 0 1444704850437 2 connected
71568798b4f1edeb0577614e072b492173b63808 127.0.0.1:7002 master - 0 1444704848935 2 connected 3000-8499
f34b1ee569f3b856a80633a0b68892a58bfc87d2 127.0.0.1:7001 slave 1141f524b3ac82bb47bb101e0f8ae75ed56cdf54 0 1444704850938 1 connected
1141f524b3ac82bb47bb101e0f8ae75ed56cdf54 127.0.0.1:7003 master - 0 1444704849436 1 connected 0-2999 8500-10499
026f1e9b8c205de0a3645fd8a5eae3fbd0ca8639 127.0.0.1:7005 master - 0 1444704850838 3 connected 10500-16383
79ef081c431bbc7edc305a9611278be0df8fcd04 127.0.0.1:7000 myself,slave 026f1e9b8c205de0a3645fd8a5eae3fbd0ca8639 0 0 1 connected

By simply splitting each line on the space character, you get at least eight items per line. The first four items are the node ID, address, flags, and master node ID. A node with the myself flag is the one receiving the CLUSTER NODES command. If a node has the slave flag, its master node ID, which won't be a dash, indicates which node it replicates. A node with a fail, fail?, or handshake flag is not connectible from the others and will probably fail to respond, so you can simply ignore it.

Then let's look at the items after the first eight, which indicate the slots held by the node. The slot ranges are closed intervals; for example, 3000-8499 means 3000, 3001, ..., 8498, 8499, so the node at 127.0.0.1:7002 serves 5,500 slots. The ranges can also be discontiguous, so the slot ranges served by 127.0.0.1:7003 are 0-2999 and 8500-10499.

After parsing the cluster information, you know which master node a data command belongs to. If you don't mind possibly getting stale data, you can increase the QPS by sending read commands to a slave; the hint here is that you need to send a READONLY command before you can read data from a slave Redis.

It's a good start if your client library is able to parse static sharding information. But since the Redis Cluster may reshard, you also need to know what happens when Redis instances join or quit the cluster. Suppose, for example, that a Redis instance bound to 127.0.0.1:8000 has joined the cluster and slot #0 is migrating to it from 127.0.0.1:7003. Your client probably has no idea about that and sends a command such as GET h-893 (h-893 is in slot #0) to 127.0.0.1:7003 as usual. The result could then be one of the following:

1. A normal response, if the slot is in the migrating state and the key hits the cache at 127.0.0.1:7003.
2. An -ASK [ADDRESS] error, if the slot is in the migrating state and the key doesn't hit the cache (non-existent or already migrated).
3. A -MOVED [SLOT#] [ADDRESS] error, if the slot has been completely migrated to the new instance.
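Putting the parsing rules above into code, here is a minimal sketch that turns a raw CLUSTER NODES reply into a slot-to-master map. It follows the field positions described above; error handling and replica bookkeeping are left out, and sending the command itself is assumed to happen elsewhere:

    def parse_cluster_nodes(raw_reply):
        """Map (start_slot, end_slot) ranges to master addresses, skipping
        nodes flagged as fail/handshake and slots in a migrating state."""
        slot_map = {}
        for line in raw_reply.strip().splitlines():
            items = line.split(' ')
            address = items[1].split('@')[0]  # newer Redis appends @cluster-bus-port
            flags = items[2]
            if 'master' not in flags or 'fail' in flags or 'handshake' in flags:
                continue
            for slot_range in items[8:]:
                if slot_range.startswith('['):  # importing/migrating slot, skip it
                    continue
                bounds = slot_range.split('-')
                start = int(bounds[0])
                end = int(bounds[1]) if len(bounds) > 1 else start
                slot_map[(start, end)] = address
        return slot_map

    def master_for_slot(slot_map, slot):
        for (start, end), address in slot_map.items():
            if start <= slot <= end:
                return address
        return None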
In case #2 (the -ASK redirection), you may need to retry the command later, and in either of the two error cases you need to update your sharding information. You can simply redirect the command to the new address that the MOVED error gave you, or update the complete slot mapping.

When a Redis instance quits the cluster and becomes a free node, it won't reset your TCP connections, but it will always reply with a -CLUSTERDOWN error to any data command. In this case, don't panic; just try a sharding information update from the other Redis instances (which means your client has to remember all the known instances for exactly this situation). But if you think it over, you may realize there's a chance that the Redis instance joins another cluster after it quits the old one, and happens to serve the same slots it used to. In that case your client may appear to work normally while the data actually goes somewhere else. So, your client ought to provide a way to trigger a sharding update on demand.

Now that we have discussed the normal cases, what happens when a Redis instance has been killed? This won't be so complicated if the slaves fail over their disconnected masters and the cluster state eventually turns to OK (otherwise, ask your Redis administrator to fix it immediately). Your client will encounter an I/O error, such as a TCP connection reset, when the Redis server is down or something goes wrong while writing to the socket. Either way, it's time to do a sharding information update.

These are the things that a client for a Redis Cluster should cover. Hopefully, this helps you if you decide to write a cluster client, or provides you with valuable information if you want to learn more details about your client library (provided you are already using a Redis Cluster).

About the author

Zhe Lin is a system engineer at the IT Developing Center at Hunantv.com. He is in charge of Redis-related toolkit development, maintenance, and in-company training. Major works include:

- redis-trib.py: a Redis Cluster creation and resharding toolkit in Python 2
- redis-cerberus: a Redis Cluster proxy
- redis-ctl: a Redis management tool with a web UI

JIRA Workflows

Packt
05 May 2016
15 min read
In this article by Patrick Li, author of the book Jira 7 Administration Cookbook - Second Edition, we will take a look at workflows, one of the core and most powerful features in JIRA. Workflows control how issues in JIRA move from one stage to another as they are worked on, often passing from one assignee to another. For this reason, workflows can be thought of as the life cycle of issues. In this article, we will cover:

- Setting up different workflows for your project
- Capturing additional information during workflow transitions
- Using common transitions
- Using global transitions
- Restricting the availability of workflow transitions
- Validating user input in workflow transitions
- Performing additional processing after a transition is executed

Unlike many other systems, JIRA allows you to create your own workflows to resemble the work processes you may already have in your organization. This is a good example of how JIRA is able to adapt to your needs without your having to change the way you work.

(For more resources related to this topic, see here.)

Setting up different workflows for your project

A workflow is similar to a flowchart, in which issues can go from one state to another by following the direction paths between the states. In JIRA's workflow terminology, the states are called statuses, and the paths are called transitions. We will use these two major components when customizing a workflow. In this recipe, we will create a new, simple workflow from scratch. We will look at how to use existing statuses, create new statuses, and link them together using transitions.

How to do it…

The first step is to create a new skeleton workflow in JIRA, as follows:

1. Log in to JIRA as a JIRA administrator.
2. Navigate to Administration > Issues > Workflows.
3. Click on the Add Workflow button and name the workflow Simple Workflow.
4. Click on the Diagram button to use the workflow designer or diagram mode.

The following screenshot explains some of the key elements of the workflow designer:

As of now, we have created a new, inactive workflow. The next step is to add the various statuses that issues will go through. JIRA comes with a number of existing statuses, such as In Progress and Resolved, for us to use:

1. Click on the Add status button.
2. Select the In Progress status from the list and click on Add. You can type the status name into the field, and JIRA will automatically find the status for you.
3. Repeat the steps to add the Closed status.

Once you add the statuses to the workflow, you can drag them around to reposition them on the canvas. We can also create new statuses, as follows:

1. Click on the Add status button.
2. Name the new status Frozen and click on Add. JIRA will let you know if the status you are entering is new by showing the text (new status) next to the status name.

Now that we have added the statuses, we need to link them using transitions:

1. Select the originating status, which in this example is Open.
2. Click on the small circle around the OPEN status and drag your cursor onto the IN PROGRESS status. This will prompt you to provide details for the new transition, as shown in the following screenshot:
3. Name the new transition Start Progress and select the None option for the screen.
4. Repeat the steps to create a transition called Close between the IN PROGRESS and CLOSED statuses.

You should finish with a workflow that looks similar to the following screenshot:

At this point, the workflow is inactive, which means that it is not used by any project, and you can edit it without any restrictions.
Workflows are applied on a project and issue-type basis. Perform the following steps to apply the new workflow to a project:

1. Select the project to apply the workflow to.
2. Click on the Administration tab to go to the project administration page.
3. Select Workflows from the left-hand side of the page.
4. Click on Add Existing from the Add Workflow menu.
5. Select the new Simple Workflow from the dialog and click on Next.
6. Choose the issue types (for example, Bug) to apply the workflow to and click on Finish.

After we apply the workflow to a project, the workflow is placed in the active state. So, if we now create a new issue of the selected issue type in the target project, our new Simple Workflow will be used.

Capturing additional information during workflow transitions

When users execute a workflow transition, we have the option to display an intermediate workflow screen. This is a very useful way to collect some additional information from the user. For example, the default JIRA workflow displays a screen for users to select the Resolution value when the issue is resolved. Issues with resolution values are considered completed, so you should only add the Resolution field to workflow screens that represent the closing of an issue.

Getting ready

We need to have a workflow to configure, such as the Simple Workflow created in the previous recipe. We also need to have screens to display; JIRA's out-of-the-box Workflow screen and Resolve Issue screen will suffice, but if you create your own screens, those can also be used.

How to do it…

Perform the following steps to add a screen to a workflow transition:

1. Select the workflow to update, such as our Simple Workflow.
2. Click on the Edit button if the workflow is active. This will create a draft workflow for us to work on.
3. Select the Start Progress transition and click on the Edit link in the panel on the right-hand side.
4. Select the Workflow screen from the Screen drop-down menu and click on Save.
5. Repeat steps 3 and 4 to add the Resolve Issue screen to the Close transition.

If you are working with a draft workflow, you must click on the Publish Draft button to apply the changes to the live workflow. If you do not see your changes reflected, it is most likely that you forgot to publish your draft workflow.

Using common transitions

Often, you will have transitions that need to be made available from several different statuses in a workflow, such as the Resolve and Close transitions. In other words, these are transitions that have a common destination status but many different originating statuses. To help you simplify the process of creating these transitions, JIRA lets you reuse an existing transition as a common transition if it has the same destination status. Common transitions are transitions that have the same destination status but different originating statuses. A common transition has the additional advantage of ensuring that transition screens and other relevant configurations, such as validators, stay consistent. Otherwise, you would have to check the various transitions every time you make a change to one of them.

How to do it…

Perform the following steps to create and use common transitions in your workflow:

1. Select the workflow and click on the Edit link to create a draft.
2. Select the Diagram mode.
3. Create a transition between two steps, for example, Open and Closed.
4. Create another transition from a different step to the same destination step and click on the Reuse a transition tab.
5. Select the transition created in step 3 from the Transition to reuse drop-down menu and click on Add.
6. Click on Publish Draft to apply the change.

See also

Refer to the Using global transitions recipe.

Using global transitions

While a common transition is a great way to share transitions in a workflow and reduce the amount of management work that would otherwise be required, it has the following two limitations:

- It is only supported in the classic diagram mode (if running on older JIRA versions)
- You still have to manually create the transitions between the various steps

As your workflow starts to become complicated, explicitly creating the transitions becomes a tedious job; this is where global transitions come in. A global transition is similar to a common transition in the sense that both have a single destination status. The difference between the two is that a global transition is a single transition that is available from all the statuses in a workflow. In this recipe, we will look at how to use global transitions so that issues can be transitioned to the Frozen status from anywhere in the workflow.

Getting ready

As usual, you need to have a workflow you can edit. As we will demonstrate how global transitions work, you need to have a status called Frozen in your workflow, and ensure that there are no transitions linked to it.

How to do it…

Perform the following steps to create and use global transitions in your workflow:

1. Select and edit the workflow that you will be adding the global transition to.
2. Select the Diagram mode.
3. Select the Frozen status and check the Allow all statuses to transition to this one option.
4. Click on Publish Draft to apply the change.

The following screenshot depicts the preceding steps. Once you create a global transition for a status, it will be represented as an All transition, as shown in the following screenshot.

After the global transition is added to the Frozen status, you will be able to transition issues to Frozen regardless of their current status. Note that you can only add global transitions in the Diagram mode.

See also

Refer to the Restricting the availability of workflow transitions recipe on how to remove a transition when an issue is already in the Frozen status.

Restricting the availability of workflow transitions

Workflow transitions, by default, are accessible to anyone who has access to the issue. There will be times when you want to restrict access to certain transitions. For example, you might want to restrict access to the Freeze Issue transition for the following reasons:

- The transition should only be available to users in specific groups or project roles
- As the transition is a global one, it is available from all the workflow statuses, but it does not make sense to show the transition when the issue is already in the Frozen status

To restrict the availability of a workflow transition, we can use workflow conditions.

Getting ready

For this recipe, we need to have the JIRA Suite Utilities add-on installed. You can download it from the following link or install it directly from the Universal Plugin Manager: https://marketplace.atlassian.com/plugins/com.googlecode.jira-suite-utilities

How to do it…

Perform the following steps to restrict the Frozen transition with workflow conditions:

1. Select and edit the workflow to configure.
2. Select the Diagram mode.
3. Click on the Frozen global workflow transition.
4. Click on the Conditions link in the panel on the right-hand side.
5. Click on Add condition, select the Value Field condition (provided by JIRA Suite Utilities) from the list, and click on Add.
6. Configure the condition with the following parameters, which mean that the transition should only be shown if the issue's Status field value is not Frozen:
- Field: Status
- Condition: Not equal (!=)
- Value: Frozen
- Comparison Type: String
7. Click on the Add button to complete the condition setup.

At this point, we have added the condition that makes sure the Freeze Issue transition is not shown when the issue is already in the Frozen status. The next step is to add another condition so that the transition is only available to users in the Developer role:

8. Click on Add condition again and select the User Is In Project Role condition. Then, select the Developer project role and click on Add.
9. Click on Publish Draft to apply the change.

After you apply the workflow conditions, the Frozen transition will no longer be available if the issue is already in the Frozen status and/or the current user is not in the Developer project role.

There's more…

Using the Value Field condition (which comes with the JIRA Suite Utilities add-on) is one of many ways in which we can restrict the availability of a transition based on the issue's previous status. There is another add-on called JIRA Misc Workflow Extensions (a paid add-on), which comes with a Previous Status condition for this very use case. You can download it from https://marketplace.atlassian.com/plugins/com.innovalog.jmwe.jira-misc-workflow-extensions.

When you have more than one workflow condition applied to the transition, as in our example, the default behavior is that all the conditions must pass for the transition to be available. You can change this so that only one condition needs to pass by changing the condition group logic from All of the following conditions to Any of the following conditions, as shown in the following screenshot:

Validating user input in workflow transitions

For workflow transitions that have transition screens, you can add validation logic to make sure that what the users enter is what you are expecting. This is a great way to ensure data integrity, and we can do this with workflow validators. In this recipe, we will add a validator to perform a date comparison between a custom field and the issue's create date, so that the date value selected for the custom field must be after the issue's create date.

Getting ready

For this recipe, we need to have the JIRA Suite Utilities add-on installed. You can download it from the following link or install it directly using the Universal Plugin Manager: https://marketplace.atlassian.com/plugins/com.googlecode.jira-suite-utilities

As we will also do a date comparison, we need to create a new date custom field called Start Date and add it to the Workflow screen.

How to do it…

Perform the following steps to add validation rules to a workflow transition:

1. Select and edit the workflow to configure.
2. Select the Diagram mode.
3. Select the Start Progress transition and click on the Validators link on the right-hand side.
4. Click on the Add validator link and select Date Compare from the list.
5. Configure the validator with the following parameters:
- This date: the Start Date custom field
- Condition: Greater than (>)
- Compare with: Created
6. Click on Add to add the validator. Then, click on Publish Draft to apply the change.
After we add the validator, if we now try to select a date that is before the issue's create date, JIRA will prompt us with an error message and stop the transition from going through, as shown in the following screenshot:

How it works…

Validators are run before the workflow transition is executed. This way, validators can intercept and prevent a transition from going through if one or more of the validation rules fail. If you have more than one validator, all of them must pass for the transition to go through.

Performing additional processing after a transition is executed

JIRA allows you to perform additional tasks as part of a workflow transition through the use of post functions. JIRA makes heavy use of post functions internally; for example, in the out-of-the-box workflow, when you reopen an issue, the resolution field value is cleared automatically. In this recipe, we will look at how to add post functions to a workflow transition. We will add a post function to automatically clear out the value stored in the Reason for Freezing custom field when we take an issue out of the Frozen status.

Getting ready

By default, JIRA comes with a post function that can change the values of standard issue fields, but as Reason for Freezing is a custom field, we need to have the JIRA Suite Utilities add-on installed. You can download it from the following link or install it directly using the Universal Plugin Manager: https://marketplace.atlassian.com/plugins/com.googlecode.jira-suite-utilities

How to do it…

Perform the following steps to add processing logic after a workflow transition is executed:

1. Select and edit the workflow to configure.
2. Select the Diagram mode.
3. Click on the Frozen global workflow transition, and then click on the Post functions link on the right-hand side.
4. Click on Add post function, select the Clear Field Value post function from the list, and click on Add.
5. Select the Reason for Freezing field from Field and click on the Add button.
6. Click on Publish Draft to apply the change.

With the post function in place, the Reason for Freezing field will be cleared out after you execute the transition. You can also see in the issue's change history that, as part of the transition execution where the Status field changed from Frozen to Open, the change to the Reason for Freezing field was recorded as well.

How it works…

Post functions are run after the transition is executed. When you add a new post function, you might notice that the transition already has a number of post functions pre-added; this is shown in the screenshot that follows. These are system post functions that carry out important internal tasks, such as keeping the search index up to date. The order of these post functions is important. Always add your own post functions at the top of the list. For example, any changes to issue field values, such as the one we just added, should always happen before the Reindex post function, so that by the time the transition is completed, all the field indexes are up to date and ready to be searched.

Summary

In this article, you learned not only how to create workflows with the new workflow designer, but also how to use workflow components, such as conditions and validators, to add additional behavior to your workflows. We also looked at several add-ons that are available to expand the possibilities of what you can do with workflows.
Resources for Article:

Further resources on this subject:
- JIRA Agile for Scrum [article]
- JIRA – an Overview [article]
- Gadgets in JIRA [article]

Understanding Drivers

Packt
04 May 2016
7 min read
In this article by Jeff Stokes and Manuel Singer, authors of the book Mastering the Microsoft Deployment Toolkit 2013, we will discuss how to use the Microsoft Deployment Toolkit (MDT) to turn the complex world of device drivers into a much more manageable experience. We will focus on how drivers get installed via MDT, how to specifically control which drivers get installed, and general best practices around proper driver management. We will cover the following topics in this article:

- Understanding offline servicing
- The MDT method of driver detection and injection

(For more resources related to this topic, see here.)

Understanding offline servicing

Those of us who created images for the deployment of Windows XP were often met with the enormous challenge of dealing with drivers for many different models of hardware. We were already forced to create separate images for different hardware abstraction layer (HAL) families. Additionally, in order to deal with different hardware models within the same HAL family, the standard practice was usually to have a folder called C:\Drivers, which contained a copy of every possible driver that could be required by this image for all of the different hardware models it would be installed on. There was an OemPnPDriversPath entry in the registry that individually listed each of the driver paths (subfolders under the C:\Drivers directory) for the Windows Plug and Play process to locate and install the driver.

As you can imagine, this was not a very efficient way to manage drivers. One reason is that every driver for every machine was staged in the image, causing the image size to grow. Another reason is that we were relying on Plug and Play to figure out the right driver to install, which gives us less control over the driver that actually gets installed, based on a driver-ranking process.

Fast forward to Windows Vista and current versions of Windows, and we can now utilize the magic of offline servicing to inject drivers into our Windows Imaging Format (WIM) image as it is getting deployed. With this in mind, consider the concept of having your customized Windows image created through your reference image build process, but containing no drivers. Now, when we deploy this image, we can utilize a process to detect all the hardware in the target machine, and then grab only the correct drivers that we need for this particular machine. Then, we can utilize Deployment Image Servicing and Management (DISM) to inject them into our WIM as it is getting deployed, therefore making the drivers available to be installed as Windows is installed on this machine. MDT does just that.

The MDT method of driver detection and injection

When we boot a target machine via our Lite Touch media, one of the initial task sequence steps will enumerate (via PnpEnum) all the PNP IDs for every device in the machine. Then, as part of the inject drivers task sequence step, MDT will search all of our Out-of-Box driver INF files to find the matching drivers, and will utilize DISM to inject these drivers offline into the WIM. Note that, by default, we will be searching our entire Out-of-Box repository and letting Plug and Play figure things out. We can force MDT to only choose from drivers that we specify, therefore gaining strict control over which drivers actually get installed. The preceding scenario indicates that this whole process hinges on the fact that we are searching through driver INF files to find matching PNP IDs in order to correctly detect and install the correct driver.
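MDT performs this injection for you during the deployment task sequence, but it may help to know that, under the hood, it is the same offline driver servicing you could run by hand with DISM against a mounted or applied image. As a purely illustrative example (the image path and driver folder here are placeholders, not values MDT uses):

    DISM /Image:C:\Mount /Add-Driver /Driver:C:\Drivers\SomeModel /Recurse

The /Recurse switch makes DISM pick up every INF file found below the specified driver folder.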
This reliance on INF files brings up a concern: what if a driver does not ship with an INF file, but instead has to be installed via an EXE program? In this scenario, we cannot utilize the driver injection process. Instead, we would treat that driver as an application in MDT, meaning we would add a new application, using the EXE program as the source files, specifying the command-line syntax to launch the driver installation program silently, and then adding this application as a task sequence step. I will later demonstrate how to utilize conditional statements in your task sequence to only install that driver program on the model that it applies to, therefore keeping our task sequence flexible enough to install correctly on any hardware.

Populating the Out-of-Box Drivers node of MDT

The first step will be to visit the OEM manufacturer's website and download all the device drivers for each model of machine that we will be deploying to. Note that many OEMs now offer a deployment-specific download or CAB file that has all the drivers for a particular model compressed into one single CAB file. This benefits you, as you will not have to go through the hassle of downloading and extracting each individual driver for each device separately (NIC, video, audio, and so on). Once you download the necessary drivers, store them in a folder for each specific model, as you will need to extract the drivers within your folder before importing them into MDT.

Next, we want to create a folder structure under the Out-of-Box Drivers node in MDT to organize our drivers. This will not only allow easy management of drivers as new drivers are released by the OEM; if we name the folders to match the model names exactly, we can later introduce logic to limit our Plug and Play search to the exact folder that contains the correct drivers for our particular hardware model. As we will have different drivers for x86 and x64, as well as for different operating systems, a general best practice would be to create the first hierarchy of your folder structure accordingly. Perform the following steps to populate the node in MDT:

1. In order to create the folder structure, simply click on Out-of-Box Drivers and choose New Folder, as shown in the following screenshot:
2. Next, we will want to create a folder for each model that we will be deploying to.
3. In order to ensure that you are using the correct model name, you can query WMI (the Model property of the Win32_ComputerSystem class) to see what the hardware returns as the model name.
4. Once you have your folder structure created, you are ready to import the drivers. Right-click on the model folder and choose Import Drivers. Point the driver source directory to the folder where you have downloaded and extracted the OEM drivers.

There is a checkbox stating Import drivers even if they are duplicates of an existing driver. This exists because MDT uses single-instance storage to store the drivers in the actual deployment share. If you import multiple copies of a driver to different folders, MDT only stores one copy of the file in the actual filesystem by default, and the folder structure you see within the MDT Workbench points the duplicates to the same file in order not to waste space. As new drivers are released by the OEM, you can simply replace them by going to the particular folder for that model, removing the old drivers, and importing the new drivers.
Then, the next time you install your WIM on this model, you will be using the new drivers, and you won't have to make any modifications or updates to your WIM.

Summary

In this article, we looked at offline servicing, the MDT method of driver detection and injection, and how to populate the Out-of-Box Drivers node of MDT. For more information related to MDT, refer to the following book by Packt Publishing: Mastering the Microsoft Deployment Toolkit 2013: https://www.packtpub.com/hardware-and-creative/mastering-microsoft-deployment-toolkit-2013

Resources for Article:

Further resources on this subject:
- The Configuration Manager Troubleshooting Toolkit [article]
- Social-Engineer Toolkit [article]
- Working with Entities in Google Web Toolkit 2 [article]

Working with the Cloud

Packt
02 May 2016
33 min read
In this article by Gastón C. Hillar, author of the book Internet of Things with Python, we will take advantage of many cloud services to publish and visualize data collected from sensors and to establish bi-directional communications between Internet-connected things.

(For more resources related to this topic, see here.)

Sending and receiving data in real-time through the Internet with PubNub

We want to send and receive data in real time through the Internet, and a RESTful API is not the most appropriate option for this. Instead, we will work with a publish/subscribe model based on a protocol that is lighter than the HTTP protocol. Specifically, we will use a service based on the MQ Telemetry Transport (MQTT) protocol. The MQTT protocol is a machine-to-machine (M2M) connectivity protocol. MQTT is a lightweight messaging protocol that runs on top of the TCP/IP protocol and works with a publish-subscribe mechanism. It is possible for any device to subscribe to a specific channel (also known as a topic) and receive all the messages published to this channel. In addition, the device can publish messages to this or other channels. The protocol is becoming very popular in IoT and M2M projects. You can read more about the MQTT protocol on the following webpage: http://mqtt.org.

PubNub provides many cloud-based services, and one of them allows us to easily stream data and signal any device in real time, working with the MQTT protocol under the hood. We will take advantage of this PubNub service to send and receive data in real time through the Internet and make it easy to control our Intel Galileo Gen 2 board through the Internet. As PubNub provides a Python API with high-quality documentation and examples, it is extremely easy to use the service in Python. PubNub defines itself as the global data stream network for IoT, mobile, and web applications. You can read more about PubNub on its webpage: http://www.pubnub.com.

In our example, we will take advantage of the free services offered by PubNub, and we won't use some advanced features and additional services that might empower our IoT project's connectivity requirements but also require a paid subscription.

PubNub requires us to sign up and create an account with a valid e-mail and a password before we can create an application within PubNub that allows us to start using their free services. We aren't required to enter any credit card or payment information. If you already have an account at PubNub, you can skip the next step. Once you have created your account, PubNub will redirect you to the admin portal that lists your PubNub applications. It is necessary to generate your PubNub publish and subscribe keys in order to send and receive messages in the network.

A new pane will represent the application in the admin portal. The following screenshot shows the Temperature Control application pane in the PubNub admin portal. Click on the Temperature Control pane, and PubNub will display the Demo Keyset pane that has been automatically generated for the application. Click on this pane, and PubNub will display the publish, subscribe, and secret keys. We must copy and paste each of the keys to use them in our code that will publish messages and subscribe to them. The following screenshot shows the prefixes for the keys; the remaining characters have been erased in the image. In order to copy the secret key, you must click on the eye icon at the right-hand side of the key, and PubNub will make all the characters visible.
Now, we will use the pip installer to install PubNub Python SDK 3.7.6. We just need to run the following command in the SSH terminal to install the package. Notice that it can take a few minutes to complete the installation.

pip install pubnub

The last lines of the output will indicate that the pubnub package has been successfully installed. Don't worry about the error messages related to building wheel and the insecure platform warning.

Downloading pubnub-3.7.6.tar.gz
Collecting pycrypto>=2.6.1 (from pubnub)
  Downloading pycrypto-2.6.1.tar.gz (446kB)
    100% |################################| 446kB 25kB/s
Requirement already satisfied (use --upgrade to upgrade): requests>=2.4.0 in /usr/lib/python2.7/site-packages (from pubnub)
Installing collected packages: pycrypto, pubnub
  Running setup.py install for pycrypto
  Running setup.py install for pubnub
Successfully installed pubnub-3.7.6 pycrypto-2.6.1

We will use the iot_python_chapter_08_03.py code (you will find this in the code bundle) as a baseline to add new features that will allow us to perform the following actions with PubNub messages sent to a specific channel from any device that has a Web browser:

- Rotate the servo's shaft to display a temperature value in degrees Fahrenheit received as part of the message
- Display a line of text received as part of the message at the bottom of the OLED matrix

We will use the recently installed pubnub module to subscribe to a specific channel and run code when we receive messages in the channel. We will create a MessageChannel class to represent the communications channel, configure the PubNub subscription, and declare the code for the callbacks that are going to be executed when certain events are fired. The code file for the sample is iot_python_chapter_09_02.py (you will find this in the code bundle). Remember that we use the code file iot_python_chapter_08_03.py (you will find this in the code bundle) as a baseline, and therefore, we will add the class to the existing code in this file and we will create a new Python file. Don't forget to replace the strings assigned to the publish_key and subscribe_key local variables in the __init__ method with the values you have retrieved from the previously explained PubNub key generation process.
import time
from pubnub import Pubnub


class MessageChannel:
    command_key = "command"

    def __init__(self, channel, temperature_servo, oled):
        self.temperature_servo = temperature_servo
        self.oled = oled
        self.channel = channel
        # Publish key is the one that usually starts with the "pub-c-" prefix
        # Do not forget to replace the string with your publish key
        publish_key = "pub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        # Subscribe key is the one that usually starts with the "sub-c" prefix
        # Do not forget to replace the string with your subscribe key
        subscribe_key = "sub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        self.pubnub = Pubnub(publish_key=publish_key,
                             subscribe_key=subscribe_key)
        self.pubnub.subscribe(channels=self.channel,
                              callback=self.callback,
                              error=self.error,
                              connect=self.connect,
                              reconnect=self.reconnect,
                              disconnect=self.disconnect)

    def callback(self, message, channel):
        if channel == self.channel:
            if self.__class__.command_key in message:
                if message[self.__class__.command_key] == "print_temperature_fahrenheit":
                    self.temperature_servo.print_temperature(
                        message["temperature_fahrenheit"])
                elif message[self.__class__.command_key] == "print_information_message":
                    self.oled.print_line(11, message["text"])
            print("I've received the following message: {0}".format(message))

    def error(self, message):
        print("Error: " + str(message))

    def connect(self, message):
        print("Connected to the {0} channel".format(self.channel))
        print(self.pubnub.publish(
            channel=self.channel,
            message="Listening to messages in the Intel Galileo Gen 2 board"))

    def reconnect(self, message):
        print("Reconnected to the {0} channel".format(self.channel))

    def disconnect(self, message):
        print("Disconnected from the {0} channel".format(self.channel))

The MessageChannel class declares the command_key class attribute that defines the key string that the code will understand as the command. Whenever we receive a message that includes the specified key string, we know that the value associated with this key in the dictionary will indicate the command that the message wants the code running in the board to process. Each command requires additional key-value pairs that provide the necessary information to execute the command.

We have to specify the PubNub channel name, the TemperatureServo instance, and the Oled instance in the channel, temperature_servo, and oled required arguments. The constructor, that is, the __init__ method, saves the received arguments in three attributes with the same names. The channel argument specifies the PubNub channel to which we are going to subscribe to listen to the messages that other devices send to this channel. We will also publish messages to this channel, and therefore, we will be both a subscriber and a publisher for this channel. In this case, we will only subscribe to one channel. However, it is very important to know that we are not limited to subscribing to a single channel; we might subscribe to many channels.

Then, the constructor declares two local variables: publish_key and subscribe_key. These local variables save the publish and subscribe keys we had generated with the PubNub admin portal. Then, the code creates a new Pubnub instance with publish_key and subscribe_key as the arguments, and saves the reference for the new instance in the pubnub attribute. Finally, the code calls the subscribe method for the new instance to subscribe to data on the channel saved in the channel attribute.
Under the hood, the subscribe method makes the client create an open TCP socket to the PubNub network, which includes an MQTT broker, and starts listening to messages on the specified channel. The call to this method specifies methods declared in the MessageChannel class for the following named arguments:

- callback: Specifies the function that will be called when there is a new message received from the channel
- error: Specifies the function that will be called on an error event
- connect: Specifies the function that will be called when a successful connection is established with the PubNub cloud
- reconnect: Specifies the function that will be called when a successful re-connection is completed with the PubNub cloud
- disconnect: Specifies the function that will be called when the client disconnects from the PubNub cloud

This way, whenever one of the previously enumerated events occurs, the specified method will be executed.

The callback method receives two arguments: message and channel. First, the method checks whether the received channel matches the value in the channel attribute. In this case, whenever the callback method is executed, the value in the channel argument will always match the value in the channel attribute because we just subscribed to one channel. However, in case we subscribe to more than one channel, it is always necessary to check the channel in which the message was sent and in which we are receiving the message.

Then, the code checks whether the command_key class attribute is included in the message dictionary. If the expression evaluates to True, it means that the message includes a command that we have to process. However, before we can process the command, we have to check which command it is, and therefore, it is necessary to retrieve the value associated with the key equivalent to the command_key class attribute. The code is capable of running code when the value is any of the following two commands:

- print_temperature_fahrenheit: The command must specify the temperature value expressed in degrees Fahrenheit in the value of the temperature_fahrenheit key. The code calls the self.temperature_servo.print_temperature method with the temperature value retrieved from the dictionary as an argument. This way, the code moves the servo's shaft based on the specified temperature value in the message that includes the command.
- print_information_message: The command must specify the line of text that has to be displayed at the bottom of the OLED matrix in the value of the text key. The code calls the self.oled.print_line method with 11 and the text value retrieved from the dictionary as arguments. This way, the code displays the text received in the message that includes the command at the bottom of the OLED matrix.

No matter whether the message included a valid command or not, the method prints the raw message that it received in the console output.

The connect method prints a message indicating that a connection has been established with the channel. Then, the method prints the results of calling the self.pubnub.publish method that publishes a message in the channel name saved in self.channel with the following message: "Listening to messages in the Intel Galileo Gen 2 board". In this case, the call to this method runs with a synchronous execution. We will work with asynchronous execution for this method in our next example.
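As an aside, if the number of supported commands keeps growing, the chain of if/elif comparisons inside the callback method can become harder to maintain. One common alternative is a dispatch dictionary that maps command names to handler functions. The following standalone sketch only illustrates that pattern; the handler functions are simplified stand-ins and not the TemperatureServo or Oled code used in this article.

# A dispatch-table variant of the command handling shown above.
def handle_print_temperature_fahrenheit(message):
    print("Would rotate the servo to {0} degrees".format(
        message["temperature_fahrenheit"]))


def handle_print_information_message(message):
    print("Would display: {0}".format(message["text"]))


# Map each command name to the function that processes it.
COMMAND_HANDLERS = {
    "print_temperature_fahrenheit": handle_print_temperature_fahrenheit,
    "print_information_message": handle_print_information_message,
}


def dispatch(message):
    handler = COMMAND_HANDLERS.get(message.get("command"))
    if handler is not None:
        handler(message)
    else:
        print("No handler for message: {0}".format(message))


# Example usage with a message like the ones we will publish later.
dispatch({"command": "print_information_message", "text": "Client ready"})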
Currently, we are already subscribed to this channel, and therefore, we will receive the previously published message and the callback method will be executed with this message as an argument. However, as the message doesn't include the key that identifies a command, the code in the callback method will just display the received message and it won't process any of the previously analyzed commands. The other methods declared in the MessageChannel class just display information to the console output about the event that has occurred.

Now, we will use the previously coded MessageChannel class to create a new version of the __main__ method that uses the PubNub cloud to receive and process commands. The new version doesn't rotate the servo's shaft when the ambient temperature changes; instead, it will do this when it receives the appropriate command from any device connected to the PubNub cloud. The following lines show the new version of the __main__ method. The code file for the sample is iot_python_chapter_09_02.py (you will find this in the code bundle).

if __name__ == "__main__":
    temperature_and_humidity_sensor = TemperatureAndHumiditySensor(0)
    oled = TemperatureAndHumidityOled(0)
    temperature_servo = TemperatureServo(3)
    message_channel = MessageChannel("temperature", temperature_servo, oled)
    while True:
        temperature_and_humidity_sensor.measure_temperature_and_humidity()
        oled.print_temperature(
            temperature_and_humidity_sensor.temperature_fahrenheit,
            temperature_and_humidity_sensor.temperature_celsius)
        oled.print_humidity(
            temperature_and_humidity_sensor.humidity)
        print("Ambient temperature in degrees Celsius: {0}".format(
            temperature_and_humidity_sensor.temperature_celsius))
        print("Ambient temperature in degrees Fahrenheit: {0}".format(
            temperature_and_humidity_sensor.temperature_fahrenheit))
        print("Ambient humidity: {0}".format(
            temperature_and_humidity_sensor.humidity))
        # Sleep 10 seconds (10000 milliseconds)
        time.sleep(10)

The most important change is the line that creates an instance of the previously coded MessageChannel class with "temperature", temperature_servo, and oled as the arguments. The constructor will subscribe to the temperature channel in the PubNub cloud, and therefore, we must send the messages to this channel in order to send the commands that the code will process with an asynchronous execution. The loop will read the values from the sensor and print the values to the console as in the previous version of the code, and therefore, we will have code running in the loop and listening to the messages in the temperature channel in the PubNub cloud. We will start the example later because we want to subscribe to the channel in the PubNub debug console before we run the code in the board.

Publishing messages with commands through the PubNub cloud

Now, we will take advantage of the PubNub console to send messages with commands to the temperature channel and make the Python code running on the board process these commands.

In case you have logged out of PubNub, log in again and click on the Temperature Control pane in the admin portal. PubNub will display the Demo Keyset pane. Click on the Demo Keyset pane and PubNub will display the publish, subscribe, and secret keys. This way, we select the keyset that we want to use for our PubNub application. Click on Debug Console on the sidebar located on the left-hand side of the screen. PubNub will create a client for a default channel and subscribe to this channel using the keyset we have selected in the previous step.
We want to subscribe to the temperature channel, and therefore, enter temperature in the Default Channel textbox within the pane that includes the Add client button at the bottom. Then, click on Add client and PubNub will add a new pane with a random client name as a title and the channel name, temperature, in the second line. PubNub makes the client subscribe to this channel, and we will be able to receive messages published to this channel and send messages to this channel. The following picture shows the pane for the generated client named Client-ot7pi, subscribed to the temperature channel. Notice that the client name will be different when you follow the explained steps.

The client pane displays the output generated when PubNub subscribed the client to the channel. PubNub returns a formatted response for each command. In this case, it indicates that the status is equal to Subscribed and the channel name is temperature.

[1,"Subscribed","temperature"]

Now, it is time to start running the example in the Intel Galileo Gen 2 board. The following line will start the example in the SSH console.

python iot_python_chapter_09_02.py

After you run the example, go to the Web browser in which you are working with the PubNub debug console. You will see the following message listed in the previously created client:

"Listening to messages in the Intel Galileo Gen 2 board"

The Python code running in the board published this message; specifically, the connect method in the MessageChannel class sent this message after the application established a connection with the PubNub cloud. The following picture shows the message listed in the previously created client. Notice that the icon at the left-hand side of the text indicates it is a message. The other message was a debug message with the results of subscribing to the channel.

At the bottom of the client pane, you will see the following text and the SEND button at the right-hand side:

{"text":"Enter Message Here"}

Now, we will replace the previously shown text with a message. Enter the following JSON code and click SEND:

{"command":"print_temperature_fahrenheit", "temperature_fahrenheit": 50 }

The text editor where you enter the message has some issues in certain browsers. Thus, it is convenient to use your favorite text editor to enter the JSON code, copy it, and then paste it to replace the text that is included by default in the text for the message to be sent.

After you click SEND, the following lines will appear in the client log. The first line is a debug message with the results of publishing the message and indicates that the message has been sent. The formatted response includes a number (1 message), the status (Sent), and a time token. The second line is the message that arrives in the channel; because we are subscribed to the temperature channel, we also receive the message we sent.

[1,"Sent","14594756860875537"]
{
  "command": "print_temperature_fahrenheit",
  "temperature_fahrenheit": 50
}

The following picture shows the messages and debug messages log for the PubNub client after we clicked the SEND button. After you publish the previous message, you will see the following output in the SSH console for the Intel Galileo Gen 2 board. You will notice the servo's shaft rotates to 50 degrees.
I've received the following message: {u'command': u'print_temperature_fahrenheit', u'temperature_fahrenheit': 50}

Now, enter the following JSON code and click SEND:

{"command":"print_information_message", "text": "Client ready"}

After you click SEND, the following lines will appear in the client log. The first line is a debug message with the previously explained formatted response with the results of publishing the message and indicates that the message has been sent. The second line is the message that arrives in the channel; because we are subscribed to the temperature channel, we also receive the message we sent.

[1,"Sent","14594794434885921"]
{
  "command": "print_information_message",
  "text": "Client ready"
}

The following picture shows the messages and debug messages log for the PubNub client after we clicked the SEND button. After you publish the previous message, you will see the following output in the SSH console for the Intel Galileo Gen 2 board. You will see the following text displayed at the bottom of the OLED matrix: Client ready.

I've received the following message: {u'text': u'Client ready', u'command': u'print_information_message'}

When we published the two messages with the commands, we definitely noticed a problem. We don't know whether the command was processed or not by the code that is running on the IoT device, that is, on the Intel Galileo Gen 2 board. We know that the board started listening to messages in the temperature channel, but we don't receive any kind of response from the IoT device after the command has been processed.

Working with bi-directional communications

We can easily add a few lines of code to publish a message to the same channel in which we are receiving messages to indicate that the command has been successfully processed. We will use our previous example as a baseline and we will create a new version of the MessageChannel class. The code file for the previous sample was iot_python_chapter_09_02.py (you will find this in the code bundle). Don't forget to replace the strings assigned to the publish_key and subscribe_key local variables in the __init__ method with the values you have retrieved from the previously explained PubNub key generation process.

The following lines show the new version of the MessageChannel class that publishes a message after a command has been successfully processed. The code file for the sample is iot_python_chapter_09_03.py (you will find this in the code bundle).
import time
from pubnub import Pubnub


class MessageChannel:
    command_key = "command"
    successfully_processed_command_key = "successfully_processed_command"

    def __init__(self, channel, temperature_servo, oled):
        self.temperature_servo = temperature_servo
        self.oled = oled
        self.channel = channel
        # Do not forget to replace the string with your publish key
        publish_key = "pub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        # Subscribe key is the one that usually starts with the "sub-c" prefix
        # Do not forget to replace the string with your subscribe key
        subscribe_key = "sub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        self.pubnub = Pubnub(publish_key=publish_key,
                             subscribe_key=subscribe_key)
        self.pubnub.subscribe(channels=self.channel,
                              callback=self.callback,
                              error=self.error,
                              connect=self.connect,
                              reconnect=self.reconnect,
                              disconnect=self.disconnect)

    def callback_response_message(self, message):
        print("I've received the following response from PubNub cloud: {0}".format(message))

    def error_response_message(self, message):
        print("There was an error when working with the PubNub cloud: {0}".format(message))

    def publish_response_message(self, message):
        response_message = {
            self.__class__.successfully_processed_command_key:
                message[self.__class__.command_key]}
        self.pubnub.publish(
            channel=self.channel,
            message=response_message,
            callback=self.callback_response_message,
            error=self.error_response_message)

    def callback(self, message, channel):
        if channel == self.channel:
            print("I've received the following message: {0}".format(message))
            if self.__class__.command_key in message:
                if message[self.__class__.command_key] == "print_temperature_fahrenheit":
                    self.temperature_servo.print_temperature(
                        message["temperature_fahrenheit"])
                    self.publish_response_message(message)
                elif message[self.__class__.command_key] == "print_information_message":
                    self.oled.print_line(11, message["text"])
                    self.publish_response_message(message)

    def error(self, message):
        print("Error: " + str(message))

    def connect(self, message):
        print("Connected to the {0} channel".format(self.channel))
        print(self.pubnub.publish(
            channel=self.channel,
            message="Listening to messages in the Intel Galileo Gen 2 board"))

    def reconnect(self, message):
        print("Reconnected to the {0} channel".format(self.channel))

    def disconnect(self, message):
        print("Disconnected from the {0} channel".format(self.channel))

The new version of the MessageChannel class introduces the following changes. First, the code declares the successfully_processed_command_key class attribute that defines the key string that the code will use as a successfully processed command key in a response message published to the channel. Whenever we publish a message that includes the specified key string, we know that the value associated with this key in the dictionary will indicate the command that the board has successfully processed.

The code declares the following three new methods:

- callback_response_message: This method will be used as the callback that will be executed when a successfully processed command response message is published to the channel. The method just prints the formatted response that PubNub returns when a message has been successfully published in the channel. In this case, the message argument doesn't hold the original message that has been published; it holds the formatted response. We use message for the argument name to keep consistency with the PubNub API.
- error_response_message: This method will be used as the callback that will be executed when an error occurs while trying to publish a successfully processed command response message to the channel. The method just prints the error message that PubNub returns when a message hasn't been successfully published in the channel.
- publish_response_message: This method receives the message with the command that was successfully processed in the message argument. The code creates a response_message dictionary with the successfully_processed_command_key class attribute as the key and the value of the key specified in the command_key class attribute for the message dictionary as the value. Then, the code calls the self.pubnub.publish method to publish the response_message dictionary to the channel saved in the channel attribute. The call to this method specifies self.callback_response_message as the callback to be executed when the message is successfully published and self.error_response_message as the callback to be executed when an error occurs during the publishing process. When we specify a callback, the publish method works with an asynchronous execution, and therefore, the execution is non-blocking. The publication of the message and the callbacks that are specified will run in a different thread.

Now, the callback method defined in the MessageChannel class adds a call to the publish_response_message method with the message that included the command that has been successfully processed (message) as an argument. As previously explained, the publish_response_message method is non-blocking and will return immediately while the successfully processed message is published in another thread.

Now, it is time to start running the example in the Intel Galileo Gen 2 board. The following line will start the example in the SSH console.

python iot_python_chapter_09_03.py

After you run the example, go to the Web browser in which you are working with the PubNub debug console. You will see the following message listed in the previously created client:

"Listening to messages in the Intel Galileo Gen 2 board"

Enter the following JSON code and click SEND:

{"command":"print_temperature_fahrenheit", "temperature_fahrenheit": 90 }

After you click SEND, the following lines will appear in the client log. The last message, published by the board to the channel, indicates that the print_temperature_fahrenheit command has been successfully processed.

[1,"Sent","14595406989121047"]
{
  "command": "print_temperature_fahrenheit",
  "temperature_fahrenheit": 90
}
{
  "successfully_processed_command": "print_temperature_fahrenheit"
}

The following picture shows the messages and debug messages log for the PubNub client after we clicked the SEND button.

After you publish the previous message, you will see the following output in the SSH console for the Intel Galileo Gen 2 board. You will notice the servo's shaft rotates to 90 degrees. The board also receives the successfully processed command message because it is subscribed to the channel in which the message has been published.
I've received the following message: {u'command': u'print_temperature_fahrenheit', u'temperature_fahrenheit': 90}
I've received the following response from PubNub cloud: [1, u'Sent', u'14595422426124592']
I've received the following message: {u'successfully_processed_command': u'print_temperature_fahrenheit'}

Now, enter the following JSON code and click SEND:

{"command":"print_information_message", "text": "2nd message"}

After you click SEND, the following lines will appear in the client log. The last message, published by the board to the channel, indicates that the print_information_message command has been successfully processed.

[1,"Sent","14595434708640961"]
{
  "command": "print_information_message",
  "text": "2nd message"
}
{
  "successfully_processed_command": "print_information_message"
}

The following picture shows the messages and debug messages log for the PubNub client after we clicked the SEND button.

After you publish the previous message, you will see the following output in the SSH console for the Intel Galileo Gen 2 board. You will see the following text displayed at the bottom of the OLED matrix: 2nd message. The board also receives the successfully processed command message because it is subscribed to the channel in which the message has been published.

I've received the following message: {u'text': u'2nd message', u'command': u'print_information_message'}
2nd message
I've received the following response from PubNub cloud: [1, u'Sent', u'14595434710438777']
I've received the following message: {u'successfully_processed_command': u'print_information_message'}

We can work with the different SDKs provided by PubNub to subscribe and publish to a channel. We can also make different IoT devices talk to each other by publishing messages to channels and processing them. In this case, we just created a few commands, and we didn't add detailed information about the device that has to process the command or the device that has generated a specific message. A more complex API would require commands that include more information and security.

Publishing messages to the cloud with a Python PubNub client

So far, we have been using the PubNub debug console to publish messages to the temperature channel and make the Python code running in the Intel Galileo Gen 2 board process them. Now, we are going to code a Python client that will publish messages to the temperature channel. This way, we will be able to design applications that can talk to IoT devices with Python code on both the publisher and the subscriber sides.

We can run the Python client on another Intel Galileo Gen 2 board or on any device that has Python 2.7.x installed. In addition, the code will run with Python 3.x. For example, we can run the Python client on our computer. We just need to make sure that we install the pubnub module, which we previously installed with pip in the Python version running on the Yocto Linux for the board, for the Python version that will run the client.

We will create a Client class to represent a PubNub client, configure the PubNub subscription, make it easy to publish a message with a command and the required values for the command, and declare the code for the callbacks that are going to be executed when certain events are fired. The code file for the sample is iot_python_chapter_09_04.py (you will find this in the code bundle). Don't forget to replace the strings assigned to the publish_key and subscribe_key local variables in the __init__ method with the values you have retrieved from the previously explained PubNub key generation process.
The following lines show the code for the Client class:

import time
from pubnub import Pubnub


class Client:
    command_key = "command"

    def __init__(self, channel):
        self.channel = channel
        # Publish key is the one that usually starts with the "pub-c-" prefix
        publish_key = "pub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        # Subscribe key is the one that usually starts with the "sub-c" prefix
        # Do not forget to replace the string with your subscribe key
        subscribe_key = "sub-c-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        self.pubnub = Pubnub(publish_key=publish_key,
                             subscribe_key=subscribe_key)
        self.pubnub.subscribe(channels=self.channel,
                              callback=self.callback,
                              error=self.error,
                              connect=self.connect,
                              reconnect=self.reconnect,
                              disconnect=self.disconnect)

    def callback_command_message(self, message):
        print("I've received the following response from PubNub cloud: {0}".format(message))

    def error_command_message(self, message):
        print("There was an error when working with the PubNub cloud: {0}".format(message))

    def publish_command(self, command_name, key, value):
        command_message = {
            self.__class__.command_key: command_name,
            key: value}
        self.pubnub.publish(
            channel=self.channel,
            message=command_message,
            callback=self.callback_command_message,
            error=self.error_command_message)

    def callback(self, message, channel):
        if channel == self.channel:
            print("I've received the following message: {0}".format(message))

    def error(self, message):
        print("Error: " + str(message))

    def connect(self, message):
        print("Connected to the {0} channel".format(self.channel))
        print(self.pubnub.publish(
            channel=self.channel,
            message="Listening to messages in the PubNub Python Client"))

    def reconnect(self, message):
        print("Reconnected to the {0} channel".format(self.channel))

    def disconnect(self, message):
        print("Disconnected from the {0} channel".format(self.channel))

The Client class declares the command_key class attribute that defines the key string that the code understands as a command in the messages. Our main goal is to build and publish command messages to a specified channel. We have to specify the PubNub channel name in the channel required argument. The constructor, that is, the __init__ method, saves the received argument in an attribute with the same name. We will be both a subscriber and a publisher for this channel.

Then, the constructor declares two local variables: publish_key and subscribe_key. These local variables save the publish and subscribe keys we had generated with the PubNub admin portal. Then, the code creates a new Pubnub instance with publish_key and subscribe_key as the arguments, and saves the reference for the new instance in the pubnub attribute. Finally, the code calls the subscribe method for the new instance to subscribe to data on the channel saved in the channel attribute. The call to this method specifies many methods declared in the Client class, as we did for our previous examples.

The publish_command method receives a command name, a key, and a value that provide the necessary information to execute the command in the command_name, key, and value required arguments. In this case, we don't target the command to a specific IoT device, so all the devices that subscribe to the channel and run the code in our previous example will process the commands that we publish. We can use the code as a baseline to work with more complex examples in which we generate commands that target specific IoT devices. Obviously, it is also necessary to improve the security.
The method creates a dictionary and saves it in the command_message local variable. The command_key class attribute is the first key of the dictionary, and the command_name received as an argument is its value; together they compose the first key-value pair. The key and value arguments compose the second key-value pair. Then, the code calls the self.pubnub.publish method to publish the command_message dictionary to the channel saved in the channel attribute. The call to this method specifies self.callback_command_message as the callback to be executed when the message is published successfully and self.error_command_message as the callback to be executed when an error occurs during the publishing process. As happened in our previous example, when we specify a callback, the publish method works with an asynchronous execution.

Now, we will use the previously coded Client class to write a __main__ method that uses the PubNub cloud to publish two commands that our board will process. The following lines show the code for the __main__ method. The code file for the sample is iot_python_chapter_09_04.py (you will find this in the code bundle).

if __name__ == "__main__":
    client = Client("temperature")
    client.publish_command(
        "print_temperature_fahrenheit",
        "temperature_fahrenheit",
        45)
    client.publish_command(
        "print_information_message",
        "text",
        "Python IoT")
    # Sleep 60 seconds (60000 milliseconds)
    time.sleep(60)

The code in the __main__ method is very easy to understand. The code creates an instance of the Client class with "temperature" as an argument to become both a subscriber and a publisher for this channel in the PubNub cloud. The code saves the new instance in the client local variable.

The code calls the publish_command method with the necessary arguments to build and publish the print_temperature_fahrenheit command with a temperature value of 45. The method will publish the command with an asynchronous execution. Then, the code calls the publish_command method again with the necessary arguments to build and publish the print_information_message command with a text value of "Python IoT". The method will publish the second command with an asynchronous execution.

Finally, the code sleeps for 1 minute (60 seconds) in order to make it possible for the asynchronous executions to publish the commands successfully. The different callbacks defined in the Client class will be executed as the different events fire. As we are also subscribed to the channel, we will also receive the messages we publish in the temperature channel.

Keep the Python code we executed in our previous example running on the board, because we want the board to process our commands. In addition, keep the Web browser in which you are working with the PubNub debug console open, because we also want to see all the messages in the log.

The following line will start the example for the Python client on any computer or device that you want to use as a client. It is possible to run the code in another SSH terminal in case you want to use the same board as a client.

python iot_python_chapter_09_04.py

After you run the example, you will see the following output in the Python console that runs the Python client, that is, the iot_python_chapter_09_04.py (you will find this in the code bundle) Python script.
Connected to the temperature channel
I've received the following response from PubNub cloud: [1, u'Sent', u'14596508980494876']
I've received the following response from PubNub cloud: [1, u'Sent', u'14596508980505581']
[1, u'Sent', u'14596508982165140']
I've received the following message: {u'text': u'Python IoT', u'command': u'print_information_message'}
I've received the following message: {u'command': u'print_temperature_fahrenheit', u'temperature_fahrenheit': 45}
I've received the following message: Listening to messages in the PubNub Python Client
I've received the following message: {u'successfully_processed_command': u'print_information_message'}
I've received the following message: {u'successfully_processed_command': u'print_temperature_fahrenheit'}

The code used the PubNub Python SDK to build and publish the following two command messages in the temperature channel:

{"command":"print_temperature_fahrenheit", "temperature_fahrenheit": 45}
{"command":"print_information_message", "text": "Python IoT"}

As we are also subscribed to the temperature channel, we receive the messages we sent with an asynchronous execution. Then, we received the successfully processed command messages for the two command messages. The board has processed the commands and published these messages to the temperature channel.

After you run the example, go to the Web browser in which you are working with the PubNub debug console. You will see the following messages listed in the previously created client:

[1,"Subscribed","temperature"]
"Listening to messages in the Intel Galileo Gen 2 board"
{
  "text": "Python IoT",
  "command": "print_information_message"
}
{
  "command": "print_temperature_fahrenheit",
  "temperature_fahrenheit": 45
}
"Listening to messages in the PubNub Python Client"
{
  "successfully_processed_command": "print_information_message"
}
{
  "successfully_processed_command": "print_temperature_fahrenheit"
}

The following picture shows the last messages displayed in the log for the PubNub client after we run the previous example. You will see the following text displayed at the bottom of the OLED matrix: Python IoT. In addition, the servo's shaft will rotate to 45 degrees.

We can use the PubNub SDKs available in different programming languages to create applications and apps that publish and receive messages in the PubNub cloud and interact with IoT devices. In this case, we worked with the Python SDK to create a client that publishes commands. It is possible to create mobile apps that publish commands and easily build an app that can interact with our IoT device.

Summary

In this article, we combined many cloud-based services that allowed us to easily publish data collected from sensors and visualize it in a Web-based dashboard. We realized that there is always a Python API, and therefore, it is easy to write Python code that interacts with popular cloud-based services.

Resources for Article:

Further resources on this subject:
- Internet of Things with BeagleBone [article]
- The Internet of Things [article]
- Python Data Structures [article]


Rendering a Cube Map in Three.js

Soham Kamani
02 May 2016
7 min read
Three.js is an awesome library. It makes complicated things such as 3D graphics and shaders easy for the average frontend developer, which opens up a lot of previously inaccessible avenues for web development. You can check out this repository of examples to see what's possible (basically, the sky is the limit). Even though Three.js provides all of this functionality, it is one of the lesser-documented libraries, and hence can be a little bit overwhelming for a newcomer. This tutorial will take you through the steps required to render a nice little scene with the Three.js library.

Project Structure and Prerequisites

We will be using npm and webpack to make our application, along with ES6 syntax. Initialize a new project in a new folder:

npm init

After that, install Three.js:

npm install --save three

And we are all set!

Making a Cubemap

A cubemap, if you haven't heard of it before, is precisely what its name suggests. Think of six enormous square pictures, all joined together to form a cube, with you being inside of the cube. The six pictures then form a cubemap. It is used to make 3D background sceneries and fillers.

Every rendering in 3D graphics has two elements: the scene and the camera. The renderer then renders the scene relative to the camera. In this way, you can move through a scene by adjusting its camera, and at the same time stay still with respect to another scene because of its camera. This is the basic principle used while making movement-based 3D graphics. You (or in this case, your camera) are standing still with respect to the background (or in some cases, moving really slowly), considering it to be at a near-infinite distance from you. At the same time, you will be moving with respect to the objects around you, since they are considered to be within your immediate distance.

First, we have to import our dependencies and initialize important objects. As mentioned before, each rendering will have a scene and a camera. Three.js provides constructors for both of these things. The arguments we see for the camera are the vertical field of view, the aspect ratio, and the near and far clipping planes, which are much beyond the scope of this post. For almost all cases, these numbers can be considered as defaults and need not be changed.

import THREE from 'three';

let sceneCube = new THREE.Scene();
let cameraCube = new THREE.PerspectiveCamera( 60, window.innerWidth / window.innerHeight, 1, 100000 );

Each cubemap is composed of six images. In the snippet below, we are just making an array of paths, which is where all our images are kept. In this case, the image for one side of our cube (the positive x side) would be located at the path '/cubemap/cm_x_p.jpg':

let path = '/cubemap/cm';
let format = '.jpg';
let urls = [
  path + '_x_p' + format, path + '_x_n' + format,
  path + '_y_p' + format, path + '_y_n' + format,
  path + '_z_p' + format, path + '_z_n' + format
];

But where do I find cubemap images? Normally, you would have to use Google for cubemap images, but those are not the best quality. You can make your own cubemap from normal images and some respectable Photoshop skills, or you can take images from some of the examples that already exist.

This is the part where we actually "create" the cubemap from the images we have. Each scene is defined by a number of "meshes." Each mesh is defined by a geometry, which specifies the shape of the mesh, and a material, which specifies the appearance and coloring of the mesh.
We now load the six images defined previously into a "texture cube," which is then used to define our material. The geometry we used is called a box geometry, and we are defining the length, width, and breadth of this box as 100 units each:

let textureCube = THREE.ImageUtils.loadTextureCube(urls, THREE.CubeRefractionMapping);
let shader = THREE.ShaderLib.cube;
shader.uniforms.tCube.value = textureCube;
let material = new THREE.ShaderMaterial({
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: shader.uniforms,
    depthWrite: false,
    side: THREE.BackSide
  }),
  mesh = new THREE.Mesh(new THREE.BoxGeometry(100, 100, 100), material);

Finally, our cubemap is added to the scene:

sceneCube.add(mesh);

Of course, we want to keep our code modular, so all of the above code for making a cubemap should ideally be wrapped in its own function and used in our main program as and when it is needed. The final cubemap "module" would look something like this:

'use strict';
import THREE from 'three';

let Cubemap = function () {
  let sceneCube = new THREE.Scene();
  let cameraCube = new THREE.PerspectiveCamera( 60, window.innerWidth / window.innerHeight, 1, 100000 );

  let path = '/cubemap/cm';
  let format = '.jpg';
  let urls = [
    path + '_x_p' + format, path + '_x_n' + format,
    path + '_y_p' + format, path + '_y_n' + format,
    path + '_z_p' + format, path + '_z_n' + format
  ];

  let textureCube = THREE.ImageUtils.loadTextureCube(urls, THREE.CubeRefractionMapping);
  let shader = THREE.ShaderLib.cube;
  shader.uniforms.tCube.value = textureCube;
  let material = new THREE.ShaderMaterial({
      fragmentShader: shader.fragmentShader,
      vertexShader: shader.vertexShader,
      uniforms: shader.uniforms,
      depthWrite: false,
      side: THREE.BackSide
    }),
    mesh = new THREE.Mesh(new THREE.BoxGeometry(100, 100, 100), material);

  sceneCube.add(mesh);

  return {
    scene: sceneCube,
    camera: cameraCube
  };
};

module.exports = Cubemap;

Making Our Core Module

Now, we will have to write the core of our little app to actually render the cubemap onto an element in the web browser. We import the dependencies to be used and initialize our WebGL renderer:

'use strict';
import THREE from 'three';
import Cubemap from './Cubemap';

let scene = new THREE.Scene();
let camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 10000);

let renderer = new THREE.WebGLRenderer();
renderer.autoClear = false;
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

The renderer returns a canvas element, which you can fix to your DOM. This canvas element is where all the magic happens. As usual, we are creating another scene and camera, different from the ones in our cubemap. All objects that are not the cubemap will be included in this scene.

Next, we add ambient light to our scene. This is so that we can actually see the other objects we add in the scene:

let lightAmbient = new THREE.AmbientLight(0x202020); // soft white light
scene.add(lightAmbient);

We define and start our rendering function:

let cubemap = Cubemap();

let render = function () {
  requestAnimationFrame(render);

  renderer.render(cubemap.scene, cubemap.camera);
  renderer.render(scene, camera);

  cubemap.camera.rotation.copy(camera.rotation);
};

render();

In case you are using movement (which is most often the case with WebGL), you will want to render your scene a number of times a second. The requestAnimationFrame function is a native browser function that calls the function you pass to it before the browser's next repaint, typically around 60 times per second.
To really see your cubemap scene come alive, add a moving element as shown in the Hello World example from the Three.js website. Three.js may seem overwhelming at first, but it's a huge improvement over the otherwise steep learning curve for GLSL. If you want to see a slightly more complex example of using cubemaps and objects in Three.js, you can go here.

About the author

Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT. He can be found on Twitter @sohamkamani and GitHub at https://github.com/sohamkamani.


Encrypting Zabbix Traffic

Packt
28 Apr 2016
17 min read
In this article by Rihards Olups, the author of the book Zabbix Network Monitoring, Second Edition, we will learn about encrypting the communication between Zabbix components, which is done in plaintext by default. In many environments, this is not a significant problem, but monitoring over the Internet in plaintext is likely not a good approach. In previous Zabbix versions, there was no built-in solution, and various VPN, stunnel, and SSH port forwarding solutions were used. Such solutions can still be used, but version 3.0 is the first Zabbix version to provide built-in encryption. In this article, we will set up several of the components to use different methods of encryption. (For more resources related to this topic, see here.)

Overview

For Zabbix communication encryption, two methods are supported:

- Preshared Key
- Certificate-based encryption

The Preshared Key (PSK) method is very easy to set up but is likely harder to scale. Certificate-based encryption can be more complicated to set up but easier to manage on a larger scale and potentially more secure. This encryption is supported between all Zabbix components: server, proxy, agent, and even zabbix_sender and zabbix_get. For outgoing connections (such as server-to-agent or proxy-to-server), one method may be used (no encryption, PSK, or certificate-based). For incoming connections, multiple methods may be allowed. This way, an agent could work with encryption by default and then turn off encryption with zabbix_get for debugging.

Backend libraries

Behind the scenes, Zabbix encryption can use one of three different libraries: OpenSSL, GnuTLS, or mbedTLS. Which one to choose? If using packages, the easiest and safest method is to start with whichever one the packages are compiled with. If compiling from source, choose the one that is easiest to compile with. The Zabbix team has made a significant effort to make support for all three libraries look as similar as possible from the user's perspective. There could be differences regarding support for some specific features, but those are likely to be more obscure ones; if such problems do come up later, switching from one library to another should be fairly easy.

While in most cases it would likely not matter much which library you are using, it's a good idea to know it. One good reason for supporting three different libraries is the ability to switch to a different library if the currently used one has a security vulnerability. These libraries are used in a generic manner, and you don't need to use the same library for different Zabbix components; it's totally fine to use one library on the Zabbix server, another on the Zabbix proxy, and yet another with zabbix_sender.

In this article, we will try out encryption with the Zabbix server and zabbix_sender first, then move on to encrypting agent traffic using both PSK and certificate-based encryption. If you have installed from the packages, your server most likely already supports encryption. Verify this by looking at the server and agent startup messages:

TLS support:         YES

If you compiled from source and there is no TLS support, recompile the server and agent adding one of these parameters: --with-openssl, --with-gnutls, or --with-mbedtls.

Preshared key encryption

Let's start with a simple situation: a single new host that will accept PSK-encrypted connections only, to which we will send some values using zabbix_sender. For this to work, both the Zabbix server and zabbix_sender must be compiled with Transport Layer Security (TLS) support.
The PSK configuration consists of a PSK identity and key. The identity is a string that is not considered to be secret; it is not encrypted during the communication, so do not put sensitive information in the identity string. The key is a hexadecimal string. Zabbix requires the key to be at least 32 characters long. The maximum in Zabbix is 512 characters, but it might depend on the specific version of the backend library you are using. We could just type the key in manually, but a slightly easier method is using the openssl command:

$ openssl rand -hex 64

This will generate a 512-bit key, which we will use in a moment.

Navigate to Configuration | Hosts, click on Create host, and fill in these values:

- Hostname: An encrypted host
- Groups: Have only Linux Servers in the In Groups block

Switch to the Encryption tab, and in the Connections from host section, leave only PSK marked. In the PSK identity field, enter secret, and paste the key we generated earlier in the PSK field.

When done, click on the Add button at the bottom. Take a look at the AGENT ENCRYPTION column for this host: the first block has only one field and currently says NONE. For connections to the agent, only one method is possible; thus, this column must be showing the currently selected method for outgoing connections from the server's perspective. The second block has three fields. We can choose a combination of incoming connection methods; thus, this column must be showing which types of incoming connections from the server's perspective are accepted for this host.

Now, click on Items next to Encrypted host, and click on Create item. Fill in these values:

- Name: Beers in the fridge
- Type: Zabbix trapper
- Key: fridge.beers

Click on the Add button at the bottom. Let's try to send a value now using the following command:

$ zabbix_sender -z 127.0.0.1 -s "Encrypted host" -k fridge.beers -o 1

It should fail and show you the following message:

info from server: "processed: 0; failed: 1; total: 1; seconds spent: 0.000193"

Notice how the processed count is 0 and the failed count is 1. Let's check the Zabbix server logfile:

12254:20160122:231030.702 connection of type "unencrypted" is not allowed for host "Encrypted host" item "fridge.beers" (not every rejected item might be reported)

Now this is actually quite a helpful message: we did not specify any encryption for zabbix_sender, but we did require an encrypted connection for our host. Notice the text in parentheses; if multiple items on the same host fail because of this reason, we might only see some of them, and searching the logfile by item key only might not reveal the reason.

Now is the time to get the PSK working for zabbix_sender. Run it with the --help parameter and look at the TLS connection options section. Oh yes, there's quite a lot of those. Luckily, for PSK encryption, we only need three of them: --tls-connect, --tls-psk-identity, and --tls-psk-file. Before running the command, create a file called zabbix_encrypted_host_psk.txt and paste the hex key we generated earlier into it. It is more secure to create an empty file first, change its permissions to 400 or 600, and paste the key in the file afterwards. This way, another user won't have a chance to snatch the key from the file.
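If you prefer to script this step instead of using openssl and a text editor, the following sketch is one possible way to do it with Python 3.6 or later (the secrets module). It writes a freshly generated 512-bit key to zabbix_encrypted_host_psk.txt with owner-only permissions and prints it; if you use it, paste the printed key into the host's PSK field in the frontend instead of the key generated earlier.

import os
import secrets

# 64 random bytes rendered as 128 hexadecimal characters (a 512-bit key).
psk = secrets.token_hex(64)

# Create the file with mode 600 so other users cannot read the key;
# os.O_EXCL makes this fail if the file already exists.
fd = os.open("zabbix_encrypted_host_psk.txt",
             os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as psk_file:
    psk_file.write(psk + "\n")

# Paste this value into the PSK field of the host configuration.
print(psk)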
Run zabbix_sender again, but with three additional encryption parameters:

$ zabbix_sender -z 127.0.0.1 -s "Encrypted host" -k fridge.beers -o 1 --tls-connect psk --tls-psk-identity secret --tls-psk-file zabbix_encrypted_host_psk.txt

With this command, we set the connection method to psk with the --tls-connect flag and specified the PSK identity and key file. Zabbix does not support specifying the PSK key on the command line for security reasons; it must be passed in from a file. This time, the value should be sent successfully:

info from server: "processed: 1; failed: 0; total: 1; seconds spent: 0.000070"

To be sure, verify that this item now has data in the frontend.

Certificate-based encryption

With PSK-based encryption protecting our sensitive Zabbix trapper item, let's move to the certificates. We will generate certificates for the Zabbix server and agent and require encrypted connections on the Zabbix agent side for passive items. You might have a certificate infrastructure in your organization, but for our first test, we will generate all required certificates ourselves. We will need a new Certificate Authority (CA) that will sign our certificates. Zabbix does not support self-signed certificates. It is highly recommended to use intermediate certificate authorities to sign client and server certificates. We will not use them in the following simple example.

Being our own authority

We'll start by creating the certificates in a separate directory. For simplicity's sake, let's do this on a test host. The following is not intended to be a good practice. It is actually doing quite a few bad and insecure things to obtain the certificates faster. Do not follow these steps for any production setup:

$ mkdir zabbix_ca
$ chmod 700 zabbix_ca
$ cd zabbix_ca

Generate the root CA key:

$ openssl genrsa -aes256 -out zabbix_ca.key 4096

When prompted, enter a password twice to protect the key. Generate and self-sign the root certificate like this:

$ openssl req -x509 -new -key zabbix_ca.key -sha256 -days 3560 -out zabbix_ca.crt

When prompted, enter the password you used for the key before. Fill in the values as prompted; the easiest way is supplying empty values for most except the country code and common name. The common name does not have to be anything too meaningful for our test, so using a simple string such as zabbix_ca will suffice.

Now, let's move on to creating a certificate we will use for the Zabbix server. First, let's generate the server key and Certificate Signing Request (CSR):

$ openssl genrsa -out zabbix_server.key 2048
$ openssl req -new -key zabbix_server.key -out zabbix_server.csr

When prompted, enter the country code and the common name string as before. The common name does not have to match the server or agent name or anything else, so using a simple string such as zabbix_server will suffice. Now, let's sign this request:

$ openssl x509 -req -in zabbix_server.csr -CA zabbix_ca.crt -CAkey zabbix_ca.key -CAcreateserial -out zabbix_server.crt -days 1460 -sha256

When prompted, enter the CA passphrase. Let's continue with the certificate we will use for the Zabbix agent. Generate the agent key and certificate signing request:

$ openssl genrsa -out zabbix_agent.key 2048
$ openssl req -new -key zabbix_agent.key -out zabbix_agent.csr

Again, when prompted, enter the country code and common name string as before. The common name does not have to match the server or agent name or anything else, so using a simple string such as zabbix_agent will suffice.
Let's sign the request:

$ openssl x509 -req -in zabbix_agent.csr -CA zabbix_ca.crt -CAkey zabbix_ca.key -CAcreateserial -out zabbix_agent.crt -days 1460 -sha256

When prompted, enter the CA passphrase. We're done with creating our test certificates. Both keys were created unencrypted. Zabbix does not support prompting for the key password at this time.

Setting up Zabbix with certificates

Now, we move on to making the passive items on our test host use the certificates we just generated. We must provide the certificates to the Zabbix agent. In the directory where the Zabbix agent configuration file is located, create a new directory called zabbix_agent_certs. Restrict access to it like this:

# chown zabbix zabbix_agent_certs
# chmod 500 zabbix_agent_certs

From the directory where we generated the certificates, copy the relevant certificate files over to the new directory:

# cp zabbix_ca.crt /path/to/zabbix_agent_certs/
# cp zabbix_agent.crt /path/to/zabbix_agent_certs/
# cp zabbix_agent.key /path/to/zabbix_agent_certs/

Edit zabbix_agentd.conf and modify these parameters:

TLSAccept=cert
TLSConnect=unencrypted
TLSCAFile=/path/to/zabbix_agent_certs/zabbix_ca.crt
TLSCertFile=/path/to/zabbix_agent_certs/zabbix_agent.crt
TLSKeyFile=/path/to/zabbix_agent_certs/zabbix_agent.key

This will make the agent only accept connections when they are encrypted and use a certificate signed by that CA, either directly or through intermediates. We'll still use unencrypted connections for active items. A user could supply certificates and expect all communication to be encrypted now, which would not be the case if both the TLSAccept and TLSConnect parameters did not require it; thus, Zabbix enforces them when certificates are supplied. Restart the Zabbix agent.

Let's take a look at the host configuration list in the Zabbix frontend: it looks like connections to our test host do not work anymore. Let's check the agent logfile:

2820:20160125:194427.623 failed to accept an incoming connection: from 127.0.0.1: unencrypted connections are not allowed

Looks like we broke it. We did set up encryption on the agent but did not get to configuring the server side. What if we would like to roll out encryption to all the agents and deal with the server later? In that case, it would be best to set TLSAccept=cert, unencrypted. Then, the agent would still accept unencrypted connections from our server. Once the certificates were deployed and configured on the Zabbix server, we'd only have to remove unencrypted from that parameter and restart the Zabbix agents. Let's try this out. Change zabbix_agentd.conf again:

TLSAccept=cert, unencrypted

Restart the agent daemon and observe the monitoring resuming from the Zabbix server. Now let's make the server use its certificate. We'll place the certificate in a place where the Zabbix server can use it. In the directory where the Zabbix server configuration file is located, create a new directory called zabbix_server_certs. Restrict access to it using these commands:

# chown zabbix zabbix_server_certs
# chmod 500 zabbix_server_certs

If using packages that run the Zabbix server with a different username such as zabbixs or zabbixserv, replace the username in these two commands.
From the directory where we generated the certificates, copy them over to the new directory:

# cp zabbix_ca.crt /path/to/zabbix_server_certs/
# cp zabbix_server.crt /path/to/zabbix_server_certs/
# cp zabbix_server.key /path/to/zabbix_server_certs/

Edit zabbix_server.conf and modify these parameters:

TLSCAFile=/path/to/zabbix_server_certs/zabbix_ca.crt
TLSCertFile=/path/to/zabbix_server_certs/zabbix_server.crt
TLSKeyFile=/path/to/zabbix_server_certs/zabbix_server.key

Now, restart the Zabbix server. Although we have specified the certificates for both the agent and the server, the passive items still work in unencrypted mode. Let's proceed with making them encrypted. In the Zabbix frontend, navigate to Configuration | Hosts, click on A test host, and switch to the Encryption tab. In the Connections to host selection, choose Certificate, then click on the Update button. After the server configuration cache is updated, it will switch to using certificate-based encryption for this host.

Going back to our scenario where we slowly rolled out certificate-based configuration to our agents and added it to the server later, we can now disable unencrypted connections on the agent side. Change this line in zabbix_agentd.conf:

TLSAccept=cert

Restart the agent. If we had followed this process from the very beginning, monitoring would have continued uninterrupted. Let's try to use zabbix_get:

$ zabbix_get -s 127.0.0.1 -k system.cpu.load
zabbix_get [5746]: Check access restrictions in Zabbix agent configuration

That fails because the agent only accepts encrypted connections now. As we did for zabbix_sender, we can specify the certificate, but we must use the Zabbix server certificate now:

$ zabbix_get -s 127.0.0.1 -k system.cpu.load --tls-connect cert --tls-ca-file /path/to/zabbix_server_certs/zabbix_ca.crt --tls-cert-file /path/to/zabbix_server_certs/zabbix_server.crt --tls-key-file /path/to/zabbix_server_certs/zabbix_server.key
0.030000

Certainly, this results in a more secure environment. It is not enough to spoof the IP address to access this agent, and it is not enough to have an account on the Zabbix server to have access to all agents—access to the server certificate is required, too. On the other hand, it makes debugging a bit more complicated. We used PSK and certificate-based encryption with zabbix_sender and a passive agent, but the same principles apply for active agents. As an exercise, try to get the active agent items working with encryption too; a configuration sketch to start from is shown a little further below.

Concerns and further reading

At this time, encryption is a very new feature in Zabbix. While it has been developed and tested extremely carefully and pedantically, it is likely to receive further improvements. Make sure to read through the official documentation on encryption for more details and to catch any changes made since. Right now, we will touch upon basic concerns and features that are missing.

In this article, we covered the Zabbix server, agent, zabbix_get, and zabbix_sender—what about Zabbix proxies? Zabbix proxies fully support encryption. The configuration on the proxy side is very similar to the agent configuration, and the configuration on the server or frontend side is done in a similar way to host encryption configuration, too. Keep in mind that all involved components must be compiled with TLS support—any proxies you have might have to be recompiled.
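Going back to the exercise suggested above, getting active agent items encrypted, here is a possible starting point. This is a sketch rather than a tested recipe: the parameter names are the same ones we already used, but verify the details against the official encryption documentation for your Zabbix version. Active checks are initiated by the agent, so the outgoing connection method is controlled by TLSConnect, and the Connections from host checkboxes on the host's Encryption tab must allow certificate-based connections:

TLSConnect=cert
TLSAccept=cert
TLSCAFile=/path/to/zabbix_agent_certs/zabbix_ca.crt
TLSCertFile=/path/to/zabbix_agent_certs/zabbix_agent.crt
TLSKeyFile=/path/to/zabbix_agent_certs/zabbix_agent.key

Once active checks are confirmed to work encrypted, clearing the No encryption checkbox for Connections from host would make the server reject unencrypted active connections from this host.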
When considering encryption, think about the areas where it's needed most: maybe you have the Zabbix server and a proxy communicating over the Internet while all other connections stay in local networks. In that case, it might make sense to set up encryption only for server-proxy communication at first. Note that encryption is not supported when communicating with the Zabbix Java gateway, but one could easily have the gateway communicate with a Zabbix proxy on the localhost, which in turn provides encryption for the channel to the Zabbix server.

We already figured out how upgrading and transitioning to encryption can happen seamlessly without interrupting data collection—the ability of all components to accept various connection types allows us to roll the changes out sequentially.

Another reason why one might want to implement encryption only partially is performance. Currently, Zabbix does not reuse connections, implement TLS session caches, or use any other mechanism that would avoid setting up encrypted connections from scratch every time. This can be especially devastating if you have lots of passive agent items. Make sure you understand the potential impact before reconfiguring it all.

Encryption is not currently supported for authentication purposes. That is, we cannot omit active agent hostnames and figure out which host it is based on the certificate alone. Similarly, we cannot allow only encrypted connections for active agent auto-registration.

For certificate-based encryption, we only specified the certificates and CA information. If the CA used is large enough, this would not be very secure: any certificate signed by that CA would be accepted. Zabbix also allows verifying both the issuer and subject of the remote certificate. Unless you are using an internal CA that is used for Zabbix only, it is highly recommended that you limit issuers and subjects.

Summary

In this article, we explored Zabbix's built-in encryption, which is supported between all components: the server, proxy, agent, zabbix_sender, and zabbix_get. While not supported for the Java gateway, a Zabbix proxy could easily be put in front of the gateway to provide encryption back to the Zabbix server. Zabbix supports PSK and TLS certificate-based encryption and can use one of three different backend libraries: OpenSSL, GnuTLS, or mbedTLS. In case of security or other issues with one library, users have the option of switching to another.

The upgrade and encryption deployment can be done in steps. All Zabbix components can accept multiple connection methods at the same time. In our example, the agent was set up to accept both encrypted and unencrypted connections, and once all agents were configured, we switched to encrypted connections on the server side. Once that was verified to work as expected, unencrypted connections could be disabled on the agents.

With encryption being built in and easy to set up, it is worth remembering that encrypted connections need more resources and that Zabbix does not support connection pooling or other methods that could decrease the load. It might be worth securing the most important channels first, leaving endpoints for later. For example, encrypting the communication between the Zabbix server and proxies would likely be a priority over connections to individual agents.

Resources for Article:

Further resources on this subject:

Deploying a Zabbix proxy [article]
Zabbix Configuration [article]
Monitoring Windows with Zabbix 1.8 [article]
Up and Running with Views

Packt
27 Apr 2016
21 min read
 In this article by Gregg Marshall, the author of Mastering Drupal 8 Views, we will get introduced to the world of Views in Drupal. Drupal 8 was released November 19, 2015, after almost 5 years of development by over 3,000 members of the Drupal community. Drupal 8 is the largest refactoring in the project's history. One of the most important changes in Drupal 8 was the inclusion of the most popular contributed module, Views. Similar to including CCK in Drupal 7, adding Views to Drupal 8 influenced how Drupal operates as many of the administration pages, such as the content list page, are now Views that can be modified or extended by site builders. Every site builder needs to master the Views module to really take advantage of Drupal's content structuring capabilities by giving site builders the ability to create lists of content formatted in many different ways. A single piece of content can be used for different displays, and all the content in each View is dynamically created when a visitor comes to a page. It was the only contributed module included in the Acquia Site Builder certification examination for Drupal 7. In this article, we will discuss the following topics: Looking at the Views administration page Reviewing the general Views module settings Modifying one of the views from Drupal core to create a specialized administrative page (For more resources related to this topic, see here.) Drupal 8 is here, should I upgrade? "Jim, this is Lynn, how are things at Fancy Websites?" "I read that Drupal 8 is being released on November 19. From our conversations this year, I guess that means it is time to upgrade our current Drupal 6 site. Should I upgrade to Drupal 7 or Drupal 8?" "Lynn, we're really excited that Drupal 8 is finally ready. It is a game changer, and I can name 10 reasons why Drupal 8 is the way to go": Mobile device compatibility is built into Drupal 8's DNA. Analytics show that 32% of your site traffic is coming from buyers using phones, and that's up from only 19% compared to last year. Multilingual is baked in and really works, so we can go ahead and add the Spanish version of the site we have been talking about. There's a new theme engine that will make styling the new site much easier. It's time to update the look of your site; it's looking pretty outdated compared to the competition. Web services is built in. When you're ready to add an app for your customer's phones, Drupal 8 will be ready. There are lots of new fields, so we won't need to add half a dozen contributed modules to let you build your content types. Drupal 8 is built using industry standards. This was a huge change you won't see, but it means that our shop will be able to recruit new developers more easily. The configuration is now stored in code. Finally, we'll have a way for you to develop on your local computer and move your changes to staging and then to production without having to rebuild content types and Views manually over and over. The WYSIWYG editor is built in. The complex setup we went through to get the right buttons and make the output work won't be necessary in Drupal 8. There's a nice tour capability built in so that you can set up custom "how to" demonstrations for your new users. This should free up a lot of your time, which is good given how you are growing. I've saved the best for last. Your favorite module, Views, is now built into core! Between Fields in Drupal 7 and now Views in Drupal 8, you've got the tools to extend your site built right into core. 
The bottom line is I can't imagine not going ahead and upgrading to Drupal 8. Views in core is reason enough. Why don't I set up a Drupal 8 installation on your development server so that you can start playing with Drupal 8? We're not doing any development work on your site right now, and we still have staging to test any updates." "That sounds great, Jim! Let me know when I can log in." Less than an hour later, the e-mail arrived; the Drupal 8 development site was set up and ready for Lynn to start experimenting. Based on the existing Drupal 6 site, Lynn set up four content types with the same fields she had on the current site. Jim was able to use the built-in migrate module to move some of her data to the new site. Lynn was ready to start exploring Views in Drupal 8. Looking at the Views administration page That evening, Lynn logged into the new site. Clicking on the Manage menu item, she then clicked on the Structure submenu item, and at the bottom of the list displayed on the Structure page, she clicked on the Views option. About that time, Jackson came in and settled into his spot near her terminal. "Hi Jackson, ready to explore Views with me?" Looking at the Views administration page, Lynn noticed there were already a number of Views defined. Scanning the list, she said "Look Jackson, Drupal 8 uses Views for administration pages. This means we can customize them to fit our way of doing things. I like Drupal 8 already!". Jackson purred. Lynn studied the Views administration page shown here: Views administration page As Lynn looked at each view, the listing looked familiar; she had seen the same kind of listing on her Drupal 6 site. Trying the OPERATIONS pull-down menu on the first View, she saw that the options were Edit, Duplicate, Disable, and Delete. "That's pretty clear; I guess Duplicate is the same as Clone on my old version of Views. I can change a View, create a new one using this one as a template, make it temporarily unavailable, or wipe it completely off the face of the earth." "I wonder what kind of settings there are on the Settings tab of this listing page. Look, Jackson, there's a couple of subtabs hiding on the Settings page." As Lynn didn't want to mess up her new Drupal site, she called Jim. "Hi, Jim. Can you give me a quick rundown on the Views Settings tab?" "Sure," he replied. Views settings "Looking at the Views Settings tab, you'll notice two subtabs, Basic and Advanced. Select the advanced settings tab by clicking on Advanced to show the following display: The Views advanced settings configuration page Views advanced settings Let's look at the Advanced tab first since you'll probably never use these settings. The first option, Disable views data caching, shouldn't be checked unless you are having issues with Views not updating when the data changes. Even then, you should probably disable caching on a per-View basis using the caching setting in the View's edit page in the third column, labeled Advanced, near the bottom of the column. Disabling Views' data caching can really slow down the page loads on your site. You might actually use the Advanced settings tab if you need to clear all the Views' caches, which you would do by clicking on the Clear Views' cache button. Views basic settings The other advanced setting is DEBUGGING with a Add Views signature to all SQL queries checkbox. 
Unless you are using MySQL's logs to debug queries, which only an advanced developer would do, you aren't going to want this overhead added to Views queries, so just leave it unselected. Moving to the Basic tab, there are a number of settings that might be handy, and I'd recommend changing the default settings. Click on Basic to show the following display: The Views basic settings configuration page The first option, Always show the master (default) display, might or might not be useful. If you create a new View and don't select either create a page or create a block (or provide a REST export if this module is enabled), then a default View display is created called master. If you select either option or both, then page and/or block View displays are created, and generally, you won't see master. It's there; it's just hidden. Sometimes, it is handy to be able to edit or use the master display. While I don't like creating a lot of displays in each View, sometimes, I do create two or three if the content being displayed is very similar. An obvious example is when you want to display the same blog listing as either a page or in a block on other pages. The same teaser information is displayed, just in different ways. So, having the two displays in the same View makes sense. Just make sure when you customize each display that any changes you make are set to only apply to the current display and not all displays. Otherwise, you might make changes you hadn't planned on in the other displays. Most of the time, you will see a pull-down menu that defaults to All displays, but you can select This page (override) to have the setting change apply only to this display. Having the master display show lets you create the information that will be the same in all the displays you are creating; then, you can create and customize the different displays. Using our blog example, you may create a master display that has a basic list of titles, with the titles linking to the full blog post. Then, you can create a blog display page, and using the This page (override) option, you can add summaries, add more links, and set the results to 10 per page. Using the master display, you can go back and add a display block that shows only the last five blog posts without any pager, again applying each setting only to the block display. You might then go back to the master display and create a second block that uses the tags to select five blog posts that are related, again making sure that the changes are applied to the current block and not all displays. Finally, when you want to change something that will affect all the displays, make the change on the master display, and this time, use the All displays option to make sure the other displays are updated. In our blog example, you might decide to change the CSS class used to display the titles to apply formatting from the theme; you probably want this to look the same in every possible display of the blog posts. The next basic setting for Views is Allow embedded displays. You will not enable this option; it is for developers who will use Views-generated content in their custom code. However, if you see it enabled, don't disable it; doing this would likely break something on your site using this feature. The last setting before the LIVE PREVIEW SETTINGS field set is Label for "Any" value on non-required single-select exposed filters, which lets you pick either <Any> or -Any- as the format for exposed filters that would allow a user to ignore the filter. 
Live Preview Settings There are several LIVE PREVIEW SETTINGS field sets I like to enable because they make debugging your Views easier. If the LIVE PREVIEW SETTINGS field set is closed (that is, the options are not showing), click on the title next to the arrow, and it will open. It will look similar to this: Live Preview Settings I generally enable the Automatically update preview on changes option. This way, any change I make to the View when I edit it shows the results that would occur after each change. Seeing things change right away gives me a clue whether a change will have an effect I'm not expecting. A lot of Views options can be tricky to understand, so a bit of trial and error is often required. Hence, expect to make a change and not see what you expect; just change the setting back, rethink the problem, and try again. Almost always, you'll get the answer eventually. If you have a View that is really complex and very slow, you can always disable the live preview while you edit the View by selecting the Auto preview option in the grey Preview bar just under all the View's settings. The next two options control whether Views will display the SQL query generated by the Views options you selected in the edit screen. I like to display the SQL query, so I will select the Above the preview option under Show SQL query and then select the Show the SQL query checkbox that follows it. If you don't check the Show the SQL query option, it doesn't matter what you select for above or below the preview, and if you expect to see the SQL queries and don't, it is likely that you set one option and not the other. Showing the SQL query can be confusing at first, but after a while, you'll find it handy to figure out what is going on, especially if you have relationships (or should have relationships and don't realize it). And, of course, if you can't read the query, you can always e-mail me for a translation to English. The next option, Show performance statistics, is handy when trying to figure out why some Views-generated page is loading slowly. But usually, this isn't an issue you'd be thinking of, so I'd leave it off. You want to focus on getting the right information to display exactly the way you want without thinking about the performance. If we later decide it's too slow, the developer we'll assign to it will use this information and turn the option on in development. The same is true about Show other queries run during render during live preview. This information is handy to figure out performance issues and occasionally a display formatting issue during theming, but it isn't something you as a nonprogrammer should be worried about. Seeing all the extra queries can be confusing and intimidating, yet it doesn't really offer you any help creating a View. "Oh, don't forget to click on Save configuration if you change any settings. I don't know how many times I've forgotten to save a configuration change in Drupal and then wondered why my change hasn't stuck. Does this help?" "Thanks Jim, that is great. I owe you a coffee next time we get together." Hanging up the phone, Lynn said, "What do you think, Jackson? Let's start off by creating a property maintenance page for our salespeople to use? I think I'll get a quick win by modifying one of Drupal's core views." Adapting an existing View Lynn will use her knowledge from using Views on her existing Drupal site, and so move quickly. 
The existing content page provided by Views is general purpose and offers lots of options, and not all these options are appropriate for all content editors. This page looks similar to the following one: Drupal's standard content listing page Lynn started creating her property maintenance page by going to the Views listing page (Manage | Structure | Views) and selecting Duplicate from the OPERATIONS pull-down menu on the right-hand side of this row. On the next screen, she named the Property Maintenance view and clicked on the Duplicate button. When the View edit screen appeared, she was ready to adapt it to her need. First, she selected the Page display, assuming the Always show the master (default) display setting was already selected; otherwise, the Page display will be selected by default as it is the only display in this View. Remember that any change made in the View edit page isn't saved until you click on the Save button. Also, unsaved changes won't show up when the page/block is displayed. If you make a change, look at it using another browser or tab, and if you don't see the change reflected, it is likely that you didn't save the change you just made. The Property Maintenance screen before making any changes Editing the Property Maintenance view Starting with the left-hand side column of the View edit screen, Lynn changed the title by clicking on the Content link next to the Title label. She changed the title to Property Maintenance. Moving down the column, Lynn decided that the table display and settings were okay on the original screen and skipped them. Under the FIELDS section, Lynn decided to delete the Content: Node operations bulk form, Content: Type (Content Type), and (author) User: Name (Author) fields/columns as they weren't useful to the real estate salespeople who would be using this page. To do this, she clicked on Content: Node operations bulk form and then on the Remove link at the bottom of the Configure field modal that appeared. She repeated the removing of the field for the Content: Type (Content Type) and (author) User: Name (Author) fields. Lynn noted that the username field appeared to be the only field reference to the author entity, so she could delete the relationship later. Moving on to FILTER CRITERIA, Lynn was a bit confused by the first two filters. When she clicked on Content: Published status or admin user, the description said "Filters out unpublished content if the current user cannot view it". "This seems reasonable, let's keep this filter," she thought, and she clicked on Cancel. Next was Content: Publishing status (grouped), an exposed filter that lets the user filter by either published or unpublished. This seemed useful, so Lynn kept it and clicked on Cancel. The next filter, Content: Type (exposed), is necessary but shouldn't be selectable by the user, so Lynn clicked on it to edit the filter, unselected the Expose this filter to visitors option, and selected just the Property content type, making the filter only select content that are properties. The next filter, Content: Title (exposed), is handy, so Lynn left it as is. The final filter, Content: Translation language (exposed), isn't needed as Lynn's site isn't multilingual, so Lynn deleted the filter. Moving on to the center column of the View edit page, under the PAGE SETTINGS heading, Lynn changed the path for the View to /admin/property-maintenance by clicking on the existing /admin/content/node path, making the change, and clicking on the Apply button. 
Next in this column was the menu setting. Lynn doesn't want the property maintenance page to be part of the administration content page, so she clicked on Tab: Content and changed the menu type to Normal menu entry. This changed the fields displayed on the right-hand side of the modal, so Lynn changed the Menu link title to Property Maintenance, left the description blank, and left Show as expanded unselected. In the Parent pull-down menu, she selected the <Tools> menu. Tools is the default Drupal menu for site tools that is only shown to authorized users, who are logged into the site and can view the page linked to, which real estate salespeople will be able to view. She left the weight at -10, planning on reorganizing this menu when she has most of it configured. As this is the last option, she clicked on Apply to exit the modal. The last setting in the PAGE SETTINGS section is Access. Lynn knew she needed to change the required permission as she didn't plan on giving real estate salespeople access to the main content page, but she wasn't sure which permission to give them. Looking through the permissions page (the People | Permissions tab), Lynn didn't see any permission that made sense for who should be able to see this maintenance page. So, she clicked on the Permission link in the center column of the View edit page and changed the Access value from Permission to Role, and when she clicked on the Apply (all displays) button, she could select the role(s) she wanted to be able to see on this page. She selected the Administrator, Real Estate Salesperson, and Office Administrator roles. One way to test access while you develop is to use a second browser and log in as the other kind of user. A common mistake in Drupal is to see content while logged in as an administrator that can't be seen by other users. This can also be done using a second tab opened in "incognito" mode, but I find it easier to use a different browser (for example, Chrome and Firefox). You can even have three browsers open to the same page to test a third kind of user. Continuing down the column, Lynn decided she didn't need a header or footer on this administration page at least for now, but she did want to change the NO RESULTS BEHAVIOR message. Drupal has a text message defined, so she clicked on the Global: Unfiltered text (Global: Unfiltered text) link, changed the Content field to No properties meeting your filter criteria are available., and clicked on the Apply (all displays) button. The final section, PAGER, seemed fine, so Lynn skipped over it and moved to the third column of the view edit page, ADVANCED SETTINGS. As Lynn had changed the setting to always show the advanced settings, Lynn noticed that there was a relationship for author. As she had deleted displaying the author name, there wasn't any reason to keep the relationship because she wasn't using any of the author's details. She clicked on the author link and then on the Remove link at the bottom of the modal. Reviewing the results of the live preview, Lynn was satisfied and clicked on the Save button to save her modified view. There is a maxim in computers, Save Early, Save Often. As you develop or modify your View, when you reach a point where your progress so far is okay, click on the Save button. Then, if you make a terrible mistake in the next change, you can click on the Cancel button and then click on Edit to resume from where you last saved. 
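Because a Drupal 8 View is stored as a configuration entity, the version Lynn just saved can also be exported to YAML and carried through the local, staging, and production workflow Jim described. The following Drush commands are a sketch only: they assume Drush 8 is installed, and views.view.property_maintenance is a guessed machine name, so confirm the actual name under Configuration synchronization before relying on it.

# Print the duplicated View's configuration as YAML (machine name assumed).
drush config-get views.view.property_maintenance

# Export the whole active configuration to the sync directory so it can be
# committed to version control and deployed to another environment.
drush config-export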
Before saving the View, the result looked similar to the following screen: The resulting Property Maintenance View edit screen with all the changes Debugging – Live Preview is your friend Assuming you enabled Live Preview in your Views settings earlier in this article, as you are building your View, Views will show what will be displayed. Formatting and some JavaScript displays, such as Google mapping, can't be displayed in Live Preview, but to debug, you generally don't need them. Many Views challenges are getting the data that you want to display or getting data to be displayed the way you want. Many Views are created using the fields content display. Often, you will see fields that you don't want displayed when reviewing Live Preview because you didn't check the Exclude from display option in the field configuration. Or, you will select a field from the Add fields list that isn't actually the field you want to display the data you want—for instance, do you want article tags or article tags (field_tags: delta)? Sometimes you have to just try one and see what happens. If it isn't the right option, delete the field and try another. Experience will guide you as you use Views, but even the most experienced site builders wonder what some field or field option does in the context of the View they are building. Remember to save the View before you experiment with this next idea. Then, if it doesn't work out, you can just click on Cancel and not lose all the previous work you put in. If you disabled Live Preview, hopefully, you have decided to go back and enable it; seeing the output and looking at the generated SQL queries is really very useful in trying to figure out what might be going wrong. "Okay, Jackson, I see that a lot of what I knew from the previous versions of Views applies to the version in Drupal 8. Now that I've quickly gone through the edit screen to modify a core View, let's get serious and really learn the ins and outs of this version of Views." Summary In this article, we covered the Views administration page, where you can add, delete, edit, and duplicate views. Then, we reviewed all the general Views module settings. Finally, we modified a core View, quickly going through several configuration options. If you have used Views in older versions of Drupal, you should feel comfortable. If this is your first introduction to Views, don't panic that we glossed over a lot or if you felt lost. Resources for Article: Further resources on this subject: Working with Drupal Audio in Flash (part 2) [article] Modular Programming in ECMAScript 6 [article] Using NoSQL Databases [article]
What is Flux?

Packt
27 Apr 2016
27 min read
In this article by Adam Boduch, author of Flux Architecture covers the basic idea of Flux. Flux is supposed to be this great new way of building complex user interfaces that scale well. At least that's the general messaging around Flux, if you're only skimming the Internet literature. But, how do we define this great new way of building user interfaces? What makes it superior to other more established frontend architectures? The aim of this article is to cut through the sales bullet points and explicitly spell out what Flux is, and what it isn't, by looking at the patterns that Flux provides. And since Flux isn't a software package in the traditional sense, we'll go over the conceptual problems that we're trying to solve with Flux. Finally, we'll close the article by walking through the core components found in any Flux architecture, and we'll install the Flux npm package and write a hello world Flux application right away. Let's get started. (For more resources related to this topic, see here.) Flux is a set of patterns We should probably get the harsh reality out of the way first—Flux is not a software package. It's a set of architectural patterns for us to follow. While this might sound disappointing to some, don't despair—there's good reasons for not implementing yet another framework. Throughout the course of this book, we'll see the value of Flux existing as a set of patterns instead of a de facto implementation. For now, we'll go over some of the high-level architectural patterns put in place by Flux. Data entry points With traditional approaches to building frontend architectures, we don't put much thought into how data enters the system. We might entertain the idea of data entry points, but not in any detail. For example, with MVC (Model View Controller) architectures, the controller is supposed control the flow of data. And for the most part, it does exactly that. On the other hand, the controller is really just about controlling what happens after it already has the data. How does the controller get data in the first place? Consider the following illustration: At first glance, there's nothing wrong with this picture. The data flow, represented by the arrows, is easy to follow. But where does the data originate? For example, the view can create new data and pass it to the controller, in response to a user event. A controller can create new data and pass it to another controller, depending on the composition of our controller hierarchy. What about the controller in question—can it create data itself and then use it? In a diagram such as this one, these questions don't have much virtue. But, if we're trying to scale an architecture to have hundreds of these components, the points at which data enters the system becomes very important. Since Flux is used to build architectures that scale, it considers data entry points an important architectural pattern. Managing state State is one of those realities we need to cope with in frontend development. Unfortunately, we can't compose our entire application of pure functions with no side effects for two reasons. First, our code needs to interact with the DOM interface, in one way or another. This is how the user sees changes in the UI. Second, we don't store all our application data in the DOM (at least we shouldn't do this). As time passes and the user interacts with the application, this data will change. 
There's no cut-and-dry approach to managing state in a web application, but there are several ways to limit the amount of state changes that can happen, and enforce how they happen. For example, pure functions don't change the state of anything, they can only create new data. Here's an example of what this looks like: As you can see, there's no side effects with pure functions because no data changes state as a result of calling them. So why is this a desirable trait, if state changes are inevitable? The idea is to enforce where state changes happen. For example, perhaps we only allow certain types of components to change the state of our application data. This way, we can rule out several sources as the cause of a state change. Flux is big on controlling where state changes happen. Later on in the article, we'll see how Flux stores manage state changes. What's important about how Flux manages state is that it's handled at an architectural layer. Contrast this with an approach that lays out a set of rules that say which component types are allowed to mutate application data—things get confusing. With Flux, there's less room for guessing where state changes take place. Keeping updates synchronous Complimentary to data entry points is the notion of update synchronicity. That is, in addition to managing where the state changes originate from, we have to manage the ordering of these changes relative to other things. If the data entry points are the what of our data, then synchronously applying state changes across all the data in our system is the when. Let's think about why this matters for a moment. In a system where data is updated asynchronously, we have to account for race conditions. Race conditions can be problematic because one piece of data can depend on another, and if they're updated in the wrong order, we see cascading problems, from one component to another. Take a look at this diagram, which illustrates this problem: When something is asynchronous, we have no control over when that something changes state. So, all we can do is wait for the asynchronous updates to happen, and then go through our data and make sure all of our data dependencies are satisfied. Without tools that automatically handle these dependencies for us, we end up writing a lot of state-checking code. Flux addresses this problem by ensuring that the updates that take place across our data stores are synchronous. This means that the scenario illustrated in the preceding diagram isn't possible. Here's a better visualization of how Flux handles the data synchronization issues that are typical of JavaScript applications today: Information architecture It's easy to forget that we work in information technology and that we should be building technology around information. In recent times, however, we seem to have moved in the other direction, where we're forced to think about implementation before we think about information. More often than not, the data exposed by the sources used by our application, don't have what the user needs. It's up to our JavaScript to turn this raw data into something consumable by the user. This is our information architecture. Does this mean that Flux is used to design information architectures as opposed to a software architecture? This isn't the case at all. In fact, Flux components are realized as true software components that perform actual computations. The trick is that Flux patterns enable us to think about information architecture as a first-class design consideration. 
Rather than having to sift through all sorts of components and their implementation concerns, we can make sure that we're getting the right information to the user. Once our information architecture takes shape, the larger architecture of our application follows, as a natural extension to the information we're trying to communicate to our users. Producing information from data is the difficult part. We have to distill many sources of data into not only information, but information that's also of value to the user. Getting this wrong is a huge risk for any project. When we get it right, we can then move on to the specific application components, like the state of a button widget, and so on. Flux architectures keep data transformations confined to their stores. A store is an information factory—raw data goes in and new information comes out. Stores control how data enters the system, the synchronicity of state changes, and they define how the state changes. When we go into more depth on stores as we progress through the book, we'll see how they're the pillars of our information architecture. Flux isn't another framework Now that we've explored some of the high-level patterns of Flux, it's time to revisit the question: what is Flux again? Well, it is just a set of architectural patterns we can apply to our frontend JavaScript applications. Flux scales well because it puts information first. Information is the most difficult aspect of software to scale; Flux tackles information architecture head on. So, why aren't Flux patterns implemented as a Framework? This way, Flux would have a canonical implementation for everyone to use; and like any other large scale open source project, the code would improve over time as the project matures. The main problem is that Flux operates at an architectural level. It's used to address information problems that prevent a given application from scaling to meet user demand. If Facebook decided to release Flux as yet another JavaScript framework, it would likely have the same types of implementation issues that plague other frameworks out there. For example, if some component in a framework isn't implemented in a way that best suits the project we're working on, then it's not so easy to implement a better alternative, without hacking the framework to bits. What's nice about Flux is that Facebook decided to leave the implementation options on the table. They do provide a few Flux component implementations, but these are reference implementations. They're functional, but the idea is that they're a starting point for us to understand the mechanics of how things such as dispatchers are expected to work. We're free to implement the same Flux architectural pattern as we see it. Flux isn't a framework. Does this mean we have to implement everything ourselves? No, we do not. In fact, developers are implementing Flux frameworks and releasing them as open source projects. Some Flux libraries stick more closely to the Flux patterns than others. These implementations are opinionated, and there's nothing wrong with using them if they're a good fit for what we're building. The Flux patterns aim to solve generic conceptual problems with JavaScript development, so you'll learn what they are before diving into Flux implementation discussions. Flux solves conceptual problems If Flux is simply a collection of architectural patterns instead of a software framework, what sort of problems does it solve? 
In this section, we'll look at some of the conceptual problems that Flux addresses from an architectural perspective. These include unidirectional data flow, traceability, consistency, component layering, and loosely coupled components. Each of these conceptual problems pose a degree of risk to our software, in particular, the ability to scale it. Flux helps us get out in front of these issues as we're building the software. Data flow direction We're creating an information architecture to support the feature-rich application that will ultimately sit on top of this architecture. Data flows into the system, and will eventually reach an endpoint, terminating the flow. It's what happens in between the entry point and the termination point that determines the data flow within a Flux architecture. This is illustrated here: Data flow is a useful abstraction, because it's easy to visualize data as it enters the system and moves from one point to another. Eventually, the flow stops. But before it does, several side effects happen along the way. It's that middle block in the preceding diagram that's concerning, because we don't know exactly how the data-flow reached the end. Let's say that our architecture doesn't pose any restrictions on data flow. Any component is allowed to pass data to any other component, regardless of where that component lives. Let's try to visualize this setup: As you can see, our system has clearly defined entry and exit points for our data. This is good because it means that we can confidently say that the data flows through our system. The problem with this picture is with how the data flows between the components of the system. There's no direction, or rather, it's multidirectional. This isn't a good thing. Flux is a unidirectional data flow architecture. This means that the preceding component layout isn't possible. The question is—why does this matter? At times, it might seem convenient to be able to pass data around in any direction, that is, from any component to any other component. This in and of itself isn't the issue—passing data alone doesn't break our architecture. However, when data moves around our system in more than one direction, there's more opportunity for components to fall out of sync with one another. This simply means that if data doesn't always move in the same direction, there's always the possibility of ordering bugs. Flux enforces the direction of data flows, and thus eliminates the possibility of components updating themselves in an order that breaks the system. No matter what data has just entered the system, it'll always flow through the system in the same order as any other data, as illustrated here: Predictable root cause With data entering our system and flowing through our components in one direction, we can more easily trace any effect to it's cause. In contrast, when a component sends data to any other component residing in any architectural layer, it's a lot more difficult to figure how the data reached it's destination. Why does this matter? Debuggers are sophisticated enough that we can easily traverse any level of complexity during runtime. The problem with this notion is that it presumes we only need to trace what's happening in our code for the purposes of debugging. Flux architectures have inherently predictable data flows. This is important for a number of design activities and not just debugging. Programmers working on Flux applications will begin to intuitively sense what's going to happen. 
Anticipation is key, because it let's us avoid design dead-ends before we hit them. When the cause and effect are easy to tease out, we can spend more time focusing on building application features—the things the customers care about. Consistent notifications The direction in which we pass data from component to component in Flux architectures should be consistent. In terms of consistency, we also need to think about the mechanism used to move data around our system. For example, publish/subscribe (pub/sub) is a popular mechanism used for inter-component communication. What's neat about this approach is that our components can communicate with one another, and yet, we're able to maintain a level of decoupling. In fact, this is fairly common in frontend development because component communication is largely driven by user events. These events can be thought of as fire-and-forget. Any other components that want to respond to these events in some way, need to take it upon themselves to subscribe to the particular event. While pub/sub does have some nice properties, it also poses architectural challenges, in particular, scaling complexities. For example, let's say that we've just added several new components for a new feature. Well, in which order do these components receive update messages relative to pre-existing components? Do they get notified after all the pre-existing components? Should they come first? This presents a data dependency scaling issue. The other challenge with pub-sub is that the events that get published are often fine grained to the point where we'll want to subscribe and later unsubscribe from the notifications. This leads to consistency challenges because trying to code lifecycle changes when there's a large number of components in the system is difficult and presents opportunities for missed events. The idea with Flux is to sidestep the issue by maintaining a static inter-component messaging infrastructure that issues notifications to every component. In other words, programmers don't get to pick and choose the events their components will subscribe to. Instead, they have to figure out which of the events that are dispatched to them are relevant, ignoring the rest. Here's a visualization of how Flux dispatches events to components: The Flux dispatcher sends the event to every component; there's no getting around this. Instead of trying to fiddle with the messaging infrastructure, which is difficult to scale, we implement logic within the component to determine whether or not the message is of interest. It's also within the component that we can declare dependencies on other components, which helps influence the ordering of messages. Simple architectural layers Layers can be a great way to organize an architecture of components. For one thing, it's an obvious way to categorize the various components that make up our application. For another thing, layers serve as a means to put constraints around communication paths. This latter point is especially relevant to Flux architectures since it's imperative that data flow in one direction. It's much easier to apply constraints to layers than it is to individual components. Here is an illustration of Flux layers: This diagram isn't intended to capture the entire data flow of a Flux architecture, just how data flows between the main three layers. It also doesn't give any detail about what's in the layers. 
Don't worry, the next section gives introductory explanations of the types of Flux components and the communication that happens between the layers is the focus of this entire book. As you can see, the data flows from one layer to the next, in one direction. Flux only has a few layers, and as our applications scale in terms of component counts, the layer counts remains fixed. This puts a cap on the complexity involved with adding new features to an already large application. In addition to constraining the layer count and the data flow direction, Flux architectures are strict about which layers are actually allowed to communicate with one another. For example, the action layer could communicate with the view layer, and we would still be moving in one direction. We would still have the layers that Flux expects. However, skipping a layer like this is prohibited. By ensuring that layers only communicate with the layer directly beneath it, we can rule out bugs introduced by doing something out-of-order. Loosely coupled rendering One decision made by the Flux designers that stands out is that Flux architectures don't care how UI elements are rendered. That is to say, the view layer is loosely coupled to the rest of the architecture. There are good reasons for this. Flux is an information architecture first, and a software architecture second. We start with the former and graduate toward the latter. The challenge with view technology is that it can exert a negative influence on the rest of the architecture. For example, one view has a particular way of interacting with the DOM. Then, if we've already decided on this technology, we'll end up letting it influence the way our information architecture is structured. This isn't necessarily a bad thing, but it can lead to us making concessions about the information we ultimately display to our users. What we should really be thinking about is the information itself and how this information changes over time. What actions are involved that bring about these changes? How is one piece of data dependent on another piece of data? Flux naturally removes itself from the browser technology constraints of the day so that we can focus on the information first. It's easy to plug views into our information architecture as it evolves into a software product. Flux components In this section, we'll begin our journey into the concepts of Flux. These concepts are the essential ingredients used in formulating a Flux architecture. While there's no detailed specifications for how these components should be implemented, they nevertheless lay the foundation of our implementation. This is a high-level introduction to the components we'll be implementing throughout this book. Action Actions are the verbs of the system. In fact, it's helpful if we derive the name of an action directly from a sentence. These sentences are typically statements of functionality; something we want the application to do. Here are some examples: Fetch the session Navigate to the settings page Filter the user list Toggle the visibility of the details section These are simple capabilities of the application, and when we implement them as part of a Flux architecture, actions are the starting point. These human-readable action statements often require other new components elsewhere in the system, but the first step is always an action. So, what exactly is a Flux action? At it's simplest, an action is nothing more than a string—a name that helps identify the purpose of the action. 
More typically, actions consist of a name and a payload. Don't worry about the payload specifics just yet—as far as actions are concerned, they're just opaque pieces of data being delivered into the system. Put differently, actions are like mail parcels. The entry point into our Flux system doesn't care about the internals of the parcel, only that they get to where they need to go. Here's an illustration of actions entering a Flux system: This diagram might give the impression that actions are external to Flux when in fact, they're an integral part of the system. The reason this perspective is valuable is because it forces us to think about actions as the only means to deliver new data into the system. Golden Flux Rule: If it's not an action, it can't happen. Dispatcher The dispatcher in a Flux architecture is responsible for distributing actions to the store components (we'll talk about stores next). A dispatcher is actually kind of like a broker—if actions want to deliver new data to a store, they have to talk to the broker, so it can figure out the best way to deliver them. Think about a message broker in a system like RabbitMQ. It's the central hub where everything is sent before it's actually delivered. Here is a diagram depicting a Flux dispatcher receiving actions and dispatching them to stores: In a Flux application, there's only one dispatcher. It can be thought of more as a pseudo layer than an explicit layer. We know the dispatcher is there, but it's not essential to this level of abstraction. What we're concerned about at an architectural level, is making sure that when a given action is dispatched, we know that it's going to make it's way to every store in the system. Having said that, the dispatcher's role is critical to how Flux works. It's the place where store callback functions are registered. And it's how data dependencies are handled. Stores tell the dispatcher about other stores that it depends on, and it's up to the dispatcher to make sure these dependencies are properly handled. Golden Flux Rule: The dispatcher is the ultimate arbiter of data dependencies. Store Stores are where state is kept in a Flux application. Typically, this means the application data that's sent to the frontend from the API. However, Flux stores take this a step further and explicitly model the state of the entire application. For now, just know that stores are where state that matters can be found. Other Flux components don't have state—they have implicit state at the code level, but we're not interested in this, from an architectural point of view. Actions are the delivery mechanism for new data entering the system. The term new data doesn't imply that we're simply appending it to some collection in a store. All data entering the system is new in the sense that it hasn't been dispatched as an action yet—it could in fact result in a store changing state. Let's look at a visualization of an action that results in a store changing state: The key aspect of how stores change state is that there's no external logic that determines a state change should happen. It's the store, and only the store, that makes this decision and then carries out the state transformation. This is all tightly encapsulated within the store. This means that when we need to reason about a particular information, we need not look any further than the stores. They're their own boss—they're self-employed. Golden Flux Rule: Stores are where state lives, and only stores themselves can change this state. 
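To make the store concept more concrete, here is a minimal standalone sketch built on the flux package's Dispatcher. It is an illustration of the ideas above rather than Facebook's reference implementation; the store names, the FETCH_SESSION action type, and the state fields are invented for the example. Note how each store registers a callback, owns its own state, and is the only thing that changes that state, and how waitFor() lets one store declare that it depends on another:

import { Dispatcher } from 'flux';

const dispatcher = new Dispatcher();

// A store owns its state; only its own registered callback ever changes it.
const sessionStore = {
  state: { user: null },
  token: dispatcher.register((action) => {
    if (action.type === 'FETCH_SESSION') {
      sessionStore.state.user = action.payload;
    }
  })
};

const greetingStore = {
  state: { message: '' },
  token: dispatcher.register((action) => {
    // Declare a dependency: don't proceed until sessionStore has
    // finished processing the current action.
    dispatcher.waitFor([sessionStore.token]);

    if (action.type === 'FETCH_SESSION') {
      greetingStore.state.message = `Hello, ${sessionStore.state.user}`;
    }
  })
};

// The only way new data enters the system is a dispatched action.
dispatcher.dispatch({ type: 'FETCH_SESSION', payload: 'Adam' });

console.log(greetingStore.state.message); // "Hello, Adam"

Views would typically subscribe to change events emitted by such stores; that wiring is left out here to keep the focus on state ownership and store dependencies.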
View The last Flux component we're going to look at in this section is the view, and it technically isn't even a part of Flux. At the same time, views are obviously a critical part of our application. Views are almost universally understood as the part of our architecture that's responsible for displaying data to the user—it's the last stop as data flows through our information architecture. For example, in MVC architectures, views take model data and display it. In this sense, views in a Flux-based application aren't all that different from MVC views. Where they differ markedly is with regard to handling events. Let's take a look at the following diagram: Here we can see the contrasting responsibilities of a Flux view, compared with a view component found in your typical MVC architecture. The two view types have similar types of data flowing into them—application data used to render the component and events (often user input). What's different between the two types of views is what flows out of them. The typical view doesn't really have any constraints in how it's event handler functions communicate with other components. For example, in response to a user clicking a button, the view could directly invoke behavior on a controller, change the state of a model, or it might query the state of another view. On the other hand, the Flux view can only dispatch new actions. This keeps our single entry point into the system intact and consistent with other mechanisms that want to change the state of our store data. In other words, an API response updates state in the exact same way as a user clicking a button does. Given that views should be restricted in terms of how data flows out of them (besides DOM updates) in a Flux architecture, you would think that views should be an actual Flux component. This would make sense insofar as making actions the only possible option for views. However, there's also no reason we can't enforce this now, with the benefit being that Flux remains entirely focused on creating information architectures. Keep in mind, however, that Flux is still in it's infancy. There's no doubt going to be external influences as more people start adopting Flux. Maybe Flux will have something to say about views in the future. Until then, views exist outside of Flux but are constrained by the unidirectional nature of Flux. Golden Flux Rule: The only way data flows out of a view is by dispatching an action. Installing the Flux package We'll get some of our boilerplate code setup tasks out of the way too, since we'll be using a similar setup throughout the book. We'll skip going over Node + NPM installation since it's sufficiently covered in great detail all over the Internet. We'll assume Node is installed and ready to go from this point forward. The first NPM package we'll need installed is Webpack. This is an advanced module bundler that's well suited for modern JavaScript applications, including Flux-based applications. We'll want to install this package globally so that the webpack command gets installed on our system: npm install webpack -g With Webpack in place, we can build each of the code examples that ship with this book. However, our project does require a couple local NPM packages, and these can be installed as follows: npm install flux babel-core babel-loader babel-preset-es2015 --save-dev The --save-dev option adds these development dependencies to our file, if one exists. 
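For reference, a Webpack configuration wiring these packages together might look roughly like the following. This is a sketch using the Webpack 1.x loaders syntax that was current alongside babel-loader and the es2015 preset; the entry and output names are assumptions, and the downloadable examples ship with their own configuration, which should be preferred.

// webpack.config.js: a minimal sketch, not the book's actual configuration.
module.exports = {
  entry: './main.js',
  output: {
    path: __dirname,
    filename: 'main-bundle.js'
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: { presets: ['es2015'] }
      }
    ]
  }
};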
This setup is just to get us started—it isn't necessary to manually install these packages to run the code examples in this book. The examples you've downloaded already come with a package.json, so to install the local dependencies, simply run the following from within the same directory as the package.json file:

npm install

Now the webpack command can be used to build the example. Alternatively, if you plan on playing with the code, which is obviously encouraged, try running webpack --watch. This latter form of the command monitors the files used in the build and re-runs the build whenever they change.

What follows is a simple hello world to get us off to a running start, in preparation for the remainder of the book. We've taken care of all the boilerplate setup tasks by installing Webpack and its supporting modules. Let's take a look at the code now, starting with the markup that's used:

<!doctype html>
<html>
  <head>
    <title>Hello Flux</title>
    <script src="main-bundle.js" defer></script>
  </head>
  <body></body>
</html>

Not a lot to it, is there? There isn't even content within the body tag. The important part is the main-bundle.js script—this is the code that's built for us by Webpack. Let's take a look at this code now:

// Imports the "flux" module.
import * as flux from 'flux';

// Creates a new dispatcher instance. "Dispatcher" is
// the only useful construct found in the "flux" module.
const dispatcher = new flux.Dispatcher();

// Registers a callback function, invoked every time
// an action is dispatched.
dispatcher.register((e) => {
  var p;

  // Determines how to respond to the action. In this case,
  // we're simply creating new content using the "payload"
  // property. The "type" property determines how we create
  // the content.
  switch (e.type) {
    case 'hello':
      p = document.createElement('p');
      p.textContent = e.payload;
      document.body.appendChild(p);
      break;
    case 'world':
      p = document.createElement('p');
      p.textContent = `${e.payload}!`;
      p.style.fontWeight = 'bold';
      document.body.appendChild(p);
      break;
    default:
      break;
  }
});

// Dispatches a "hello" action.
dispatcher.dispatch({
  type: 'hello',
  payload: 'Hello'
});

// Dispatches a "world" action.
dispatcher.dispatch({
  type: 'world',
  payload: 'World'
});

As you can see, there's not much to this hello world Flux application. In fact, the only Flux-specific component this code creates is a dispatcher. It then dispatches a couple of actions, and the handler function that's registered with the dispatcher processes them. Don't worry that there are no stores or views in this example. The idea is that we've got the basic Flux NPM package installed and ready to go.

Summary

This article introduced you to Flux. Specifically, we looked at both what Flux is and what it isn't. Flux is a set of architectural patterns that, when applied to our JavaScript application, help us get the data flow aspect of our architecture right. Flux isn't yet another framework used for solving specific implementation challenges, be they browser quirks or performance gains—there's a multitude of tools already available for those purposes. Perhaps the most important defining aspect of Flux is the set of conceptual problems it solves—things like unidirectional data flow. This is a major reason why there's no de facto Flux implementation.

We wrapped the article up by walking through the setup of the build components used throughout the book.
To test that the packages are all in place, we created a very basic hello world Flux application.