
How-To Tutorials

7019 Articles
Using Node.js dependencies in NW.js

Max Gfeller
19 Nov 2015
6 min read
NW.js (formerly known as node-webkit) is a framework that makes it possible to write multi-platform desktop applications using the technologies you already know well: HTML, CSS and JavaScript. It bundles a Chromium and a Node (or io.js) runtime and provides additional APIs to implement native-like features such as real menu bars or desktop notifications. A big advantage of having a Node/io.js runtime is being able to make use of all the modules that are available to Node developers. We can categorize three different types of modules that we can use.

Internal modules

Node comes with a solid set of internal modules like fs or http. It is built on the UNIX philosophy of doing only one thing and doing it very well, so you won't find too much functionality in Node core. The following modules are shipped with Node:

- assert: used for writing unit tests
- buffer: raw memory allocation used for dealing with binary data
- child_process: spawn and use child processes
- cluster: take advantage of multi-core systems
- crypto: cryptographic functions
- dgram: use datagram sockets
- dns: perform DNS lookups
- domain: handle multiple different I/O operations as a single group
- events: provides the EventEmitter
- fs: operations on the file system
- http: perform HTTP requests and create HTTP servers
- https: perform HTTPS requests and create HTTPS servers
- net: asynchronous network wrapper
- os: basic operating-system related utility functions
- path: handle and transform file paths
- punycode: deal with punycode domain names
- querystring: deal with query strings
- stream: abstract interface implemented by various objects in Node
- timers: setTimeout, setInterval etc.
- tls: encrypted stream communication
- url: URL resolution and parsing
- util: various utility functions
- vm: sandbox to run Node code in
- zlib: bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw

These are documented in the official Node API documentation and can all be used within NW.js. Note that Chromium already defines a crypto global, so when using the crypto module in the webkit context you should assign it to a variable like crypt rather than crypto:

var crypt = require('crypto');

The following example shows how we would read a file and use its contents using Node's modules:

var fs = require('fs');
fs.readFile(__dirname + '/file.txt', function (error, contents) {
  if (error) return console.error(error);
  console.log(contents);
});

Third-party JavaScript modules

Soon after Node itself was started, Isaac Schlueter, a friend of creator Ryan Dahl, started working on a package manager for Node. As Node's popularity reached new highs, a lot of packages were added to the npm registry and it soon became the fastest-growing package registry. At the time of this writing there are over 169,000 packages on the registry and nearly two billion downloads each month. The npm registry is now also slowly evolving from being "only" a package manager for Node into a package manager for all things JavaScript. Most of these packages can also be used inside NW.js applications. Your application's dependencies are defined in your package.json file, in the dependencies (or devDependencies) section:

{
  "name": "my-cool-application",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^3.1.2"
  },
  "devDependencies": {
    "uglify-js": "^2.4.3"
  }
}

In the dependencies field you find all the modules that are required to run your application, while the devDependencies field lists only the modules required while developing the application.
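Once a dependency like lodash is declared and installed (installation is shown next), using it from the Node context is just a require() away. The snippet below is only an illustrative sketch and is not part of the original article; the sample data is made up.

// main.js - runs in the Node context, so only Node globals are available here
var _ = require('lodash');

// pick the three largest measurements from some sample data
var measurements = [12, 5, 42, 7, 19];
var topThree = _.takeRight(_.sortBy(measurements), 3);
console.log(topThree); // [12, 19, 42]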
Installing a module is fairly easy and the best way to do it is with the npm install command:

npm install lodash --save

The install command directly downloads the latest version into your node_modules/ folder. The --save flag means that this dependency is also written directly into your package.json file. You can also define a specific version to download by using the following notation:

npm install lodash@1.*

or even:

npm install lodash@1.1

How does Node's require() work?

You need to deal with two different contexts in NW.js, and it is really important to always know which context you are currently in, as it changes the way the require() function works. When you load a module using Node's require() function, that module runs in the Node context. That means you have the same globals as you would have in a pure Node script, but you can't access the browser globals, e.g. document or window. If you write JavaScript code inside a <script> tag in your HTML, or include a script using <script src="">, then this code runs in the webkit context, where you have access to all the browser globals.

In the webkit context

The require() function is a module loading system defined by the CommonJS Modules 1.0 standard and implemented directly in Node core. To offer the same smooth experience, NW.js provides a modified require() method that works in the webkit context, too. Whenever you want to include a certain module from the webkit context, e.g. directly from an inline script in your index.html file, you need to specify the path starting from the root of your project. Let's assume the following folder structure:

- app/
  - app.js
  - foo.js
  - bar.js
- index.html

If you want to include the app/app.js file directly in your index.html, you need to include it like this:

<script type="text/javascript">
  var app = require('./app/app.js');
</script>

If you need to use a module from npm, you can simply require() it and NW.js will figure out where the corresponding node_modules/ folder is located.

In the Node context

In Node, when you use relative paths, require() will always try to locate the module relative to the file you are requiring it from. Taking the example from above, we could require the foo.js module from app.js like this:

var foo = require('./foo');

About the Author

Max Gfeller is a passionate web developer and JavaScript enthusiast. He is making awesome things at Cylon and can be found on Twitter @mgefeller.

Building Your App: Creating Executables for NW.js

Adam Lynch
17 Nov 2015
5 min read
How hard can it be to package up your NW.js app into real executables? To be a true desktop app, it should be a self-contained .exe, .app, or similar. There are a few ways to approach this.

Let's start with the simplest approach, with the least amount of code or configuration. It's possible to run your app by creating a ZIP archive containing your app code, changing the file extension to .nw and then launching it using the official npm module like this: nw myapp.nw. Let's say you wanted to put your app out there as a download. Anyone looking to use it would have to have nw installed globally too. Unless you're making an app for NW.js users, that's not a great idea.

Use an existing executable

You could substitute one of the official NW.js executables for the nw module. You could download a ZIP from the NW.js site containing an executable (nw.exe, for example) and a few other bits and pieces. If you already have the nw module, then if you go to where it's installed on your machine (e.g. /usr/local/lib/node_modules/nw on Mac OS X), the executable can be found in the nwjs directory. If you wanted, you could keep things really simple and leave it at that: just use the official executable to open your .nw archive, i.e. nw.exe myapp.nw.

Merging them

Ideally though, you want as few files as possible. Think of your potential end users; they deserve better. One way to do this is to merge the NW.js executable and your .nw archive together to produce a single executable. This is achieved differently per platform, though.

On Windows, you need to run copy /b nw.exe+myapp.nw nw.exe on the command line. Now we have a single nw.exe. Even though we now have a single executable, it still requires the DLLs and everything else that comes with the official builds to be in the same directory as the .exe for it to work correctly. You could rename nw.exe to something nicer, but it's not advised, as native modules will not work if the executable isn't named nw.exe. This is expected to be fixed in NW.js 0.13.0, when NW.js will come with an nw.dll (along with nw.exe) which modules will link to instead.

On Linux, the command would be cat path/to/nw myapp.nw > myapp && chmod +x myapp (where nw is the NW.js executable).

Since .app executables are just directories on Mac OS X, you could just copy the official nwjs executable and edit it. Rename your .nw archive to app.nw, put it in the Contents/Resources inner directory, and you're done. Actually, a .nw archive isn't even necessary: you could create a Contents/Resources/app.nw directory and add your raw app files there. Other noteworthy files you could edit are Contents/Resources/nw.icns, which is your app's icon, and Contents/Info.plist, Apple's app package description file.
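If you would rather script the Linux-style merge from Node instead of the shell, a minimal sketch could look like the following. This is only an illustration, not something from the original article; the paths path/to/nw and myapp.nw are placeholders, and it simply mirrors the cat command above.

var fs = require('fs');

// concatenate the NW.js executable and the .nw archive into one file,
// equivalent to: cat path/to/nw myapp.nw > myapp && chmod +x myapp
var executable = fs.readFileSync('path/to/nw'); // placeholder path to the NW.js binary
var archive = fs.readFileSync('myapp.nw');      // placeholder path to your zipped app
fs.writeFileSync('myapp', Buffer.concat([executable, archive]));
fs.chmodSync('myapp', parseInt('755', 8));      // make the result executable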
nw-builder

There are a few downsides to all of that: it's platform-specific, very manual, and very limited. The nw-builder module will handle all of that for you, and more. Either from the command line or programmatically, it makes building executables light work. Once you install it globally by running npm install -g nw-builder, you can run the following command to generate executables:

nwbuild your/app/files/directory -o destination/directory

nw-builder will go and grab the latest NW.js version and generate self-contained executables for you. You can specify a lot of options via flags too: the NW.js version you'd like, which platforms to build for, and so on. Yes, you can build for multiple platforms. By default it builds 32-bit and 64-bit Windows and Mac executables, but Linux 32-bit and 64-bit executables can also be generated, e.g. nwbuild appDirectory -v 0.12.2 -o dest -p linux64.

Note: I am a maintainer of nw-builder. Ignoring my bias, that was surprisingly simple, right?

Using the API

I personally prefer to use it programmatically, though. That way I can have a build script which passes all of the options and so on. Let's say you create a simple file called build.js:

var NwBuilder = require('nw-builder');
var nw = new NwBuilder({
  files: './path/to/app/files/**/**' // use the glob format
});

// .build() returns a promise but also supports a plain callback approach
nw.build().then(function () {
  console.log('all done!');
}).catch(function (error) {
  console.error(error);
});

Running node build.js will produce your executables. Simples.

Gulp

If you already use Gulp like I do and would like to slot this into your tasks, it's easy. Just use the same nw-builder module:

var gulp = require('gulp');
var NwBuilder = require('nw-builder');

var nw = new NwBuilder({
  files: './path/to/app/files/**/**' // use the glob format
});

gulp.task('default', function () {
  return nw.build();
});

Grunt

Yep, there's a plugin for that; run npm install grunt-nw-builder to get it. Then add something like the following to your Gruntfile:

grunt.initConfig({
  nwjs: {
    options: {},
    src: ['./path/to/app/files/**/*']
  }
});

Then running grunt nwjs will produce your executables. All nw-builder options are available to Grunt users too.

Options

There are a lot of options which give pretty granular control. Aside from the ones I've mentioned already and the options already available in the app manifest, there are options for controlling the structure and/or compression of inner files, your executables' icons, Mac OS X specific options concerning the plist file, and so on. Go check out nw-builder for yourself and see how quickly you can package your Web app into real executables.

About the Author

Adam Lynch is a Teamwork Chat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.

How to Make Generic typealiases in Swift

Alexander Altman
16 Nov 2015
4 min read
Swift's typealias declarations are a good way to clean up our code. It's generally considered good practice in Swift to use typealiases to give more meaningful and domain-specific names to types that would otherwise be either too general-purpose or too long and complex. For example, the declaration:

typealias Username = String

gives a less vague name to the type String, since we're going to be using strings as usernames and we want a more domain-relevant name for that type. Similarly, the declaration:

typealias IDMultimap = [Int: Set<Username>]

gives a name for [Int: Set<Username>] that is not only more meaningful, but somewhat shorter.

However, we run into problems when we want to do something a little more advanced; there is a possible application of typealias that Swift doesn't let us make use of. Specifically, Swift doesn't accept typealiases with generic parameters. If we try it the naïvely obvious way,

typealias Multimap<Key: Hashable, Value: Hashable> = [Key: Set<Value>]

we get an error at the beginning of the type parameter list: Expected '=' in typealias declaration. Swift (as of version 2.1, at least) doesn't let us directly declare a generic typealias. This is quite a shame, as such an ability would be very useful in a lot of different contexts, and languages that have it (such as Rust, Haskell, OCaml, F♯, or Scala) make use of it all the time, including in their standard libraries. Is there any way to work around this linguistic lack? As it turns out, there is!

The Solution

It's actually possible to effectively give a typealias type parameters, despite Swift appearing to disallow it. Here's how we can trick Swift into accepting a generic typealias:

enum Multimap<Key: Hashable, Value: Hashable> {
  typealias T = [Key: Set<Value>]
}

The basic idea here is that we declare a generic enum whose sole purpose is to hold our (now technically non-generic) typealias as a member. We can then supply the enum with its type parameters and project out the actual typealias inside, like so:

let idMMap: Multimap<Int, Username>.T = [0: ["alexander", "bob"], 1: [], 2: ["christina"]]

func hasID0(user: Username) -> Bool {
  return idMMap[0]?.contains(user) ?? false
}

Notice that we used an enum rather than a struct; this is because an enum with no cases cannot have any instances (which is exactly what we want), but a struct with no members still has (at least) one instance, which breaks our layer of abstraction. We are essentially treating our caseless enum as a tiny generic module, within which everything (that is, just the typealias) has access to the Key and Value type parameters. This pattern is used in at least a few libraries dedicated to functional programming in Swift, since such constructs are especially valuable there. Nonetheless, this is a broadly useful technique, and it's the best method currently available for creating generic typealiases in Swift.

The Applications

As sketched above, this technique works because Swift doesn't object to an ordinary typealias nested inside a generic type declaration. However, it does object to multiple generic types being nested inside each other; it even objects to either a generic type being nested inside a non-generic type or a non-generic type being nested inside a generic type. As a result, type-level currying is not possible.
Despite this limitation, this kind of generic typealias is still useful for a lot of purposes. One big one is specialized error-return types, where Swift can use this technique to imitate Rust's standard library:

enum Result<V, E> {
  case Ok(V)
  case Err(E)
}

enum IO_Result<V> {
  typealias T = Result<V, ErrorType>
}

Another use for generic typealiases comes in the form of nested collection types:

enum DenseMatrix<Element> {
  typealias T = [[Element]]
}

enum FlatMatrix<Element> {
  typealias T = (width: Int, elements: [Element])
}

enum SparseMatrix<Element> {
  typealias T = [(x: Int, y: Int, value: Element)]
}

Finally, since Swift is a relatively young language, there are sure to be undiscovered applications for things like this; if you search, maybe you'll find one!

Super-charge your Swift development by learning how to use the Flyweight pattern – Click here to read more

About the author

Alexander Altman (https://pthariensflame.wordpress.com) is a functional programming enthusiast who enjoys the mathematical and ergonomic aspects of programming language design. He's been working with Swift since the language's first public release, and he is one of the core contributors to the TypeLift (https://github.com/typelift) project.

How to Write Documentation with Jupyter

Marin Gilles
13 Nov 2015
5 min read
The Jupyter notebook is an interactive notebook allowing you to write documents with embedded code, and to execute this code on the fly. It was originally developed as a part of the IPython project, and could only be used for Python code at that time. Nowadays, the Jupyter notebook integrates multiple languages, such as R, Julia and Haskell - the notebook supports about 50 languages.

One of the best features of the notebook is the ability to mix code and markdown with embedded HTML/CSS. It allows the easy creation of beautiful interactive documents, such as documentation examples. It can also help with the writing of full documentation using its export mechanism to HTML, RST (ReStructured Text), markdown and PDF.

Interactive documentation examples

When writing library documentation, a lot of time should be dedicated to writing examples for each part of the library. However, those examples are quite often static code, each part being explained through comments. To improve the writing of those examples, a solution could be to use a Jupyter notebook, which can be downloaded and played with by anyone reading your library documentation. Solutions also exist to have the notebook running directly on your website, as seen on the Jupyter website, where you can try the notebooks. This will not be explained in this post, but the notebook was designed on a server-client pattern, which makes this easy to get running.

Using the notebook's cells, you can separate each part of your code, or each example, and describe it nicely and properly outside the code cell, improving readability. From the Jupyter Python notebook example, we can see what the following code does and execute it (and even get graphics back directly in the notebook!). Here is an example of a Jupyter notebook, for the Python language, with matplotlib integration:

Even more than that, instead of just having your example code in a file, people downloading your notebook will directly get the full example documentation, giving them a huge advantage in understanding what the example is and what it does when opening the notebook again after six months. And they can just hit the run button and tinker with your example to understand all its details, without having to go back and forth between the website and their code editor, saving them time. They will love you for that!

Generate documentation from your notebooks

Developing mainly in Python, I am used to the Sphinx library as a documentation generator. It can export your documentation to HTML from RST files, and it scans your code library to generate documentation from docstrings, all with a single command, making it quite useful in the writing process. As Jupyter notebooks can be exported to RST, why not use this mechanism to create your RST files with Jupyter, then generate your full documentation with Sphinx?

To manually convert your notebook, you can click on File -> Download As -> reST. You will be prompted to download the file. That's it! Your notebook has been exported. However, while this method is good for testing purposes, it will not be good for automatic generation of documentation with Sphinx. To convert automatically, we are going to use a tool named nbconvert, which can do all the required conversions from the command line.
To convert your notebook to RST, you just need to do the following:

$ jupyter nbconvert --to rst your_notebook.ipynb

or, to convert every notebook in the current folder:

$ jupyter nbconvert --to rst *.ipynb

Those commands can easily be integrated into a Makefile for your documentation, making the process of converting your notebooks completely invisible. If you want to keep your notebooks in a folder notebooks and your generated files in a folder rst, assuming you have the following directory tree:

Current working directory
|
|-rst/
|-notebooks/
  |-notebook1.ipynb
  |-notebook2.ipynb
  |-...

you can run the following commands:

$ cd rst
$ jupyter nbconvert --to rst ../notebooks/*.ipynb

This will convert all the notebooks in notebooks and place them in the rst folder. A Python API is also available if you want to generate your documentation from Python (see the documentation). A lot more export options are available in the nbconvert documentation. You can create PDF, HTML or even slides, if you want to make a presentation based on a notebook, and you can even pull a presentation from a remote location.

Jupyter notebooks are very versatile documents, allowing interactive code exploration, export to a large number of formats, remote work, collaborative work and more. You can find more information on the official Jupyter website, where you will also be able to try it. I mainly focused this post on the Python language, in which the IPython notebook, ancestor of Jupyter, was developed, but with the integration of more than 50 languages, it is a tool that every developer should be aware of, to create documentation, tutorials, or just to try code and keep notes in the same place.

Dive deeper into the Jupyter Notebook by navigating your way around the dashboard and interface in our article. Read now!

About the author

Marin Gilles is a PhD student in Physics in Dijon, France. A large part of his work is dedicated to physical simulations, for which he developed his own simulation framework using Python, and he has contributed to open-source libraries such as Matplotlib or IPython.

Creating a simple 3D Multiplayer Game with THREE.js

Marco Stagni
12 Nov 2015
15 min read
This post will teach you how to build a simple 3D multiplayer game using THREE.js and socket.io. This guide is intended to help you create a very simple yet perfectly functional multiplayer FPS (First Person Shooter). Its name is "Dodgem" and it features a simple arena, random destructible obstacles and your opponent. I've already done this, and you can check it out on GitHub. The playable version of this project can be found here.

Explanation

First of all, we need a brief description of how the project is built. Technically speaking, we have a random number of clients (our actual players) and a server, which randomly selects an opponent for each new player. Every player who joins the game is put in a "pending" status until he/she is randomly selected for a new match. Before the match starts, the player can wander around the arena, and when the server communicates the beginning of the fight, both clients receive the information needed to create the obstacles in the world. Each player is represented as a blue sphere with two guns on the sides (this has been done for simplicity's sake; we all know a humanoid figure would be more interesting). Every obstacle is destructible if you shoot it, and your life is represented as a red bar at the top of the screen. Once one of the players dies, both are prompted to join another match, or they can simply continue to move, jump and shoot around.

Let's code!

We can now start our project. We'll use an easy-to-use "Game Engine" I wrote, since it provides a lot of useful things for our project. Its name is "Wage", it's fully open source (you can check the GitHub repository here) and it's available to install via npm. So, first things first, type this into your shell:

npm install -g wage

This will install wage as a global package on your machine. You will now be able to create a new project wherever you want, using the "wage" command. Keep in mind that this game engine is still an in-development version, so please use it very carefully, and submit any issue you find to the repository if needed. Now, run:

wage create dodgem

This will create a folder named "dodgem" in your current directory, with everything you need to start the project. We're now ready to start. I won't explain every single line, just the basic information required to start and the skeleton of the app (the entire source code is available on GitHub, and you're free to clone the repo on your machine). Only the server code is fully explained. Now, we can create our server.

Server

First of all, create a "server" folder beside the "app" folder. Add a "package.json" inside of it, with this content:

{
  "name": "dodgem",
  "version": "1.0.0",
  "description": "",
  "main": "",
  "author": "Your name goes here, thanks to Marco Stagni",
  "license": "MIT",
  "dependencies": {
    "socket.io": "*",
    "underscore": "*"
  }
}

This file tells npm that our server uses socket.io and underscore as modules (no matter what version they are), and running npm install inside the "server" folder will install the dependencies into a "node_modules" folder. Speaking of the modules, socket.io is used as the main communication system between server and clients, while underscore is used because it provides a LOT of useful tools when you're working with data sets. If you don't know what socket.io and underscore are, just click on the links for a deep explanation. I won't explain how socket.io works, because I assume that you're already aware of its functionalities.
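To give an idea of why underscore earns its place here: the server code below leans on _.sample, _.without and _.random for opponent selection and world generation. Here is a tiny standalone sketch of those three helpers; it is purely illustrative and not part of the project.

var _ = require('underscore');

var pendings = ['player-a', 'player-b', 'player-c'];

// pick a random pending opponent, then remove it from the waiting list
var opponent = _.sample(pendings);
pendings = _.without(pendings, opponent);

// generate a random obstacle count for the new match, as the server does later
var numObstacles = _.random(10, 30);
console.log(opponent, pendings, numObstacles);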
We'll now create the server.js file (you must create it inside the server folder):

//server.js

// server requirements
var util = require("util"),
    io = require("socket.io"),
    _ = require("underscore"),
    Player = require("./Player.js").Player;

// server variables
var socket, players, pendings, matches;

// init method
function init() {
  players = [];
  pendings = [];
  matches = {};

  // socket.io setup
  socket = io.listen(8000);

  // setting socket.io transport
  socket.set("transports", ["websocket"]);

  // setting socket.io log level
  socket.set("log level", 2);

  setEventHandlers();
}

var setEventHandlers = function() {
  // Socket.IO
  socket.sockets.on("connection", onSocketConnection);
};

The util module is only used for logging purposes, and you don't need to install it via npm, since it's a core module. The Player variable refers to the Player model, which will be explained later. The other variables (socket, players, pendings and matches) are used to store information about pending players, matches and the socket.io instance.

init and setEventHandlers

The init method instantiates socket.io and sets a few options, such as the transport method (we're using only websocket, but socket.io provides a lot of transports besides websocket) and the log level. The socket.io server is set to listen on port 8000, but you can choose whatever port you desire. This method also instantiates the players, pendings and matches variables, and calls the "setEventHandlers" method, which attaches a "connection" event listener to the socket.io instance. The init method is called at the end of the server code. We can now add a few lines after the "setEventHandlers" method.

socket.io listeners

// New socket connection
function onSocketConnection(client) {
  util.log("New player has connected: " + client.id);

  // Listen for client disconnected
  client.on("disconnect", onClientDisconnect);

  // Listen for new player message
  client.on("new player", onNewPlayer);

  // Listen for move player message
  client.on("move player", onMovePlayer);

  // Listen for shooting player
  client.on("shooting player", onShootingPlayer);

  // Listen for died player
  client.on("Idied", onDeadPlayer);

  // Listen for another match message
  client.on("anothermatch", onAnotherMatchRequested);
};

This function is the handler for socket.io's "connection" event, and what it does is set up every event listener we need for our server. The events listed are: "disconnect" (when our client closes or reloads the page), "new player" (sent when a client connects to the server), "move player" (sent every time the player moves around the arena), "shooting player" (the player is shooting), "Idied" (the player who sent this message has died) and finally "anothermatch" (our player is requesting another match from the server). The most important listener is the one for new players.
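Before looking at that handler, it helps to see what the client is expected to send. The following client-side sketch is not taken from the project: the event and field names mirror the listeners above, while the connection URL and spawn coordinates are placeholder assumptions.

// Client-side sketch: announcing ourselves to the server.
// The payload fields mirror what onNewPlayer() below reads from "data".
var socket = io.connect('http://YOURIP:8000'); // socket.io client loaded in index.html
socket.emit('new player', {
  x: 0,  // spawn position; real values would come from the game world
  y: 30,
  z: 0
});

socket.on('pending', function (data) {
  console.log(data.message); // "waiting for a new player."
});

socket.on('matchstarted', function (data) {
  console.log('obstacles to create:', data.numObstacles);
});

With that message shape in mind, here is the server-side handler.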
// New player has joined
function onNewPlayer(data) {
  // Create a new player
  var newPlayer = new Player(data.x, data.y, data.z, this);
  newPlayer.id = this.id;
  console.log("creating new player");

  // Add new player to the players array
  players.push(newPlayer);

  // searching for a pending player
  var id = _.sample(pendings);
  if (!id) {
    // we didn't find a player
    console.log("added " + this.id + " to pending");
    pendings.push(newPlayer.id);
    // sending a pending event to player
    newPlayer.getSocket().emit("pending", {status: "pending", message: "waiting for a new player."});
  } else {
    // creating match
    pendings = _.without(pendings, id);
    matches[id] = newPlayer.id;
    matches[newPlayer.id] = id;
    console.log(matches);

    // generating world for this match
    var numObstacles = _.random(10, 30);
    var height = _.random(70, 100);
    var MAX_X = 490;
    var MINUS_MAX_X = -490;
    var MAX_Z = 990;
    var MINUS_MAX_Z = -990;
    var positions = [];
    for (var i = 0; i < numObstacles; i++) {
      positions.push({
        x: _.random(MINUS_MAX_X, MAX_X),
        z: _.random(MINUS_MAX_Z, MAX_Z)
      });
    }
    console.log(numObstacles, height, positions);

    // telling both players that they're connected
    newPlayer.getSocket().emit("matchstarted", {status: "matchstarted", message: "Player found!", numObstacles: numObstacles, height: height, positions: positions});
    playerById(id).getSocket().emit("matchstarted", {status: "matchstarted", message: "Player found!", numObstacles: numObstacles, height: height, positions: positions});
  }
};

What this does is create a new player using the Player module imported at the beginning, storing the socket associated with the client. The new player is then stored in the players list. The most important part is the search for a new match: the server randomly picks a player from the pendings list, and if it finds one, it creates a match. If the server doesn't find a suitable player for the match, the new player is put into the pendings list. Once the match is created, the server generates the information needed by the clients in order to create a common world, and sends it to both clients. As you can see, I'm using a playerById method, which is simply a function that searches the players list for the player whose id matches the given one.

// Find player by ID
function playerById(id) {
  var i;
  for (i = 0; i < players.length; i++) {
    if (players[i].id == id) return players[i];
  };
  return false;
};

The other functions used as socket listeners are described below.

onMovePlayer

This function is called when the "move player" event is received. It finds the player associated with the socket id, finds its opponent using the "matches" object, then emits a socket event to the opponent, providing the right information about the player's movement. In pseudo code, the onMovePlayer function looks like this:

onMovePlayer: function(data) {
  movingPlayer = findPlayerById(this.id)
  opponent = matches[movingPlayer.id]
  if !opponent
    console.log "error"
  else
    opponent.socket.send(data.movement)
}
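In plain JavaScript, and reusing the playerById helper and the matches map from above, that handler might look roughly like the following. This is a sketch rather than the project's actual code, and the payload shape is an assumption.

// A concrete version of the pseudo code above; "this" is the socket of the moving player
function onMovePlayer(data) {
  var movingPlayer = playerById(this.id);
  if (!movingPlayer) {
    util.log("Player not found: " + this.id);
    return;
  }

  // the matches map stores each player's opponent id
  var opponentId = matches[movingPlayer.id];
  var opponent = opponentId ? playerById(opponentId) : false;
  if (!opponent) {
    util.log("No opponent for player: " + this.id);
    return;
  }

  // forward the movement so the opponent's client can update our position
  opponent.getSocket().emit("move", {movement: data.movement});
}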
onShootingPlayer

This function is called when the "shooting player" event is received. It finds the player associated with the socket id, finds its opponent using the "matches" object, then emits a socket event to the opponent, providing the right information about the shooting player (such as the starting point of the bullet and the bullet's direction). In pseudo code, the onShootingPlayer function looks like this:

onShootingPlayer: function(data) {
  shootingPlayer = findPlayerById(this.id)
  opponent = matches[shootingPlayer.id]
  if !opponent
    console.log "error"
  else
    opponent.socket.send(data.startingpoint, data.direction)
}

onDeadPlayer, onAnotherMatchRequested

These functions are called when a player dies. When this happens, the dead player is removed from the matches, players and pendings references, and he is prompted to join a new match (his/her opponent is informed that he/she won the match). If the player asks for another match, the procedure is nearly the same as when he connected for the first time: another player is randomly picked from the pendings list, and another match is created from scratch.

onClientDisconnect

Last but not least, onClientDisconnect is the function called when a user disconnects from the server: this can happen when the user reloads the page, or when he/she closes the client. The corresponding opponent is informed of the situation and put back into pending status.

We now need to see how the "Player" model is implemented, since it's used to create new players and to retrieve information about their connection, movements or behaviour.

Player

var Player = function(startx, starty, startz, socket) {
  var x = startx,
      y = starty,
      z = startz,
      socket = socket,
      rotx, roty, rotz, id;

  // getters
  var getX = function() { return x; }
  var getY = function() { return y; }
  var getZ = function() { return z; }
  var getSocket = function() { return socket; }
  var getRotX = function() { return rotx; }
  var getRotY = function() { return roty; }
  var getRotZ = function() { return rotz; }

  // setters
  var setX = function(value) { x = value; }
  var setY = function(value) { y = value; }
  var setZ = function(value) { z = value; }
  var setSocket = function(socket) { socket = socket; }
  var setRotX = function(value) { rotx = value; }
  var setRotY = function(value) { roty = value; }
  var setRotZ = function(value) { rotz = value; }

  return {
    getX: getX,
    getY: getY,
    getZ: getZ,
    getRotX: getRotX,
    getRotY: getRotY,
    getRotZ: getRotZ,
    getSocket: getSocket,
    setX: setX,
    setY: setY,
    setZ: setZ,
    setRotX: setRotX,
    setRotY: setRotY,
    setRotZ: setRotZ,
    setSocket: setSocket,
    id: id
  }
};

exports.Player = Player;

The Player model is pretty straightforward: it just provides getters and setters for every parameter of the Player object, though not all of them are used in this project.

So, that was the server code. This is obviously not the complete source code, but I have explained all of its main characteristics. For the complete code, you can check the GitHub repository here.

Client

The client is pretty easy to understand. Once you create the project using Wage, you will find a file named "main.js": this is the starting point of your game, and it can contain almost every single aspect of the game logic.
The very first time you create something with Wage, you will find a file like this:

include("app/scripts/cube/mybox")

Class("MyGame", {
  MyGame: function() {
    App.call(this);
  },

  onCreate: function() {
    var geometry = new THREE.CubeGeometry(20, 20, 20);
    var material = new THREE.MeshBasicMaterial({
      color: 0x00ff00,
      wireframe: true
    });
    var cube = new Mesh(geometry, material, {script: "mybox", dir: "cube"});
    console.log("Inside onCreate method");

    document.addEventListener('mousemove', app.onDocumentMouseMove, false);
    document.addEventListener('touchstart', app.onDocumentTouchStart, false);
    document.addEventListener('touchmove', app.onDocumentTouchMove, false);
    document.addEventListener('mousewheel', app.onDocumentMouseWheel, false);

    // example for camera movement
    app.camera.addScript("cameraScript", "camera");
  }
})._extends("App");

I know you need a brief description of what is going on inside the main.js file. Wage uses a library I built, which provides an easy-to-use implementation of inheritance: this allows you to create, extend and implement classes in an easy and readable way. The library is named "classy", and you can find every piece of information you need on GitHub. A Wage application needs an implementation of the "App" class, as you can see from the example above. The constructor is the MyGame function, and it simply calls the super class "App". The most important method you have to check is "onCreate", because it's the method that you have to use in order to start adding elements to your scene.

First, you need a description of what Wage is and what it is capable of doing for you. Wage is a "Game Engine". It automatically creates a THREE.js scene for you, and gives you a huge set of features that allow you to easily control your game. The most important things are complete control of meshes (both animated and not), lights, sounds, shaders, particle effects and physics. Every single mesh and light is "scriptable", since you're able to modify its behaviour by "attaching" a custom script to the object itself (if you know how Unity3D works, then you know what I'm talking about; maybe you can have a look at this to better understand). The scene created by Wage is added to index.html, which is the layout loaded by your app. Of course, index.html behaves like a normal HTML page, so you can import everything you want, such as stylesheets or external JavaScript libraries. In this case, you have to import the socket.io library inside index.html, like this:

<head>
  ...
  <script type="text/javascript" src="http://YOURIP:8000/socket.io/socket.io.js"></script>
  ...
</head>

I will now provide a description of what the client does, describing each feature in pseudo code.

Class("Dodgem", {
  Dodgem: function() {
    super()
  },

  onCreate: function() {
    this.socket = socketio.connect("IPADDRESS");

    // setting listeners
    this.socket.on("shoot", shootListener);
    this.socket.on("move", moveListener);
    this.socket.on(message3, listener3);

    // creating platform
    this.platform = new Platform();
  },

  shootListener: function(data) {
    // someone is shooting
    createBulletAgainstMe(data)
  },

  moveListener: function(data) {
    // our enemy is moving around
    moveEnemyAround(data)
  }
});

Game.update = function() {
  handleBulletsCollisions()
  updateBullets()
  updateHealth()
  if health == 0
    Game.Die()
}

As you can see, the onCreate method takes care of creating the socket.io instance, adding event listeners for every message incoming from the game server.
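One direction the pseudo code doesn't show is the outgoing traffic: the client also has to tell the server about its own movements. A hedged sketch of such a method on the Dodgem class follows; the "move player" event name matches the server listener shown earlier, while the method name and payload shape are assumptions.

// Client-side sketch (not from the project): the outgoing counterpart of the listeners above.
onPlayerMoved: function(position) {
  // called whenever the local player has moved, e.g. from the game loop
  this.socket.emit("move player", {
    movement: {x: position.x, y: position.y, z: position.z}
  });
}

That covers the outgoing side; the incoming events are described next.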
The events we need to handle are: "shooting" (our opponent is shooting at us), "move" (our opponent is moving, so we need to update its position), "pending" (we've been added to the pending list and are waiting for a new player), "matchstarted" (our match has started), "goneplayer" (our opponent is no longer online) and "win" (guess what this event means...). Every event has its own listener.

onMatchStarted

onMatchStarted: function(data) {
  alert(data.message)
  app.platform.createObstacles(data)
  app.opponent = new Player()
}

This function creates the arena's obstacles and creates our opponent. A message is shown to the user, telling them that the match has started.

onShooting

onShooting: function(data) {
  createEnemyBullet()
}

This function only creates enemy bullets with the information coming from the server. The created bullet is then handled by the Game.update method, which checks collisions with obstacles, the enemy and our own entity.

onMove

onMove: function(data) {
  app.opponent.move(data.movement)
}

This function handles the movements of our opponent. Every time he moves, we need to update his position on our screen. Player movements are updated at the highest rate possible.

onGonePlayer

onGonePlayer: function(data) {
  alert(data.message)
}

This function only shows a message, telling the user that his opponent has just left the arena.

onPending

onPending: function(data) {
  alert(data.message)
}

This function is called when we join the game server for the first time and the server is not able to find a suitable player for our match. We're still able to move around the arena, randomly shooting.

Conclusions

OK, that's pretty much everything you need to know about how to create a simple multiplayer game. I hope this guide gave you the information you need to start creating your very first multiplayer game: the purpose of this guide wasn't to give you every line of code needed, but instead to provide a useful guideline on how to create something fun and nicely playable. I didn't cover the graphical aspect of the game, because it's completely available on the GitHub repository and is easily understandable; in any case, covering the graphics is not the main purpose of this tutorial. This is a guide that will let you understand what my simple multiplayer game does. The playable version of this project can be found here.

About the author

Marco Stagni is an Italian frontend and mobile developer with a Bachelor's Degree in Computer Engineering. He's completely in love with JavaScript, and he's trying to push his knowledge of the language in every possible direction. After a few years as a frontend and Android developer, working with both Italian startups and web agencies, he's now deepening his knowledge of game programming. His efforts are currently aimed at the completion of his biggest project: a JavaScript game engine built on top of THREE.js and Physijs (the project is still in alpha, but already downloadable via http://npmjs.org/package/wage). You can also follow him on Twitter @marcoponds or on GitHub at http://github.com/marco-ponds. Marco is a big NBA fan.

Using Cloud Applications and Containers

Xavier Bruhiere
10 Nov 2015
7 min read
We can find a certain comfort while developing an application on our local computer. We debug logs in real time. We know the exact location of everything, for we probably started it ourselves.

Make it work, make it right, make it fast - Kent Beck

Optimization is the root of all evil - Donald Knuth

So hey, we hack around until interesting results pop up (ok, that's a bit exaggerated). The point is, when hitting the production server, our code will sail a much different sea, and a much more hostile one. So, how do you connect to third-party resources? How do you get a clear picture of what is really happening under the hood? In this post we will try to answer those questions with existing tools. We won't discuss continuous integration or complex orchestration. Instead, we will focus on what it takes to wrap a typical program to make it run as a public service.

A sample application

Before diving into the real problem, we need some code to throw on remote servers. Our sample application below exposes a random key/value store over HTTP.

// app.js

// use redis for data storage
var Redis = require('ioredis');

// and express to expose a RESTful API
var express = require('express');
var app = express();

// connecting to redis server
var redis = new Redis({
  host: process.env.REDIS_HOST || '127.0.0.1',
  port: process.env.REDIS_PORT || 6379
});

// store random float at the given path
app.post('/:key', function (req, res) {
  var key = req.params.key;
  var value = Math.random();
  console.log('storing', value, 'at', key);
  res.json({set: redis.set(key, value)});
});

// retrieve the value at the given path
app.get('/:key', function (req, res) {
  console.log('fetching value at ', req.params.key);
  redis.get(req.params.key).then(function(err, result) {
    res.json({ result: result || err });
  });
});

var server = app.listen(3000, function () {
  var host = server.address().address;
  var port = server.address().port;
  console.log('Example app listening at http://%s:%s', host, port);
});

And we define the following package.json and Dockerfile:

{
  "name": "sample-app",
  "version": "0.1.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.12.4",
    "ioredis": "^1.3.6"
  },
  "devDependencies": {}
}

# Given a correct package.json, those two lines alone will properly install and run our code
FROM node:0.12-onbuild

# application's default port
EXPOSE 3000

A Dockerfile? Yeah, here is a first step toward cloud computation under control. Packing our code and its dependencies into a container will allow us to ship and launch the application with a few reproducible commands.

# download official redis image
docker pull redis

# cd to the root directory of the app and build the container
docker build -t article/sample .

# assuming we are logged in to hub.docker.com, upload the resulting image for future deployment
docker push article/sample

Enough for the preparation, time to actually run the code.

Service Discovery

The server code needs a connection to Redis. We can't hardcode it, because host and port are likely to change under different deployments. Fortunately, The Twelve-Factor App provides us with an elegant solution: the twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code. Indeed, this strategy integrates smoothly with an infrastructure composed of containers.
docker run --detach --name redis redis
# 7c5b7ff0b3f95e412fc7bee4677e1c5a22e9077d68ad19c48444d55d5f683f79

# fetch redis container virtual ip
export REDIS_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis)

# note: we don't specify REDIS_PORT as the redis container listens on the default port (6379)
docker run -it --rm --name sample --env REDIS_HOST=$REDIS_HOST article/sample
# > sample-app@0.1.0 start /usr/src/app
# > node app.js
# Example app listening at http://:::3000

In another terminal, we can check that everything is working as expected.

export SAMPLE_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' sample)
curl -X POST $SAMPLE_HOST:3000/test
# {"set":{"isFulfilled":false,"isRejected":false}}
curl -X GET $SAMPLE_HOST:3000/test
# {"result":"0.5807915225159377"}

We didn't specify any network information, but even so, the containers can communicate. This method is widely used, and projects like etcd or consul let us automate the whole process.
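If you prefer to exercise the service from Node rather than curl, a small check script could look like the following sketch. It is illustrative only; it assumes the sample container's address is exported as SAMPLE_HOST, as above.

// check.js - POST a random value at /test, then read it back
var http = require('http');

var host = process.env.SAMPLE_HOST || '127.0.0.1';

function request(method, path, callback) {
  var req = http.request({host: host, port: 3000, path: path, method: method}, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { callback(body); });
  });
  req.end();
}

request('POST', '/test', function (setReply) {
  console.log('set:', setReply);
  request('GET', '/test', function (getReply) {
    console.log('get:', getReply);
  });
});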
Monitoring

Performance can be a critical consideration for end-user experience or infrastructure costs. We should be able to identify bottlenecks or abnormal activities, and once again we will take advantage of containers and open source projects. Without modifying the running server, let's launch three new components to build a generic monitoring infrastructure.

InfluxDB is a fast time-series database where we will store container metrics. Since we properly split the application into two single-purpose containers, it will give us an interesting overview of what's going on.

# default parameters
export INFLUXDB_PORT=8086
export INFLUXDB_USER=root
export INFLUXDB_PASS=root
export INFLUXDB_NAME=cadvisor

# Start database backend
docker run --detach --name influxdb --publish 8083:8083 --publish $INFLUXDB_PORT:8086 --expose 8090 --expose 8099 --env PRE_CREATE_DB=$INFLUXDB_NAME tutum/influxdb

export INFLUXDB_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' influxdb)

cAdvisor analyzes resource usage and performance characteristics of running containers. The command flags below instruct it to use the database above to store metrics.

docker run --detach --name cadvisor --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 google/cadvisor:latest --storage_driver=influxdb --storage_driver_user=$INFLUXDB_USER --storage_driver_password=$INFLUXDB_PASS --storage_driver_host=$INFLUXDB_HOST:$INFLUXDB_PORT --log_dir=/

# A live dashboard is available at $CADVISOR_HOST:8080/containers
# We can also point the browser to $INFLUXDB_HOST:8083, with the credentials above, to inspect container data.
# Query example:
# > list series
# > select time,memory_usage from stats where container_name='cadvisor' limit 1000
# More info: https://github.com/google/cadvisor/blob/master/storage/influxdb/influxdb.go

Grafana is a feature-rich metrics dashboard and graph editor for Graphite, InfluxDB and OpenTSDB. From its web interface, we will query the database and graph the metrics cAdvisor collected and stored.

docker run --detach --name grafana -p 8000:80 -e INFLUXDB_HOST=$INFLUXDB_HOST -e INFLUXDB_PORT=$INFLUXDB_PORT -e INFLUXDB_NAME=$INFLUXDB_NAME -e INFLUXDB_USER=$INFLUXDB_USER -e INFLUXDB_PASS=$INFLUXDB_PASS -e INFLUXDB_IS_GRAFANADB=true tutum/grafana

# Get the generated login info
docker logs grafana

Now we can head to localhost:8000 and build a custom dashboard to monitor the server. I won't repeat the comprehensive documentation, but here is a query example:

# note: cadvisor stores metrics in series named 'stats'
select difference(cpu_cumulative_usage) where container_name='cadvisor' group by time 60s

Grafana's autocompletion feature shows us what we can track: CPU, memory and network usage, among other metrics. We all love screenshots and dashboards, so here is a final reward for our hard work.

Conclusion

Development best practices and a good understanding of powerful tools gave us a rigorous workflow to launch applications with confidence. To sum up:

- Containers bundle code and requirements for flexible deployment and execution isolation.
- The environment stores third-party service information, giving developers a predictable and robust way to read it.
- InfluxDB + cAdvisor + Grafana provide a complete monitoring solution independently of the project implementation.

We fulfilled our expectations, but there's room for improvement. As mentioned, service discovery could be automated, and we also omitted how to manage logs. There are many discussions around this complex subject, and we can expect new improvements in our toolbox shortly.

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.

Adding a Custom Meter to Ceilometer

John Belamaric
09 Nov 2015
9 min read
OpenStack Ceilometer is a useful tool for monitoring your instances. It includes built-in monitoring for basic instance measures like CPU utilization and interface utilization. It includes alarm evaluation and notification infrastructure that works with the Heat orchestration engine's AutoScalingGroups to enable automatic scaling of services. This all works nicely right out of the box when your measures are already built into Ceilometer. But what if you want to scale on some other criteria? For example, at Infoblox we provide a virtual instance that serves DNS, and we make the DNS queries/second rate available via SNMP. You may want to provide similar, application-level metrics for other applications - for example, you may want to poll a web application for internal metrics via HTTP. In this blog post, I will show you how to add your own meter in Ceilometer.

Let's start by getting an understanding of the components involved in a meter, and how they interact. The most basic version of the data collection service in Ceilometer consists of agents, a collector service, a message queue, and a database. Typically there is a central agent that runs on the controller, and a compute agent that runs on each compute node. The agents gather the data and publish it to the message queue. The collector receives this data and stores it in the database.

Periodically, the agent attempts to collect each meter. The frequency is controlled by the /etc/ceilometer/pipeline.yaml file, and can be configured on a per-meter basis. If a specific meter is not configured, it will use the global interval configured in pipeline.yaml, which by default is 10 minutes.

To add a new meter, we will add a package that plugs into one of the agents. Let's pick the compute agent, which runs locally on each compute node. We will build a Python package that can be installed on each compute node using pip. After installing this package, you simply restart the Ceilometer compute agent, and you will start to see the new meter appear in the database. For reference, you can take a look at https://github.com/infobloxopen/ceilometer-infoblox. This is a package that installs a number of custom, SNMP-based meters for Infoblox virtual appliances (which run the NIOS operating system, which you will see references to in class names below).

The package delivers three basic classes: a discovery class, an inspector, and a pollster. In Ceilometer terminology, "discovery" refers to the process by which the agent identifies the resources to poll, and then each "pollster" utilizes the "inspector" to generate "samples" for those resources. When the agent initiates a polling cycle, it looks through all pollster classes defined for that agent. When you define a new meter that uses a new class for polling, you specify that meter class in the [entry_points] section of your setup.cfg:

ceilometer.poll.compute =
    nios.dns.qps = ceilometer_infoblox.pollsters.dns:QPSPollster

Similarly, the discovery class should be registered in setup.cfg:

ceilometer.discover =
    nios_instances = ceilometer_infoblox.discovery:NIOSDiscovery

The pollster class really ties the pieces together. It identifies the discovery class to use by specifying one of the values defined in setup.cfg:

@property
def default_discovery(self):
    return 'nios_instances'

Then, it directly uses an inspector class that was delivered with the package. You can base your discovery, inspector, and pollster classes on those already defined as part of the core Ceilometer project.
In the case of the ceilometer-infoblox code, the discovery class is based on the core instance discovery code in Ceilometer, as we see in discovery.py:

class NIOSDiscovery(discovery.InstanceDiscovery):
    def __init__(self):
        super(NIOSDiscovery, self).__init__()

The InstanceDiscovery class uses the Nova API to query for all instances defined on this compute node. This makes things very simple, because we are interested in polling the subset of those instances that are Infoblox virtual appliances. In this case, we identify those via a metadata tag. During the discovery process we loop through all the instances, rejecting those without the tag:

def discover(self, manager, param=None):
    instances = super(NIOSDiscovery, self).discover(manager, param)
    username = cfg.CONF['infoblox'].snmp_community_or_username
    password = cfg.CONF['infoblox'].snmp_password
    port = cfg.CONF['infoblox'].snmp_port
    metadata_name = cfg.CONF['infoblox'].metadata_name

    resources = []
    for instance in instances:
        try:
            metadata_value = instance.metadata.get(metadata_name, None)
            if metadata_value is None:
                LOG.debug("Skipping instance %s; not tagged with '%s' "
                          "metadata tag." % (instance.id, metadata_name))
                continue

This code first calls the superclass to get all the Nova instances on this host, and then pulls in some necessary configuration data. The meat of the method starts with the loop through the instances; it rejects those that are not appropriately tagged. In the end, the discover method is expected to return an array of dictionary objects, containing the resources to poll, all information needed to poll them, and any metadata that should be included in the sample.

If you follow along in the code, you will see another issue that needs to be dealt with in this case. Since we are polling SNMP from the instances, we need to be able to access the IP address of the instance via UDP. But the compute agent is running in the network namespace of the host, not of the instances. This means we need a floating IP to poll the instance; the code in the _instance_ip method figures out which IP to use in the polling. This will likely be important for any application-based meter, which will face a similar problem even if you are not using SNMP. For example, if you use an HTTP method to gather data about the internal performance of a web application, you will still need to directly access the IP of the instance. If a floating IP is out of the question, the polling agent would have to use the appropriate namespace; this is possible but much more complex.

OK, let's review the process and see where we are. First, the agent looks at the installed pollster list. It finds our pollster and calls the discovery process, which produces a list of resources. The next step is to use those resources to generate samples, using the get_samples method of the pollster. This method loops through the resource list provided by the discover method, calling the inspector for each of those resources, resulting in one or more samples.

In the SNMP case, we inherit most of the functionality from the parent class, ceilometer.hardware.plugin.HardwarePollster. The get_samples method in that class handles calling the inspector and then calls a generate_samples method to convert the data returned by the inspector into a Sample object, which in turn calls generate_one_sample. This is pretty typical throughout the Ceilometer code, and makes it easy to override and customize the behavior - we simply needed to override the generate_one_sample method.
The inspector class in our case was also largely provided by the existing Ceilometer code. We simply subclass that, and define the specific SNMP OIDs to poll, and make sure that the _get_inspector call of the pollster returns our custom inspector. If you are using another method like HTTP, you may have to define a truly new inspector. So, that is all there is to it: define a discovery class, an inspector class, and a pollster class. Register those in a setup.cfg for your package, and it can be installed and start polling new data from your instances. That data will show up via the normal Ceilometer API and CLI calls – for example, here is a call returning the queries/sec meter: dev@ubuntu:~/devstack$ ceilometer sample-list -m nios.dns.qps -l 10 +--------------------------------------+--------------+-------+--------+-----------+----------------------------+ | Resource ID | Name | Type | Volume | Unit | Timestamp | +--------------------------------------+--------------+-------+--------+-----------+----------------------------+ | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T23:01:53.779767 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 303.0 | queries/s | 2015-10-23T23:01:53.779680 | | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T23:01:00.138366 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 366.0 | queries/s | 2015-10-23T23:01:00.138267 | | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T23:00:58.571506 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 366.0 | queries/s | 2015-10-23T23:00:58.571431 | | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T22:58:25.940403 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T22:58:25.940289 | | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T22:57:55.988727 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T22:57:55.988633 | +--------------------------------------+--------------+-------+--------+-----------+----------------------------+    Click here to further your OpenStack skillset by setting up VPNaaS with our new article.   About the author John Belamaric is a software and systems architect with nearly 20 years of software design and development experience, his current focus is on cloud network automation. He is a key architect of the Infoblox Cloud products, concentrating on OpenStack integration and development. He brings to this his experience as the lead architect for the Infoblox Network Automation product line, along with a wealth of networking, network management, software, and product design knowledge. He is a contributor to both the OpenStack Neutron and Designate projects. He lives in Bethesda, Maryland with his wife Robin and two children, Owen and Audrey.

article-image-overview-tdd
Packt
06 Nov 2015
11 min read
Save for later

Overview of TDD

In this article by Ravi Gupta, Harmeet Singh, and Hetal Prajapati, authors of the book Test-Driven JavaScript Development, we look at why testing is one of the most important phases in the development of any project. In the traditional software development model, testing is usually executed after the code for a piece of functionality has been written. Test-driven development (TDD) makes a big difference by writing tests before the actual code. You are going to learn TDD for JavaScript and see how this approach can be utilized in your projects. In this article, you are going to learn the following:

Complexity of web pages
Understanding TDD
Benefits of TDD and common myths

(For more resources related to this topic, see here.)

Complexity of web pages

When Tim Berners-Lee wrote the first ever web browser around 1990, it was only meant to render HTML; there was no CSS and no JavaScript. Who knew that the WWW would become the most powerful communication medium? Since then, a great number of technologies and tools have appeared to help us write code and run it for our needs. We do a lot on the Internet these days. We shop, read, learn, share, and collaborate... well, a few words are not going to suffice to explain what we do on the Internet, are they? Over time, our needs have grown to a very complex level, and so has the complexity of the code written for websites. It's not plain HTML anymore, not just some CSS styles, not a few basic JavaScript tweaks. That time has passed.

Pick any site you visit daily, open the browser's developer tools to view the source, and look at the code behind the site. What do you see? Too much code? Too many styles? Too many scripts? The JavaScript and CSS are far too large to keep inline, so we keep them in separate files, sometimes even separate folders, to stay organized.

Now, what happens before you publish all of that code live? You test it. You test each line and check that it works as intended. Well, that's a programmer's job. Zero defects: that's what every organization tries to achieve. With that goal in focus, testing comes into the picture and, more importantly, a development style that is essentially test driven. As the title of this article says, we're going to keep our focus on test-driven JavaScript development.

Understanding Test-driven development

TDD, short for Test-driven development, is a process for software development. Kent Beck, who is known for the development of TDD, refers to this as "rediscovery." Kent's answer to a question on Quora can be found at https://www.quora.com/Why-does-Kent-Beck-refer-to-the-rediscovery-of-test-driven-development.

"The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, "Of course. How else could you program?" Therefore I refer to my role as "rediscovering" TDD."

If you go looking for references to TDD, you will even find a few from 1968. It is not a new technique, although for a long time it did not receive much attention. Recently, interest in TDD has been growing, and as a result there are a number of tools on the Web. For example, Jasmine, Mocha, DalekJS, JsUnit, QUnit, and Karma are among the popular tools and frameworks.
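To make the "test first" idea concrete before we go further, here is a small Jasmine-style example. The Cart object is invented purely for illustration (it is not taken from the book); the point is that the spec exists, and fails, before the implementation does.

// Written first: a failing spec describing the behavior we want.
describe("Cart", function () {
    it("adds the price of each item to the total", function () {
        var cart = new Cart();          // Cart does not exist yet...
        cart.add({ name: "Book", price: 10 });
        cart.add({ name: "Pen", price: 2 });
        expect(cart.total()).toBe(12);  // ...so this expectation fails first.
    });
});

// Written second: the minimal implementation that makes the spec pass.
function Cart() {
    this.items = [];
}
Cart.prototype.add = function (item) {
    this.items.push(item);
};
Cart.prototype.total = function () {
    return this.items.reduce(function (sum, item) {
        return sum + item.price;
    }, 0);
};

Once the spec passes, the code can be refactored freely, with the spec acting as a safety net.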
More specifically, test-driven JavaScript development is getting popular these days. Test-driven development is a software development process, which enforces a developer to write test before production code. A developer writes a test, expects a behavior, and writes code to make the test pass. It is needless to mention that the test will always fail at the start. Need of testing To err is human. As a developer, it's not easy to find defects in our own code and often we think that our code is perfect. But there are always some chances that a defect is present in the code. Every organization or individual wants to deliver the best software they can. This is one major reason that every software, every piece of code is well tested before its release. Testing helps to detect and correct defects. There are a number of reasons why testing is needed. They are as follows: To check if the software is functioning as per the requirements There will not be just one device or one platform to run your software The end user will perform an action as a programmer you never expected There was a study conducted by National Institute of Standards and Technology (NIST) in 2002, which reported that software bugs cost the U.S. economy around $60 billion annually. With better testing, more than one-third of the cost could be avoided. The earlier the defect is found, the cheaper it is to fix it. A defect found post release would cost 10-100 times more to fix than if it had already been detected and fixed. The report of the study performed by NIST can be found at http://www.nist.gov/director/planning/upload/report02-3.pdf. If we draw a curve for the cost, it comes as an exponential when it comes to cost. The following figure clearly shows that the cost increases as the project matures with time. Sometimes, it's not possible to fix a defect without making changes in the architecture. In those cases, the cost, sometimes, is so much that developing the software from scratch seems like a better option. Benefits of TDD and common myths Every methodology has its own benefits and myths among people. The following sections will analyze the key benefits and most common myths of TDD. Benefits TDD has its own advantages over regular development approaches. There are a number of benefits, which help make a decision of using TDD over the traditional approach. Automated testing: If you did see a website code, you know that it's not easy to maintain and test all the scripts manually and keep them working. A tester may leave a few checks, but automated tests won't. Manual testing is error prone and slow. Lower cost of overall development: With TDD, the number of debugs is significantly decreased. You develop some code; run tests, if you fail, re-doing the development is significantly faster than debugging and fixing it. TDD aims at detecting defect and correcting them at an early stage, which costs much cheaper than detecting and correcting at a later stage or post release. Also, now debugging is very less frequent and significant amount of time is saved. With the help of tools/test runners like Karma, JSTestDriver, and so on, running every JavaScript tests on browser is not needed, which saves significant time in validation and verification while the development goes on. Increased productivity: Apart from time and financial benefits, TDD helps to increase productivity since the developer becomes more focused and tends to write quality code that passes and fulfills the requirement. 
Clean, maintainable, and flexible code: Since tests are written first, production code is often very neat and simple. When a new piece of code is added, all the tests can be run at once to see if anything failed with the change. Since we try to keep our tests atomic, and our methods also address a single goal, the code automatically becomes clean. At the end of the application development, there would be thousands of test cases which will guarantee that every piece of logic can be tested. The same test cases also act as documentation for users who are new to the development of system, since these tests act as an example of how the code works. Improved quality and reduced bugs: Complex codes invite bugs. Developers when change anything in neat and simple code, they tend to leave less or no bugs at all. They tend to focus on purpose and write code to fulfill the requirement. Keeps technical debt to minimum: This is one of the major benefits of TDD. Not writing unit tests and documentation is a big part, which increases technical debt for a software/project. Since TDD encourages you to write tests first, and if they are well written, they act as documentation, you keep technical debt for these to minimum. As Wikipedia says, A technical debt can be defined as tasks to be performed before a unit can be called complete. If the debt is not repaid, interest also adds up and makes it harder to make changes at a later stage. More about Technical debt can be found at https://en.wikipedia.org/wiki/Technical_debt. Myths Along with the benefits, TDD has some myths as well. Let's check few of them: Complete code coverage: TDD enforces to write tests first and developers write minimum amount of code to pass the test and almost 100% code coverage is done. But that does not guarantee that nothing is missed and the code is bug free. Code coverage tools do not cover all the paths. There can be infinite possibilities in loops. Of course it's not possible and feasible to check all the paths, but a developer is supposed to take care of major and critical paths. A developer is supposed to take care of business logic, flow, and process code most of the times. No need to test integration parts, setter-getter methods for properties, configurations, UI, and so on. Mocking and stubbing is to be used for integrations. No need of debugging the code: Though test-first development makes one think that debugging is not needed, but it's not always true. You need to know the state of the system when a test failed. That will help you to correct and write the code further. No need of QA: TDD cannot always cover everything. QA plays a very important role in testing. UI defects, integration defects are more likely to be caught by a QA. Even though developers are excellent, there are chances of errors. QA will try every kind of input and unexpected behavior that even a programmer did not cover with test cases. They will always try to crash the system with random inputs and discover defects. I can code faster without tests and can also validate for zero defect: While this may stand true for very small software and websites where code is small and writing test cases may increase overall time of development and delivery of the product. But for bigger products, it helps a lot to identify defects at a very early stage and gives a chance to correct at a very low cost. As seen in the previous screenshots of cost of fixing defects for phases and testing types, the cost of correcting a defect increases with time. 
Truly, whether TDD is required for a project or not, it depends on context. TDD ensures a good design and architecture: TDD encourages developers to write quality code, but it is not a replacement for good design practice and quality code. Will a team of developers be enough to ensure a stable and scalable architecture? Design should still be done by following the standard practices. You need to write all tests first: Another myth says that you need to write all tests first and then the actual production code. Actually, generally an iterative approach is used. Write some tests first, then some code, run the tests, fix the code, run the tests, write more tests, and so on. With TDD, you always test parts of software and keep developing. There are many myths, and covering all of them is not possible. The point is, TDD offers developers a better opportunity of delivering quality code. TDD helps organizations by delivering close to zero-defect products. Summary In this article, you learned about what TDD is. You learned about the benefits and myths of TDD. Resources for Article: Further resources on this subject: Understanding outside-in [article] Jenkins Continuous Integration [article] Understanding TDD [article]

article-image-transforming-data-pivot-transform-data-services
Packt
05 Nov 2015
7 min read
Save for later

Transforming data with the Pivot transform in Data Services

Transforming data with the Pivot transform in Data Services In this article by Iwan Shomnikov, author of the book SAP Data Services 4.x Cookbook, you will learn that the Pivot transform belongs to the Data Integrator group of transform objects in Data Services, which are usually all about the generation or transformation (meaning change in the structure) of data. Simply inserting the Pivot transform allows you to convert columns into rows. Pivot transformation increases the amount of rows in a dataset as for every column converted into a row, an extra row is created for every key (the non-pivoted column) pair. Converted columns are called pivot columns. Pivoting rows to columns or columns to rows is quite a common transformation operation in data migration tasks, and traditionally, the simplest way to perform it with the standard SQL language is to use the decode() function inside your SELECT statements. Depending on the complexity of the source and target datasets before and after pivoting, the SELECT statement can be extremely heavy and difficult to understand. Data Services provide a simple and flexible way of pivoting data inside the Data Services ETL code using the Pivot and Reverse_Pivot dataflow object transforms. The following steps show how exactly you can create, configure, and use these transforms in Data Services in order to pivot your data. (For more resources related to this topic, see here.) Getting ready We will use the SQL Server database for the source and target objects, which will be used to demonstrate the Pivot transform available in Data Services. The steps in this section describe the preparation of a source table and the data required in it for a demonstration of the Pivot transform in the Data Services development environment: Create a new database or import the existing test database, AdventureWorks OLTP, available for download and free test use at https://msftdbprodsamples.codeplex.com/releases/view/55330. We will download the database file from the preceding link and deploy it to SQL Server, naming our database AdventureWorks_OLTP. Run the following SQL statements against the AdventureWorks_OLTP database to create a source table and populate it with data: create table Sales.AccountBalance ( [AccountID] integer, [AccountNumber] integer, [Year] integer, [Q1] decimal(10,2), [Q2] decimal(10,2), [Q3] decimal(10,2), [Q4] decimal(10,2)); -- Row 1 insert into Sales.AccountBalance ([AccountID],[AccountNumber],[Year],[Q1],[Q2],[Q3],[Q4]) values (1,100,2015,100.00,150.00,120.00,300.00); -- Row 2 insert into Sales.AccountBalance ([AccountID],[AccountNumber],[Year],[Q1],[Q2],[Q3],[Q4]) values (2,100,2015,50.00,350.00,620.00,180.00); -- Row 3 insert into Sales.AccountBalance ([AccountID],[AccountNumber],[Year],[Q1],[Q2],[Q3],[Q4]) values (3,200,2015,333.33,440.00,12.00,105.50); So, the source table would look similar to the one in the following figure: Create an OLTP datastore in Data Services, referencing the AdventureWorks_OLTP database and AccountBalance import table created in the previous step in it. Create the DS_STAGE datastore in Data Services pointing to the same OLTP database. We will use this datastore as a target to stage in our environment, where we insert the resulting pivoted dataset extracted from the OLTP system. How to do it… This section describes the ETL development process, which takes place in the Data Services Designer application. 
We will not create any workflow or script object in our test jobs; we will keep things simple and have only one batch job object with a dataflow object inside it performing the migration and pivoting of data from the ACCOUNTBALANCE source table of our OLTP database. Here are the steps to do this: Create a new batch job object and place the new dataflow in it, naming it DF_OLTP_Pivot_STAGE_AccountBalance. Open the dataflow in the workspace window to edit it, and place the ACCOUNTBALANCE source table from the OLTP datastore created in the preparation steps. Link the source table to the Extract query transform, and propagate all the source columns to the target schema. Place the new Pivot transform object in a dataflow and link the Extract query to it. The Pivot transform can be found by navigating to Local Object Library | Transforms | Data Integrator. Open the Pivot transform in the workspace to edit it, and configure its parameters according to the following screenshot:   Close the Pivot transform and link it to another query transform named Prepare_to_Load. Propagate all the source columns to the target schema of the Prepare_to_Load transform, and finally link it to the target ACCOUNTBALANCE template table created in the DS_STAGE datastore. Choose the dbo schema when creating the ACCOUNTBALANCE template table object in this datastore. Before executing the job, open the Prepare_to_Load query transform in a workspace window, double-click on the PIVOT_SEQ column, and select the Primary key checkbox to specify the additional column as being the primary key column for the migrated dataset. Save and run the job, selecting the default execution options. Open the dataflow again and import the target table, putting the Delete data from table before loading flag in the target table loading options. How it works… Pivot columns are columns whose values are merged in one column after the pivoting operation, thus producing an extra row for every pivoted column. Non-pivot columns are columns that are not affected by the pivot operation. As you can see, the pivoting operation denormalizes the dataset, generating more rows. This is why ACCOUNTID does not define the uniqueness of the record anymore, and we have to specify the extra key column, PIVOT_SEQ.   So, you may wonder, why pivot? Why don't we just use data as it is and perform the required operation on data from the columns Q1-Q4? The answer, in the given example, is very simple—it is much more difficult to perform an aggregation when the amounts are spread across different columns. Instead of summarizing using a single column with the sum(AMOUNT) function, we would have to write the sum(Q1 + Q2 + Q3 + Q4) expression every time. Quarters are not the worst part yet; imagine a situation where a table has huge amounts of data stored in columns defining month periods and you have to filter data by these time periods. Of course, the opposite case exists as well; storing data across multiple columns instead of just in one is justified. In this case, if your data structure is not similar to this, you can use the Reverse_Pivot transform, which does exactly the opposite—it converts rows into columns. Look at the following example of a Reverse_Pivot configuration: Reverse pivoting or transformation of rows into columns leads us to introduce another term—Pivot axis column. This is a column that holds categories defining different columns after a reverse pivot operation. It corresponds to the Header column option in the Pivot transform configuration. 
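To see the benefit in SQL terms, here is roughly how the same yearly total per account would be written against the original table and against the pivoted staging table produced by the dataflow. The second query assumes that AccountNumber and Year were kept as non-pivot columns and that the pivoted value column is named AMOUNT, as referenced above; adjust the names to match the output schema your Pivot transform actually generates.

-- Against the original table: every quarter column has to be spelled out.
SELECT AccountNumber,
       [Year],
       SUM(Q1 + Q2 + Q3 + Q4) AS YearTotal
FROM   Sales.AccountBalance
GROUP BY AccountNumber, [Year];

-- Against the pivoted staging table: one amount column, so aggregation
-- (and filtering by individual quarters) stays simple.
SELECT AccountNumber,
       [Year],
       SUM(AMOUNT) AS YearTotal
FROM   dbo.ACCOUNTBALANCE
GROUP BY AccountNumber, [Year];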
Summary

As you have seen in this article, the Pivot and Reverse_Pivot transform objects available in Data Services Designer are a simple and easily configurable way to pivot data of any complexity. The GUI of the Designer tool makes the ETL processes developed in Data Services easy to maintain and keeps them readable. If you make any changes to the pivot configuration options, Data Services automatically regenerates the output schema of the pivot transforms accordingly.

Resources for Article:

Further resources on this subject: Sabermetrics with Apache Spark [article] Understanding Text Search and Hierarchies in SAP HANA [article] Meeting SAP Lumira [article]

article-image-e-commerce-mean
Packt
05 Nov 2015
8 min read
Save for later

E-commerce with MEAN

These days e-commerce platforms are widely available. However, as common as they might be, there are instances that after investing a significant amount of time learning how to use a specific tool you might realize that it can not fit your unique e-commerce needs as it promised. Hence, a great advantage of building your own application with an agile framework is that you can quickly meet your immediate and future needs with a system that you fully understand. Adrian Mejia Rosario, the author of the book, Building an E-Commerce Application with MEAN, shows us how MEAN stack (MongoDB, ExpressJS, AngularJS and NodeJS) is a killer JavaScript and full-stack combination. It provides agile development without compromising on performance and scalability. It is ideal for the purpose of building responsive applications with a large user base such as e-commerce applications. Let's have a look at a project using MEAN. (For more resources related to this topic, see here.) Understanding the project structure The applications built with the angular-fullstack generator have many files and directories. Some code goes in the client, other executes in the backend and another portion is just needed for development cases such as the tests suites. It’s important to understand the layout to keep the code organized. The Yeoman generators are time savers! They are created and maintained by the community following the current best practices. It creates many directories and a lot of boilerplate code to get you started. The numbers of unknown files in there might be overwhelming at first. On reviewing the directory structure created, we see that there are three main directories: client, e2e and server: The client folder will contain the AngularJS files and assets. The server directory will contain the NodeJS files, which handles ExpressJS and MongoDB. Finally, the e2e files will contain the AngularJS end-to-end tests. File Structure This is the overview of the file structure of this project: meanshop ├── client │ ├── app - App specific components │ ├── assets - Custom assets: fonts, images, etc… │ └── components - Non-app specific/reusable components │ ├── e2e - Protractor end to end tests │ └── server ├── api - Apps server API ├── auth - Authentication handlers ├── components - App-wide/reusable components ├── config - App configuration │ └── local.env.js - Environment variables │ └── environment - Node environment configuration └── views - Server rendered views Components You might be already familiar with a number of tools used in this project. If that’s not the case, you can read the brief description here. Testing AngularJS comes with a default test runner called Karma and we are going going to leverage its default choices: Karma: JavaScript unit test runner. Jasmine: It's a BDD framework to test JavaScript code. It is executed with Karma. Protractor: They are end-to-end tests for AngularJS. These are the highest levels of testing that run in the browser and simulate user interactions with the app. Tools The following are some of the tools/libraries that we are going to use in order to increase our productivity: GruntJS: It's a tool that serves to automate repetitive tasks, such as a CSS/JS minification, compilation, unit testing, and JS linting. Yeoman (yo): It's a CLI tool to scaffold web projects., It automates directory creation and file creation through generators and also provides command lines for common tasks. 
Travis CI: Travis CI is a continuous integration tool that runs your test suites every time you commit to the repository. EditorConfig: EditorConfig is an IDE plugin that loads the configuration from a file .editorconfig. For example, you can set indent_size = 2 indent with spaces, tabs, and so on. It’s a time saver and helps maintain consistency across multiple IDEs/teams. SocketIO: It's a library that enables real-time bidirectional communication between the server and the client. Bootstrap: It's a frontend framework for web development. We are going to use it to build the theme thought-out for this project. AngularJS full-stack: It's a generator for Yeoman that will provide useful command lines to quickly generate server/client code and deploy it to Heroku or OpenShift. BabelJS: It's a js-tojs compiler that allows to use features from the next generation JavaScript (ECMAScript 6), currently without waiting for browser support. Git: It's a distributed code versioning control system. Package managers We have package managers for our third-party backend and frontend modules. They are as follows: NPM: It is the default package manager for NodeJS. Bower: It is the frontend package manager that can be used to handle versions and dependencies of libraries and assets used in a web project. The file bower.json contains the packages and versions to install and the file .bowerrc contains the path where those packages are to be installed. The default directory is ./bower_components. Bower packages If you have followed the exact steps to scaffold our app you will have the following frontend components installed: angular angular-cookies angular-mocks angular-resource angular-sanitize angular-scenario angular-ui-router angular-socket-io angular-bootstrap bootstrap es5-shim font-awesome json3 jquery lodash Previewing the final e-commerce app Let’s take a pause from the terminal. In any project, before starting coding, we need to spend some time planning and visualizing what we are aiming for. That’s exactly what we are going to do, draw some wireframes that walk us through the app. Our e-commerce app, MEANshop, will have three main sections: Homepage Marketplace Back-office Homepage The home page will contain featured products, navigation, menus, and basic information, as you can see in the following image: Figure 2 - Wireframe of the homepage Marketplace This section will show all the products, categories, and search results. Figure 3 - Wireframe of the products page Back-office You need to be a registered user to access the back office section, as shown in the following figure:   Figure 4 - Wireframe of the login page After you login, it will present you with different options depending on the role. If you are the seller, you can create new products, such as the following: Figure 5 - Wireframe of the Product creation page If you are an admin, you can do everything that a seller does (create products) plus you can manage all the users and delete/edit products. Understanding requirements for e-commerce applications There’s no better way than to learn new concepts and technologies while developing something useful with it. This is why we are building a real-time e-commerce application from scratch. However, there are many kinds of e-commerce apps. In the following sections we will delimit what we are going to do. Minimum viable product for an e-commerce site Even the largest applications that we see today started small and grew their way up. 
The minimum viable product (MVP) is strictly the minimum that an application needs to work on. In the e-commerce example, it will be: Add products with title, price, description, photo, and quantity. Guest checkout page for products. One payment integration (for example, Paypal). This is strictly the minimum requirement to get an e-commerce site working. We are going to start with these but by no means will we stop there. We will keep adding features as we go and build a framework that will allow us to extend the functionality with high quality. Defining the requirements We are going to capture our requirements for the e-commerce application with user stories. A user story is a brief description of a feature told from the perspective of a user where he expresses his desire and benefit in the following format: As a <role>, I want <desire> [so that <benefit>] User stories and many other concepts were introduced with the Agile Manifesto. Learn more at https://en.wikipedia.org/wiki/Agile_software_development Here are the features that we are planning to develop through this book that have been captured as user stories: As a seller, I want to create products. As a user, I want to see all published products and its details when I click on them. As a user, I want to search for a product so that I can find what I’m looking for quickly. As a user, I want to have a category navigation menu so that I can narrow down the search results. As a user, I want to have real-time information so that I can know immediately if a product just got sold-out or became available. As a user, I want to check out products as a guest user so that I can quickly purchase an item without registering. As a user, I want to create an account so that I can save my shipping addresses, see my purchase history, and sell products. As an admin, I want to manage user roles so that I can create new admins, sellers, and remove seller permission. As an admin, I want to manage all the products so that I can ban them if they are not appropriate. As an admin, I want to see a summary of the activities and order status. All these stories might seem verbose but they are useful in capturing requirements in a consistent way. They are also handy to develop test cases against it. Summary Now that we have a gist of an e-commerce app with MEAN, lets build a full-fledged e-commerce project with Building an E-Commerce Application with MEAN. Resources for Article:   Further resources on this subject: Introduction to Couchbase [article] Protecting Your Bitcoins [article] DynamoDB Best Practices [article]
article-image-task-automation
Packt
05 Nov 2015
33 min read
Save for later

Task Automation

In this article by Kerri Shotts, author of the Mastering PhoneGap Mobile Application Development, you will learn the following topics: Logology, our demonstration app Why Gulp for Task Automation Setting up your app's directory structure Installing Gulp Creating your first Gulp configuration file Performing substitutions Executing Cordova tasks Managing version numbers Supporting ES2015 Linting your code Minifying/uglifying your code (For more resources related to this topic, see here.) Before we begin Before you continue with this article, ensure that you have the following tools installed. The version that was used in this article is listed as well, for your reference: Git (http://git-scm.com, v2.8.3) Node.js (http://nodejs.org, v0.12.2) npm, short for Node Package Manager (typically installed with Node.js, v2.7.4) Cordova 5.x (http://cordova.apache.org, v5.2.0) or PhoneGap 5.x (http://www.phonegap.com, v5.2.2) You'll need to execute the following in each directory in order to build the projects: # On Linux / Mac OS X $ npm install && gulp init % On Windows >npm install > gulp init If you're not intending to build the sample application in the code bundle, be sure to create a new directory that can serve as a container for all the work you'll be doing in this article.. Just remember, each time you create a new directory and copy the prior version to it, you'll need to execute npm install and gulp init to set things up. About Logology I'm calling it Logology—and if you're familiar with any Greek words, you might have already guessed what the app will be: a dictionary. Now, I understand that this is not necessarily the coolest app, but it is sufficient for our purposes. It will help you learn how advanced mobile development is done. By the time we're done, the app will have the following features: Search: The user will be able to search for a term Browse: The user will be able to browse the dictionary Responsive design: The app will size itself appropriately to any display size Accessibility: The app will be usable even if the user has visual difficulties Persistent storage: The app will persist settings and other user-generated information File downloads: The app will be able to download new content Although the app sounds relatively simple, it's complex enough to benefit from task automation. Since it is useful to have task automation in place from the very beginning, we'll install Gulp and verify that it is working with some simple files first, before we really get to the meat of implementing Logology. As such, the app we build in this article is very simple: it exists to verify that our tasks are working correctly. Once we have verified our workflow, we can go on to the more complicated project at hand. You may think that working through this is very time-consuming, but it pays off in the long run. Once you have a workflow that you like, you can take that workflow and apply it to the other apps you may build in the future. This means that future apps can be started almost immediately (just copy the configuration from a previous app). Even if you don't write other apps, the time you saved from having a task runner outweigh the initial setup time. Why Gulpfor task automation? Gulp (http://gulpjs.com) is a task automation utility using the Node.js platform. Unlike some other task runners, one configures Gulp by writing the JavaScript code. The configuration for Gulp is just like any other JavaScript file, which means that if you know JavaScript, you can start defining tasks quickly. 
Gulp also uses the concept of "streams" (again, from Node.js). This makes Gulp very efficient. Plugins can be inserted within these steams to perform many different transformations, including beautification or uglification, transpilation (for example, ECMAScript 6 to ECMAScript 2015), concatenation, packaging, and much more. If you've performed any sort of piping on the command line, Gulp should feel familiar, because it operates on a similar concept. The output from one process is piped to the next process, which performs any number of transformations, and so on, until the final output is written to another location. Gulp also tries to run as many dependent tasks in parallel as possible. Ideally, this makes running Gulp tasks faster, although it really depends on how your tasks are structured. Other task runners such as Grunt will perform their task steps in sequence, which may result in slower output, although tracing the steps from input to output may be easier to follow when the steps are performed sequentially. That's not to say that Gulp is the best task runner—there are many that are quite good, and you may find that you prefer one of them over Gulp. The skills you learn in this article can easily be transferred to other task running and build systems. Here are some other task runners that are useful: Grunt (http://www.gruntjs.com): This configuration is specified through settings, not code. Tasks are performed sequentially. Cake (http://coffeescript.org/documentation/docs/cake.html): This uses CoffeeScript and the configuration is specified via code, such as Gulp. If you like using CoffeeScript, you might prefer this over Gulp. Broccoli (https://github.com/broccolijs/broccoli): This also uses the configuration through code. Installing Gulp Installing Gulp is easy, but is actually a two-step process. The first step is to install Gulp globally. This installs the command-line utility, but Gulp actually won't work without also being installed locally within our project. If you aren't familiar with Node.js, packages can be installed locally and/or globally. A locally installed package is local to the project's root directory, while a globally installed package is specific to the developer's machine. Project dependencies are tracked in package.json, which makes it easy to replicate your development setup on another machine. Assuming you have Node.js installed and package.json created in your project directory, the installation of Gulp will go very easily. Be sure to be positioned in your project's root directory and then execute the following: $ npm install -g gulp $ npm install --save-dev gulp If you receive an error while running these commands on OS X, you may need to run them with sudo. For example: sudo install -g gulp. You can usually ignore any WARN messages. It's a good idea to be positioned in your project's root directory any time you execute an npm or gulp command. On Linux and OS X, these commands generally will locate the project's root directory automatically, but this isn't guaranteed on all platforms, so it's better to be safe than sorry. That's it! Gulp itself is very easy to install, but most workflows will require additional plugins that work with Gulp. In addition, we'll also install Cordova dependencies for this project. First, let's install the Cordova dependencies: $ npm install --save-dev cordova-lib cordova-ios cordova-android cordova-lib allows us to programmatically interact with Cordova. 
We can create projects, build them, and emulate them—everything we can do with the Cordova command line we can do with cordova-lib. cordova-ios and cordova-android refer to the iOS and Android platforms that cordova platform add ios android would add. We've made them dependencies for our project, so we can easily control the version we build with. While starting a new project, it's wise to start with the most recent version of Cordova and the requisite platforms. Once you begin, it's usually a good practice to stick with a specific platform version unless there are serious bugs or the like. Next, let's install the Gulp plugins we'll need: $ npm install --save-dev babel-eslint cordova-android cordova-ios cordova-lib cordova-tasks gulp gulp-babel gulp-bump gulp-concat gulp-eslint gulp-jscs gulp-notify gulp-rename gulp-replace-task gulp-sourcemaps gulp-uglify gulp-util               merge-stream rimraf These will take a few moments to install; but when you're done, take a look in package.json. Notice that all the dependencies we added were also added to the devDependencies. This makes it easy to install all the project's dependencies at a later date (say, on a new machine) simply by executing npm install. Before we go on, let's quickly go over what each of the above utility does. We'll go over them in more detail as we progress through the remainder of this article. gulp-babel: Converts ES2015 JavaScript into ES5. If you aren't familiar with ES2015, it has several new features and an improved syntax that makes writing mobile apps that much easier. Unfortunately, because most browsers don't yet natively support the ES2015 features and syntax, it must be transpiled to ES5 syntax. Of course, if you prefer other languages that can be compiled to ES5 JavaScript, you could use those as well (these would include CoffeeScript and similar). gulp-bump: This small utility manages version numbers in package.json. gulp-concat: Concatenates streams together. We can use this to bundle files together. gulp-jscs: Performs the JavaScript code style checks against your code. Supports ES2015. gulp-eslint: Lints your JavaScript code. Supports ES2015. babel-eslint: Provides ES2015 support to gulp-eslint. gulp-notify: This is an optional plugin, but it is handy especially when some of your tasks take a few seconds to run. This plugin will send a notification to your computer's notification panel when something of import occurs. If the plugin can't send it to your notification panel, it logs to the console. gulp-rename: Renames streams. gulp-replace-task: Performs search and replace within streams. gulp-sourcemaps: When transpiling ES2015 to ES5, it can be helpful to have a map between the original source and the transpiled source. This plugin creates them as a part of the workflow. gulp-uglify: Uglifies/minifies code. While useful for code obfuscation, it also reduces the size of your code. gulp-util: Additional utilities for Gulp, such as logging. merge-stream: Merges multiple tasks. rimraf: Easy file deletion. Akin to rm on the command line. Creating your first Gulp configuration file Gulp tasks are defined by the contents of the project's gulpfile.js file. This is a JavaScript program, so the same skills you have with JavaScript will apply here. Furthermore, it's executed by Node.js, so if you have any Node.js knowledge, you can use it to your advantage. This file should be placed in the root directory of your project, and must be named gulpfile.js. 
The first few lines of your Gulp configuration file will require the Gulp plugins that you'll need in order to complete your tasks. The following lines then specify how to perform various tasks. For example, a very simple configuration might look like this: var gulp = require("gulp"); gulp.task("copy-files", function () { gulp.src(["./src/**/*"])       .pipe(gulp.dest("./build")); }); This configuration only performs one task: it moves all the files contained within src/ to build/. In many ways, this is the simplest form of a build workflow, but it's a bit too simple for our purposes. Note the pattern we use to match all the files. If you need to see the documentation on what patterns are supported, see https://www.npmjs.com/package/glob. To execute the task, one can execute gulp copy-files. Gulp would then execute the task and copy all the files from src/ to build/. What makes Gulp so powerful is the concept of task composition. Tasks can depend on any number of other tasks, and those tasks can depend on yet more tasks. This makes it easy to create complex workflows out of simpler pieces. Furthermore, each task is asynchronous, so it is possible for many tasks with no shared dependencies to operate in parallel. Each task, as you can see in the prior code, is comprised of selecting a series of source files (src()), optionally performing some additional processing on each file (via pipe()), and then writing those files to a destination path (dest()). If no additional processing is specified (as in the prior example), Gulp will simply copy the files that match the wildcard pattern. The beauty of streams, however, is that one can execute any number of transformations before the final data is saved to storage, and so workflows can become very complex. Now that you've seen a simple task, let's get into some more complicated tasks in the next section. How to execute Cordova tasks It's tempting to use the Cordova command-line interface directly, but there's a problem with this: there's no great way to ensure that what you write will work across multiple platforms. If you are certain you'll only work with a specific platform, you can go ahead and execute shell commands instead; but what we're going to do is a bit more flexible. The code in this section is inspired by https://github.com/kamrik/CordovaGulpTemplate. The Cordova CLI is really just a thin wrapper around the cordova-lib project. Everything the Cordova CLI can do, cordova-lib can do as well. Because the Cordova project will be a build artifact, we need to be able to create a Cordova project in addition to building the project. We'll also need to emulate and run the app. 
To do this, we first require cordova-lib at the top of our Gulp configuration file (following the other require statements): var cordovaLib = require("cordova-lib"); var cordova = cordovaLib.cordova.raw; var rimraf = require("rimraf"); Next, let's create the code to create a new Cordova project in the build directory: var cordovaTasks = {     // CLI: cordova create ./build com.example.app app_name     //              --copy-from template_path create: function create() { return cordova.create(BUILD_DIR, pkg.cordova.id,                               pkg.cordova.name,                 { lib: { www: { url: path.join(__dirname,                                     pkg.cordova.template), link: false                     }                   }                 });     } } Although it's a bit more complicated than cordova create is on the command line, you should be able to see the parallels. The lib object that is passed is simply to provide a template for the project (equivalent to --copy-from on the command line). In our case, package.json specifies that this should come from the blank/ directory. If we don't do this, all our apps would be created with the sample Hello World app that Cordova installs by default. Our blank project template resides in ../blank, relative from the project root. Yours may reside elsewhere (since you're apt to reuse the same template), so package.json can use whatever path you need. Or, you might want the template to be within your project's root; in which case, package.json should use a path inside your project's root directory. We won't create a task to use this just yet — we need to define several other methods to build and emulate Cordova apps: var gutil = require("gulp-util"); var PLATFORM = gutil.env.platform ? gutil.env.platform :"ios";                                                   // or android var BUILD_MODE = gutil.env.mode ? gutil.env.mode :"debug";                                                   // or release var BUILD_PLATFORMS = (gutil.env.for ? gutil.env.for                                     : "ios,android").split(","); var TARGET_DEVICE = gutil.env.target ? "--target=" + gutil.env.target :""; var cordovaTasks = { create: function create() {/* as above */}, cdProject: function cdProject() { process.chdir(path.join(BUILD_DIR, "www"));     }, cdUp: function cdUp() { process.chdir("..");     }, copyConfig: function copyConfig() { return gulp.src([path.join([SOURCE_DIR], "config.xml")])                 .pipe(performSubstitutions())                 .pipe(gulp.dest(BUILD_DIR"));     },     // cordova plugin add ... addPlugins: function addPlugins() { cordovaTasks.cdProject(); return cordova.plugins("add", pkg.cordova.plugins)             .then(cordovaTasks.cdUp);     },     // cordova platform add ... 
addPlatforms: function addPlatforms() { cordovaTasks.cdProject(); function transformPlatform(platform) { return path.join(__dirname, "node_modules", "cordova-" + platform);         } return cordova.platforms("add", pkg.cordova.platforms.map(transformPlatform))               .then(cordovaTasks.cdUp);     },     // cordova build <platforms> --release|debug     //                           --target=...|--device build: function build() { var target = TARGET_DEVICE; cordovaTasks.cdProject(); if (!target || target === "" || target === "--target=device") { target = "--device";       } return cordova.build({platforms:BUILD_PLATFORMS, options: ["--" + BUILD_MODE, target] })             .then(cordovaTasks.cdUp);     },     // cordova emulate ios|android --release|debug emulate: function emulate() { cordovaTasks.cdProject(); return cordova.emulate({ platforms: [PLATFORM], options: ["--" + BUILD_MODE,                                         TARGET_DEVICE] })             .then(cordovaTasks.cdUp);     },     // cordova run ios|android --release|debug run: function run() { cordovaTasks.cdProject(); return cordova.run({ platforms: [PLATFORM], options: ["--" + BUILD_MODE, "--device",                                     TARGET_DEVICE] })             .then(cordovaTasks.cdUp);     }, init: function() { return this.create()             .then(cordovaTasks.copyConfig)             .then(cordovaTasks.addPlugins)             .then(cordovaTasks.addPlatforms);     } }; Place cordovaTasks prior to projectTasks in your Gulp configuration. If you aren't familiar with promises, you might want to learn more about them. http://www.html5rocks.com/en/tutorials/es6/promises/ is a fantastic resource. Before we explain the preceding code, there's another change you need to make, and that's to projectTasks.copyConfig, because we move copyConfig to cordovaTasks: var projectTasks = { …, copyConfig: function() { return cordovaTasks.copyConfig();     }, ... } Most of the earlier mentioned tasks should be fairly self-explanatory — they correspond directly with their Cordova CLI counterparts. A few, however, need a little more explanation. cdProject / cdUp: These change the current working directory. All the cordova-lib commands after create need to be executed from within the Cordova project directory, not our project's root directory. You should notice them in several of the tasks. addPlatforms: The platforms are added directly from our project's dependencies, rather than from the Cordova CLI. This allows us to control the platform versions we are using. As such, addPlatforms has to do a little more work to specify the actual directory name of each platform. build: This executes the cordova build command. By default, the CLI will build every platform, but it's possible that we might want to control the platforms that are built, hence the use of BUILD_PLATFORMS. On iOS, the build for an emulator is different than the build for a physical device, so we also need a way to specify that, which is what TARGET_DEVICE is for. This will look for emulators with the name specified for the TARGET_DEVICE, but we might want to build for a physical device; in which case, we will look for device (or no target specified at all) and switch over to the --device flag, which forces Cordova to build for a physical device. init: This does the hard work of creating the Cordova project, copying the configuration file (and performing substitutions), adding plugins to the Cordova project, and then adding the platforms. 
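One piece referenced above but not shown in this excerpt is the performSubstitutions() helper that copyConfig pipes through. As a rough sketch of what such a helper could look like using the gulp-replace-task plugin we installed earlier (the placeholder syntax and pattern list are assumptions for illustration, not the implementation used by the cordova-tasks package):

var replace = require("gulp-replace-task");   // already required in the gulpfile
var pkg = require("./package.json");          // likewise available as pkg

// Replaces {{{NAME}}} and {{{VERSION}}} placeholders in any file passed
// through the stream with values taken from package.json.
function performSubstitutions() {
    return replace({
        patterns: [
            { match: /{{{NAME}}}/g, replacement: pkg.cordova.name },
            { match: /{{{VERSION}}}/g, replacement: pkg.version }
        ]
    });
}

Additional placeholders (the app ID for config.xml, for example) can be added to the patterns array in the same way. The {{{NAME}}} and {{{VERSION}}} tokens are the ones used in the sample app code later in this article.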
Now is also a good time to mention that we can specify various settings with switches on the Gulp command line. In the earlier snippet, we're supporting the use of --platform to specify the platform to emulate or run, --mode to specify the build mode (debug or release), --for to determine what platforms Cordova will build for, and --target for specifying the target device. The code specifies sane defaults if these switches aren't specified, but they also allow the developer extra control over the workflow, which is very useful. For example, we'll be able to use commands like these: $ gulp build --for ios,android --target device $ gulp emulate --platform ios --target iPhone-6s $ gulp run --platform ios --mode release Next, let's write the code to actually perform various Cordova tasks — it's pretty simple: var projectTasks = {     ..., init: function init() { return cordovaTasks.init();     }, emulateCordova: function emulateCordova() { return cordovaTasks.emulate();     }, runCordova: function runCordova() { return cordovaTasks.run();     }, buildCordova: function buildCordova() { return cordovaTasks.build();     }, clean: function clean(cb) { rimraf(BUILD_DIR, cb);     },     ... }   … gulp.task("clean", projectTasks.clean); gulp.task("init", ["clean"], projectTasks.init); gulp.task("build", ["copy"], projectTasks.buildCordova); gulp.task("emulate", ["copy"], projectTasks.emulateCordova); gulp.task("run", ["copy"], projectTasks.runCordova); There's a catch with the cordovaTasks.create method — it will fail if anything is already in the build/ directory. As you can guess, this could easily happen, so we also created a projectTasks.clean method. This clean method uses rimraf to delete a specified directory. This is equivalent to using rm -rf build. We then build a Gulp task named init that depends on clean. So, whenever we execute gulp init, the old Cordova project will be removed and a new one will be created for us. Finally, note that the build (and other) tasks all depend on copy. This means that all our files in src/ will be copied (and transformed, if necessary) to build/ prior to executing the desired Cordova command. As you can see, our tasks are already becoming very complex, while also remaining graspable when taken singularly. This means we can now use the following tasks in Gulp: $ gulp init                   # create the cordova project;                               # cleaning first if needed $ gulp clean                  # remove the cordova project $ gulp build                  # copy src to build; apply                               # transformations; cordova build $ gulp build --mode release   # do the above, but build in                               # release mode $ gulp build --for ios        # only build for iOS $ gulp build --target=device  # build device versions instead of                               # emulator versions $ gulp emulate --platform ios # copy src to build; apply                               # transformations;                               # cordova emulate ios $ gulp emulate --platform ios --target iPhone-6                               # same as above, but open the                               # iPhone 6 emulator $ gulp run --platform ios     # copy src to build;                               # apply transformations;                               # cordova run ios --device Now, you're welcome to use the earlier code as it is, or you can use an NPM package that takes care of the cordovaTasks portion for you. 
This has the benefit of drastically shortening your Gulp configuration. We've already included this package in our package.json file — it's named cordova-tasks, and was created by the author, and shares a lot of similarities to the earlier code. To use it, the following needs to go at the top of our configuration file below all the other require statements: var cordova = require("cordova-tasks"); var cordovaTasks = new cordova.CordovaTasks(     {pkg: pkg, basePath: __dirname, buildDir: "build", sourceDir: "src", gulp: gulp, replace: replace}); Then, you can remove the entire cordovaTasks object from your configuration file as well. The projectTasks section needs to change only slightly: var projectTasks = { init: function init() { return cordovaTasks.init();     }, emulateCordova: function emulateCordova() { return cordovaTasks.emulate({buildMode: BUILD_MODE, platform: PLATFORM, options: [TARGET_DEVICE]});     }, runCordova: function runCordova() { return cordovaTasks.run({buildMode: BUILD_MODE, platform: PLATFORM, options: [TARGET_DEVICE]});     }, buildCordova: function buildCordova() { var target = TARGET_DEVICE; if (!target || target === "" || target === "--target=device") { target = "--device";       } return cordovaTasks.build({buildMode: BUILD_MODE, platforms: BUILD_PLATFORMS, options: [target]});     },... } There's one last thing to do: in copyCode change .pipe(performSubstitutions()) to .pipe(cordovaTasks.performSubstitutions()). This is because the cordova-tasks package automatically takes care of all of the substitutions that we need, including version numbers, plugins, platforms, and more. This was one of the more complex sections, so if you've come this far, take a coffee break. Next, we'll worry about managing version numbers. Supporting ES2015 We've already mentioned ES2015 (or EcmaScript 2015) in this article. Now is the moment we actually get to start using it. First, though, we need to modify our copy-code task to transpile from ES2015 to ES5, or our code wouldn't run on any browser that doesn't support the new syntax (that is still quite a few mobile platforms). There are several transpilers available: I prefer Babel (https://babeljs.io). There is a Gulp plugin that we can use that makes this transpilation transformation extremely simple. To do this, we need to add the following to the top of our Gulp configuration: var babel = require("gulp-babel"); var sourcemaps = require("gulp-sourcemaps"); Source maps are an important piece of the debugging puzzle. Because our code will be transformed by the time it is running on our device, it makes debugging a little more difficult since line numbers and the like don't match. Sourcemaps provides the browser with a map between your ES2015 code and the final result so that debugging is a lot easier. Next, let's modify our projectTasks.copyCode method: var projectTasks = { …, copyCode: function copyCode() { var isRelease = (BUILD_MODE === "release"); gulp.src(CODE_FILES)             .pipe(cordovaTasks.performSubstitutions())             .pipe(isRelease ? gutil.noop() : sourcemaps.init())             .pipe(babel())             .pipe(concat("app.js"))             .pipe(isRelease ? gutil.noop() : sourcemaps.write())             .pipe(gulp.dest(CODE_DEST));     },... } Our task is now a little more complex, but that's only because we want to control when the source maps are generated. When babel() is called, it will convert ES2015 code to ES5 and also generate a sourcemap of those changes. 
This makes debugging easier, but it also increases the file size by quite a large amount. As such, when we're building in release mode, we don't want to include the sourcemaps, so we call gutil.noop instead, which will just do nothing. The sourcemap functionality requires us to call sourcemaps.init prior to any Gulp plugin that might generate sourcemaps. After the plugin that creates the sourcemaps executes, we also have to call sourcemaps.write to save the sourcemap back to the stream. We could also write the sourcemap to a separate .map file by calling sourcemaps.write("."), but you do need to be careful about cleaning that file up while creating a release build. babel is what is doing the actual hard work of converting ES2015 code to ES5. But it does need a little help in the form of a small support library. We'll add this library to src/www/js/lib/ by copying it from the gulp-babel module: $ cp node_modules/babel-core/browser-polyfill.js src/www/js/lib If you don't have the src/www/js/lib/directory yet, you'll need to create it before executing the previous command. Next, we need to edit src/www/index.html to include this script. While we're at it, let's make a few other changes: <!DOCTYPE html> <html> <head> <script src="cordova.js" type="text/javascript"></script> <script src="./js/lib/browser-polyfill.js" type="text/javascript"></script> <script src="./js/app/app.js" type="text/javascript"></script> </head> <body> <p>This is static content..., but below is dynamic content.</p> <div id="demo"></div> </body> </html> Finally, let's write some ES2015 code in src/www/js/app/index.js: function h ( elType, ...children ) { let el = document.createElement(elType); for (let child of children) { if (typeof child !== "object") {           el.textContent = child;       } else if (child instanceof Array) { child.forEach( el.appendChild.bind(el) );       } else { el.appendChild( child );       }   } return el; }   function startApp() { document.querySelector("#demo").appendChild( h("div", h("ul", h("li", "Some information about this app..."), h("li", "App name: {{{NAME}}}"), h("li", "App version: {{{VERSION}}}")       )     )   ); }   document.addEventListener("deviceready", startApp, false); This article isn't about how to write ES2015 code, so I won't bore you with all the details. Suffice it to say, the previous generates a few list items when the app is run using a very simple form of DOM templating. But it does so using the …​ (spread) syntax for variable parameters, the for … of loop and let instead of var. Although it looks a lot like JavaScript, it's definitely different enough that it will take some time to learn how best to use the new features. Linting your code You could execute a gulp emulate --platform ios (or android) right now, and the app should work. But how do we know our code will work when built? Better yet — how can we prevent a build if the code isn't valid? We do this by adding lint tasks to our Gulp configuration file. Linting is a lot like compiling — the linter checks your code for obvious errors and aborts if it finds any. There are various linters available (some better than others), but not all of them support ES2015 syntax yet. The best one that does is ESLint (http://www.eslint.org). Thankfully, there's a very simple Gulp plugin that uses it. We could stop at linting and be done, but code style is also important and can catch out potentially serious issues as well. 
As such, we're also going to be using the JavaScript Code Style checker, or JSCS (https://github.com/jscs-dev/node-jscs). Let's create tasks to lint and check our coding style. First, add the following to the top of our Gulp configuration: var eslint = require("gulp-eslint"); var jscs = require("gulp-jscs"); var CONFIG_DIR = path.join(__dirname, "config"); var CODE_STYLE_FILES = [path.join(SOURCE_DIR, "www", "js", "app", "**", "*.js")]; var CODE_LINT_FILES = [path.join(SOURCE_DIR, "www", "js", "app", "**", "*.js")]; Now, let's create the tasks: var projectTasks = { …, checkCodeStyle: function checkCodeStyle() { return gulp.src(CODE_STYLE_FILES) .pipe(jscs({ configPath: path.join(CONFIG_DIR, "jscs.json"), esnext: true })); }, lintCode: function lintCode() { return gulp.src(CODE_LINT_FILES) .pipe(eslint(path.join(CONFIG_DIR, "eslint.json"))) .pipe(eslint.format()) .pipe(eslint.failOnError()); } } … gulp.task("lint", projectTasks.lintCode); gulp.task("code-style", projectTasks.checkCodeStyle); Now, before you run this, you'll need two configuration files to tell each task what should be an error and what shouldn't be. If you want to change the settings, you can do so — the sites for ESLint and JSCS have information on how to modify the configuration files. config/eslint.json must contain "parser": "babel-eslint" in order to force it to use ES2015 syntax. (For JSCS, ES2015 support is instead enabled by the esnext: true option in the Gulp configuration shown earlier.) config/jscs.json must exist and must not be empty; if you don't need to specify any rules, use an empty JSON object ({}). Now, if you were to execute gulp lint and our source code had a syntax error, you would receive an error message. The same goes for code style — gulp code-style would generate an error if it didn't like the look of the code. Modify the build, emulate, and run tasks in the Gulp configuration as follows: gulp.task("build", ["lint", "code-style", "copy"], projectTasks.buildCordova); gulp.task("emulate", ["lint", "code-style", "copy"], projectTasks.emulateCordova); gulp.task("run", ["lint", "code-style", "copy"], projectTasks.runCordova); Now, if you execute gulp build and there is a linting or code style error, the build will fail with an error. This gives a little more assurance that our code is at least syntactically valid prior to distributing or running it. Linting and style checks do not guarantee that your code works logically; they only ensure that there are no syntax or style errors. If your program responds incorrectly to a gesture or processes some data incorrectly, a linter won't necessarily catch those issues. Uglifying your code Code uglification, or minification, sounds a bit painful, but it's a really simple step we can add to our workflow that will reduce the size of our applications when we build in release mode. Uglification also tends to obfuscate our code a little bit, but don't rely on this for any security — obfuscation can be easily undone. To add code uglification, add the following to the top of our Gulp configuration: var uglify = require("gulp-uglify"); We can then uglify our code by adding the following line to our projectTasks.copyCode method, immediately after .pipe(concat("app.js")): .pipe(isRelease ? uglify({preserveComments: "some"}) : gutil.noop()) Notice that we added the uglify call only when the build mode is release. This means that it will only take effect when we execute gulp build --mode release. You can, of course, specify additional options. 
If you want to see all the documentation, visit https://github.com/mishoo/UglifyJS2/. Our options include certain comments (the ones most likely to be license-related) while stripping out all the other comments. Putting it all together You've accomplished quite a bit, but there's one last thing we want to mention: the default task. If gulp is run with no parameters, it looks for a default task to perform. This can be anything you want. To specify this, just add the following to your Gulp configuration: gulp.task("default", ["build"]); Now, if you execute gulp with no specific task, you'll actually start the build task instead. What you want to use for your default task is largely dependent upon your preferences. Your Gulp configuration is now quite large and complex. We've added a few additional features to it (mostly for config.xml). We've also added several other features to the configuration, which you might want to investigate further: BrowserSync for rapid iteration and testing The ability to control whether or not the errors prevent further tasks from being executed Help text Summary In this article, you've learned why a task runner is useful, how to install Gulp, and how to create several tasks of varying complexity to automate building your project and other useful tasks. Resources for Article: Further resources on this subject: Getting Ready to Launch Your PhoneGap App in the Real World [article] Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [article] Using Location Data with PhoneGap [article]

Synchronizing Tests

Packt
04 Nov 2015
9 min read
In this article by Unmesh Gundecha, author of Selenium Testing Tools Cookbook Second Edition, you will cover the following topics: Synchronizing a test with an implicit wait Synchronizing a test with an explicit wait Synchronizing a test with custom-expected conditions While building automated scripts for a complex web application using Selenium WebDriver, we need to ensure that the test flow is maintained for reliable test automation. When tests are run, the application may not always respond with the same speed. For example, it might take a few seconds for a progress bar to reach 100 percent, a status message to appear, a button to become enabled, and a window or pop-up message to open. You can handle these anticipated timing problems by synchronizing your test to ensure that Selenium WebDriver waits until your application is ready before performing the next step. There are several options that you can use to synchronize your test. In this article, we will see various features of Selenium WebDriver to implement synchronization in tests. (For more resources related to this topic, see here.) Synchronizing a test with an implicit wait The Selenium WebDriver provides an implicit wait for synchronizing tests. When an implicit wait is implemented in tests, if WebDriver cannot find an element in the Document Object Model (DOM), it will wait for a defined amount of time for the element to appear in the DOM. Once the specified wait time is over, it will try searching for the element once again. If the element is not found in specified time, it will throw NoSuchElement exception. In other terms, an implicit wait polls the DOM for a certain amount of time when trying to find an element or elements if they are not immediately available. The default setting is 0. Once set, the implicit wait is set for the life of the WebDriver object's instance. In this recipe, we will briefly explore the use of an implicit wait; however, it is recommended to avoid or minimize the use of an implicit wait. How to do it... Let's create a test on a demo AJAX-enabled application as follows: @Test public void testWithImplicitWait() { //Go to the Demo AjAX Application WebDriver driver = new FirefoxDriver(); driver.get("http://dl.dropbox.com/u/55228056/AjaxDemo.html"); //Set the Implicit Wait time Out to 10 Seconds driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); try { //Get link for Page 4 and click on it WebElement page4button = driver.findElement(By.linkText("Page 4")); page4button.click(); //Get an element with id page4 and verify it's text WebElement message = driver.findElement(By.id("page4")); assertTrue(message.getText().contains("Nunc nibh tortor")); } catch (NoSuchElementException e) { fail("Element not found!!"); e.printStackTrace(); } finally { driver.quit(); } } How it works... The Selenium WebDriver provides the Timeouts interface for configuring the implicit wait. The Timeouts interface provides an implicitlyWait() method, which accepts the time the driver should wait when searching for an element. In this example, a test will wait for an element to appear in DOM for 10 seconds: driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); Until the end of a test or an implicit wait is set back to 0, every time an element is searched using the findElement() method, the test will wait for 10 seconds for an element to appear. 
Using an implicit wait may slow down tests when an application responds normally, as the driver will wait for every element appearing in the DOM and increase the overall execution time. Minimize or avoid using an implicit wait. Use an explicit wait, which provides more control when compared with an implicit wait.

See also
Synchronizing a test with an explicit wait
Synchronizing a test with custom-expected conditions

Synchronizing a test with an explicit wait
The Selenium WebDriver provides an explicit wait for synchronizing tests, which is a better way to wait than an implicit wait. Unlike an implicit wait, you can use predefined or custom conditions to wait before proceeding further in the code. The Selenium WebDriver provides the WebDriverWait and ExpectedConditions classes for implementing an explicit wait. The ExpectedConditions class provides a set of predefined conditions to wait for before proceeding further in the code. The following conditions, supported by the ExpectedConditions class, are ones that we frequently come across when automating web browsers:
elementToBeClickable(By locator): an element is visible and enabled
elementToBeSelected(WebElement element): an element is selected
presenceOfElementLocated(By locator): an element is present in the DOM
textToBePresentInElement(By locator, java.lang.String text): specific text is present in an element
textToBePresentInElementValue(By locator, java.lang.String text): specific text is present in an element's value
titleContains(java.lang.String title): the page title contains the given text
For more conditions, visit http://seleniumhq.github.io/selenium/docs/api/java/index.html. In this recipe, we will explore some of these conditions with the WebDriverWait class.

How to do it...
Let's implement a test that uses the ExpectedConditions.titleContains() method to implement an explicit wait as follows: @Test public void testExplicitWaitTitleContains() { //Go to the Google Home Page WebDriver driver = new FirefoxDriver(); driver.get("http://www.google.com"); //Enter a term to search and submit WebElement query = driver.findElement(By.name("q")); query.sendKeys("selenium"); query.click(); //Create Wait using WebDriverWait. //This will wait for 10 seconds for timeout before title is updated with search term //If title is updated in specified time limit test will move to the next step //instead of waiting for 10 seconds WebDriverWait wait = new WebDriverWait(driver, 10); wait.until(ExpectedConditions.titleContains("selenium")); //Verify Title assertTrue(driver.getTitle().toLowerCase().startsWith("selenium")); driver.quit(); }

How it works...
We can define an explicit wait for a set of common conditions using the ExpectedConditions class. First, we need to create an instance of the WebDriverWait class by passing it the driver instance and a timeout for the wait as follows: WebDriverWait wait = new WebDriverWait(driver, 10); Next, an ExpectedCondition is passed to the wait.until() method as follows: wait.until(ExpectedConditions.titleContains("selenium")); The WebDriverWait object will call the ExpectedConditions class object every 500 milliseconds until it returns successfully.

See also
Synchronizing a test with an implicit wait
Synchronizing a test with custom-expected conditions

Synchronizing a test with custom-expected conditions
With the explicit wait mechanism, we can also build custom-expected conditions along with the common conditions provided by the ExpectedConditions class. 
This comes in handy when a wait cannot be handled with a common condition supported by the ExpectedConditions class. In this recipe, we will explore how to create a custom condition. How to do it... We will create a test that will create a wait until an element appears on the page using the ExpectedCondition class as follows: @Test public void testExplicitWait() { WebDriver driver = new FirefoxDriver(); driver.get("http://dl.dropbox.com/u/55228056/AjaxDemo.html"); try { WebElement page4button = driver.findElement(By.linkText("Page 4")); page4button.click(); WebElement message = new WebDriverWait(driver, 5) .until(new ExpectedCondition<WebElement>(){ public WebElement apply(WebDriver d) { return d.findElement(By.id("page4")); }}); assertTrue(message.getText().contains("Nunc nibh tortor")); } catch (NoSuchElementException e) { fail("Element not found!!"); e.printStackTrace(); } finally { driver.quit(); } } How it works... The Selenium WebDriver provides the ability to implement the custom ExpectedCondition interface along with the WebDriverWait class for creating a custom-wait condition, as needed by a test. In this example, we created a custom condition, which returns a WebElement object once the inner findElement() method locates the element within a specified timeout as follows: WebElement message = new WebDriverWait(driver, 5) .until(new ExpectedCondition<WebElement>(){ @Override public WebElement apply(WebDriver d) { return d.findElement(By.id("page4")); }}); There's more... A custom wait can be created in various ways. In the following section, we will explore some common examples for implementing a custom wait. Waiting for element's attribute value update Based on the events and actions performed, the value of an element's attribute might change at runtime. For example, a disabled textbox gets enabled based on the user's rights. A custom wait can be created on the attribute value of the element. In the following example, the ExpectedCondition waits for a Boolean return value, based on the attribute value of an element: new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() { public Boolean apply(WebDriver d) { return d.findElement(By.id("userName")).getAttribute("readonly").contains("true"); }}); Waiting for an element's visibility Developers hide or display elements based on the sequence of actions, user rights, and so on. The specific element might exist in the DOM, but are hidden from the user, and when the user performs a certain action, it appears on the page. A custom-wait condition can be created based on the element's visibility as follows: new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() { public Boolean apply(WebDriver d) { return d.findElement(By.id("page4")).isDisplayed(); }}); Waiting for DOM events The web application may be using a JavaScript framework such as jQuery for AJAX and content manipulation. For example, jQuery is used to load a big JSON file from the server asynchronously on the page. While jQuery is reading and processing this file, a test can check its status using the active attribute. 
A custom wait can be implemented by executing the JavaScript code and checking the return value as follows: new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() { public Boolean apply(WebDriver d) { JavascriptExecutor js = (JavascriptExecutor) d; return (Boolean)js.executeScript("return jQuery.active == 0"); }}); See also Synchronizing a test with an implicit wait Synchronizing a test with an explicit wait Summary In this article, you have learned how the Selenium WebDriver helps in maintaining a reliable automated test. Using the Selenium WebDriver, you also learned how you can synchronize a test using the implicit and the explicit wait methods. You also saw how to synchronize a test with custom-expected conditions. Resources for Article: Further resources on this subject: Javascript Execution With Selenium [article] Learning Selenium Testing Tools With Python [article] Cross-Browser Tests Using Selenium Webdriver [article]

Installing Neutron

Packt
04 Nov 2015
15 min read
We will learn about OpenStack networking in this article by James Denton, who is the author of the book Learning OpenStack Networking (Neutron) - Second Edition. OpenStack Networking, also known as Neutron, provides a network infrastructure as-a-service platform to users of the cloud. In this article, I will guide you through the installation of Neutron networking services on top of the OpenStack environment. Components to be installed include: Neutron API server Modular Layer 2 (ML2) plugin By the end of this article, you will have a basic understanding of the function and operation of various Neutron plugins and agents, as well as a foundation on top of which a virtual switching infrastructure can be built. (For more resources related to this topic, see here.) Basic networking elements in Neutron Neutron constructs the virtual network using elements that are familiar to all system and network administrators, including networks, subnets, ports, routers, load balancers, and more. Using version 2.0 of the core Neutron API, users can build a network foundation composed of the following entities: Network: A network is an isolated layer 2 broadcast domain. Typically reserved for the tenants that created them, networks could be shared among tenants if configured accordingly. The network is the core entity of the Neutron API. Subnets and ports must always be associated with a network. Subnet: A subnet is an IPv4 or IPv6 address block from which IP addresses can be assigned to virtual machine instances. Each subnet must have a CIDR and must be associated with a network. Multiple subnets can be associated with a single network and can be noncontiguous. A DHCP allocation range can be set for a subnet that limits the addresses provided to instances. Port: A port in Neutron represents a virtual switch port on a logical virtual switch. Virtual machine interfaces are mapped to Neutron ports, and the ports define both the MAC address and the IP address to be assigned to the interfaces plugged into them. Neutron port definitions are stored in the Neutron database, which is then used by the respective plugin agent to build and connect the virtual switching infrastructure. Cloud operators and users alike can configure network topologies by creating and configuring networks and subnets, and then instruct services such as Nova to attach virtual devices to ports on these networks. Users can create multiple networks, subnets, and ports, but are limited to thresholds defined by per-tenant quotas set by the cloud administrator. Extending functionality with plugins Neutron introduces support for third-party plugins and drivers that extend network functionality and implementation of the Neutron API. Plugins and drivers can be created that use a variety of software- and hardware-based technologies to implement the network built by operators and users. There are two major plugin types within the Neutron architecture: Core plugin Service plugin A core plugin implements the core Neutron API and is responsible for adapting the logical network described by networks, ports, and subnets into something that can be implemented by the L2 agent and IP address management system running on the host. A service plugin provides additional network services such as routing, load balancing, firewalling, and more. The Neutron API provides a consistent experience to the user despite the chosen networking plugin. For more information on interacting with the Neutron API, visit http://developer.openstack.org/api-ref-networking-v2.html. 
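Before moving on to the plugin architecture, it may help to see how the core entities described above are typically created from the command line. The following is only a rough sketch using the neutron CLI client; the network name demo-net, the subnet name demo-subnet, and the CIDR are made-up values used for illustration, and the exact output and available options depend on the client version installed:
# neutron net-create demo-net
# neutron subnet-create demo-net 192.168.100.0/24 --name demo-subnet
# neutron port-create demo-net
# neutron net-list
The first command creates the isolated layer 2 network, the second attaches an IPv4 subnet from which instance addresses can be assigned, the third creates a port on the network (something Nova normally does on your behalf when booting an instance), and the last lists the networks visible to the tenant.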
Modular Layer 2 plugin Prior to the inclusion of the Modular Layer 2 (ML2) plugin in the Havana release of OpenStack, Neutron was limited to using a single core plugin at a time. The ML2 plugin replaces two monolithic plugins in its reference implementation: the LinuxBridge plugin and the Open vSwitch plugin. Their respective agents, however, continue to be utilized and can be configured to work with the ML2 plugin. Drivers The ML2 plugin introduced the concept of type drivers and mechanism drivers to separate the types of networks being implemented and the mechanisms for implementing networks of those types. Type drivers An ML2 type driver maintains type-specific network state, validates provider network attributes, and describes network segments using provider attributes. Provider attributes include network interface labels, segmentation IDs, and network types. Supported network types include local, flat, vlan, gre, and vxlan. Mechanism drivers An ML2 mechanism driver is responsible for taking information established by the type driver and ensuring that it is properly implemented. Multiple mechanism drivers can be configured to operate simultaneously, and can be described using three types of models: Agent-based: This includes LinuxBridge, Open vSwitch, and others Controller-based: This includes OpenDaylight, VMWare NSX, and others Top-of-Rack: This includes Cisco Nexus, Arista, Mellanox, and others The LinuxBridge and Open vSwitch ML2 mechanism drivers are used to configure their respective switching technologies within nodes that host instances and network services. The LinuxBridge driver supports local, flat, vlan, and vxlan network types, while the Open vSwitch driver supports all of those as well as the gre network type. The L2 population driver is used to limit the amount of broadcast traffic that is forwarded across the overlay network fabric. Under normal circumstances, unknown unicast, multicast, and broadcast traffic floods out all tunnels to other compute nodes. This behavior can have a negative impact on the overlay network fabric, especially as the number of hosts in the cloud scales out. As an authority on what instances and other network resources exist in the cloud, Neutron can prepopulate forwarding databases on all hosts to avoid a costly learning operation. When ARP proxy is used, Neutron prepopulates the ARP table on all hosts in a similar manner to avoid ARP traffic from being broadcast across the overlay fabric. ML2 architecture The following diagram demonstrates at a high level how the Neutron API service interacts with the various plugins and agents responsible for constructing the virtual and physical network: Figure 3.1 The preceding diagram demonstrates the interaction between the Neutron API, Neutron plugins and drivers, and services such as the L2 and L3 agents. For more information on the Neutron ML2 plugin architecture, refer to the OpenStack Neutron Modular Layer 2 Plugin Deep Dive video from the 2013 OpenStack Summit in Hong Kong available at https://www.youtube.com/watch?v=whmcQ-vHams. Third-party support Third-party vendors such as PLUMGrid and OpenContrail have implemented support for their respective SDN technologies by developing their own monolithic or ML2 plugins that implement the Neutron API and extended network services. Others, including Cisco, Arista, Brocade, Radware, F5, VMWare, and more, have created plugins that allow Neutron to interface with OpenFlow controllers, load balancers, switches, and other network hardware. 
For a look at some of the commands related to these plugins, refer to Appendix, Additional Neutron Commands. The configuration and use of these plugins is outside the scope of this article. For more information on the available plugins for Neutron, visit http://docs.openstack.org/admin-guide-cloud/content/section_plugin-arch.html. Network namespaces OpenStack was designed with multitenancy in mind and provides users with the ability to create and manage their own compute and network resources. Neutron supports each tenant having multiple private networks, routers, firewalls, load balancers, and other networking resources. It is able to isolate many of those objects through the use of network namespaces. A network namespace is defined as a logical copy of the network stack with its own routes, firewall rules, and network interface devices. When using the open source reference plugins and drivers, every network, router, and load balancer that is created by a user is represented by a network namespace. When network namespaces are enabled, Neutron is able to provide isolated DHCP and routing services to each network. These services allow users to create overlapping networks with other users in other projects and even other networks in the same project. The following naming convention for network namespaces should be observed: DHCP namespace: qdhcp-<network UUID> Router namespace: qrouter-<router UUID> Load Balancer namespace: qlbaas-<load balancer UUID> A qdhcp namespace contains a DHCP service that provides IP addresses to instances using the DHCP protocol. In a reference implementation, dnsmasq is the process that services DHCP requests. The qdhcp namespace has an interface plugged into the virtual switch and is able to communicate with instances and other devices in the same network or subnet. A qdhcp namespace is created for every network where the associated subnet(s) have DHCP enabled. A qrouter namespace represents a virtual router and is responsible for routing traffic to and from instances in the subnets it is connected to. Like the qdhcp namespace, the qrouter namespace is connected to one or more virtual switches depending on the configuration. A qlbaas namespace represents a virtual load balancer and may run a service such as HAProxy that load balances traffic to instances. The qlbaas namespace is connected to a virtual switch and can communicate with instances and other devices in the same network or subnet. The leading q in the name of the network namespaces stands for Quantum, the original name for the OpenStack Networking service. Network namespaces of the types mentioned earlier will only be seen on nodes running the Neutron DHCP, L3, and LBaaS agents, respectively. These services are typically configured only on controllers or dedicated network nodes. The ip netns list command can be used to list available namespaces, and commands can be executed within the namespace using the following syntax: ip netns exec NAMESPACE_NAME <command> Commands that can be executed in the namespace include ip, route, iptables, and more. The output of these commands corresponds to data specific to the namespace they are executed in. For more information on network namespaces, see the man page for ip netns at http://man7.org/linux/man-pages/man8/ip-netns.8.html. Installing and configuring Neutron services In this installation, the various services that make up OpenStack Networking will be installed on the controller node rather than a dedicated networking node. 
The compute nodes will run L2 agents that interface with the controller node and provide virtual switch connections to instances. Remember that the configuration settings recommended here and online at docs.openstack.org may not be appropriate for production systems. To install the Neutron API server, the DHCP and metadata agents, and the ML2 plugin on the controller, issue the following command: # apt-get install neutron-server neutron-dhcp-agent neutron-metadata-agent neutron-plugin-ml2 neutron-common python-neutronclient On the compute nodes, only the ML2 plugin is required: # apt-get install neutron-plugin-ml2 Creating the Neutron database Using the mysql client, create the Neutron database and associated user. When prompted for the root password, use openstack: # mysql -u root -p Enter the following SQL statements in the MariaDB [(none)] > prompt: CREATE DATABASE neutron; GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron'; GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron'; quit; Update the [database] section of the Neutron configuration file at /etc/neutron/neutron.conf on all nodes to use the proper MySQL database connection string based on the preceding values rather than the default value: [database] connection = mysql://neutron:neutron@controller01/neutron Configuring the Neutron user, role, and endpoint in Keystone Neutron requires that you create a user, role, and endpoint in Keystone in order to function properly. When executed from the controller node, the following commands will create a user called neutron in Keystone, associate the admin role with the neutron user, and add the neutron user to the service project: # openstack user create neutron --password neutron # openstack role add --project service --user neutron admin Create a service in Keystone that describes the OpenStack Networking service by executing the following command on the controller node: # openstack service create --name neutron --description "OpenStack Networking" network The service create command will result in the following output: Figure 3.2 To create the endpoint, use the following openstack endpoint create command: # openstack endpoint create --publicurl http://controller01:9696 --adminurl http://controller01:9696 --internalurl http://controller01:9696 --region RegionOne network The resulting endpoint is as follows: Figure 3.3 Enabling packet forwarding Before the nodes can properly forward or route traffic for virtual machine instances, there are three kernel parameters that must be configured on all nodes: net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter The net.ipv4.ip_forward kernel parameter allows the nodes to forward traffic from the instances to the network. The default value is 0 and should be set to 1 to enable IP forwarding. Use the following command on all nodes to implement this change: # sysctl -w "net.ipv4.ip_forward=1" The net.ipv4.conf.default.rp_filter and net.ipv4.conf.all.rp_filter kernel parameters are related to reverse path filtering, a mechanism intended to prevent certain types of denial of service attacks. When enabled, the Linux kernel will examine every packet to ensure that the source address of the packet is routable back through the interface in which it came. Without this validation, a router can be used to forward malicious packets from a sender who has spoofed the source address to prevent the target machine from responding properly. 
In OpenStack, anti-spoofing rules are implemented by Neutron on each compute node within iptables. Therefore, the preferred configuration for these two rp_filter values is to disable them by setting them to 0. Use the following sysctl commands on all nodes to implement this change: # sysctl -w "net.ipv4.conf.default.rp_filter=0" # sysctl -w "net.ipv4.conf.all.rp_filter=0" Using sysctl –w makes the changes take effect immediately. However, the changes are not persistent across reboots. To make the changes persistent, edit the /etc/sysctl.conf file on all hosts and add the following lines: net.ipv4.ip_forward = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.all.rp_filter = 0 Load the changes into memory on all nodes with the following sysctl command: # sysctl -p Configuring Neutron to use Keystone The Neutron configuration file found at /etc/neutron/neutron.conf has dozens of settings that can be modified to meet the needs of the OpenStack cloud administrator. A handful of these settings must be changed from their defaults as part of this installation. To specify Keystone as the authentication method for Neutron, update the [DEFAULT] section of the Neutron configuration file on all hosts with the following setting: [DEFAULT] auth_strategy = keystone Neutron must also be configured with the appropriate Keystone authentication settings. The username and password for the neutron user in Keystone were set earlier in this article. Update the [keystone_authtoken] section of the Neutron configuration file on all hosts with the following settings: [keystone_authtoken] auth_uri = http://controller01:5000 auth_url = http://controller01:35357 auth_plugin = password project_domain_id = default user_domain_id = default project_name = service username = neutron password = neutron Configuring Neutron to use a messaging service Neutron communicates with various OpenStack services on the AMQP messaging bus. Update the [DEFAULT] and [oslo_messaging_rabbit] sections of the Neutron configuration file on all hosts to specify RabbitMQ as the messaging broker: [DEFAULT] rpc_backend = rabbit The RabbitMQ authentication settings should match what was previously configured for the other OpenStack services: [oslo_messaging_rabbit] rabbit_host = controller01 rabbit_userid = openstack rabbit_password = rabbit Configuring Nova to utilize Neutron networking Before Neutron can be utilized as the network manager for Nova Compute services, the appropriate configuration options must be set in the Nova configuration file located at /etc/nova/nova.conf on all hosts. Start by updating the following sections with information on the Neutron API class and URL: [DEFAULT] network_api_class = nova.network.neutronv2.api.API [neutron] url = http://controller01:9696 Then, update the [neutron] section with the proper Neutron credentials: [neutron] auth_strategy = keystone admin_tenant_name = service admin_username = neutron admin_password = neutron admin_auth_url = http://controller01:35357/v2.0 Nova uses the firewall_driver configuration option to determine how to implement firewalling. As the option is meant for use with the nova-network networking service, it should be set to nova.virt.firewall.NoopFirewallDriver to instruct Nova not to implement firewalling when Neutron is in use: [DEFAULT] firewall_driver = nova.virt.firewall.NoopFirewallDriver The security_group_api configuration option specifies which API Nova should use when working with security groups. 
For installations using Neutron instead of nova-network, this option should be set to neutron as follows: [DEFAULT] security_group_api = neutron Nova requires additional configuration once a mechanism driver has been determined. Configuring Neutron to notify Nova Neutron must be configured to notify Nova of network topology changes. Update the [DEFAULT] and [nova] sections of the Neutron configuration file on the controller node located at /etc/neutron/neutron.conf with the following settings: [DEFAULT] notify_nova_on_port_status_changes = True notify_nova_on_port_data_changes = True nova_url = http://controller01:8774/v2 [nova] auth_url = http://controller01:35357 auth_plugin = password project_domain_id = default user_domain_id = default region_name = RegionOne project_name = service username = nova password = nova Summary Neutron has seen major internal architectural improvements over the last few releases. These improvements have made developing and implementing network features easier for developers and operators, respectively. Neutron maintains the logical network architecture in its database, and network plugins and agents on each node are responsible for configuring virtual and physical network devices accordingly. With the introduction of the ML2 plugin, developers can spend less time implementing the core Neutron API functionality and more time developing value-added features. Now that OpenStack Networking services have been installed across all nodes in the environment, configuration of a layer 2 networking plugin is all that remains before instances can be created. Resources for Article: Further resources on this subject: Installing OpenStack Swift [article] Securing OpenStack Networking [article] The orchestration service for OpenStack [article]

Getting Started with Tableau Public

Packt
04 Nov 2015
12 min read
In this article by Ashley Ohmann and Matthew Floyd, the authors of Creating Data Stories with tableau Public. Making sense of data is a valued service in today's world. It may be a cliché, but it's true that we are drowning in data and yet, we are thirsting for knowledge. The ability to make sense of data and the skill of using data to tell a compelling story is becoming one of the most valued capabilities in almost every field—business, journalism, retail, manufacturing, medicine, and public service. Tableau Public (for more information, visit www.tableaupublic.com), which is Tableau 's free Cloud-based data visualization client, is a powerfully transformative tool that you can use to create rich, interactive, and compelling data stories. It's a great platform if you wish to explore data through visualization. It enables your consumers to ask and answer questions that are interesting to them. This article is written for people who are new to Tableau Public and would like to learn how to create rich, interactive data visualizations from publicly available data sources that they can easily share with others. Once you publish visualizations and data to Tableau Public, they are accessible to everyone, and they can be viewed and downloaded. A typical Tableau Public data visualization contains public data sets such as sports, politics, public works, crime, census, socioeconomic metrics, and social media sentiment data (you also can create and use your own data). Many of these data sets either are readily available on the Internet, or can accessed via a public records request or search (if they are harder to find, they can be scraped from the Internet). You can now control who can download your visualizations and data sets, which is a feature that was previously available only to the paid subscribers. Tableau Public has a current maximum data set size of 10 million rows and/or 10 GB of data. (For more resources related to this topic, see here.) In this article, we will walk through an introduction to Tableau, which includes the following topics: A discussion on how you can use Tableau Public to tell your data story Examples of organizations that use Tableau Public Downloading and installing the Tableau Public software Logging in to Tableau Public Creating your very own Tableau Public profile Discovering the Tableau Public features and resources Taking a look at the author profiles and galleries on the Tableau website to browse other authors' data visualizations (this is a great way to learn and gather ideas on how to best present our data) An Tableau Public overview Tableau Public allows everyone to tell their data story and create compelling and interactive data visualizations that encourage discovery and learning. Tableau Public is sold at a great price—free! It allows you as a data storyteller to create and publish data visualizations without learning how to code or having special knowledge of web publishing. In fact, you can publish data sets of up to 10 million rows or 10 GB to Tableau Public in a single workbook. Tableau Public is a data discovery tool. It should not be confused with enterprise-grade business intelligence tools, such as Tableau Desktop and Tableau Server, QlikView, and Cognos Insight. Those tools integrate with corporate networks and security protocol as well as server-based data warehouses. Data visualization software is not a new thing. Businesses have used software to generate dashboards and reports for decades. 
The twist comes with data democracy tools, such as Tableau Public. Journalists and bloggers who would like to augment their reporting of static text and graphics can use these data discovery tools, such as Tableau Public, to create riveting, rich data visualizations, which may comprise one or more charts, graphs, tables, and other objects that may be controlled by the readers to allow for discovery. The people who are active members of the Tableau Public community have a few primary traits in common, they are curious, generous with their knowledge and time, and enjoy conversations that relate data to the world around us. Tableau Public maintains a list of blogs of data visualization experts using Tableau software. In the following screenshot, Tableau Zen Masters, Anya A'hearn of Databrick and Allan Walker, used data on San Francisco bike sharing to show the financial benefits of the Bay Area Bike Share, a city-sponsored 30-minute bike sharing program, as well as a map of both the proposed expansion of the program and how far a person can actually ride a bike in half an hour. This dashboard is featured in the Tableau Public gallery because it relates data to users clearly and concisely. It presents a great public interest story (commuting more efficiently in a notoriously congested city) and then grabs the viewer's attention with maps of current and future offerings. The second dashboard within the analysis is significant as well. The authors described the Geographic Information Systems (GIS) tools that they used to create their innovative maps as well as the methodology that went into the final product so that the users who are new to the tool can learn how to create a similar functionality for their own purposes: Image republished under the terms of fair use, creators: Anya A'hearn and Allan Walker. Source: https://public.tableausoftware.com/views/30Minutes___BayAreaBikeShare/30Minutes___?:embed=y&:loadOrderID=0&:display_count=yes As humans, we relate our experiences to each other in stories, and data points are an important component of stories. They quantify phenomena and, when combined with human actions and emotions, can make them more memorable. When authors create public interest story elements with Tableau Public, readers can interact with the analyses, which creates a highly personal experience and translates into increased participation and decreased abandonment. It's not difficult to embed the Tableau Public visualizations into websites and blogs. It is as easy as copying and pasting JavaScript that Tableau Public renders for you automatically. Using Tableau Public increases accessibility to stories, too. You can view data stories on mobile devices with a web browser and then share it with friends on social media sites such as Twitter and Facebook using Tableau Public's sharing functionality. Stories can be told with the help of text as well as popular and tried-and-true visualization types such as maps, bar charts, lists, heat maps, line charts, and scatterplots. Maps are particularly easier to build in Tableau Public than most other data visualization offerings because Tableau has integrated geocoding (down to the city and postal code) directly into the application. Tableau Public has a built-in date hierarchy that makes it easy for users to drill through time dimensions just by clicking on a button. 
One of Tableau Software's taglines, Data to the People, is a reflection not only of the ability to distribute analysis sets to thousands of people in one go, but also of the enhanced abilities of nontechnical users to explore their own data easily and derive relevant insights for their own community without having to learn a slew of technical skills. Telling your story with Tableau Public Tableau was originally developed in the Stanford University Computer Science department, where a research project sponsored by the U.S. Department of Defense was launched to study how people can analyze data rapidly. This project merged two branches of computer science, understanding data relationships and computer graphics. This mash-up was discovered to be the best way for people to understand and sometimes digest complex data relationships rapidly and, in effect, to help readers consume data. This project eventually moved from the Stanford campus to the corporate world, and Tableau Software was born. The Tableau usage and adoption has since skyrocketed at the time of writing this book. Tableau is the fastest growing software company in the world and now, Tableau competes directly with the older software manufacturers for data visualization and discovery—Microsoft, IBM, SAS, Qlik, and Tibco, to name a few. Most of these are compared to each other by Gartner in its annual Magic Quadrant. For more information, visit http://www.gartner.com/technology/home.jsp. Tableau Software's flagship program, Tableau Desktop, is commercial software used by many organizations and corporations throughout the world. Tableau Public is the free version of Tableau's offering. It is typically used with nonconfidential data either from the public domain or that which we collected ourselves. This free public offering of Tableau Public is truly unique in the business intelligence and data discovery industry. There is no other software like it—powerful, free, and open to data story authors. There are a few terms in this article that might be new to you. You, as an author, will load data into a workbook, which will be saved by you in the Tableau Public cloud. A visualization is a single graph. It is typically on a worksheet. One or more visualizations are on a dashboard, which is where your users will interact with your data. One of the wonderful features about Tableau Public is that you can load data and visualize it on your own. Traditionally, this has been an activity that was undertaken with the help of programmers at work. With Tableau Public and newer blogging platforms, nonprogrammers can develop data visualization, publish it to the Tableau Public website, and then embed the data visualization on their own website. The basic steps that are required to create a Tableau Public visualization are as follows: Gather your data sources, usually in a spreadsheet or a .csv file. Prepare and format your data to make it usable in Tableau Public. Connect to the data and start building the data visualizations (charts, graphs, and many other objects). Save and publish your data visualization to the Tableau Public website. Embed your data visualization in your web page by using the code that Tableau Public provides. Tableau Public is used by some of the leading news organizations across the world, including The New York Times, The Guardian (UK), National Geographic (US), the Washington Post (US), the Boston Globe (US), La Informacion (Spain), and Época (Brazil). Now, we will discuss installing Tableau Public. 
Then, we will take a look at how we can find some of these visualizations out there in the wild so that we can learn from others and create our own original visualizations. Installing Tableau Public Let's look at the steps required for the installation of Tableau Public: To download Tableau Public, visit the Tableau Software website at http://public.tableau.com/s/. Enter your e-mail address and click on the Download the App button located at the center of the screen, as shown in following screenshot: The downloaded version of Tableau Public is free, and it is not a limited release or demo version. It is a fully functional version of Tableau Public. Once the download begins, a Thank You screen gives you an option of retrying the download if it does not begin automatically or starts downloading a different version. The version of Tableau Public that gets downloaded automatically is the 64-bit version for Windows. Users of Macs should download the appropriate version for their computers, and users with 32-bit Windows machines should download the 32-bit version. Check your Windows computer system type (32- or 64-bit) by navigating to Start then Computer and right-clicking on the Computer option. Select Properties, and view the System properties. 64-bit systems will be noted as such. 32-bit systems will either state that they are 32-bit ones, or not have any indication of being a 32- or 64-bit system. While the Tableau Public executable file downloads, you can scroll the Thank You page to the lower section to learn more about the new features of Tableau Public 9.0. The speed with which Tableau Public downloads depends on the download speed of your network, and the 109 MB file usually takes a few minutes to download. The TableauPublicDesktop-xbit.msi (where x=32 or 64, depending on which version you selected) is downloaded. Navigate to the .msi file in Windows Explorer or in the browser window and click on Open. Then, click on Run in the Open File - Security Warning dialog box that appears in the following screenshot. The Windows installer starts the Tableau installation process: Once you have opted to Run the application, the next screen prompts you to view the License Agreement and accept its terms: If you wish to read the terms of the license agreement, click on the View License Agreement… button. You can customize the installation if you'd like. Options include the directory in which the files are installed as well as the creation of a desktop icon and a Start Menu shortcut (for Windows machines). If you do not customize the installation, Tableau Public will be installed in the default directory on your computer, and the desktop icon and Start Menu shortcut will be created. Select the checkbox that indicates I have read and accept the terms of this License Agreement, and click on Install. If a User Account Control dialog box appears with the Do you want to allow the following program to install software on this computer? prompt, click on Yes: Tableau Public will be installed on your computer, with the status bar indicating the progress: When Tableau Public has been installed successfully, the home screen opens. Exploring Tableau Public The Tableau Public home screen has several features that allow you to do following operations: Connect to data Open your workbooks Discover the features of Tableau Public Tableau encourages new users to watch the video on this first welcome page. To do so, click on the button named Watch the Getting Started Video. 
You can start building your first Tableau Public workbook any time. Connecting to data You can connect to the following four different data source types in Tableau Public by clicking on the appropriate format name: Microsoft Excel files Text files with a variety of delimiters Microsoft Access files Odata files Summary In this article, we learned how Tableau Public is commonly used. We also learned how to download and install Tableau Public, explore Tableau Public's features and learn about the Tableau Desktop tool, and discover other authors' data visualizations using the Tableau Galleries and Recommended Authors and Profile Finder function on the Tableau website. Resources for Article: Further resources on this subject: Data Acquisition and Mapping [article] Interacting with Data for Dashboards [article] Moving from Foundational to Advanced Visualizations [article]

Architecture of Backbone

Packt
04 Nov 2015
18 min read
In this article by Abiee Echamea, author of the book Mastering Backbone.js, you will see that one of the best things about Backbone is the freedom of building applications with the libraries of your choice, no batteries included. Backbone is not a framework but a library. Building applications with it can be challenging as no structure is provided. The developer is responsible for code organization and how to wire the pieces of code across the application; it's a big responsibility. Bad decisions about code organization can lead to buggy and unmaintainable applications that nobody wants to see. In this article, you will learn the following topics: Delegating the right responsibilities to Backbone objects Splitting the application into small and maintainable scripts (For more resources related to this topic, see here.) The big picture We can split application into two big logical parts. The first is an infrastructure part or root application, which is responsible for providing common components and utilities to the whole system. It has handlers to show error messages, activate menu items, manage breadcrumbs, and so on. It also owns common views such as dialog layouts or loading the progress bar. A root application is responsible for providing common components and utilities to the whole system. A root application is the main entry point to the system. It bootstraps the common objects, sets the global configuration, instantiates routers, attaches general services to a global application, renders the main application layout at the body element, sets up third-party plugins, starts a Backbone history, and instantiates, renders, and initializes components such as a header or breadcrumb. However, the root application itself does nothing; it is just the infrastructure to provide services to the other parts that we can call subapplications or modules. Subapplications are small applications that run business value code. It's where the real work happens. Subapplications are focused on a specific domain area, for example, invoices, mailboxes, or chats, and should be decoupled from the other applications. Each subapplication has its own router, entities, and views. To decouple subapplications from the root application, communication is made through a message bus implemented with the Backbone.Events or Backbone.Radio plugin such that services are requested to the application by triggering events instead of call methods on an object. Subapplications are focused on a specific domain area and should be decoupled from the root application and other subapplications. Figure 1.1 shows a component diagram of the application. As you can see, the root application depends on the routers of the subapplications due to the Backbone.history requirement to instantiate all the routers before calling the start method and the root application does this. Once Backbone.history is started, the browser's URL is processed and a route handler in a subapplication is triggered; this is the entry point for subapplications. Additionally, a default route can be defined in the root application for any route that is not handled on the subapplications. Figure 1.1: Logical organization of a Backbone application When you build Backbone applications in this way, you know exactly which object has the responsibility, so debugging and improving the application is easier. Remember, divide and conquer. Also by doing this, you make your code more testable, improving its robustness. 
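As a rough sketch of what this division of labor can look like in code, the following shows one possible start-up routine for a root application. The names App, AppLayout, InvoicesRouter, and ContactsRouter are invented for illustration and are not part of Backbone; the only Backbone pieces used here are Backbone.Events, the routers, and Backbone.history:

// app.js -- the root application provides infrastructure only; no business logic here
var App = {
  // Message bus used by subapplications to request services from the
  // root application without holding direct references to its objects
  notifications: _.extend({}, Backbone.Events),

  start: function() {
    // Instantiate every subapplication router before starting history,
    // so that all routes are registered when the URL is processed
    this.routers = [
      new InvoicesRouter(),  // invoices subapplication
      new ContactsRouter()   // contacts subapplication
    ];

    // Render the common layout (header, breadcrumbs, content region)
    this.layout = new AppLayout({el: 'body'});
    this.layout.render();

    // Process the browser's URL and dispatch to a subapplication route handler
    Backbone.history.start();
  }
};

$(function() { App.start(); });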
Responsibilities of the Backbone objects

One of the biggest issues with the Backbone documentation is that it gives no clues about how to use its objects. Developers have to figure out the responsibilities of each object across the application, and unless you already have some experience working with Backbone, this is not an easy task. The next sections describe the best uses for each Backbone object. In this way, you will have a clearer idea about the scope of responsibilities in Backbone, and this will be the starting point for designing our application architecture. Keep in mind that Backbone is a library of foundation objects, so you will need to bring your own objects and structure to make an awesome Backbone application.

Models

This is the place where general business logic lives. Specific business logic should be placed elsewhere. General business logic covers rules that are so general they can be used in multiple use cases, while specific business logic is a use case itself. Let's imagine a shopping cart. A model can be an item in the cart. The logic behind this model can include calculating the total by multiplying the unit price by the quantity, or setting a new quantity. Now assume that the shop has a business rule that a customer can buy the same product only three times. This is a specific business rule because it is specific to this business; how many stores do you know with this rule? Rules like this belong elsewhere and should be kept out of models. It's also a good idea to validate the model data before sending requests to the server. Backbone helps us with the validate method, so it's reasonable to put validation logic here too. Models often synchronize data with the server, so direct calls to the server, such as AJAX calls, should be encapsulated at the model level. Models are the most basic pieces of information and logic; keep this in mind.

Collections

Consider collections as data repositories, similar to a database. Collections are often used to fetch data from the server and render their contents as lists or tables. It's not usual to see business logic here. Resource servers have different ways of dealing with lists of resources. For instance, while some servers accept a skip parameter for pagination, others use a page parameter for the same purpose. Another case is responses: one server may respond with a plain array, while another prefers sending an object with a data, list, or some other key under which the array of objects is placed. There is no standard way. Collections can deal with these issues, making server requests transparent to the rest of the application.

Views

Views have the responsibility of handling the Document Object Model (DOM). Views work closely with the template engine, rendering the templates and putting the results into the DOM. They listen for low-level events using the jQuery API and transform them into domain events. Views abstract the user interactions, transforming the user's actions into data structures for the application. For example, clicking a save button in a form view will create a plain object with the information from the inputs and trigger a domain event such as save:contact with this object attached. A domain-specific object can then apply domain logic to the data and show a result. Business logic in views should be avoided, but basic form validations are allowed, such as accepting only numbers; complex validations should be done in the model.
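As a small illustration of this split, the following sketch shows a cart-item model that keeps only general logic and validation, and a collection that hides a server-specific response format behind parse; the attribute names, URL, and data key are assumptions for the example.

// General business logic and validation live on the model (names assumed).
var CartItem = Backbone.Model.extend({
  defaults: {
    unitPrice: 0,
    quantity: 1
  },

  // General rule: total = unit price * quantity
  getTotal: function() {
    return this.get('unitPrice') * this.get('quantity');
  },

  // Validate the data before it is sent to the server
  validate: function(attrs) {
    if (attrs.quantity <= 0) {
      return 'quantity must be greater than zero';
    }
  }
});

// The collection hides server-specific details, such as a response
// wrapped in a "data" key, from the rest of the application.
var CartItemCollection = Backbone.Collection.extend({
  model: CartItem,
  url: '/api/cart-items',

  parse: function(response) {
    return response.data;
  }
});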
Routers

Routers have a simple responsibility: listening for URL changes in the browser and transforming them into a call to a handler. A router knows which handler to call for a given URL; it also decodes the URL parameters and passes them to the handlers. The root application bootstraps the infrastructure, but the routers decide which subapplication will be executed. In this way, routers are a kind of entry point.

Domain objects

It is possible to develop Backbone applications using only the Backbone objects described in the previous section, but for a medium-to-large application this is not sufficient. We need to introduce a new kind of object with well-delimited responsibilities that uses and coordinates the Backbone foundation objects.

Subapplication facade

This object is the public interface of a subapplication. Any interaction with the subapplication should be done through its methods; direct calls to the internal objects of the subapplication are discouraged. Typically, the methods on this controller are called from the router, but they can be called from anywhere. The main responsibility of this object is to simplify the subapplication internals, so its job is to fetch the data from the server through models or collections and, if an error occurs during the process, to show an error message to the user. Once the data is loaded into a model or collection, it creates a subapplication controller that knows which views should be rendered and has the handlers to deal with their events.

The subapplication facade transforms the URL request into a Backbone data object, shows the right error message if needed, creates a subapplication controller, and delegates control to it.

The subapplication controller or mediator

This object acts as an air traffic controller for the views, models, and collections. Given a Backbone data object, it instantiates and renders the appropriate views and then coordinates them. However, the coordination task is not easy in complex layouts. For loose coupling reasons, a view cannot call the methods or events of other views directly. Instead, a view triggers an event and the controller handles the event and orchestrates the views' behavior, if necessary. Note how the views are isolated, handling just their own portion of the DOM and triggering events when they need to communicate something. Business logic for simple use cases can be implemented here, but for more complex interactions another strategy is needed. This object implements the mediator pattern, allowing the other basic objects such as views and models to stay simple and loosely coupled.

The logic workflow

The application starts by bootstrapping the common components, then initializes all the routers available for the subapplications and starts Backbone.history; see Figure 1.2. After initialization, the URL in the browser triggers a route for a subapplication, then the route handler instantiates a subapplication facade object and calls the method that knows how to handle the request. The facade creates a Backbone data object, such as a collection, and fetches the data from the server by calling its fetch method. If an error is raised while fetching the data, the subapplication facade asks the root application to show the error, for example, a 500 Internal Server Error.
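As an illustration of this part of the workflow, the root application might react to the bus events roughly as follows; the loadingWidget and errorDialog helpers are assumptions, and the event names match the ones used by the facade code later in this article.

// A sketch of the root application reacting to subapplication events.
// loadingWidget and errorDialog are assumed helpers, not part of the article's code.
App.on('loading:start', function() {
  loadingWidget.show();
});

App.on('loading:stop', function() {
  loadingWidget.hide();
});

App.on('server:error', function(response) {
  if (response && response.status === 500) {
    errorDialog.show('Internal Server Error, please try again later');
  } else {
    errorDialog.show('Something went wrong, please try again');
  }
});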
Figure 1.2: Abstract architecture for subapplications

Once the data is in a model or collection, the subapplication facade instantiates the subapplication object that knows the business rules for the use case and passes the model or collection to it. That object then renders one or more views with the information from the model or collection and places the results in the DOM. The views listen for DOM events, for example, click, and transform them into higher-level events to be consumed by the application object. The subapplication object listens for events on models and views and coordinates them when an event is triggered. When the business rules are not too complex, they can be implemented on this application object, for example, deleting a model. Models and views can be kept in sync with the Backbone events or with a library for bindings such as Backbone.Stickit.

In the next sections, we will describe this process step by step with code examples for a better understanding of the concepts explained.

Route handling

The entry point for a subapplication is given by its routes, which ideally share the same namespace. For instance, a contacts subapplication can have these routes:

contacts: Lists all the available contacts
contacts/page/:page: Paginates the contacts collection
contacts/new: Shows a form to create a new contact
contacts/view/:id: Shows a contact given its ID
contacts/edit/:id: Shows a form to edit a contact

Note how all the routes start with the contacts prefix. It's a good practice to use the same prefix for all the subapplication routes. In this way, the user will know where he/she is in the application, and you will have a clean separation of responsibilities.

Use the same prefix for all URLs in one subapplication; avoid mixing routes with the other subapplications.

When the user points the browser to one of these routes, a route handler is triggered. The handler function parses the URL request and delegates the request to the subapplication object, as follows:

var ContactsRouter = Backbone.Router.extend({
  routes: {
    "contacts": "showContactList",
    "contacts/page/:page": "showContactList",
    "contacts/new": "createContact",
    "contacts/view/:id": "showContact",
    "contacts/edit/:id": "editContact"
  },

  showContactList: function(page) {
    page = page || 1;
    page = page > 0 ? page : 1;
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactList(page);
  },

  createContact: function() {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showNewContactForm();
  },

  showContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactById(contactId);
  },

  editContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactEditorById(contactId);
  }
});

The validation of the URL parameters should be done in the router, as shown in the showContactList method. Once the validation is done, ContactsRouter instantiates an application object, ContactsApp, which is a facade for the contacts subapplication; finally, ContactsRouter calls an API method to handle the user request. The router doesn't know anything about business logic; it just knows how to decode URL requests and which object to call in order to handle them. Here, the region object points to an existing DOM node and, by being passed to the application, tells it where the application should be rendered.
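The Region object used above is not part of Backbone itself; a minimal sketch of one, assuming it only needs to render a view into a container element and clean up the previous view, could look like this:

// A minimal Region sketch (assumed implementation, not part of Backbone):
// it renders a view inside a container element and closes the previous one.
var Region = function(options) {
  this.el = options.el;
};

_.extend(Region.prototype, {
  show: function(view) {
    this.closeCurrentView();
    this.currentView = view;
    view.render();
    Backbone.$(this.el).empty().append(view.el);
  },

  closeCurrentView: function() {
    if (this.currentView) {
      this.currentView.remove();
    }
  }
});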
The subapplication facade

A subapplication is composed of smaller pieces that handle specific use cases. In the case of the contacts app, a use case can be seeing a contact, creating a new contact, or editing a contact. The implementation of these use cases is separated into different objects that handle the views, events, and business logic for a specific use case. The facade basically fetches the data from the server, handles connection errors, and creates the objects needed for the use case, as shown here:

function ContactsApp(options) {
  this.region = options.region;

  this.showContactList = function(page) {
    App.trigger("loading:start");

    new ContactCollection().fetch({
      success: _.bind(function(collection, response, options) {
        this._showList(collection);
        App.trigger("loading:stop");
      }, this),
      error: function(collection, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showList = function(contacts) {
    var contactList = new ContactList({region: this.region});
    contactList.showList(contacts);
  };

  this.showNewContactForm = function() {
    this._showEditor(new Contact());
  };

  this.showContactEditorById = function(contactId) {
    App.trigger("loading:start");

    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showEditor(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showEditor = function(contact) {
    var contactEditor = new ContactEditor({region: this.region});
    contactEditor.showEditor(contact);
  };

  this.showContactById = function(contactId) {
    App.trigger("loading:start");

    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showViewer(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showViewer = function(contact) {
    var contactViewer = new ContactViewer({region: this.region});
    contactViewer.showContact(contact);
  };
}

The simplest handler is showNewContactForm, which is called when the user wants to create a new contact. It creates a new Contact object and passes it to the _showEditor method, which will render an editor for a blank Contact. The handler doesn't need to know how to do this because the ContactEditor application will do the job.

The other handlers follow the same pattern: they trigger an event so that the root application shows a loading widget while the data is fetched from the server. Once the server responds successfully, they call another method to handle the result. If an error occurs during the operation, they trigger an event so that the root application shows a friendly error to the user.

The handlers receive an object and create an application object that renders a set of views and handles the user interactions. The created object responds to the user's actions; imagine, for instance, the object handling a form to save a contact. When the user clicks on the save button, it will handle the save process, perhaps show a message such as "Are you sure you want to save the changes?", and take the right action.

The subapplication mediator

The responsibility of the subapplication mediator object is to render the required layout and views to be shown to the user. It knows which views need to be rendered and in which order, so it instantiates the views with the models if needed and puts the results in the DOM.
After rendering the necessary views, the mediator listens for user interactions as Backbone events triggered from the views; methods on the object handle the interaction as described in the use cases. The mediator pattern is applied to this object to coordinate efforts between the views. For example, imagine that we have a form with contact data. As the user types into the edit form, another view renders a preview business card for the contact; in this case, the form view triggers change events to the application object, and the application object tells the business card view to use the new set of data each time. As you can see, the views are decoupled, and this is the objective of the application object.

The following snippet shows the application that displays a list of contacts. It creates a ContactListView, which knows how to render a collection of contacts, and passes it the contacts collection to be rendered:

var ContactList = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showList = function(contacts) {
    var contactList = new ContactListView({
      collection: contacts
    });
    this.region.show(contactList);
    this.listenTo(contactList, "item:contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm('Are you sure?')) {
      contact.collection.remove(contact);
    }
  };

  this.close = function() {
    this.stopListening();
  };
}

The ContactListView view is responsible for transforming this into DOM nodes and responding to collection events such as adding a new contact or removing one. Once the view is initialized, it is rendered in the previously specified region. When the view is finally in the DOM, the application listens for the "item:contact:delete" event, which will be triggered if the user clicks on the delete button rendered for each contact.

To see a contact, a ContactViewer application is responsible for managing the use case, as follows:

var ContactViewer = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showContact = function(contact) {
    var contactView = new ContactView({model: contact});
    this.region.show(contactView);
    this.listenTo(contactView, "contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm("Are you sure?")) {
      contact.destroy({
        success: function() {
          App.router.navigate("/contacts", true);
        },
        error: function() {
          alert("Something went wrong");
        }
      });
    }
  };
}

It's the same situation as the contact list: the application object creates a view that manages the DOM interactions, renders it in the specified region, and listens for events. From the details view of a contact, the user can delete it. Similar to the list, a _deleteContact method handles the event, but the difference is that when a contact is deleted, the application redirects to the list of contacts, which is the expected behavior. You can see how the handler uses the root application infrastructure by calling the navigate method of the global App.router. The handlers for the forms to create or edit contacts are very similar, so the same ContactEditor can be used for both cases.
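Before moving on to the ContactEditor, here is a rough sketch of the kind of view these mediators listen to; the implementation details of views are out of scope in this article, so the markup and template choices below are assumptions. The view renders one list item per contact and translates a click on a delete button into the item:contact:delete domain event handled by ContactList above.

// A rough sketch of ContactListView (markup and rendering strategy assumed).
var ContactListView = Backbone.View.extend({
  tagName: 'ul',

  events: {
    'click .delete-contact': 'onDeleteClick'
  },

  initialize: function() {
    // Re-render whenever the collection changes
    this.listenTo(this.collection, 'add remove reset', this.render);
  },

  render: function() {
    var html = this.collection.map(function(contact, index) {
      return '<li>' + contact.get('name') +
        ' <button class="delete-contact" data-index="' + index + '">delete</button></li>';
    }).join('');
    this.$el.html(html);
    return this;
  },

  onDeleteClick: function(event) {
    var index = Backbone.$(event.currentTarget).data('index');
    var contact = this.collection.at(index);
    // Translate the low-level DOM event into a domain event
    this.trigger('item:contact:delete', contact);
  }
});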
The ContactEditor object shows a form to the user and waits for the save action, as shown in the following code:

var ContactEditor = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showEditor = function(contact) {
    var contactForm = new ContactForm({model: contact});
    this.region.show(contactForm);
    this.listenTo(contactForm, "contact:save", this._saveContact);
  };

  this._saveContact = function(contact) {
    // save(attrs, options): pass null attrs so the callbacks land in options
    contact.save(null, {
      success: function() {
        alert("Successfully saved");
        App.router.navigate("/contacts");
      },
      error: function() {
        alert("Something went wrong");
      }
    });
  };
}

In this case, the model can have modifications to its data. In simple layouts, the views and the model can work nicely with model-view data bindings, so no extra code is needed; here, we will assume that the model is updated as the user types information into the form, for example, with Backbone.Stickit.

When the save button is clicked, a "contact:save" event is triggered and the application responds with the _saveContact method. See how the method issues a save call to the standard Backbone model and waits for the result. On successful requests, a message is displayed and the user is redirected to the contact list. On errors, a message tells the user that the application found a problem while saving the contact. The implementation details of the views are outside the scope of this article, but you can infer the work done by this object from the snippets in this section.

Summary

In this article, we started by describing in a general way how a Backbone application works. We described two main parts: a root application and subapplications. A root application provides common infrastructure to the other smaller and focused applications that we call subapplications. Subapplications are loosely coupled with the other subapplications and should own their resources, such as views, controllers, routers, and so on. A subapplication manages a small part of the system and no more. Communication between the subapplications and the root application happens through an event-driven bus, such as Backbone.Events or Backbone.Radio. The user interacts with the application through views that a subapplication renders. A subapplication mediator orchestrates the interaction between the views, models, and collections. It also handles the business logic, such as saving or deleting a resource.

Resources for Article:

Further resources on this subject:

Object-Oriented JavaScript with Backbone Classes [article]
Building a Simple Blog [article]
Marionette View Types and Their Use [article]