How-To Tutorials

Improve mobile rank by reducing file size, Part 2

Tobiah Marks
22 May 2015
5 min read
In part 1 of this series, I explained how file size can affect your rank on mobile app stores. In this part, I will offer a few suggestions to keep the file size of your games down.

How can I reduce file size?

A difference of even just 10 MB could prevent thousands of uninstalls over time. So, take the time to audit your game's assets before shipping it. Can you reduce the file size? It is worth the extra time and effort. Here are some ideas to help reduce the size of your app.

Design your assets with file size in mind

When designing your game, keep in mind which unique assets are needed, what can be generated on the fly, and what doesn't need to be there at all. That fancy menu border might look great in the concept drawing, but would a simple beveled edge look almost as nice? If so, you'll end up using far fewer texture files, not to mention reducing the work for your artist. Whenever it would look OK, use a repeatable texture rather than a larger image. When you have to use a larger asset, ask yourself if you can break it up into smaller elements. Breaking up images into multiple files has other advantages, too. For example, it could allow you to add parallax scrolling effects to create the perception of depth.

Generate assets dynamically

It makes sense that you would have different colored buttons in different parts of a game, but do you need a separate image file for each one? Could you instead have a grey "template" button and recolor it programmatically? Background music can also be a huge hog of disk space. Yet, you don't want the same 30-second loop to repeat over and over and drive players crazy. Try layering your music! Have various 30 to 60 second "base" loops (for example, bass and drums) and then randomly layer 15 to 90 second "tunes" (for example, a guitar, sax, or other melody) on top. That way, the player will hear a randomly generated "song" each time they play. The song may have repeating elements, but the unique way it's strung together will be good enough to keep the player from getting bored.

Compress your assets

Use the compression format that makes the most sense. JPGs are great for heavy compression, although they are notorious for artifacting. PNGs are great for sprites, as they allow transparency. Make note of whether you're using PNG-8 or PNG-24. PNG-8 allows for up to 256 different colors, while PNG-24 supports up to 16 million. Do you really need all 16 million colors, or can you make your asset look nice using only 256? It isn't wrong to use PNG-24, or even PNG-32 if you need per-pixel alpha transparency. Just make sure you aren't using them when a more compressed version would look just as nice. Also, remember to crush them (for example, with a tool such as pngcrush).

Remove junk code

It seems like every advertiser out there wants you to integrate their SDK. "Get set up in five minutes!" they'll claim. That may be true, but often you aren't using all the features they offer. You may only end up using one aspect of their framework. Take the time to go through their SDK and look at what you really need. Can it be simplified? Can whole files and assets be removed if you're not using them? It's not uncommon for companies to bundle in lots of extras you don't need. If you can, cut the fat and remove the parts of the SDK you aren't using. Also, consider using an ad mediation solution to reduce the number of advertiser SDKs you need to import.

Remove temporary files

If your game downloads or generates any files, keep close track of them. When you don't need them anymore, clean them up!
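As an illustration of that last point, here is a minimal, hypothetical cleanup pass for a JavaScript-based game that caches downloaded files; the cache location and the seven-day retention period are assumptions for the example, not recommendations from this article:

    // cleanup.js - prune cached files older than seven days (illustrative only)
    var fs = require('fs');
    var path = require('path');

    var cacheDir = path.join(__dirname, 'cache'); // hypothetical cache location
    var maxAgeMs = 7 * 24 * 60 * 60 * 1000;       // assumed retention period

    fs.readdir(cacheDir, function (err, files) {
      if (err) { return console.error('Could not read cache directory:', err); }
      files.forEach(function (name) {
        var file = path.join(cacheDir, name);
        fs.stat(file, function (err, stats) {
          if (err || !stats.isFile()) { return; }
          if (Date.now() - stats.mtime.getTime() > maxAgeMs) {
            fs.unlink(file, function () {}); // remove the stale file, ignoring errors
          }
        });
      });
    });

The same idea applies whatever your engine or platform: know what you write to disk, and delete it when it has served its purpose.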
During development, you will constantly install, uninstall, and reinstall your game, so you may not notice the rate at which certain files grow over time. In the real world, players will likely only install your app once per device they use. You don't want your game to become bloated by accident.

What if I can't reduce my size?

This post isn't a one-stop solution that will solve all of your App Store Optimization problems. My goal is to make you think about your file size during development, and to encourage you to make a meaningful effort to reduce it. Gamers can be forgiving for certain types of games, but only if the size is warranted by impressive graphics or hours of content. Even then, the bottom line is that the larger your game is, the more likely players are to uninstall it over time.

Next Steps

I hope this two-part series inspired you to think about different ways you can optimize your app store rank without just throwing money at the problem. If you liked it, didn't like it, or had any questions or comments, please feel free to reach out to me directly! My website and contact information are located below.

About the author

Right after graduating college in 2009, Tobiah Marks started his own independent game development company called "Yobonja" with a couple of friends. They made dozens of games, the most popular of which is a physics-based puzzle game called "Blast Monkeys". The game was the #1 app on the Android Marketplace for over six months. Tobiah stopped tracking downloads in 2012 after the game passed 12 million, and people still play it and its sequel today. In 2013, Tobiah went from full-time to part-time indie when he got the opportunity to join Microsoft as a Game Evangelist. His job now is to talk to developers, teach them how to develop better games, and help their companies be more successful. You can follow him on Twitter @TobiahMarks, read his blog at http://www.tobiahmarks.com/, or listen to his podcast Be Indie Now, where he interviews other independent game developers.

Node.js Fundamentals

Packt
22 May 2015
17 min read
This article is written by Krasimir Tsonev, the author of Node.js By Example. Node.js is one of the most popular JavaScript-driven technologies nowadays. It was created in 2009 by Ryan Dahl, and since then the framework has evolved into a well-developed ecosystem. Its package manager is full of useful modules, and developers around the world have started using Node.js in their production environments. In this article, we will learn about the following:

- Node.js building blocks
- The main capabilities of the environment
- The package management of Node.js

Understanding the Node.js architecture

Back in the day, Ryan was interested in developing network applications. He found out that most high-performance servers followed similar concepts. Their architecture was similar to that of an event loop, and they worked with nonblocking input/output operations. These operations permit other processing activities to continue before an ongoing task has finished. This characteristic is very important if we want to handle thousands of simultaneous requests. Most servers written in Java or C use multithreading: they process every request in a new thread. Ryan decided to try something different: a single-threaded architecture. In other words, all the requests that come to the server are processed by a single thread. This may sound like a nonscalable solution, but Node.js is definitely scalable. We just have to run different Node.js processes and use a load balancer that distributes the requests between them.

Ryan needed something that was event-loop-based and fast. As he pointed out in one of his presentations, big companies such as Google, Apple, and Microsoft invest a lot of time in developing high-performance JavaScript engines, which have become faster every year and on top of which an event-loop architecture can be implemented. JavaScript has also become really popular in recent years, and the community and the hundreds of thousands of developers who are ready to contribute made Ryan think about using JavaScript. Here is a diagram of the Node.js architecture:

In general, Node.js is made up of three things:

- V8, Google's JavaScript engine, which is used in the Chrome web browser (https://developers.google.com/v8/)
- A thread pool, the part that handles the file input/output operations; all the blocking system calls are executed here (http://software.schmorp.de/pkg/libeio.html)
- The event loop library (http://software.schmorp.de/pkg/libev.html)

On top of these three blocks, we have several bindings that expose low-level interfaces. The rest of Node.js is written in JavaScript. Almost all the APIs that we see as built-in modules, and which are present in the documentation, are written in JavaScript.

Installing Node.js

A fast and easy way to install Node.js is to visit https://nodejs.org/ and download the appropriate installer for your operating system. For OS X and Windows users, the installer provides a nice, easy-to-use interface. For developers who use Linux as an operating system, Node.js is available in the APT package manager. The following commands will set up Node.js and the Node Package Manager (NPM):

    sudo apt-get update
    sudo apt-get install nodejs
    sudo apt-get install npm

Running a Node.js server

Node.js is a command-line tool. After installing it, the node command will be available in our terminal. The node command accepts several arguments, but the most important one is the file that contains our JavaScript.
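Before writing any code, you can check that the command-line tools are on your path by asking them for their versions (a quick sanity check; the numbers you see will depend on the release you installed):

    $ node --version
    $ npm --version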
Let's create a file called server.js and put the following code inside:

    var http = require('http');
    http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello World\n');
    }).listen(9000, '127.0.0.1');
    console.log('Server running at http://127.0.0.1:9000/');

If you run node ./server.js in your console, you will have the Node.js server running. It listens for incoming requests at localhost (127.0.0.1) on port 9000. The very first line of the preceding code requires the built-in http module. In Node.js, we have the require global function that provides the mechanism to use external modules. We will see how to define our own modules in a bit. After that, the script continues with the createServer and listen methods on the http module. In this case, the API of the module is designed in such a way that we can chain these two methods, as in jQuery. The first one (createServer) accepts a function, also known as a callback, which is called every time a new request comes to the server. The second one makes the server listen. The result that we will get in a browser is as follows:

Defining and using modules

JavaScript as a language does not have mechanisms to define real classes. In fact, everything in JavaScript is an object, and we normally inherit properties and functions from one object to another. Thankfully, Node.js adopts the concepts defined by CommonJS, a project that specifies an ecosystem for JavaScript. We encapsulate logic in modules, and every module is defined in its own file. Let's illustrate how everything works with a simple example. Let's say that we have a module that represents this book and we save it in a file called book.js:

    // book.js
    exports.name = 'Node.js by example';
    exports.read = function() {
      console.log('I am reading ' + exports.name);
    }

We defined a public property and a public function. Now, we will create another file named script.js and use require to access them:

    // script.js
    var book = require('./book.js');
    console.log('Name: ' + book.name);
    book.read();

To test our code, we will run node ./script.js. The result in the terminal looks like this:

Along with exports, we also have module.exports available. There is a difference between the two. Look at the following pseudocode; it illustrates how Node.js constructs our modules:

    var module = { exports: {} };
    var exports = module.exports;
    // our code
    return module.exports;

So, in the end, module.exports is returned, and this is what require produces. We should be careful, because if at some point we assign a value directly to exports or module.exports, we may not receive what we need. For example, at the end of the following snippet, we set a function as a value, and only that function is exposed to the outside world:

    exports.name = 'Node.js by example';
    exports.read = function() {
      console.log('I am reading ' + exports.name);
    }
    module.exports = function() { ... }

In this case, we do not have access to .name and .read. If we try to execute node ./script.js again, we will get the following output:

To avoid such issues, we should stick to one of the two options (exports or module.exports) and make sure that we do not mix both. We should also keep in mind that, by default, require caches the object that is returned. So, if we need two different instances, we should export a function.
Here is a version of the book class that provides API methods to rate the book, and that does not work properly:

    // book.js
    var ratePoints = 0;
    exports.rate = function(points) {
      ratePoints = points;
    }
    exports.getPoints = function() {
      return ratePoints;
    }

Let's create two instances and rate the books with different points values:

    // script.js
    var bookA = require('./book.js');
    var bookB = require('./book.js');
    bookA.rate(10);
    bookB.rate(20);
    console.log(bookA.getPoints(), bookB.getPoints());

The logical response should be 10 20, but we get 20 20. This is why it is a common practice to export a function that produces a different object every time:

    // book.js
    module.exports = function() {
      var ratePoints = 0;
      return {
        rate: function(points) {
          ratePoints = points;
        },
        getPoints: function() {
          return ratePoints;
        }
      }
    }

Now, we should also call require('./book.js')(), because require returns a function and not an object anymore.

Managing and distributing packages

Once we understand the idea of require and exports, we should start thinking about grouping our logic into building blocks. In the Node.js world, these blocks are called modules (or packages). One of the reasons behind the popularity of Node.js is its package management. Node.js normally comes with two executables: node and npm. NPM is a command-line tool that downloads and uploads Node.js packages. The official site, https://www.npmjs.com/, acts as a central registry. When we create a package via the npm command, we store it there so that every other developer may use it.

Creating a module

Every module should live in its own directory, which also contains a metadata file called package.json. In this file, we set at least two properties: name and version:

    {
      "name": "my-awesome-nodejs-module",
      "version": "0.0.1"
    }

We can place whatever code we like in the same directory. Once we publish the module to the NPM registry and someone installs it, he/she will get the same files. For example, let's add an index.js file so that we have two files in the package:

    // index.js
    console.log('Hello, this is my awesome Node.js module!');

Our module does only one thing: it displays a simple message to the console. Now, to upload the module, we need to navigate to the directory containing the package.json file and execute npm publish. This is the result that we should see:

We are ready. Now our little module is listed in the Node.js package manager's site, and everyone is able to download it.

Using modules

In general, there are three ways to use modules that are already created, and all three involve the package manager. We may install a specific module manually. Let's say that we have a folder called project. We open the folder and run the following:

    npm install my-awesome-nodejs-module

The manager automatically downloads the latest version of the module and puts it in a folder called node_modules. If we want to use it, we do not need to reference the exact path. By default, Node.js checks the node_modules folder before requiring something, so just require('my-awesome-nodejs-module') will be enough.

The installation of modules globally is a common practice, especially when we talk about command-line tools made with Node.js; it has become an easy-to-use technology for developing such tools. The little module that we created is not made as a command-line program, but we can still install it globally by running the following:

    npm install my-awesome-nodejs-module -g

Note the -g flag at the end.
This is how we tell the manager that we want this module to be a global one. When the process finishes, we do not have a node_modules directory; the my-awesome-nodejs-module folder is stored in another place on our system. To be able to use it, we have to add another property to package.json, but we'll talk more about this in the next section.

The resolving of dependencies is one of the key features of the package manager of Node.js. Every module can have as many dependencies as you want. These dependencies are nothing but other Node.js modules that were uploaded to the registry. All we have to do is list the needed packages in the package.json file:

    {
      "name": "another-module",
      "version": "0.0.1",
      "dependencies": {
        "my-awesome-nodejs-module": "0.0.1"
      }
    }

Now we don't have to specify the module explicitly; we can simply execute npm install to install our dependencies. The manager reads the package.json file and saves our module again in the node_modules directory. It is good to use this technique because we may add several dependencies and install them at once. It also makes our module transferable and self-documented: there is no need to explain to other programmers what our module is made up of.

Updating our module

Let's transform our module into a command-line tool. Once we do this, users will have a my-awesome-nodejs-module command available in their terminals. There are two changes in the package.json file that we have to make:

    {
      "name": "my-awesome-nodejs-module",
      "version": "0.0.2",
      "bin": "index.js"
    }

A new bin property is added. It points to the entry point of our application. We have a really simple example and only one file: index.js. The other change that we have to make is to update the version property. In Node.js, the version of the module plays an important role. If we look back, we will see that while describing dependencies in the package.json file, we pointed out the exact version. This ensures that in the future, we will get the same module with the same APIs. Every number in the version property means something. The package manager uses Semantic Versioning 2.0.0 (http://semver.org/). Its format is MAJOR.MINOR.PATCH. So, we as developers should increment the following:

- MAJOR number if we make incompatible API changes
- MINOR number if we add new functions/features in a backwards-compatible manner
- PATCH number if we have bug fixes

Sometimes, we may see a version like 2.12.*. This means that the developer is interested in using the exact MAJOR and MINOR versions, but he/she agrees that there may be bug fixes in the future. It's also possible to use values like >=1.2.7 to match any equal-or-greater version, for example, 1.2.7, 1.2.8, or 2.5.3.

We updated our package.json file. The next step is to send the changes to the registry. This can be done again with npm publish in the directory that holds the JSON file. The result will be similar; we will see the new 0.0.2 version number on the screen:

Just after this, we may run npm install my-awesome-nodejs-module -g, and the new version of the module will be installed on our machine. The difference is that now we have the my-awesome-nodejs-module command available, and if you run it, it displays the message written in the index.js file:

Introducing built-in modules

Node.js is considered a technology that you can use to write backend applications. As such, we need to perform various tasks. Thankfully, we have a bunch of helpful built-in modules at our disposal.
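Built-in modules are required by name, with no npm install step and no relative path. As a quick, minimal illustration (this sketch uses the core os and path modules; the exact output depends on your machine):

    var os = require('os');
    var path = require('path');

    console.log('Platform: ' + os.platform());           // e.g. 'linux', 'darwin', or 'win32'
    console.log('Free memory: ' + os.freemem() + ' B');   // free system memory in bytes
    console.log(path.join('folder', 'file.txt'));         // joins path segments for the current OS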
Creating a server with the HTTP module

We already used the HTTP module. It's perhaps the most important one for web development, because it starts a server that listens on a particular port:

    var http = require('http');
    http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello World\n');
    }).listen(9000, '127.0.0.1');
    console.log('Server running at http://127.0.0.1:9000/');

We have a createServer method that returns a new web server object. In most cases, we run the listen method. If needed, there is close, which stops the server from accepting new connections. The callback function that we pass always accepts the request (req) and response (res) objects. We can use the first one to retrieve information about the incoming request, such as GET or POST parameters.

Reading and writing to files

The module that is responsible for the read and write processes is called fs (the name is derived from filesystem). Here is a simple example that illustrates how to write data to a file:

    var fs = require('fs');
    fs.writeFile('data.txt', 'Hello world!', function (err) {
      if (err) { throw err; }
      console.log('It is saved!');
    });

Most of the API functions have synchronous versions. The preceding script could be written with writeFileSync, as follows:

    fs.writeFileSync('data.txt', 'Hello world!');

However, the usage of the synchronous versions of the functions in this module blocks the event loop. This means that while operating with the filesystem, our JavaScript code is paused. Therefore, it is a best practice with Node to use the asynchronous versions of methods wherever possible. Reading a file is almost the same. We should use the readFile method in the following way:

    fs.readFile('data.txt', function(err, data) {
      if (err) throw err;
      console.log(data.toString());
    });

Working with events

The observer design pattern is widely used in the world of JavaScript. This is where the objects in our system subscribe to the changes happening in other objects. Node.js has a built-in module to manage events. Here is a simple example:

    var events = require('events');
    var eventEmitter = new events.EventEmitter();
    var somethingHappen = function() {
      console.log('Something happen!');
    }
    eventEmitter
      .on('something-happen', somethingHappen)
      .emit('something-happen');

The eventEmitter object is the object that we subscribe to, which we did with the help of the on method. The emit function fires the event, and the somethingHappen handler is executed.

The events module provides the necessary functionality, but we need to use it in our own classes. Let's take the book idea from the previous section and make it work with events. Once someone rates the book, we will dispatch an event in the following manner:

    // book.js
    var util = require("util");
    var events = require("events");

    var Class = function() { };
    util.inherits(Class, events.EventEmitter);

    Class.prototype.ratePoints = 0;
    Class.prototype.rate = function(points) {
      ratePoints = points;
      this.emit('rated');
    };
    Class.prototype.getPoints = function() {
      return ratePoints;
    }
    module.exports = Class;

We want to inherit the behavior of the EventEmitter object. The easiest way to achieve this in Node.js is by using the utility module (util) and its inherits method. The defined class can be used like this:

    var BookClass = require('./book.js');
    var book = new BookClass();
    book.on('rated', function() {
      console.log('Rated with ' + book.getPoints());
    });
    book.rate(10);

We again used the on method to subscribe to the rated event.
The book class dispatches that event once we set the points, and the terminal then shows the Rated with 10 text.

Managing child processes

There are some things that we can't do with Node.js directly; for those, we need to use external programs. The good news is that we can execute shell commands from within a Node.js script. For example, let's say that we want to list the files in the current directory. The filesystem APIs do provide methods for that, but it would be nice if we could get the output of the ls command:

    // exec.js
    var exec = require('child_process').exec;
    exec('ls -l', function(error, stdout, stderr) {
      console.log('stdout: ' + stdout);
      console.log('stderr: ' + stderr);
      if (error !== null) {
        console.log('exec error: ' + error);
      }
    });

The module that we used is called child_process. Its exec method accepts the desired command as a string, plus a callback. The stdout item is the output of the command. If we want to process the errors (if any), we may use the error object or the stderr buffer data. The preceding code produces the following screenshot:

Along with the exec method, we have spawn. It's a bit different and really interesting. Imagine that we have a command that not only does its job, but also outputs results as it runs. For example, git push may take a few seconds and may send messages to the console continuously. In such cases, spawn is a good variant because we get access to a stream:

    var spawn = require('child_process').spawn;
    var command = spawn('git', ['push', 'origin', 'master']);
    command.stdout.on('data', function (data) {
      console.log('stdout: ' + data);
    });
    command.stderr.on('data', function (data) {
      console.log('stderr: ' + data);
    });
    command.on('close', function (code) {
      console.log('child process exited with code ' + code);
    });

Here, stdout and stderr are streams. They dispatch events, and if we subscribe to those events, we will get the exact output of the command as it is produced. In the preceding example, we run git push origin master and send the full command responses to the console.

Summary

Node.js is used by many companies nowadays, which proves that it is mature enough to work in a production environment. In this article, we saw what the fundamentals of this technology are and covered some of the commonly used cases.

Resources for Article:

Further resources on this subject:

- AngularJS Project [article]
- Exploring streams [article]
- Getting Started with NW.js [article]

Getting Started with NW.js

Packt
21 May 2015
19 min read
In this article by Alessandro Benoit, author of the book NW.js Essentials, we will learn that, until a while ago, developing a desktop application compatible with the most common operating systems required an enormous amount of expertise, with different programming languages and logic for each platform.

Yet, for a while now, the evolution of web technologies has brought to our browsers many web applications that have little to envy in their desktop alternatives. Just think of Google apps such as Gmail and Calendar, which, for many, have definitely replaced the need for a local mail client. All of this has been made possible thanks to the amazing potential of the latest implementations of the browser Web API combined with the incredible flexibility and speed of the latest server technologies. Although we live in a world increasingly interconnected and dependent on the Internet, there is still a need to develop desktop applications, for a number of reasons:

- To overcome the lack of vertical applications based on web technologies
- To implement software solutions where data security is essential and cannot be compromised by exposing data on the Internet
- To make up for any lack of connectivity, even temporary
- Simply because operating systems are still locally installed

Once it's established that we cannot completely get rid of desktop applications and that their implementation on different platforms requires an often prohibitive learning curve, it comes naturally to ask: why not make desktop applications out of the very same technologies used in web development? The answer, or at least one of the answers, is NW.js!

NW.js doesn't need any introduction. With more than 20,000 stars on GitHub (in the top four hottest C++ projects of the repository-hosting service), NW.js is definitely one of the most promising projects for creating desktop applications with web technologies. Paraphrasing the description on GitHub, NW.js is a web app runtime that allows the browser DOM to access Node.js modules directly. Node.js is responsible for hardware and operating system interaction, while the browser serves the graphic interface and implements all the functionality typical of web applications. Clearly, the use of the two technologies may overlap; for example, if we were to make an asynchronous call to the API of an online service, we could use either a Node.js HTTP client or an XMLHttpRequest Ajax call inside the browser (a short sketch of both approaches follows at the end of this introduction). Without going into technical details, in order to create desktop applications with NW.js, all you need is a decent understanding of Node.js and some expertise in developing HTML5 web apps.

In this article, we are going to dissect the topic, dwelling on these points:

- A brief technical digression on how NW.js works
- An analysis of the pros and cons in order to determine use scenarios
- Downloading and installing NW.js
- Development tools
- Making your first, simple "Hello World" application

Important notes about NW.js (also known as Node-Webkit) and io.js

Before January 2015, since the project was born, NW.js was known as Node-Webkit. Moreover, with Node.js lagging behind on V8 JavaScript engine updates, NW.js has, from version 0.12.0, been based not on Node.js but on io.js, an npm-compatible platform originally based on Node.js. For the sake of simplicity, we will keep referring to Node.js even when talking about io.js, as long as this does not affect a proper comprehension of the subject.
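To make the point about overlapping capabilities concrete, here is a minimal, hypothetical sketch of the two ways an NW.js page could call the same web API; the URL is a placeholder and error handling is omitted for brevity:

    // Option 1: the Node.js HTTP client, available because Node runs inside the page
    var http = require('http');
    http.get('http://example.com/api/status', function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () { console.log('Node client received: ' + body); });
    });

    // Option 2: the browser's own XMLHttpRequest, exactly as in a regular web page
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://example.com/api/status');
    xhr.onload = function () { console.log('XHR received: ' + xhr.responseText); };
    xhr.send();

Which one you pick is mostly a matter of taste and of where the surrounding code already lives; both run in the same window context in NW.js.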
NW.js under the hood

As we stated in the introduction, NW.js, made by Roger Wang of Intel's Open Source Technology Center (Shanghai office) in 2011, is a web app runtime based on Node.js and the Chromium open source browser project. To understand how it works, we must first analyze its two components:

- Node.js is an efficient JavaScript runtime written in C++ and based on the V8 JavaScript engine developed by Google. Residing in the operating system's application layer, Node.js can access hardware, filesystems, and networking functionality, enabling its use in a wide range of fields, from the implementation of web servers to the creation of control software for robots. (As we stated in the introduction, NW.js has replaced Node.js with io.js from version 0.12.0.)
- WebKit is a layout engine that allows the rendering of web pages starting from the DOM, a tree of objects representing the web page. NW.js is actually not directly based on WebKit but on Blink, a fork of WebKit developed specifically for the Chromium open source browser project and based on the V8 JavaScript engine, as is the case with Node.js.

Since the browser, for security reasons, cannot access the application layer, and since Node.js lacks a graphical interface, Roger Wang had the insight of combining the two technologies by creating NW.js. The following is a simple diagram that shows how Node.js has been combined with WebKit in order to give NW.js applications access to both the GUI and the operating system:

In order to integrate the two systems, which, despite speaking the same language, are very different, a couple of tricks have been adopted. In the first place, since they are both event-driven (following a logic of action/reaction rather than a stream of operations), the event processing has been unified. Secondly, the Node context was injected into WebKit so that the page can access it. The amazing thing about it is that you'll be able to program all of your application's logic in JavaScript with no concerns about where Node.js ends and WebKit begins. Today, NW.js has reached version 0.12.0 and, although still young, is one of the most promising web app runtimes for developing desktop applications with web technologies.

Features and drawbacks of NW.js

Let's check some of the features that characterize NW.js:

- NW.js allows us to build modern desktop applications using HTML5, CSS3, JS, WebGL, and the full potential of Node.js, including the use of third-party modules
- The Native UI API allows you to implement native-lookalike applications with support for menus, clipboards, tray icons, and file binding
- Since Node.js and WebKit run within the same thread, NW.js has excellent performance
- With NW.js, it is incredibly easy to port existing web applications to desktop applications
- Thanks to the CLI and the presence of third-party tools, it's really easy to debug, package, and deploy applications on Microsoft Windows, Mac OS, and Linux

However, all that glitters is not gold. There are some cons to consider when developing an application with NW.js:

- Size of the application: Since a copy of NW.js (70-90 MB) must be distributed along with each application, applications end up quite heavy compared to native ones. Anyway, if you're concerned about download times, compressing NW.js for distribution will save you about half the size.
- Difficulties in distributing your application through the Mac App Store: The procedure will not be discussed in this article (a quick web search will turn it up), but even though it is rather complex, you can distribute your NW.js application through the Mac App Store. At the moment, it is not possible to deploy an NW.js application on the Windows Store due to the different architecture of .appx applications.
- Missing support for iOS or Android: Unlike other SDKs and libraries, at the moment it is not possible to deploy an NW.js application on iOS or Android, and it does not seem that this will become possible in the near future. However, consider the portability of the HTML, JavaScript, and CSS code, which can be distributed on other platforms with tools such as PhoneGap or TideSDK. Unfortunately, this is not true for the features implemented using Node.js.
- Stability: Finally, the platform is still quite young and not bug-free.

NW.js – usage scenarios

The flexibility and good performance of NW.js allow its use in countless scenarios but, for convenience, I'm going to report only a few notable ones:

- Development tools
- Implementation of a GUI around existing CLI tools
- Multimedia applications
- Web service clients
- Video games

The choice of development platform for a new project clearly depends only on the developer; for the sake of comparison, it may be useful to consider some specific scenarios where the use of NW.js might not be recommended:

- When developing for a specific platform where graphic coherence is essential and, perhaps, it is necessary to distribute the application through a store
- If the performance factor limits the use of the preceding technologies
- If the application makes massive use of the features provided by the application layer via Node.js and it has to be distributed to mobile devices

Popular NW.js applications

After summarizing the pros and cons of NW.js, let's not forget the real strength of the platform: the many applications built on top of NW.js that have already been distributed. We list a few that are worth noting:

- Wunderlist for Windows 7: This is a to-do list / schedule management app used by millions
- Cellist: This is an HTTP debugging proxy available on the Mac App Store
- Game Dev Tycoon: This is one of the first NW.js games, which puts you in the shoes of a 1980s game developer
- Intel® XDK: This is an HTML5 cross-platform solution that enables developers to write web and hybrid apps

Downloading and installing NW.js

Installing NW.js is pretty simple, but there are many ways to do it. One of the easiest ways is probably to run npm install nw from your terminal, but for educational purposes, we're going to download and install it manually in order to properly understand how it works. You can find all the download links on the project website at http://nwjs.io/ or in the Downloads section of the GitHub project page at https://github.com/nwjs/nw.js/; from here, download the package that fits your operating system. For example, as I'm writing this article, NW.js is at version 0.12.0, and my operating system is Mac OS X Yosemite 10.10 running on a 64-bit MacBook Pro, so I'm going to download the nwjs-v0.12.0-osx-x64.zip file. Packages for Mac and Windows are zipped, while those for Linux are in the tar.gz format. Decompress the files and proceed, depending on your operating system, as follows.
Installing NW.js on Mac OS X

Inside the archive, we're going to find three files:

- Credits.html: This contains the credits and licenses of all the dependencies of NW.js
- nwjs.app: This is the actual NW.js executable
- nwjc: This is a CLI tool used to compile your source code in order to protect it

Before v0.12.0, the filename of nwjc was nwsnapshot. Currently, the only file that interests us is nwjs.app (the extension might not be displayed depending on the OS configuration). All we have to do is copy this file into the /Applications folder, your main applications folder. If you'd rather install NW.js using Homebrew Cask, you can simply enter the following command in your terminal:

    $ brew cask install nw

If you are using Homebrew Cask to install NW.js, keep in mind that the Cask repository might not be up to date and that the nwjs.app file will be copied into ~/Applications, while a symlink will be created in the /Applications folder.

Installing NW.js on Microsoft Windows

Inside the Microsoft Windows NW.js package, we will find the following files:

- credits.html: This contains the credits and licenses of all NW.js dependencies
- d3dcompiler_47.dll: This is the Direct3D library
- ffmpegsumo.dll: This is a media library to be included in order to use the <video> and <audio> tags
- icudtl.dat: This is an important internationalization (ICU) data file
- libEGL.dll: This is used for WebGL and GPU acceleration
- libGLESv2.dll: This is used for WebGL and GPU acceleration
- locales/: This is the languages folder
- nw.exe: This is the actual NW.js executable
- nw.pak: This is an important JS library
- pdf.dll: This library is used by the web engine for printing
- nwjc.exe: This is a CLI tool to compile your source code in order to protect it

Some of the files in the folder will be omitted during the final distribution of our application, but for development purposes, we are simply going to copy the whole content of the folder to C:/Tools/nwjs.

Installing NW.js on Linux

On Linux, the procedure can be more complex depending on the distribution you use. First, copy the downloaded archive into your home folder if you have not already done so, and then open the terminal and type the following command to unpack the archive (change the version according to the one downloaded):

    $ gzip -dc nwjs-v0.12.0-linux-x64.tar.gz | tar xf -

Now, rename the newly created folder to nwjs with the following command:

    $ mv ~/nwjs-v0.12.0-linux-x64 ~/nwjs

Inside the nwjs folder, we will find the following files:

- credits.html: This contains the credits and licenses of all the dependencies of NW.js
- icudtl.dat: This is an important internationalization (ICU) data file
- libffmpegsumo.so: This is a media library to be included in order to use the <video> and <audio> tags
- locales/: This is the languages folder
- nw: This is the actual NW.js executable
- nw.pak: This is an important JS library
- nwjc: This is a CLI tool to compile your source code in order to protect it

Open the folder inside the terminal and try to run NW.js by typing the following:

    $ cd nwjs
    $ ./nw

If you get the following error, you are probably using a version of Ubuntu later than 13.04, Fedora later than 18, or another Linux distribution that uses libudev.so.1 instead of libudev.so.0; otherwise, you're good to go to the next step:

    error while loading shared libraries: libudev.so.0: cannot open shared object file: No such file or directory

Until NW.js is updated to support libudev.so.1, there are several solutions to solve the problem.
For me, the easiest solution is to type the following terminal command inside the directory containing nw:

    $ sed -i 's/udev.so.0/udev.so.1/g' nw

This will replace the string related to libudev, within the application's binary, with the new version. The process may take a while, so wait for the terminal to return the cursor before attempting to enter the following:

    $ ./nw

Eventually, the NW.js window should open properly.

Development tools

As you'll make use of third-party Node.js modules, you're going to need npm in order to download and install all the dependencies; so, Node.js (http://nodejs.org/) or io.js (https://iojs.org/) must obviously be installed in your development environment. I know you cannot wait to write your first application, but before you start, I would like to introduce you to Sublime Text 2. It is a simple but sophisticated IDE which, thanks to its support for custom build scripts, allows you to run (and debug) NW.js applications from inside the editor itself. If I wasn't convincing and you'd rather keep using your favorite IDE, you can skip to the next section; otherwise, follow these steps to install and configure Sublime Text 2:

1. Download and install Sublime Text 2 for your platform from http://www.sublimetext.com/.
2. Open it and, from the top menu, navigate to Tools | Build System | New Build System.
3. A new edit screen will open; paste the following code depending on your platform.

On Mac OS X:

    {
      "cmd": ["nwjs", "--enable-logging", "${project_path:${file_path}}"],
      "working_dir": "${project_path:${file_path}}",
      "path": "/Applications/nwjs.app/Contents/MacOS/"
    }

On Microsoft Windows:

    {
      "cmd": ["nw.exe", "--enable-logging", "${project_path:${file_path}}"],
      "working_dir": "${project_path:${file_path}}",
      "path": "C:/Tools/nwjs/",
      "shell": true
    }

On Linux:

    {
      "cmd": ["nw", "--enable-logging", "${project_path:${file_path}}"],
      "working_dir": "${project_path:${file_path}}",
      "path": "/home/userName/nwjs/"
    }

4. Type Ctrl + S (Cmd + S on Mac) and save the file as nw-js.sublime-build.

Perfect! Now you are ready to run your applications directly from the IDE. There are a lot of packages, such as SublimeLinter, LiveReload, and Node.js code completion, available for Sublime Text 2. In order to install them, you have to install Package Control first. Just open https://sublime.wbond.net/installation and follow the instructions.

Writing and running your first "Hello World" app

Finally, we are ready to write our first simple application. We're going to revisit the usual "Hello World" application by making use of a Node.js module for markdown parsing.

"Markdown is a plain text formatting syntax designed so that it can be converted to HTML and many other formats using a tool by the same name." – Wikipedia

Let's create a Hello World folder and open it in Sublime Text 2 or in your favorite IDE. Now open a new package.json file and type in the following JSON code:

    {
      "name": "nw-hello-world",
      "main": "index.html",
      "dependencies": {
        "markdown": "0.5.x"
      }
    }

The package.json manifest file is essential for distribution, as it determines many of the window properties and primary information about the application. Moreover, during the development process, you'll be able to declare all of the dependencies. In this specific case, we assign the application name, the main file, and obviously our dependency, the markdown module, written by Dominic Baggott.
If you wish, you can create the package.json manifest file using the npm init command from the terminal, as you're probably used to doing when creating npm packages. Once you've saved the package.json file, create an index.html file that will be used as the main application file and type in the following code:

    <!DOCTYPE html>
    <html>
    <head>
      <title>Hello World!</title>
    </head>
    <body>
      <script>
        <!-- Here goes your code -->
      </script>
    </body>
    </html>

As you can see, it's a very common HTML5 boilerplate. Inside the script tag, let's add the following:

    var markdown = require("markdown").markdown,
        div = document.createElement("div"),
        content = "#Hello World!\n" +
          "We are using **io.js** " +
          "version *" + process.version + "*";

    div.innerHTML = markdown.toHTML(content);
    document.body.appendChild(div);

What we do here is require the markdown module and then parse the content variable through it. To keep it as simple as possible, I've used vanilla JavaScript to output the parsed HTML to the screen. You may have noticed that we are using process.version, a property that is part of the Node.js context. If you try to open index.html in a browser, you'll get the ReferenceError: require is not defined error, as Node.js has not been injected into the WebKit process. Once you have saved the index.html file, all that is left is to install the dependencies by running the following command from the terminal inside the project folder:

    $ npm install

And we are ready to run our first application!

Running NW.js applications on Sublime Text 2

If you opted for Sublime Text 2 and followed the procedure in the development tools section, simply navigate to Project | Save Project As and save the hello-world.sublime-project file inside the project folder. Now, in the top menu, navigate to Tools | Build System and select nw-js. Finally, press Ctrl + B (or Cmd + B on Mac) to run the program. If you have opted for a different IDE, just follow the upcoming steps depending on your operating system.

Running NW.js applications on Microsoft Windows

Open the command prompt and type:

    C:\> c:\Tools\nwjs\nw.exe c:\path\to\the\project

On Microsoft Windows, you can also drag the folder containing package.json onto nw.exe in order to open it.

Running NW.js applications on Mac OS

Open the terminal and type:

    $ /Applications/nwjs.app/Contents/MacOS/nwjs /path/to/the/project/

Or, if you are running the NW.js application from inside the directory containing package.json, type:

    $ /Applications/nwjs.app/Contents/MacOS/nwjs .

As you can see, on Mac OS X the NW.js executable binary is in a hidden directory within the .app file.

Running NW.js applications on Linux

Open the terminal and type:

    $ ~/nwjs/nw /path/to/the/project/

Or, if you are running the NW.js application from inside the directory containing package.json, type:

    $ ~/nwjs/nw .

When running the application, you may notice that a few errors are thrown depending on your platform. As I stated in the pros and cons section, NW.js is still young, so that's quite normal, and we're probably talking about minor issues. However, you can search the NW.js GitHub issues page in order to check whether they've already been reported; otherwise, open a new issue; your help would be much appreciated. Now, regardless of the operating system, a window similar to the following one should appear:

As illustrated, the process.version variable has been printed properly, as Node.js has correctly been injected and can be accessed from the DOM.
Perhaps the result is a little different from what you expected, since the top navigation bar of Chromium is visible. Do not worry! You can get rid of the navigation bar at any time simply by adding the window.toolbar = false parameter to the manifest file, but for now, it's important that the bar is visible in order to debug the application.

Summary

In this article, you discovered how NW.js works under the hood, the recommended tools for development, a few usage scenarios for the library, and eventually, how to run your first simple application using third-party Node.js modules. I really hope I haven't bored you too much with the theoretical concepts underlying the functioning of NW.js; I really did my best to keep it short.

JIRA – an Overview

Packt
20 May 2015
13 min read
In this article by Ravi Sagar, author of the book Mastering JIRA, we are introduced to JIRA and its concepts, applications, and uses. We are also introduced to JQL, project reports, and the tools used for those reports. Finally, we learn how to import a simple CSV file into JIRA.

Atlassian JIRA is a proprietary issue tracking system. It is used for tracking bugs and managing issues, and it provides project management functionality. There are many such tools available in the market, but the best thing about JIRA is that it can be configured easily and offers a wide range of customizations. Out of the box, JIRA offers defect/bug tracking functionality, but it can be customized to act as a helpdesk system, a simple test management suite, or a project management system with end-to-end traceability.

Applications, uses, and examples

The ability to customize JIRA is what makes it popular among the various companies that use it. There are various applications of JIRA, such as the following:

- Defect/bug tracking
- Change requests
- Helpdesk/support tickets
- Project management
- Test case management
- Requirements management
- Process management

Let's take a look at an implementation of test case management:

Issue Types:
- Test Campaign: This will be the standard issue type
- Test Case: This will be a sub-task

Workflow for Test Campaign:
- New states: Published, Under Execution
- Conditions: A Test Campaign will only pass when all of its Test Cases have passed; only the reporter can move the Test Campaign to the Closed state
- Post function: When the Test Campaign is closed, send an email to everyone in a particular group

Workflow for Test Case:
- New states: Blocked, Passed, Failed, In Review
- Condition: Only the assignee can move the Test Case to the Passed state
- Post function: When the Test Case is moved to the Failed state, change the issue priority to Major

Custom Fields: The following fields are needed (with each field's type, allowed values, and field configuration):

- Category: Select List
- Customer Name: Select List
- Steps to Reproduce: Text Area (Mandatory)
- Expected input: Text Area (Mandatory)
- Expected output: Text Area (Mandatory)
- Pre-Condition: Text Area
- Post-Condition: Text Area
- Campaign Type: Select List with the values Unit, Functional, Endurance, Benchmark, Robustness, Security, Backward Compatibility, and Certification with Baseline
- Automation Status: Select List with the values Automatic, Manual, and Partially Automatic

JIRA core concepts

Let us take a look at the architecture of JIRA in the following diagram; it will help you understand the core concepts.

- Project Categories: When there are too many projects in JIRA, it becomes important to segregate them into various categories. JIRA will let you create several categories that could represent the business units, clients, or teams in your company.
- Projects: A JIRA project is a collection of issues. Your team could use a JIRA project to coordinate the development of a product, track a project, manage a help desk, and more, depending on your requirements.
- Components: Components are sub-sections of a project. They are used to group issues within a project into smaller parts.
- Versions: Versions are points in time for a project. They help you schedule and organize your releases.
- Issue Types: JIRA will let you create more than one issue type, each differing from the others in terms of what kind of information it stores.
JIRA comes with default issue types such as Bug, Task, and Sub-Task, but you can create more issue types that follow their own workflows and have different sets of fields.

- Sub-Tasks: Issue types are of two kinds: standard issue types and sub-tasks, which are children of a standard issue. For instance, you could have Test Campaign as a standard issue type and Test Case as a sub-task.

Introduction to JQL

JIRA Query Language, better known as JQL, is one of the best features in JIRA. It lets you search issues efficiently and offers lots of handy features. The best part about JQL is that it is very easy to learn, thanks to the autocomplete functionality in the Advanced Search, which suggests completions based on the keywords typed. JQL queries, whether single or multiple, can be combined together to form complex questions.

Basic JQL syntax

A JQL query has a field followed by an operator and a value. For instance, to retrieve all the issues of the project "CSTA", you can use a simple query like this:

    project = CSTA

Now, within this project, if you want to find the issues assigned to a specific user, use the following query:

    project = CSTA and assignee = ravisagar

There might be several hundred issues assigned to a user, and maybe we just want to focus on issues whose priority is either Critical or Blocker:

    project = CSTA and assignee = ravisagar and priority in (Blocker, Critical)

What if, instead of issues assigned to a specific user, we want to find the issues assigned to all other users except one? It can be achieved in the following way:

    project = CSTA and assignee != ravisagar and priority in (Blocker, Critical)

So you can see that a JQL search consists of one or more queries combined together.

Project reports

Once you start using JIRA for issue tracking of any type, it becomes imperative to derive useful information out of it. JIRA comes with built-in reports that show real-time statistics for projects, users, and other fields. Let's take a look at each of these reports. Open any project in JIRA that contains a lot of issues and has around 5 to 10 users as either assignees or reporters. When you open any project page, the default view is the Summary view, which contains a 30-day summary report and an Activity Stream that shows whatever is happening in the project, such as the creation of new issues, status updates, comments, and basically any change in the project. On the left-hand side of the project summary page, there are links for Issues and Reports.

Average Age Report

This report displays the average number of days for which issues are in an unresolved state on a given date.

Created vs. Resolved Issues Report

This report displays the number of issues that were created over a period of time versus the number of issues that were resolved in that period.

Pie Chart Report

This chart shows the breakup of data. For instance, if you are interested in finding the issue count for each issue type in your project, this report can fetch that information.

Recently Created Issues Report

This report displays statistical information on the number of issues created in the selected period. The report also displays the status of the issues.

Resolution Time Report

There are cases where you are interested in understanding the speed of your team every month: how soon can your team resolve issues? This report displays the average resolution time of the issues in a given month.
Single Level Group By Report

This is a simple report that just lists the issues grouped by a particular field, such as Assignee, Issue Type, Resolution, Status, or Priority.

Time Since Issues Report

This report is useful for finding out how many issues were created in a specific quarter over the past year; moreover, the report supports various date-based fields.

Time Tracking Report

This comprehensive report displays the estimated effort and remaining effort of all the issues. The report will also give you an indication of the overall progress of the project.

User Workload Report

This report tells us about the occupancy of the resources across all projects. It really helps in distributing tasks among users.

Version Workload Report

If your project has various versions that are related to actual releases or fixes, then it becomes important to understand the status of all such issues.

Gadgets for reporting

JIRA comes with lots of useful gadgets that you can add to the dashboard and use for reporting purposes. Let's take a look at some of these gadgets.

Activity Stream

This gadget displays all the latest updates in your JIRA instance. It is also possible to limit the stream to a particular filter. This gadget is quite useful, as it displays up-to-date information on the dashboard.

Created vs. Resolved Chart

The project summary page has a chart that displays the issues created and resolved in the past 30 days. There is a similar gadget to display this information, and you can also change the duration from 30 days to whatever you like. This gadget can be created for a specific project.

Pie Chart

Just like the Pie Chart available in the project reports, there is a similar gadget that you can add to the dashboard. For instance, for a particular project, you can generate a pie chart based on Priority.

Issue Statistics

This gadget is quite useful for generating simple statistics for various fields. Here, we are interested in finding out the breakup of the project in terms of issue status.

Two Dimensional Filter Statistics

The Issue Statistics gadget can display the breakup of project issues for every status, but what if you want to segregate that information further? For instance, how many issues are open, and which issue type do they belong to? In such scenarios, Two Dimensional Filter Statistics can be used. You just need to select the two fields that will be used to generate this report, one for the x axis and another for the y axis.

These are some of the common gadgets that can be used on the dashboard; however, there are many more. Click on the Add Gadget option in the top corner to see all the gadgets available in your JIRA instance. Some gadgets come out of the box with JIRA, and others are part of add-ons that you can install. After you add these gadgets to your dashboard, it will look like the following screenshot:

This is the new dashboard that we have just created and configured for a specific project, but it is also possible to create more than one dashboard. Just click on the Create Dashboard option under Tools in the top right-hand corner to add another dashboard. If you have more than one dashboard, you can switch between them using the links on the left-hand side of the screen, as shown in the following screenshot:

Simple CSV import

Let us understand how to perform a simple import of CSV data. The first thing to do is prepare a CSV file that can be imported into JIRA.
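If your existing tracker can be queried programmatically, you could also assemble such a CSV with a small script instead of by hand. The following Node.js sketch is purely illustrative: the issue rows, field names, and output path are hypothetical and must match whatever you later map during the import.

    // make-csv.js - hypothetical example of assembling a JIRA import CSV
    var fs = require('fs');

    var header = ['Project', 'Summary', 'Issue Type', 'Status', 'Priority',
                  'Assignee', 'Reporter', 'Created'];

    // Sample rows; in practice these would come from your legacy tracker's API or database
    var issues = [
      ['DOPT', 'Login page throws an error', 'Bug', 'Open', 'Major',
       'ravisagar', 'ravisagar', '01/05/2015 10:00'],
      ['DOPT', 'Prepare release notes', 'Task', 'Closed', 'Minor',
       'ravisagar', 'ravisagar', '02/05/2015 14:30']
    ];

    var lines = [header].concat(issues).map(function (row) {
      // Quote every value so that commas inside a field do not break the format
      return row.map(function (value) { return '"' + value + '"'; }).join(',');
    });

    fs.writeFileSync('issues.csv', lines.join('\n'));
    console.log('Wrote ' + issues.length + ' issues to issues.csv');

For this article, though, we will prepare the file manually in a spreadsheet, as described next.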
For this exercise, we will import the Issues in a particular project; these issues will have data like Issue Summary, Status, Dates, and few other fields. Preparing the CSV file We are going to use MS Excel for preparing the CSV file with the following data: If your existing tool has the option to export directly into the CSV file then you can skip this step, but we recommend reviewing your data before importing it into JIRA. Usually, CSV import will not work if the format of the CSV file and the data is not correct. It is very easy to generate a CSV file from an Excel file. Follow these steps: Go to File | Save As | File name: and select Save as type: as CSV (comma delimited). If you don't have Microsoft Excel installed, you can use Libre Office Calc, which is an open source alternative for Microsoft Office Excel. You can open the CSV file to verify its format too: Our CSV file has the following fields: CSV Field Purpose Project JIRA's project key needs to be specified in this field Summary The issue summary is mandatory and needs to be specified in the CSV file Issue Type It is important to specify the Issue Type Status This is the status of the Issue; these are the workflow states that need to exist in JIRA and the project workflow should have states that are going to be imported through a CSV file Priority The priorities mentioned here should exist in JIRA before import Resolution The resolutions mentioned here should exist in JIRA before import Assignee This specifies the Assignee of the Issue Reporter This specifies the Reporter of the Issue Created This is the Issue creation date Resolved This is the Issue resolution date Performing CSV import Once your CSV file is prepared, then you are ready to perform the import operation in JIRA: Go to JIRA Administration | System | External System Import | Import from Comma-separated values (CSV) (under IMPORT & EXPORT): In the File import screen, for CSV Source File field, click on the Browse button to select the CSV file that you just prepared on your machine. Once you select the CSV file, the Next button will be enabled: In the Setup screen, select Import to Project as DOPT, which is the name of our project. Verify the Date format and it should match the format of the date values in the CSV file. Press the Next button to continue. In the Map fields screen, we need to map the fields in the CSV file to JIRA fields. This step is crucial because, in your old system the field name can be different from JIRA fields, so in this step map these fields to the respective JIRA fields. Press the Next button to continue. In the Map values screen, map the values of the Status, in fact this mapping of field values can be done for any field. In our case, the values in the status field are same as in JIRA so click on Begin Import button. You will finally get the confirmation that issues have been imported successfully. If you encounter any errors during CSV import, then it is usually due to some problem with the CSV format. Read the error messages carefully and correct those issues. As mentioned earlier, the CSV import needs to be performed on test environment first. User and group management JIRA is a web based application to track project issues that are assigned to people. The users to whom these issues will be assigned need to exist in the system. When JIRA is deployed in any organization, the first thing that should be done is to gather the list of people who would be using this tool; hence, their accounts need to be created in JIRA. 
Each user will have their own username and password, which are unique to them and allow them to log in to the system. JIRA has its own internal authentication mechanism as well as the ability to integrate with Lightweight Directory Access Protocol (LDAP).
Summary
In this article, we were introduced to JIRA and its core concepts. We also looked at JQL, project reports, and the gadgets used for reporting. Finally, we learned how to import a simple CSV file into JIRA.
Resources for Article:
Further resources on this subject:
Project Management [article]
Securing your JIRA 4 [article]
Advanced JIRA 5.2 Features [article]
Introducing Web Components

Packt
19 May 2015
16 min read
In this article by Sandeep Kumar Patel, author of the book Learning Web Component Development, we will learn about the web component specification in detail. Web component is changing the web application development process. It comes with standard and technical features, such as templates, custom elements, Shadow DOM, and HTML Imports. The main topics that we will cover in this article about web component specification are as follows: What are web components? Benefits and challenges of web components The web component architecture Template element HTML Import (For more resources related to this topic, see here.) What are web components? Web components are a W3C specification to build a standalone component for web applications. It helps developers leverage the development process to build reusable and reliable widgets. A web application can be developed in various ways, such as page focus development and navigation-based development, where the developer writes the code based on the requirement. All of these approaches fulfil the present needs of the application, but may fail in the reusability perspective. This problem leads to component-based development. Benefits and challenges of web components There are many benefits of web components: A web component can be used in multiple applications. It provides interoperability between frameworks, developing the web component ecosystem. This makes it reusable. A web component has a template that can be used to put the entire markup separately, making it more maintainable. As web components are developed using HTML, CSS, and JavaScript, it can run on different browsers. This makes it platform independent. Shadow DOM provides encapsulation mechanism to style, script, and HTML markup. This encapsulation mechanism provides private scope and prevents the content of the component being affected by the external document. Equally, some of the challenges for a web component include: Implementation: The W3C web component specification is very new to the browser technology and not completely implemented by the browsers. Shared resource: A web component has its own scoped resources. There may be cases where some of the resources between the components are common. Performance: Increase in the number of web components takes more time to get used inside the DOM. Polyfill size: The polyfill are a workaround for a feature that is not currently implemented by the browsers. These polyfill files have a large memory foot print. SEO: As the HTML markup present inside the template is inert, it creates problems in the search engine for the indexing of web pages. The web component architecture The W3C web component specification has four main building blocks for component development. Web component development is made possible by template, HTML Imports, Shadow DOM, and custom elements and decorators. However, decorators do not have a proper specification at present, which results in the four pillars of web component paradigm. The following diagram shows the building blocks of web component: These four pieces of technology power a web component that can be reusable across the application. In the coming section, we will explore these features in detail and understand how they help us in web component development. Template element The HTML <template> element contains the HTML markup, style, and script, which can be used multiple times. The templating process is nothing new to a web developer. 
Handlebars, Mustache, and Dust are the templating libraries that are already present and heavily used for web application development. To streamline this process of template use, W3C web component specification has included the <template> element. This template element is very new to web development, so it lacks features compared to the templating libraries such as Handlebars.js that are present in the market. In the near future, it will be equipped with new features, but, for now, let's explore the present template specification. Template element detail The HTML <template> element is an HTMLTemplateElement interface. The interface definition language (IDL) definition of the template element is listed in the following code: interface HTMLTemplateElement : HTMLElement {readonly attribute DocumentFragment content;}; The preceding code is written in IDL language. This IDL language is used by the W3C for writing specification. Browsers that support HTML Import must implement the aforementioned IDL. The details of the preceding code are listed here: HTMLTemplateElement: This is the template interface and extends the HTMLElement class. content: This is the only attribute of the HTML template element. It returns the content of the template and is read-only in nature. DocumentFragment: This is a return type of the content attribute. DocumentFragment is a lightweight version of the document and does not have a parent. To find out more about DocumentFargment, use the following link: https://developer.mozilla.org/en/docs/Web/API/DocumentFragment Template feature detection The HTML <template> element is very new to web application development and not completely implemented by all browsers. Before implementing the template element, we need to check the browser support. The JavaScript code for template support in a browser is listed in the following code: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: template support</title></head><body><h1 id="message"></h1><script>var isTemplateSupported = function () {var template = document.createElement("template");return 'content' in template;};var isSupported = isTemplateSupported(),message = document.getElementById("message");if (isSupported) {message.innerHTML = "Template element is supported by thebrowser.";} else {message.innerHTML = "Template element is not supported bythe browser.";}</script></body></html> In the preceding code, the isTemplateSupported method checks the content property present inside the template element. If the content attribute is present inside the template element, this method returns either true or false. If the template element is supported by the browser, the h1 element will show the support message. The browser that is used to run the preceding code is Chrome 39 release. The output of the preceding code is shown in following screenshot: The preceding screenshot shows that the browser used for development is supporting the HTML template element. There is also a great online tool called "Can I Use for checking support for the template element in the current browser. To check out the template support in the browser, use the following link: http://caniuse.com/#feat=template The following screenshot shows the current status of the support for the template element in the browsers using the Can I Use online tool: Inert template The HTML content inside the template element is inert in nature until it is activated. 
The inertness of template content contributes to increasing the performance of the web application. The following code demonstrates the inertness of the template content: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: A inert template content example.</title></head><body><div id="message"></div><template id="aTemplate"><img id="profileImage"src="http://www.gravatar.com/avatar/c6e6c57a2173fcbf2afdd5fe6786e92f.png"><script>alert("This is a script.");</script></template><script>(function(){var imageElement =document.getElementById("profileImage"),messageElement = document.getElementById("message");messageElement.innerHTML = "IMG element "+imageElement;})();</script></body></html> In the preceding code, a template contains an image element with the src attribute, pointing to a Gravatar profile image, and an inline JavaScript alert method. On page load, the document.getElementById method is looking for an HTML element with the #profileImage ID. The output of the preceding code is shown in the following screenshot: The preceding screenshot shows that the script is not able to find the HTML element with the profileImage ID and renders null in the browser. From the preceding screenshot it is evident that the content of the template is inert in nature. Activating a template By default, the content of the <template> element is inert and are not part of the DOM. The two different ways that can be used to activate the nodes are as follows: Cloning a node Importing a node Cloning a node The cloneNode method can be used to duplicate a node. The syntax for the cloneNode method is listed as follows: <Node> <target node>.cloneNode(<Boolean parameter>) The details of the preceding code syntax are listed here: This method can be applied on a node that needs to be cloned. The return type of this method is Node. The input parameter for this method is of the Boolean type and represents a type of cloning. There are 2 different types of cloning, listed as follows: Deep cloning: In deep cloning, the children of the targeted node also get copied. To implement deep cloning, the Boolean input parameter to cloneNode method needs to be true. Shallow cloning: In shallow cloning, only the targeted node is copied without the children. To implement shallow cloning the Boolean input parameter to cloneNode method needs to be false. The following code shows the use of the cloneNode method to copy the content of a template, having the h1 element with some text: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: Activating template using cloneNode method</title></head><body><div id="container"></div><template id="aTemplate"><h1>Template is activated using cloneNode method.</h1></template><script>var aTemplate = document.querySelector("#aTemplate"),container = document.getElementById("container"),templateContent = aTemplate.content,activeContent = templateContent.cloneNode(true);container.appendChild(activeContent);</script></body></html> In the preceding code, the template element has the aTemplate ID and is referenced using the querySelector method. The HTML markup content inside the template is then retrieved using a content property and saved in a templateContent variable. The cloneNode method is then used for deep cloning to get the activated node that is later appended to a div element. 
The following screenshot shows the output of the preceding code: To find out more about the cloneNode method visit: https://developer.mozilla.org/en-US/docs/Web/API/Node.cloneNode Importing a node The importNode method is another way of activating the template content. The syntax for the aforementioned method is listed in the following code: <Node> document.importNode(<target node>,<Boolean parameter>) The details of the preceding code syntax are listed as follows: This method returns a copy of the node from an external document. This method takes two input parameters. The first parameter is the target node that needs to be copied. The second parameter is a Boolean flag and represents the way the target node is cloned. If the Boolean flag is false, the importNode method makes a shallow copy, and for a true value, it makes a deep copy. The following code shows the use of the importNode method to copy the content of a template containing an h1 element with some text: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: Activating template using importNode method</title></head><body><div id="container"></div><template id="aTemplate"><h1>Template is activated using importNode method.</h1></template><script>var aTemplate = document.querySelector("#aTemplate"),container = document.getElementById("container"),templateContent = aTemplate.content,activeContent = document.importNode(templateContent,true);container.appendChild(activeContent);</script></body></html> In the preceding code, the template element has the aTemplate ID and is referenced using the querySelector method. The HTML markup content inside the template is then retrieved using the content property and saved in the templateContent variable. The importNode method is then used for deep cloning to get the activated node that is later appended to a div element. The following screenshot shows the output of the preceding code: To find out more about the importNode method, visit: http://mdn.io/importNode HTML Import The HTML Import is another important piece of technology of the W3C web component specification. It provides a way to include another HTML document present in a file with the current document. HTML Imports provide an alternate solution to the Iframe element, and are also great for resource bundling. The syntax of the HTML Imports is listed as follows: <link rel="import" href="fileName.html"> The details of the preceding syntax are listed here: The HTML file can be imported using the <link> tag and the rel attribute with import as the value. The href string points to the external HTML file that needs to be included in the current document. The HTML import element is implemented by the HTMLElementLink class. The IDL definition of HTML Import is listed in the following code: partial interface LinkImport {readonly attribute Document? import;};HTMLLinkElement implements LinkImport; The preceding code shows IDL for the HTML Import where the parent interface is LinkImport which has the readonly attribute import. The HTMLLinkElement class implements the LinkImport parent interface. The browser that supports HTML Import must implement the preceding IDL. HTML Import feature detection The HTML Import is new to the browser and may not be supported by all browsers. To check the support of the HTML Import in the browser, we need to check for the import property that is present inside a <link> element. 
The code to check the HTML import support is as follows: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: HTML import support</title></head><body><h1 id="message"></h1><script>var isImportSupported = function () {var link = document.createElement("link");return 'import' in link;};var isSupported = isImportSupported(),message = document.getElementById("message");if (isSupported) {message.innerHTML = "Import is supported by the browser.";} else {message.innerHTML = "Import is not supported by thebrowser.";}</script></body></html> The preceding code has a isImportSupported function, which returns the Boolean value for HTML import support in the current browser. The function creates a <link> element and then checks the existence of an import attribute using the in operator. The following screenshot shows the output of the preceding code: The preceding screenshot shows that the import is supported by the current browser as the isImportSupported method returns true. The Can I Use tool can also be utilized for checking support for the HTML Import in the current browser. To check out the template support in the browser, use the following link: http://caniuse.com/#feat=imports The following screenshot shows the current status of support for the HTML Import in browsers using the Can I Use online tool: Accessing the HTML Import document The HTML Import includes the external document to the current page. We can access the external document content using the import property of the link element. In this section, we will learn how to use the import property to refer to the external document. The message.html file is an external HTML file document that needs to be imported. The content of the message.html file is as follows: <h1>This is from another HTML file document.</h1> The following code shows the HTML document where the message.html file is loaded and referenced by the import property: <!DOCTYPE html><html><head lang="en"><link rel="import" href="message.html"></head><body><script>(function(){var externalDocument =document.querySelector('link[rel="import"]').import;headerElement = externalDocument.querySelector('h1')document.body.appendChild(headerElement.cloneNode(true));})();</script></body></html> The details of the preceding code are listed here: In the header section, the <link> element is importing the HTML document present inside the message.html file. In the body section, an inline <script> element using the document.querySelector method is referencing the link elements having the rel attribute with the import value. Once the link element is located, the content of this external document is copied using the import property to the externalDocument variable. The header h1 element inside the external document is then located using a quesrySelector method and saved to the headerElement variable. The header element is then deep copied using the cloneNode method and appended to the body element of the current document. The following screenshot shows the output of the preceding code: HTML Import events The HTML <link> element with the import attribute supports two event handlers. These two events are listed "as follows: load: This event is fired when the external HTML file is imported successfully onto the current page. A JavaScript function can be attached to the onload attribute, which can be executed on a successful load of the external HTML file. error: This event is fired when the external HTML file is not loaded or found(HTTP code 404 not found). 
A JavaScript function can be attached to the onerror attribute, which can be executed on error of importing the external HTML file. The following code shows the use of these two event types while importing the message.html file to the current page: <!DOCTYPE html><html><head lang="en"><script async>function handleSuccess(e) {//import load Successfulvar targetLink = e.target,externalDocument = targetLink.import;headerElement = externalDocument.querySelector('h1'),clonedHeaderElement = headerElement.cloneNode(true);document.body.appendChild(clonedHeaderElement);}function handleError(e) {//Error in loadalert("error in import");}</script><link rel="import" href="message.html"onload="handleSuccess(event)"onerror="handleError(event)"></head><body></body></html> The details of the preceding code are listed here: handleSuccess: This method is attached to the onload attribute which is executed on the successful load of message.html in the current document. The handleSuccess method imports the document present inside the message.html file, then it finds the h1 element, and makes a deep copy of it . The cloned h1 element then gets appended to the body element. handleError: This method is attached to the onerror attribute of the <link> element. This method will be executed if the message.html file is not found. As the message.html file is imported successfully, the handleSuccess method gets executed and header element h1 is rendered in the browser. The following screenshot shows the output of the preceding code: Summary In this article, we learned about the web component specification. We also explored the building blocks of web components such as HTML Imports and templates. Resources for Article: Further resources on this subject: Learning D3.js Mapping [Article] Machine Learning [Article] Angular 2.0 [Article]
Building Mobile Games with Crafty.js and PhoneGap - Part 2

Robi Sen
15 May 2015
7 min read
Building Mobile Games with Crafty.js and PhoneGap - Part 2 Let’s continue making a simple turn-based RPG-like game based on Pascal Rettig’s Crafty Workshop presentation with PhoneGap. In the second part of this two-part series, you will learn how to add sprites to a game, control them, and work with mouse/touch events. Adding sprites OK, let’s add some sprites to the mix using open source sprite sheets from RLTiles. All of the resources at RLTiles are in public domain, but the ones we will need are the dungeon tiles, which you can find here, and the monsters, which you can find here. To use them, first create a new folder under your www root directory in your PhoneGap project called assets. Then, click on the Dungeon link and right-click on the dungeon  sprite sheet and select Save as. Save it as dungeon.png to your assets directory. Do the same with monsters, but call it characters.png.  Now, edit index.html to look like listing 1. Listing 1: Loading sprites in Crafty <!DOCTYPE html> <html> <head></head> <body> <div id="game"></div> <script type="text/javascript" src="lib/crafty.js"></script> <script> var WIDTH = 500, HEIGHT = 320; Crafty.init(WIDTH, HEIGHT); // Background Crafty.background("black"); //let’s loads some assets for the game // This will create entities called floor, wall1, and stairs Crafty.sprite(32,"assets/dungeon.png", { floor: [0,0], wall1: [2,1], stairs: [3,1] }); // This will create entities called hero1 and blob1 Crafty.sprite(32,"assets/characters.png", { hero: [11,4],goblin1: [8,14] }); Crafty.scene("loading", function() { Crafty.load(["assets/dungeon.png","assets/characters.png"], function() { Crafty.scene("main"); // Run the main scene console.log("Done loading"); }, function(e) { //progress }, function(e) { //somethig is wrong, error loading console.log("Error,failed to load", e) }); }); Crafty.scene("loading"); // Let's draw us a Hero and a Goblin Crafty.scene("main",function() { Crafty.background("#FFF"); var player = Crafty.e("2D, Canvas, hero") .attr({x:0, y:0}); var goblin = Crafty.e("2D, Canvas, goblin1") .attr({x:50, y:50}); }); </script> </body> </html> There are a couple of things to note in this code. Firstly, we are using Crafty.sprite to load sprites from a sprite file. The first attribute in the sprite(), 32, references the size of the sprite. The second is the location of the sprite sheet. Next, we set the name of each sprite we want to load and its location. For example, floor(0,0) means grab the very first sprite on the sprite sheet, assign it the label floor, and load it into memory. Next is a very important Crafty function; Crafty.scene(). In Crafty, scenes are a way to organize your game objects and easily transition between levels or screens. In our case, we first use Crafty.scene() to load a bunch of assets, our sprite sheets, and when done, we tell it to call the main() scene. Next, we actually call loading, which loads our assets and then calls the main() scene. In the main() scene, we create the player and goblin entities. Try saving the file and loading it in your browser. You should see something like figure 1.   Figure 1: Loading the hero and goblin sprites in Chrome Movement Now that we have figured out how to load the sprites, let’s figure out how to move them. First, we want to move and control our hero. To do this, we want to make a component, which is an abstracted set of data or behaviors we can then assign to an entity. To do that, open your index.html file again and edit it to look like listing 2. 
Listing 2: Controlling the hero <!DOCTYPE html> <html> <head></head> <body> <div id="game"></div> <script type="text/javascript" src="lib/crafty.js"></script> <script> var WIDTH = 500, HEIGHT = 320; Crafty.init(WIDTH, HEIGHT); Crafty.sprite(32,"assets/dungeon.png", { floor: [0,0], wall1: [2,1], stairs: [3,1] }); Crafty.sprite(32,"assets/characters.png", { hero: [11,4], goblin1: [8,14] }); // create a simple object that describes player movement Crafty.c("PlayerControls", { init: function() { //let’s now make the hero move wherever we touch Crafty.addEvent(this, Crafty.stage.elem, 'mousedown', function(e) { // let’s simulate an 8-way controller or old school joystick console.log("the values are; x= " + e.clientX ); if (e.clientX<player.x&& (e.clientX - player.x)< 32) {player.x= player.x - 32;} else if (e.clientX>player.x&& (e.clientX - player.x) > 32){ player.x = player.x + 32; } else {player.x = player.x} if (e.clientY<player.y&& (e.clientY - player.y)< 32) {player.y= player.y - 32;} else if (e.clientY>player.y&& (e.clientY - player.y) > 32){ player.y = player.y + 32; } else {player.y = player.y} Crafty.trigger('Turn'); console.log('mousedown at (' + e.clientX + ', ' + e.clientY + ')'); }); } }); Crafty.scene("loading", function() { Crafty.load(["assets/dungeon.png","assets/characters.png"], function() { Crafty.scene("main"); // Run the main scene console.log("Done loading"); }, function(e) { //progress }, function(e) { //somethig is wrong, error loading console.log("Error,failed to load", e) }); }); Crafty.scene("loading"); // Let's draw us a Hero and a mean Goblin Crafty.scene("main",function() { Crafty.background("#FFF"); player = Crafty.e("2D, Canvas,Fourway, PlayerControls, hero") .attr({x:0, y:0}) goblin = Crafty.e("2D, Canvas, goblin1") .attr({x:50, y:50}); }); </script> </body> </html> In listing 2, the main thing to focus on is the PlayerControls component defined by Crafty.c(). In the component, we are going to simulate a typical 8-way controller. For our PlayerControls component, we want the player to only be able to move one tile, which is 32 pixels, each time they select a direction they want to move. We do this by using Crafty.addEvent and having it update the player’s location based on the direction of where the user touched, which is derived by getting the relative location of the user’s touch from client.x, client.y in relation to the hero’s position, which is player.x, player.y. Save the file and view it. View the file using the inspect element option, and you should see something like figure 2.   Figure 2: Controlling the hero You can now control the movement of the hero in the game. Summary In this two-part series, you learned about working with Crafty.js. Specifically, you learned how to work with the Crafty API, create entities, work with sprites, create components, and control entities via mouse/touch. About the author Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus-year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build new products and services. 
Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.
Integrating Quick and the Benefits of Behavior Driven Development (Part 1)

Benjamin Reed
13 May 2015
6 min read
In recent years, testing has become more wide-spread. Software engineers have routinely recognized that tests have the power to assist in the elimination of bugs. At the same time, tests can help minimize the unexpected behaviors that arise when many people commit changes to the same source code. Testing Methodologies on Various Platforms Test Driven Development is a methodology that requires developers to write the tests before they write the actual implementation. The new tests initially fail. However, the engineer is confident that the module performs as expected when the tests pass. Test Driven Development has been expanded into Behavior-Driven Development (BDD). BDD focuses on clear communication between all collaborators. It describes all levels of software in an effective, readable language, and focuses on the expected “behaviors” of a module. As a disclaimer, there is not a mass-consensus on unit testing methodologies. Therefore, I will detail the benefits of this methodology. The final decision rests in the comfort of the engineers and the readability of the tests. Looking towards contemporary web development, a developer sees that testing is both accepted and expected. Frameworks like Ruby on Rails natively support and encourage testing out of the box. Node.js has also embraced testing with frameworks like Mocha and should.js. Many companies, including Twitter, list the ability to write sound unit tests as a prerequisite for job positions. The testing community is thriving with tools such as RSpec and hosted continuous-integration services like Travis CI or Circle CI. On the other hand, unit testing in iOS Development has become an abyss. Shadowing at a known development company, I looked over their source code. I asked, “Where are your tests?” Of course, I was told that the engineers adequately test before deploying, and “they do not need unit tests.” Coming from a Ruby on Rails background, I was distraught. Surely, this must only be confined to this company. Later, when I went to Apple’s Worldwide Development Conference, I was shocked to learn that testing is very neglected across the entire community. Apple is telling developers to write tests, and they are releasing “new” unit testing frameworks (XCTest). They are even building new tools to aid with continuous integration testing (Xcode server), and there are other teams introducing performance tests. Unfortunately, there is a fundamental problem. iOS tests are not fun, they are not descriptive, they are not well-structured, and they lack enthusiastic support. If developers refuse to test the outputs of their methods and their user interface interactions, there is no chance that they will take the time to write tests to put a metric on performance and measure baselines. The Downfalls of the Standard Testing Framework Ultimately, iOS developers need a new structure and enthusiasm from the web development community. Many developers, including myself, have come across the Quick testing framework on GitHub. Quick promotes BDD, and it works with Objective-C and Swift. Both old, new, and hybrid projects are supported! To dramatize some of the differences, let’s look at an XCTestCase: // (Swift) Apple’s XCTest import XCTest class ArithmeticTests: XCTestCase { var calculator : Calculator! 
override func setUp() { calculator = Calculator() } func testCalculatorAddition() { let expected = 4 let actual = calculator.add(2, 2) XCTAssertEqual(expected, actual, "2 + 2 doesn't equal 4") } func testCalculatorSubtraction() { let expected = 4 let actual = calculator.subtract(10, 6) XCTAssertEqual(expected, actual, "10 - 6 doesn't equal 4") } func testCalculatorMultiplication() { let expected = 16 let actual = calculator.multiply(4, 4) XCTAssertEqual(expected, actual, "4 * 4 doesn't equal 16") } func testCalculatorDivision() { let expected = 10 let actual = calculator.divide(100, 10) XCTAssertEqual(expected, actual, "100 / 10 dosen't equal 10") } } The first thing I notice is the incredible amount of redundancy. Notice, each function begins with the “test” prefix. However, developers already know that these are test cases. Just look at the name of the class and its superclass.   To minimize the amount of redundancy, one could bypass the expected and actual variable declarations. They could put everything on one line, like so: XCTAssertEqual(10, calculator.divide(100, 10), “100 / 10 doesn’t equal 10”) However, most developers will agree that this becomes less concise and readable when more arguments are passed into the function. An even more bothersome element is the message that is attached to each assertion. Can the error message not be implied by the two values and the name of the test? This is obviously possible, because Quick does it. At the same time, it looks like the error message is somehow part of the assertion, and any developer who is not familiar with this type of assertion would be confused without reading some guide or the XCTest documentation. Quick and its Advantages Let’s contrast this by applying the concepts of BDD using the Quick framework: // (Swift) Quick import Quick import Nimble class ArithmeticSpec: QuickSpec { override func spec() { describe("Calculator") { var calculator : Calculator! beforeEach { calculator = Calculator() } it("can add") { let value = calculator.add(2, 2) expect(value).to(equal(4)) // expect(calculator.add(2, 2)).to(equal(4)) } it("can subtract") { let value = calculator.subtract(10, 6) expect(value).to(equal(4)) // expect(calculator.subtract(10, 6)).to(equal(4)) } it("can multiply") { let value = calculator.multiply(4, 4) expect(value).to(equal(16)) // expect(calculator.multiply(4, 4)).to(equal(16)) } it("can divide") { let value = calculator.divide(100, 10) expect(value).to(equal(10)) // expect(calculator.divide(100, 10)).to(equal(10)) } } } } We’ve done several important things here. First, we made the tests more modular. Under the Arithmetic specification, there could be other objects besides a Calculator. Notice we are describing the behaviors of each. For instance, if we had an Arithmetic Sequence Recognizer, we could describe it with its own test cases. Everything related can stay grouped under an Arithmetic “module.” Second, we simply described the behaviors of the Calculatorin English. We said things like, “it("can add")” which is understandable to anyone. At the same time, we replaced assertions and error messages with expectations. Once again, our test reads as a plain English sentence: “expect(value).to(equal(4))”. Third, we removed most of the redundancy. Quick will automatically generate the messages for failing tests. For readability, I separated the calculation and the expectation. However, I included a comment with the single-line version that is very popular. To me, both appear more readable than an XCTest assertion. 
In part 2, I will demonstrate how to sufficiently test an iPhone app with Quick. I’ll also include the source code, so it should be easy to follow along. About the Author Benjamin Reed began Computer Science classes at a nearby university in Nashville during his sophomore year in high school. Since then, he has become an advocate for open source. He is now pursing degrees in Computer Science and Mathematics fulltime. The Ruby community has intrigued him, and he openly expresses support for the Rails framework. When asked, he believes that studying Rails has led him to some of the best practices and, ultimately, has made him a better programmer. iOS development is one of his hobbies, and he enjoys scouting out new projects on GitHub. On GitHub, he’s appropriately named @codeblooded. On Twitter, he’s @benreedDev.
Building a Basic Express Site

Packt
12 May 2015
34 min read
In this article by Ben Augarten, Marc Kuo, Eric Lin, Aidha Shaikh, Fabiano Pereira Soriani, Geoffrey Tisserand, Chiqing Zhang, Kan Zhang, authors of the book Express.js Blueprints, we will see how it uses Google Chrome's JavaScript engine, V8, to execute code. Node.js is single-threaded and event-driven. It uses non-blocking I/O to squeeze every ounce of processing power out of the CPU. Express builds on top of Node.js, providing all of the tools necessary to develop robust web applications with node. In addition, by utilizing Express, one gains access to a host of open source software to help solve common pain points in development. The framework is unopinionated, meaning it does not guide you one way or the other in terms of implementation or interface. Because it is unopinionated, the developer has more control and can use the framework to accomplish nearly any task; however, the power Express offers is easily abused. In this book, you will learn how to use the framework in the right way by exploring the following different styles of an application: Setting up Express for a static site Local user authentication OAuth with passport Profile pages Testing (For more resources related to this topic, see here.) Setting up Express for a static site To get our feet wet, we'll first go over how to respond to basic HTTP requests. In this example, we will handle several GET requests, responding first with plaintext and then with static HTML. However, before we get started, you must install two essential tools: node and npm, which is the node package manager. Navigate to https://nodejs.org/download/ to install node and npm. Saying Hello, World in Express For those unfamiliar with Express, we will start with a basic example—Hello World! We'll start with an empty directory. As with any Node.js project, we will run the following code to generate our package.json file, which keeps track of metadata about the project, such as dependencies, scripts, licenses, and even where the code is hosted: $ npm init The package.json file keeps track of all of our dependencies so that we don't have versioning issues, don't have to include dependencies with our code, and can deploy fearlessly. You will be prompted with a few questions. Choose the defaults for all except the entry point, which you should set to server.js. There are many generators out there that can help you generate new Express applications, but we'll create the skeleton this time around. Let's install Express. To install a module, we use npm to install the package. We use the --save flag to tell npm to add the dependency to our package.json file; that way, we don't need to commit our dependencies to the source control. We can just install them based on the contents of the package.json file (npm makes this easy): $ npm install --save express We'll be using Express v4.4.0 throughout this book. Warning: Express v4.x is not backwards compatible with the versions before it. You can create a new file server.js as follows: var express = require('express'); var app = express();   app.get('/', function(req, res, next) { res.send('Hello, World!'); });   app.listen(3000); console.log('Express started on port 3000'); This file is the entry point for our application. It is here that we generate an application, register routes, and finally listen for incoming requests on port 3000. The require('express') method returns a generator of applications. 
We can continually create as many applications as we want; in this case, we only created one, which we assigned to the variable app. Next, we register a GET route that listens for GET requests on the server root, and when requested, sends the string 'Hello, World' to the client. Express has methods for all of the HTTP verbs, so we could have also done app.post, app.put, app.delete, or even app.all, which responds to all HTTP verbs. Finally, we start the app listening on port 3000, then log to standard out. It's finally time to start our server and make sure everything works as expected.
$ node server.js
We can validate that everything is working by navigating to http://localhost:3000 in our browser or curl -v localhost:3000 in your terminal.
Jade templating
We are now going to extract the HTML we send to the client into a separate template. After all, it would be quite difficult to render full HTML pages simply by using res.send. To accomplish this, we will use a templating language frequently used in conjunction with Express: Jade. There are many templating languages that you can use with Express. We chose Jade because it greatly simplifies writing HTML and was created by the same developer as the Express framework.
$ npm install --save jade
After installing Jade, we're going to have to add the following code to server.js:
app.set('view engine', 'jade');
app.set('views', __dirname + '/views');

app.get('/', function(req, res, next) {
  res.render('index');
});
The preceding code sets the default view engine for Express—sort of like telling Express that in the future it should assume that, unless otherwise specified, templates are in the Jade templating language. Calling app.set sets a key-value pair for Express internals; you can think of it as application-wide configuration. We could call app.get('view engine') to retrieve our set value at any time. We also specify the folder that Express should look into to find view files. That means we should create a views directory in our application and add a file, index.jade, to it. Alternatively, if you want to include many different template types, you could execute the following:
app.engine('jade', require('jade').__express);
app.engine('html', require('ejs').__express);

app.get('/html', function(req, res, next) {
  res.render('index.html');
});

app.get('/jade', function(req, res, next) {
  res.render('index.jade');
});
Here, we set custom template rendering based on the extension of the template we want to render. We use the Jade renderer for .jade extensions and the ejs renderer for .html extensions, and expose both of our index files by different routes. This is useful if you choose one templating option and later want to switch to a new one in an incremental way. You can refer to the source for the most basic of templates.
Local user authentication
The majority of applications require user accounts. Some applications only allow authentication through third parties, but not all users are interested in authenticating through third parties for privacy reasons, so it is important to include a local option. Here, we will go over best practices when implementing local user authentication in an Express app. We'll be using MongoDB to store our users and Mongoose as an ODM (Object Document Mapper). Then, we'll leverage passport to simplify the session handling and provide a unified view of authentication.
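Before moving on, note that this excerpt does not show the supporting server setup that the model and passport code in this section rely on. The following is only a rough sketch of what server.js typically needs, assuming the express-session and body-parser packages, a local MongoDB instance, and placeholder values for the database name and session secret; it is not the book's own configuration.
// server.js (sketch only) -- supporting setup assumed by this section.
// Assumes: npm install --save express-session body-parser
var express = require('express');
var mongoose = require('mongoose');
var session = require('express-session');
var bodyParser = require('body-parser');
var passport = require('passport');

var app = express();

// Connect Mongoose before any models are used (placeholder database name).
mongoose.connect('mongodb://localhost/express_blueprints');

// Populate req.body from posted registration and login forms.
app.use(bodyParser.urlencoded({ extended: false }));

// Persistent sessions; the secret is a placeholder, not a real value.
app.use(session({
  secret: 'replace-with-a-real-secret',
  resave: false,
  saveUninitialized: false
}));

// Restore req.user from the session on every request.
app.use(passport.initialize());
app.use(passport.session());

app.listen(3000);
With something like this in place, passport has a session to serialize users into, and req.body is populated when the registration and login forms are submitted.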
Downloading the example code
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
User object modeling
We will leverage passportjs to handle user authentication. Passport centralizes all of the authentication logic and provides convenient ways to authenticate locally in addition to third parties, such as Twitter, Google, GitHub, and so on. First, install passport and the local authentication strategy as follows:
$ npm install --save passport passport-local
In our first pass, we will implement a local authentication strategy, which means that users will be able to register locally for an account. We start by defining a user model using Mongoose. Mongoose provides a way to define schemas for objects that we want to store in MongoDB, along with a convenient way to map between stored records in the database and an in-memory representation. Mongoose also provides convenient syntax to make many MongoDB queries and perform CRUD operations on models. Our user model will only have an e-mail, password, and timestamp for now. Before getting started, we need to install Mongoose:
$ npm install --save mongoose bcrypt validator
Now we define the schema for our user in models/user.js as follows:
var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
  email: {
    type: String,
    required: true,
    unique: true
  },
  password: {
    type: String,
    required: true
  },
  created_at: {
    type: Date,
    default: Date.now
  }
});

userSchema.pre('save', function(next) {
  if (!this.isModified('password')) {
    return next();
  }
  this.password = User.encryptPassword(this.password);
  next();
});
Here, we create a schema that describes our users. Mongoose has convenient ways to describe the required and unique fields as well as the type of data that each property should hold. Mongoose does all the validations required under the hood. We don't require many user fields for our first boilerplate application—e-mail, password, and timestamp get us started. We also use Mongoose middleware to rehash a user's password if and when they decide to change it. Mongoose exposes several hooks to run user-defined callbacks. In our example, we define a callback to be invoked before Mongoose saves a model. That way, every time a user is saved, we'll check to see whether their password was changed. Without this middleware, it would be possible to store a user's password in plaintext, which is not only a security vulnerability but would also break authentication. Mongoose supports two kinds of middleware – serial and parallel. Parallel middleware can run asynchronous functions and gets an additional callback to invoke; you'll learn more about Mongoose middleware later in this book. Now, we want to add validations to make sure that our data is correct. We'll use the validator library to accomplish this, as follows:
var validator = require('validator');

var User = mongoose.model('User', userSchema);

User.schema.path('email').validate(function(email) {
  return validator.isEmail(email);
});

User.schema.path('password').validate(function(password) {
  return validator.isLength(password, 6);
});

module.exports = User;
We added validations for e-mail and password length using a library called validator, which provides a lot of convenient validators for different types of fields.
Validator has validations based on length, URL, int, upper case; essentially, anything you would want to validate (and don't forget to validate all user input!). We also added a host of helper functions covering registration, authentication, and password encryption, which you can find in models/user.js. We added these to the user model to help encapsulate the variety of interactions we want behind the abstraction of a user. For more information on Mongoose, see http://mongoosejs.com/. You can find more on passportjs at http://passportjs.org/. This lays out the beginning of a design pattern called MVC—model, view, controller. The basic idea is that you encapsulate separate concerns in different objects: the model code knows about the database, storage, and querying; the controller code knows about routing and requests/responses; and the view code knows what to render for users.
Introducing Express middleware
Passport is authentication middleware that can be used with Express applications. Before diving into passport, we should go over Express middleware. Express is a Connect-based framework, which means it uses Connect middleware. Connect internally has a stack of functions that handle requests. When a request comes in, the first function in the stack is given the request and response objects along with the next() function. The next() function, when called, delegates to the next function in the middleware stack. Additionally, you can specify a path for your middleware, so it is only called for certain paths. Express lets you add middleware to an application using the app.use() function. In fact, the HTTP handlers we already wrote are a special kind of middleware. Internally, Express has one level of middleware for the router, which delegates to the appropriate handler. Middleware is extraordinarily useful for logging, serving static files, error handling, and more. In fact, passport utilizes middleware for authentication. Before anything else happens, passport looks for a cookie in the request, finds metadata, and then loads the user from the database, adds it to req.user, and then continues down the middleware stack.
Setting up passport
Before we can make full use of passport, we need to tell it how to do a few important things. First, we need to instruct passport how to serialize a user to a session. Then, we need to deserialize the user from the session information. Finally, we need to tell passport how to tell whether a given e-mail/password combination represents a valid user, as given in the following:
// passport.js
var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var User = require('mongoose').model('User');

passport.serializeUser(function(user, done) {
  done(null, user.id);
});

passport.deserializeUser(function(id, done) {
  User.findById(id, done);
});
Here, we tell passport that when we serialize a user, we only need that user's id. Then, when we want to deserialize a user from session data, we just look up the user by their ID! This is used in passport's middleware: after the request is finished, we take req.user and serialize their ID to our persistent session. When we first get a request, we take the ID stored in our session, retrieve the record from the database, and populate the request object with a user property.
All of this functionality is provided transparently by passport, as long as we provide definitions for these two functions as given in the following: function authFail(done) { done(null, false, { message: 'incorrect email/password combination' }); }   passport.use(new LocalStrategy(function(email, password, done) { User.findOne({    email: email }, function(err, user) {    if (err) return done(err);    if (!user) {      return authFail(done);    }    if (!user.validPassword(password)) {      return authFail(done);    }    return done(null, user); }); })); We tell passport how to authenticate a user locally. We create a new LocalStrategy() function, which, when given an e-mail and password, will try to lookup a user by e-mail. We can do this because we required the e-mail field to be unique, so there should only be one user. If there is no user, we return an error. If there is a user, but they provided an invalid password, we still return an error. If there is a user and they provided the correct password, then we tell passport that the authentication request was a success by calling the done callback with the valid user. Registering users Now, we add routes for registration, both a view with a basic form and backend logic to create a user. First, we will create a user controller. Up until now, we have thrown our routes in our server.js file, but this is generally bad practice. What we want to do is have separate controllers for each kind of route that we want. We have seen the model portion of MVC. Now it's time to take a look at controllers. Our user controller will have all the routes that manipulate the user model. Let's create a new file in a new directory, controllers/user.js: // controllers/user.js var User = require('mongoose').model('User');   module.exports.showRegistrationForm = function(req, res, next) { res.render('register'); };   module.exports.createUser = function(req, res, next) { User.register(req.body.email, req.body.password, function(err, user) {    if (err) return next(err);    req.login(user, function(err) {      if (err) return next(err);      res.redirect('/');    }); }); }; Note that the User model takes care of the validations and registration logic; we just provide callback. Doing this helps consolidate the error handling and generally makes the registration logic easier to understand. If the registration was successful, we call req.login, a function added by passport, which creates a new session for that user and that user will be available as req.user on subsequent requests. Finally, we register the routes. At this point, we also extract the routes we previously added to server.js to their own file. Let's create a new file called routes.js as follows: // routes.js app.get('/users/register', userRoutes.showRegistrationForm); app.post('/users/register', userRoutes.createUser); Now we have a file dedicated to associating controller handlers with actual paths that users can access. This is generally good practice because now we have a place to come visit and see all of our defined routes. It also helps unclutter our server.js file, which should be exclusively devoted to server configuration. For details, as well as the registration templates used, see the preceding code. Authenticating users We have already done most of the work required to authenticate users (or rather, passport has). Really, all we need to do is set up routes for authentication and a form to allow users to enter their credentials. 
First, we'll add handlers to our user controller: // controllers/user.js module.exports.showLoginForm = function(req, res, next) { res.render('login'); };   module.exports.createSession = passport.authenticate('local', { successRedirect: '/', failureRedirect: '/login' }); Let's deconstruct what's happening in our login post. We create a handler that is the result of calling passport.authenticate('local', …). This tells passport that the handler uses the local authentication strategy. So, when someone hits that route, passport will delegate to our LocalStrategy. If they provided a valid e-mail/password combination, our LocalStrategy will give passport the now authenticated user, and passport will redirect the user to the server root. If the e-mail/password combination was unsuccessful, passport will redirect the user to /login so they can try again. Then, we will bind these callbacks to routes in routes.js: app.get('/users/login', userRoutes.showLoginForm); app.post('/users/login', userRoutes.createSession); At this point, we should be able to register an account and login with those same credentials. (see tag 0.2 for where we are right now). OAuth with passport Now we will add support for logging into our application using Twitter, Google, and GitHub. This functionality is useful if users don't want to register a separate account for your application. For these users, allowing OAuth through these providers will increase conversions and generally make for an easier registration process for users. Adding OAuth to user model Before adding OAuth, we need to keep track of several additional properties on our user model. We keep track of these properties to make sure we can look up user accounts provided there is information to ensure we don't allow duplicate accounts and allow users to link multiple third-party accounts by using the following code: var userSchema = new mongoose.Schema({ email: {    type: String,    required: true,    unique: true }, password: {    type: String, }, created_at: {    type: Date,    default: Date.now }, twitter: String, google: String, github: String, profile: {    name: { type: String, default: '' },    gender: { type: String, default: '' },    location: { type: String, default: '' },    website: { type: String, default: '' },    picture: { type: String, default: '' } }, }); First, we add a property for each provider, in which we will store a unique identifier that the provider gives us when they authorize with that provider. Next, we will store an array of tokens, so we can conveniently access a list of providers that are linked to this account; this is useful if you ever want to let a user register through one and then link to others for viral marketing or extra user information. Finally, we keep track of some demographic information about the user that the providers give to us so we can provide a better experience for our users. Getting API tokens Now, we need to go to the appropriate third parties and register our application to receive application keys and secret tokens. We will add these to our configuration. We will use separate tokens for development and production purposes (for obvious reasons!). For security reasons, we will only have our production tokens as environment variables on our final deploy server, not committed to version control. 
I'll wait while you navigate to the third-party websites and add their tokens to your configuration as follows:

// config.js
twitter: {
  consumerKey: process.env.TWITTER_KEY || 'VRE4lt1y0W3yWTpChzJHcAaVf',
  consumerSecret: process.env.TWITTER_SECRET || 'TOA4rNzv9Cn8IwrOi6MOmyV894hyaJks6393V6cyLdtmFfkWqe',
  callbackURL: '/auth/twitter/callback'
},
google: {
  clientID: process.env.GOOGLE_ID || '627474771522-uskkhdsevat3rn15kgrqt62bdft15cpu.apps.googleusercontent.com',
  clientSecret: process.env.GOOGLE_SECRET || 'FwVkn76DKx_0BBaIAmRb6mjB',
  callbackURL: '/auth/google/callback'
},
github: {
  clientID: process.env.GITHUB_ID || '81b233b3394179bfe2bc',
  clientSecret: process.env.GITHUB_SECRET || 'de0322c0aa32eafaa84440ca6877ac5be9db9ca6',
  callbackURL: '/auth/github/callback'
}

Of course, you should never commit your development keys publicly either. Be sure to either not commit this file or to use private source control. The best idea is to only have secrets live on machines ephemerally (usually as environment variables). You especially should not use the keys provided here!

Third-party registration and login

Now we need to install and implement the various third-party registration strategies. To install them, run the following command:

npm install --save passport-twitter passport-google-oauth passport-github

Most of these are extraordinarily similar, so I will only show the TwitterStrategy, as follows:

passport.use(new TwitterStrategy(config.twitter, function(accessToken, tokenSecret, profile, done) {
  User.findOne({ twitter: profile.id }, function(err, existingUser) {
    if (existingUser) return done(null, existingUser);
    var user = new User();
    // Twitter will not provide an email address. Period.
    // But a person's twitter username is guaranteed to be unique,
    // so we can "fake" a twitter email address as follows:
    // username@twitter.mydomain.com
    user.email = profile.username + "@twitter." + config.domain + ".com";
    user.twitter = profile.id;
    user.tokens.push({ kind: 'twitter', accessToken: accessToken, tokenSecret: tokenSecret });
    user.profile.name = profile.displayName;
    user.profile.location = profile._json.location;
    user.profile.picture = profile._json.profile_image_url;
    user.save(function(err) {
      done(err, user);
    });
  });
}));

Here, I included one example of how we would do this. First, we pass a new TwitterStrategy to passport. The TwitterStrategy takes our Twitter keys and callback information, plus a callback that is used to register the user with that information. If the user is already registered, then it's a no-op; otherwise, we save their information and pass along the error and/or successfully saved user to the callback. For the others, refer to the source.

Profile pages

It is finally time to add profile pages for each of our users. To do so, we're going to discuss more about Express routing and how to pass request-specific data to Jade templates. Oftentimes when writing a server, you want to capture some portion of the URL to use in the controller; this could be a user id, a username, or anything! We'll use Express's ability to capture URL parts to get the id of the user whose profile page was requested.

URL params

Express, like any good web framework, supports extracting data from URL parts.
For example, you can do the following:

app.get('/users/:id', function(req, res, next) {
  console.log(req.params.id);
});

In the preceding example, we will print whatever comes after /users/ in the request URL. This provides an easy way to specify per-user routes, or routes that only make sense in the context of a specific user; that is, a profile page only makes sense when you specify a specific user. We will use this kind of routing to implement our profile page. For now, we want to make sure that only the logged-in user can see their own profile page (we can change this functionality later):

app.get('/users/:id', function(req, res, next) {
  if (!req.user || (req.user.id != req.params.id)) {
    return next('Not found');
  }
  res.render('users/profile', { user: req.user.toJSON() });
});

Here, we first check that the user is signed in and that the requested user's id is the same as the logged-in user's id. If it isn't, we return an error. If it is, we render the users/profile.jade template with req.user as the data.

Profile templates

We have already looked at models and controllers at length, but our templates have been underwhelming. Finally, we'll show how to write some basic Jade templates. This section serves as a brief introduction to the Jade templating language, but does not try to be comprehensive. The code for the profile template is as follows:

html
  body
    h1
      =user.email
    h2
      =user.created_at
    - for (var prop in user.profile)
      if user.profile[prop]
        h4
          =prop + "=" + user.profile[prop]

Notably, because we passed the user into the view from the controller, we can access the variable user, and it refers to the logged-in user! We can execute arbitrary JavaScript and render its result into the template by prefixing a line with =. In these blocks, we can do anything we would normally do, including string concatenation, method invocation, and so on. Similarly, we can include JavaScript code that is not meant to be rendered as HTML by prefixing it with -, as we did with the for loop. This basic template prints out the user's e-mail, the created_at timestamp, and all of the properties in their profile, if any. For a more in-depth look at Jade, please see http://jade-lang.com/reference/.

Testing

Testing is essential for any application. I will not dwell on the whys, but instead assume that you are angry with me for skipping this topic in the previous sections. Testing Express applications tends to be relatively straightforward and painless. The general format is that we make fake requests and then make certain assertions about the responses. We could also implement finer-grained unit tests for more complex logic, but up until now almost everything we did is straightforward enough to be tested on a per-route basis. Additionally, testing at the API level provides a more realistic view of how real customers will be interacting with your website and makes tests less brittle in the face of refactoring code.

Introducing Mocha

Mocha is a simple, flexible test framework runner. First, I would suggest installing Mocha globally so you can easily run tests from the command line, and also saving it as a development dependency:

$ npm install -g mocha
$ npm install --save-dev mocha

The --save-dev option saves mocha as a development dependency, meaning we don't have to install Mocha on our production servers. Mocha is just a test runner. We also need an assertion library.
There are a variety of solutions, but should.js, written by the same person as Express and Mocha, gives a clean syntax for making assertions:

$ npm install --save should

The should.js syntax provides BDD assertions, such as 'hello'.should.equal('hello') and [1,2].should.have.length(2). We can start with a Hello World test example by creating a test directory with a single file, hello-world.js, as shown in the following code:

var should = require('should');

describe('The World', function() {
  it('should say hello', function() {
    'Hello, World'.should.equal('Hello, World');
  });
  it('should say hello asynchronously!', function(done) {
    setTimeout(function() {
      'Hello, World'.should.equal('Hello, World');
      done();
    }, 300);
  });
});

We have two different tests, both in the same namespace, The World. The first test is an example of a synchronous test. Mocha executes the function we give it, sees that no exception gets thrown, and the test passes. If, instead, we accept a done argument in our callback, as we do in the second example, Mocha will intelligently wait until we invoke the callback before checking the validity of our test. For the most part, we will use the second version, in which we must explicitly invoke the done argument to finish our test, because it makes more sense for testing Express applications. Now, if we go back to the command line, we should be able to run mocha (or node_modules/.bin/mocha if you didn't install it globally) and see that both of the tests we wrote pass!

Testing API endpoints

Now that we have a basic understanding of how to run tests using Mocha and make assertions with the should syntax, we can apply it to test local user registration. First, we need to introduce another npm module that will help us test our server programmatically and make assertions about what kind of responses we expect. The library is called supertest:

$ npm install --save-dev supertest

The library makes testing Express applications a breeze and provides chainable assertions. Let's take a look at an example that tests our create user route, as shown in the following code:

var should = require('should'),
    request = require('supertest'),
    app = require('../server').app,
    User = require('mongoose').model('User');

describe('Users', function() {
  before(function(done) {
    User.remove({}, done);
  });
  describe('registration', function() {
    it('should register valid user', function(done) {
      request(app)
        .post('/users/register')
        .send({
          email: "test@example.com",
          password: "hello world"
        })
        .expect(302)
        .end(function(err, res) {
          res.text.should.containEql("Redirecting to /");
          done(err);
        });
    });
  });
});

First, notice that we used two namespaces: Users and registration. Before we run any tests, we remove all users from the database. This is useful to ensure we know where we're starting the tests. This will delete all of your saved users, though, so it's useful to use a different database in the test environment. Node detects the environment by looking at the NODE_ENV environment variable. Typically, it is test, development, staging, or production. We can do so by changing the database URL in our configuration file to use a different local database when in a test environment, and then running the Mocha tests with NODE_ENV=test mocha. Now, on to the interesting bits! Supertest exposes a chainable API for making requests and assertions about responses. To make a request, we use request(app).
From there, we specify the HTTP method and path. Then, we can specify a JSON body to send to the server; in this case, an example user registration form. On registration, we expect a redirect, which is a 302 response. If that assertion fails, then the err argument in our callback will be populated, and the test will fail when we call done(err). Additionally, we validate that we were redirected to the route we expect, the server root /.

Automate builds and deploys

All of this development is relatively worthless without a smooth process to build and deploy your application. Fortunately, the node community has written a variety of task runners. Among these are Grunt and Gulp, two of the most popular task runners. Both work seamlessly with Express and provide a set of utilities for us to use, including concatenating and uglifying JavaScript, compiling sass/less, and reloading the server on local file changes. We'll focus on Grunt, for simplicity.

Introducing the Gruntfile

Grunt itself is a simple task runner, but its extensibility and plugin architecture let you install third-party scripts to run in predefined tasks. To give us an idea of how we might use Grunt, we're going to write our css in sass and then use Grunt to compile sass to css. Through this example, we'll explore the different ideas that Grunt introduces. First, you need to install the Grunt CLI globally, and then install the plugin that compiles sass to css:

$ npm install -g grunt-cli
$ npm install --save grunt grunt-contrib-sass

Now we need to create Gruntfile.js, which contains instructions for all of the tasks and build targets that we need. To do this, perform the following:

// Gruntfile.js
module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.initConfig({
    sass: {
      dist: {
        files: [{
          expand: true,
          cwd: "public/styles",
          src: ["**.scss"],
          dest: "dist/styles",
          ext: ".css"
        }]
      }
    }
  });
};

Let's go over the major parts. Right at the top, we load the plugin we will use, grunt-contrib-sass. This tells grunt that we are going to configure a task called sass. In our definition of the task sass, we specify a target, dist, which is commonly used for tasks that produce production files (minified, concatenated, and so on). In that task, we build our file list dynamically, telling Grunt to look in /public/styles/ recursively for all .scss files, then compile them all to the same paths in /dist/styles. It is useful to have two parallel static directories, one for development and one for production, so we don't have to look at minified code in development. We can invoke this target by executing grunt sass or grunt sass:dist. It is worth noting that we don't explicitly concatenate the files in this task, but if we use @imports in our main sass file, the compiler will concatenate everything for us.

We can also configure Grunt to run our test suite. To do this, let's add another plugin:

$ npm install --save-dev grunt-mocha-test

Now we have to add the following code to our Gruntfile.js file:

grunt.loadNpmTasks('grunt-mocha-test');
grunt.registerTask('test', 'mochaTest');

// ...and, inside the grunt.initConfig({ ... }) block:
mochaTest: {
  test: {
    src: ["test/**.js"]
  }
}

Here, the task is called mochaTest and we register a new task called test that simply delegates to the mochaTest task. This way, it is easier to remember how to run the tests. Similarly, we could have specified a list of tasks to run if we passed an array of strings as the second argument to registerTask.
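To make that last point concrete, here is a small illustrative sketch (my own addition, not from the chapter) of registering a default task that runs the sass compilation and then the test suite:

// Gruntfile.js (sketch): a bare `grunt` now runs sass, then the mocha tests
grunt.registerTask('default', ['sass', 'test']);

Running grunt on its own is then equivalent to running grunt sass followed by grunt test.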
This is a sampling of what can be accomplished with Grunt. For an example of a more robust Gruntfile, check out the source.

Continuous integration with Travis

Travis CI provides free continuous integration for open source projects as well as paid options for closed source applications. It uses a git hook to automatically test your application after every push. This is useful to ensure no regression was introduced. Also, there can be dependency problems that only reveal themselves in CI and that local development masks; Travis is the first line of defense for these bugs. It takes your source, runs npm install to install the dependencies specified in package.json, and then runs npm test to run your test suite. Travis accepts a configuration file called .travis.yml, which typically looks like this:

language: node_js
node_js:
  - "0.11"
  - "0.10"
  - "0.8"
services:
  - mongodb

We can specify the versions of node that we want to test against as well as the services that we rely on (specifically MongoDB). Now we have to update the test command in package.json to run grunt test. Finally, we have to set up a webhook for the repository in question. We can do this on Travis by enabling the repository. Now we just have to push our changes and Travis will make sure all the tests pass! Travis is extremely flexible and you can use it to accomplish most tasks related to continuous integration, including automatically deploying successful builds.

Deploying Node.js applications

One of the easiest ways to deploy Node.js applications is to utilize Heroku, a platform-as-a-service provider. Heroku has its own toolbelt to create and deploy Heroku apps from your machine. Before getting started with Heroku, you will need to install its toolbelt. Please go to https://toolbelt.heroku.com/ to download the Heroku toolbelt. Once installed, you can log in to Heroku or register via the web UI and then run heroku login. Heroku uses a special file, called the Procfile, which specifies exactly how to run your application. Our Procfile looks like this:

web: node server.js

Extraordinarily simple: in order to run the web server, just run node server.js. In order to verify that our Procfile is correct, we can run the following locally:

$ foreman start

Foreman looks at the Procfile and uses it to try to start our server. Once that runs successfully, we need to create a new application and then deploy our application to Heroku. Be sure to commit the Procfile to version control:

$ heroku create
$ git push heroku master

Heroku will create a new application and URL in Heroku, as well as a git remote repository named heroku. Pushing to that remote actually triggers a deploy of your code. If you do all of this, unfortunately your application will not work yet. We don't have a Mongo instance for our application to talk to! First we have to request MongoDB from Heroku:

$ heroku addons:add mongolab    (don't worry, it's free)

This spins up a shared MongoDB instance and gives our application an environment variable named MONGOLAB_URI, which we should use as our MongoDB connect URI. We need to change our configuration file to reflect these changes: in production, for our database URL, we should look at the environment variable MONGOLAB_URI. Also, be sure that Express is listening on process.env.PORT || 3000, or else you will receive strange errors. With all of that set up, we can commit our changes and push once again to Heroku. Hopefully, this time it works!
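For reference, a rough sketch of the configuration changes described above might look like the following; the property names here (mongoUrl, port) are assumptions, so adapt them to however your config.js is actually organized:

// config.js (production values, sketch)
module.exports = {
  // Heroku sets MONGOLAB_URI when the mongolab addon is provisioned
  mongoUrl: process.env.MONGOLAB_URI || 'mongodb://localhost/app',
  // Heroku assigns the port at runtime, so never hardcode it
  port: process.env.PORT || 3000
};

// server.js (sketch)
var mongoose = require('mongoose');
var config = require('./config');

mongoose.connect(config.mongoUrl);
app.listen(config.port);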
To view the application logs for debugging purposes, just use the Heroku toolbelt:

$ heroku logs

One last thing about deploying Express applications: sometimes applications crash; software isn't perfect. We should anticipate crashes and have our application respond accordingly (by restarting itself). There are many server monitoring tools, including pm2 and forever. We use forever because of its simplicity:

$ npm install --save forever

Then, we update our Procfile to reflect our use of forever:

// Procfile
web: node_modules/.bin/forever server.js

Now, forever will automatically restart our application if it crashes for any strange reason. You can also set up Travis to automatically push successful builds to your server, but that goes beyond the deployment we will do in this book.

Summary

In this article, we got our feet wet in the world of Node.js and the Express framework. We went over everything from Hello World and MVC to testing and deployments. You should feel comfortable using the basic Express APIs, but also feel empowered to own a Node.js application across the entire stack.

Resources for Article:

Further resources on this subject:

Testing a UI Using WebDriverJS [article]
Applications of WebRTC [article]
Amazon Web Services [article]

AngularJS Web Application Development Cookbook

Packt
08 May 2015
2 min read
Architect performant applications and implement best practices in AngularJS. Packed with easy-to-follow recipes, this practical guide will show you how to unleash the full might of the AngularJS framework. Skip straight to practical solutions and quick, functional answers to your problems without hand-holding or slogging through the basics. (For more resources related to this topic, see here.)

Some highlights include:

Architecting recursive directives
Extensively customizing your search filter
Custom routing attributes
Animating ngRepeat
Animating ngInclude, ngView, and ngIf
Animating ngSwitch
Animating ngClass and class attributes
Animating ngShow and ngHide

The goal of this text is to have you walk away from reading about an AngularJS concept armed with a solid understanding of how it works, insight into the best ways to wield it in real-world applications, and annotated code examples to get you started.

Why you should buy this book

This is a collection of recipes demonstrating optimal organization, scalable architecture, and best practices for use in small and large-scale production applications. Each recipe contains complete, functioning examples and detailed explanations of how and why they are organized and built that way, as well as alternative design choices for different situations.

The author of this book is a full stack developer at DoorDash (YC S13), where he joined as the first engineer. He led their adoption of AngularJS, and he also focuses on the infrastructural, predictive, and data projects within the company. Matt has a degree in Computer Engineering from the University of Illinois at Urbana-Champaign. He is the author of the video series Learning AngularJS, available through O'Reilly Media. Previously, he worked as an engineer at several educational technology start-ups.

Almost every example in this book has been added to JSFiddle, with the links provided in the book. This allows you to merely visit a URL in order to test and modify the code with no setup of any kind, on any major browser and on any major operating system.

Resources for Article:

Further resources on this subject:

Working with Live Data and AngularJS [article]
Angular Zen [article]
AngularJS Project [article]

Learning MS Dynamics AX 2012 Programming

Packt
08 May 2015
3 min read
Overwhelmed by 1,000+ pages of continuous, unending documentation? Searching for a concise, yet all-in-one, guide to learning the latest version of MS Dynamics AX 2012? Well, you're welcome. Learning MS Dynamics AX 2012 Programming, as the name suggests, is an ideal book on programming customizable solutions using Microsoft Dynamics AX 2012. It is an updated edition of the very popular Microsoft Dynamics AX 2009 Programming: Getting Started, authored by Erlend Dalen. Continuing the same legacy, lead author Mohammed Rasheed goes even further in explaining the concepts of Dynamics AX using the latest version, MS Dynamics AX 2012 R3, with ample real-world examples wherever necessary. The book follows a structured approach to unveil the brilliant tools available in Dynamics AX 2012, along with the best practices for implementing efficient solutions in your own environment.

The book starts by giving an overview of the new Dynamics AX architecture and the tools available to a developer for programming their own solution. We also get to experiment with a simple program that allows us to print some text on the console. After this comprehensive introduction, you get a dedicated chapter on the X++ language, underlining its importance for programming intelligent solutions in Dynamics AX 2012. There is no doubt that even a novice learner will understand the concepts, which are explained in detail, down to the data types and operators.

Instead of bombarding you with complex real-life examples, a huge chunk of the first half of the book is dedicated to the various operations performed on data, the most important aspect of programming. You will find dedicated chapters on storing, searching, and manipulating data in Dynamics AX, along with how data interacts with users. Once you have acquainted yourself with the basics of Dynamics AX programming, you are smoothly driven into the real world of AX with an introduction to various important modules, such as Inventory and Ledger, among others. The complete working of one of the most elusive concepts of Dynamics AX, the journals, is explained with suitable examples.

At some point, you might be struck by the idea of creating a fantastic new module in AX that solves some of the difficulties your customers might be having. For this very purpose, this book shows you how to create a new module from scratch, complete with the basics of creating number sequences, parameter tables, and the security framework.

Why limit yourself to X++ for programming in Dynamics AX when you can also use .NET? This book also caters to the appetite of programmers who find solace in .NET. You will be able to use .NET classes as reference classes in AX using the Common Language Runtime. You will also learn how to use AX logic from external applications by using the .NET Business Connector. Finally, we expand our horizons and implement web services that expose AX logic over remote networks. In addition to that, you will learn how to publish and consume a web service using IIS. You will also learn how to create .aspx pages in Microsoft SharePoint based on the templates that come with the Enterprise Portal, and how to create Dynamics AX user controls that expose data from AX to the Enterprise Portal.

In all, you will no longer face the nightmare of adapting to the new architecture of MS Dynamics AX 2012, as long as you have as good a companion as this book.

Resources for Article:

Further resources on this subject:

Working with Data in Forms [article]
Financial Management with Microsoft Dynamics AX 2012 R3 [article]
.NET 4.5 Parallel Extensions – Async [article]

Learning Informatica PowerCenter 9.x

Packt
08 May 2015
3 min read
Informatica Corporation (Informatica), a multi-million dollar company established in February 1993, is an independent provider of enterprise data integration and data quality software and services. Informatica PowerCenter is Informatica's most widely used tool across the globe for various data integration processes. The Informatica PowerCenter tool helps integrate data from almost any business system, in almost any format. This flexibility in handling almost any data makes PowerCenter the most widely used tool in the data integration world. (For more resources related to this topic, see here.)

Key features

Learn the functionalities of each component in the Informatica PowerCenter tool and deploy them to accomplish executive reporting using logical data stores
Learn the core features of the Informatica PowerCenter tool along with its administration and architectural aspects
Develop skills to extract data and efficiently utilize it with the help of the world's most widely used integration tool, and make a promising career in Informatica PowerCenter

Difference in approach

The simple thought behind this book is to put together all the essential ingredients of Informatica, starting from basic things, such as downloads, extraction, and installation, to working on client tools and high-level aspects, such as scheduling, migration, and so on. There are multiple blogs available across the Internet that talk about the Informatica tool, but none presents end-to-end answers. We have tried to put all the steps and processes in a systematic manner to help you easily start with the learning. In this book, you will get a step-by-step procedure for every aspect of the Informatica PowerCenter tool. While writing this book, the author has kept in mind the importance of live, practical exposure to the graphical interface of the tool; hence, you will notice a lot of screenshots illustrating the steps to help you understand and follow the process. The chapters are arranged in such a way that all the aspects of the Informatica PowerCenter tool are covered, and they are in a proper flow in order to achieve the functionality. Here is a gist of the significant aspects of the book:

Installation of Informatica and information regarding the administrator console of the PowerCenter tool
The basic and advanced topics of the Designer Screen
Implementation of the different types of Slowly Changing Dimension
Understanding of the Workflow Manager
Monitoring the code
Implementation of mapping using different types of transformations
Classification of transformations
Usage of Repository Manager

Required skills

Before you make up your mind about learning Informatica, it is always recommended that you have a basic understanding of SQL and Unix. Though these are not mandatory, and you can easily use 90 percent of the Informatica PowerCenter tool without knowledge of them, the confidence to work on real-time SQL and Unix projects is a must-have in your kitty. People who know SQL will easily understand that ETL tools are nothing but a graphical representation of SQL. Unix is utilized in Informatica PowerCenter through its scripting aspect, which makes your life easy in some scenarios.

Summary

Informatica PowerCenter has emerged as one of the most useful ETL tools employed to build enterprise data warehouses. The PowerCenter tool can make your life easy and can offer you a great career path if learned properly. This book will thereby help you get a know-how of PowerCenter.

Resources for Article:

Further resources on this subject:

Transition to Readshift [article]
Cloudera Hadoop and HP Vertica [article]
Learning to Fly with Force.com [article]

Frontend SOA: Taming the beast of frontend web development

Wesley Cho
08 May 2015
6 min read
Frontend web development is a difficult domain for creating scalable applications. There are many challenges when it comes to architecture, such as how to best organize HTML, CSS, and JavaScript files, or how to create build tooling to allow an optimal development & production environment. In addition, complexity has increased measurably.

Templating & routing have been transplanted to the concern of frontend web engineers as a result of the push towards single page applications (SPAs). A wealth of frameworks can be found as listed on todomvc.com. AngularJS is one that rose to prominence almost two years ago on the back of declarative HTML, strong testability, and two-way data binding, but even now it is seeing some churn due to Angular 2.0 breaking backwards compatibility completely and the rise of React, which is Facebook's new view layer bringing the idea of a virtual DOM for performance optimization not previously seen in frontend web architecture. Angular 2.0 itself is also looking like a juggernaut with decoupled components that harken to more pure JavaScript, and it is already boasting performance gains of roughly 5x compared to Angular 1.x.

With this much churn, frontend web apps have become difficult to architect for the long term. This requires us to take a step back and think about the direction of browsers.

The Future of Browsers

We know that ECMAScript 6 (ES6) is already making its headway into browsers - ES6 changes how JavaScript is structured greatly with a proper module system, and adds a lot of syntactical sugar. Web Components are also going to change how we build our views. Instead of:

.home-view { ... }

We will be writing:

<template id="home-view">
  <style>
    ...
  </style>
  <my-navbar></my-navbar>
  <my-content></my-content>
  <script>
    ...
  </script>
</template>

<home-view></home-view>

<script>
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    var root = this.createRoot();
    var template = document.querySelector('#home-view');
    var clone = document.importNode(template.content, true);
    root.appendChild(clone);
  };
  document.registerElement('home-view', { prototype: proto });
</script>

This is drastically different from how we build components now. In addition, libraries & frameworks are already being built with this in mind. Angular 2 is using annotations provided by Traceur, Google's ES6 + ES7 to ES5 transpiler, to provide syntactical sugar for creating one-way bindings to the DOM and to DOM events. React and Ember also have plans to integrate Web Components into their workflows. Aurelia is already structured in a way to take advantage of it when it drops.

What can we do to future proof ourselves for when these technologies drop?

Solution

For starters, it is important to realize that creating HTML and CSS is relatively cheap compared to managing a complex JavaScript codebase built on top of a framework or library. Frontend web development is seeing architecture pains that have already been solved in other domains, except it has the additional problem of integrating UI into that structure. This seems to suggest that the solution is to create a frontend service-oriented architecture (SOA) where most of the heavy logic is offloaded to pure JavaScript with only utility library additions (i.e. Underscore/Lodash). This would allow us to choose view layers with relative ease, and move fast in case a particular view library/framework turns out not to meet requirements.

It also prevents the endemic problem of having to rewrite whole codebases due to having to swap out libraries/frameworks. For example, consider this sample Angular controller (a similarly contrived example can be created using other pieces of tech as well):

angular.module('DemoApp')
  .controller('DemoCtrl', function ($scope, $http) {
    $scope.getItems = function () {
      $http.get('/items/')
        .then(function (response) {
          $scope.items = response.data.items;
          $scope.$emit('items:received', $scope.items);
        });
    };
  });

This sample controller has a method getItems that fetches items, updates the model, and then emits the information so that parent views have access to that change. This is ugly because it hardcodes the application structure hierarchy and mixes it with server query logic, which is a separate concern. In addition, it also mixes the usage of Angular's internals into the application code, tying pure abstract logic heavily to the framework's internals. It is not all that uncommon to see developers make these simple architecture mistakes.

With the proper module system that ES6 brings, this simplifies to (items.js):

import {fetch} from 'fetch';

export class items {
  getAll() {
    return fetch.get('/items')
      .then(function (response) {
        return response.json();
      });
  }
};

And demoCtrl.js:

import {BaseCtrl} from './baseCtrl.js';
import {items} from './items';

export class DemoCtrl extends BaseCtrl {
  constructor() {
    super();
  }
  getItems() {
    let self = this;
    return items.getAll()
      .then(function (items) {
        self.items = items;
        return items;
      });
  }
};

And main.js:

import {items} from './items';
import {DemoCtrl} from './DemoCtrl';

angular.module('DemoApp', [])
  .factory('items', items)
  .controller('DemoCtrl', DemoCtrl);

If you want to use anything from $scope, you can modify the usage of DemoCtrl straight in the controller definition and just instantiate it inside the function. With promises, which are also available natively in ES6, you can chain onto them in the implementation of DemoCtrl in the Angular codebase.

The kicker about this approach is that it can also be done today in ES5, and it is not limited to Angular - it applies equally well to any other library or framework, such as Backbone, Ember, and React! It also allows you to churn out very testable code.

I recommend this as a best practice for architecting complex frontend web apps - the only caveat is if the other aspects of engineering prevent this from being a possibility, such as the business requirements of time and people resources available. This approach allows us to tame the beast of maintaining & scaling frontend web apps while still being able to adapt quickly to the constantly changing landscape.

About this author

Wesley Cho is a senior frontend engineer at Jiff (http://www.jiff.com/). He has contributed features & bug fixes and reported numerous issues to numerous libraries in the Angular ecosystem, including AngularJS, Ionic, UI Bootstrap, and UI Router, as well as authored several libraries.

Learning BeagleBone

Packt
08 May 2015
3 min read
Today it is hard to deny the influence of technology on our lives. We live in an era where almost everything is automated and computerized. Among all the technological advances humankind has achieved, the invention of yet another important device, the BeagleBone, adds more relevance to our lives as technology progresses. Having outgrown its rudimentary early stage, the BeagleBone is now equipped to deliver on its promise of helping developers innovate. (For more resources related to this topic, see here.)

Arranged in chronological order, this book unfolds the BeagleBone and the features you need as a beginner, walking you through the basics of BeagleBone boards along with exercises that guide a new user through the process of using the BeagleBone for the first time. Driving the current technology, you will find yourself at the center of innovation, programming the BeagleBone White and the BeagleBone Black in a standalone fashion. As you progress, you will:

Unbox a new BeagleBone
Connect to external electronics with GPIO pins and analog inputs, and fast boot into Angstrom Linux
Build a basic configuration of a desktop or a laptop system and program a BeagleBone board
Practice simple exercises using the basic resources available on the board
Build and refine an LED flasher
Connect your BeagleBone to mobile devices
Expand the BeagleBone for Bluetooth connectivity

This book is directed at beginners who want to use the BeagleBone as a vehicle for their learning, makers who want to use the BeagleBone to control their latest product, and anyone who wants to learn to leverage current mobile technology. You can apply this knowledge to your own projects or adapt one of the many open source projects for the BeagleBone. In the course of your project, you will learn more advanced techniques as you encounter hurdles. The theory presented here will provide a foundation to help surmount the challenges of your own projects.

After going through the exercises in this book, and thereby building an understanding of the essentials of the BeagleBone, you will not only be equipped with the tools that will magnify your capabilities, but also be inspired to commence your journey in this hardware era. Now that you have a foundation, go forth and build your embedded device with the BeagleBone!

Resources for Article:

Further resources on this subject:

Protecting GPG Keys in BeagleBone [article]
Making the Unit Very Mobile - Controlling Legged Movement [article]
Pulse width modulator [article]

Learning NGUI for Unity

Packt
08 May 2015
2 min read
NGUI is a fantastic GUI toolkit for Unity 3D, allowing fast and simple GUI creation and powerful functionality, with practically zero code required. NGUI is a robust UI system, both powerful and optimized. It is an effective plugin for Unity that gives you the power to create beautiful and complex user interfaces while reducing performance costs.

Compared to Unity's GUI features, NGUI is much more powerful and optimized. GUI development in Unity requires users to create UI features by scripting lines that display labels, textures, and other UI elements on the screen. These lines have to be written inside a special function, OnGUI(), that is called for every frame. However, this is no longer necessary with NGUI, as it makes the GUI process much simpler.

This book by Charles Pearson, the author of Learning NGUI for Unity, will help you leverage the power of NGUI for Unity to create stunning mobile and PC games and user interfaces. The book covers the following topics:

Getting started with NGUI
Creating NGUI widgets
Enhancing your UI
C# with NGUI
Atlas and font customization
The in-game user interface
3D user interface
Going mobile
Screen sizes and aspect ratios
User experience and best practices

This book is a practical tutorial that will guide you through creating a fully functional and localized main menu along with 2D and 3D in-game user interfaces. The book starts by teaching you about NGUI's workflow and creating a basic UI, before gradually moving on to building widgets and enhancing your UI. You will then switch to the Android platform to take care of different issues mobile devices may encounter. By the end of this book, you will have the knowledge to create ergonomic user interfaces for your existing and future PC or mobile games and applications developed with Unity 3D and NGUI.

The best part of this book is that it covers the user experience and also talks about the best practices to follow when using NGUI for Unity. If you are a Unity 3D developer who wants to create an effective and user-friendly GUI using NGUI for Unity, then this book is for you. Prior knowledge of C# scripting is expected; however, no knowledge of NGUI is required.

Resources for Article:

Further resources on this subject:

Unity Networking – The Pong Game [article]
Unit and Functional Tests [article]
Components in Unity [article]

VMware vRealize Operations Performance and Capacity Management

Packt
08 May 2015
4 min read
Virtualization is what allows companies like Dropbox and Spotify to operate internationally with ever-growing customer bases. From virtualizing desktops, applications, and operating systems to creating highly-available platforms that enable developers to quickly host operating systems and entire content delivery networks, this book centers on the tools, techniques, and platforms that administrators and developers use to decouple and utilize hardware and infrastructure resources to power applications and web services.

Key pointers

vCenter, vSphere, VMware, VM, Virtualization, SDDC
Counters, key counters, metric groups, vRealize, ESXi
Cluster, Datastore, Datastore Cluster, Datacenter
CPU, Network, Disk, Storage, Contention, Utilization, Memory
vSwitch, vMotion, Capacity Management, Performance Management, Dashboards, vC Ops

What the book covers

Content-wise, the book is split into two main parts. The first part provides the foundation and theory. The second part provides the solutions and sample use cases. The book aims to clear up the misunderstandings that customers have about SDDC. It explains why a VM is radically different from a physical server, and hence why a virtual data center is fundamentally different from a physical data center. It then covers the aspects of management that are affected. It also covers the practical side, showing how sample solutions are implemented. The chapters in the book cover both performance management and capacity management.

How the book differs

Virtualization is one of the biggest shifts in IT history. Almost all large enterprises are embarking on a journey to transform the IT department into a service provider. VMware vRealize Operations Management is a suite of products that automates operations management using patented analytics and an integrated approach to performance, capacity, and configuration management. vCenter Operations Manager is the most important component of this suite; it helps administrators maintain and troubleshoot your VMware environment as well as your physical environment. Written in a light and easy-to-follow style, the book stands out by covering the complex topic of managing performance and capacity when the datacenter is software-defined. It sets the foundation by demystifying deep-rooted misunderstandings about virtualization and virtual machines.

How will the book help you

Master the not-so-obvious differences between a physical server and a virtual machine that customers struggle with during management of a virtual datacenter
Educate and convince your peers on why and how performance and capacity management change in a virtual datacenter
Correct many misperceptions about virtualization
Know how your peers operationalize their vRealize Operations
Master all the key metrics in vSphere and vRealize Operations
Be confident in performance troubleshooting with vSphere and vRealize Operations
See real-life examples of how super metrics and advanced dashboards make management easier
Develop rich, custom dashboards with interaction and super metrics
Unlearn the knowledge that makes performance and capacity management difficult in SDDC
Master the counters in vCenter and vRealize Operations by knowing what they mean and their interdependencies
Build rich dashboards using a practical and easy-to-follow approach supported by real-life examples

Summary

The book teaches you how to get the best out of vCenter Operations in managing performance and capacity in a software-defined datacenter. It starts by explaining the difference between a software-defined datacenter and a classic physical datacenter, and how that impacts both architecture and operations. From this strategic view, the book then zooms into the most common challenge, which is performance management. It then covers all the key counters in both vSphere and vRealize Operations, explains their dependencies, and provides practical guidance on the values you should expect in a healthy environment. At the end, the book puts the theory together and provides real-life examples created together with customers. This book is an invaluable resource for those embarking on a journey to master virtualization.

Resources for Article:

Further resources on this subject:

Backups in the VMware View Infrastructure [Article]
VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article]
An Introduction to VMware Horizon Mirage [Article]