Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials - Web Development

1802 Articles
article-image-nodejs-fundamentals
Packt
22 May 2015
17 min read
Save for later

Node.js Fundamentals

Packt
22 May 2015
17 min read
This article is written by Krasimir Tsonev, the author of Node.js By Example. Node.js is one of the most popular JavaScript-driven technologies nowadays. It was created in 2009 by Ryan Dahl and since then, the framework has evolved into a well-developed ecosystem. Its package manager is full of useful modules and developers around the world have started using Node.js in their production environments. In this article, we will learn about the following: Node.js building blocks The main capabilities of the environment The package management of Node.js (For more resources related to this topic, see here.) Understanding the Node.js architecture Back in the days, Ryan was interested in developing network applications. He found out that most high performance servers followed similar concepts. Their architecture was similar to that of an event loop and they worked with nonblocking input/output operations. These operations would permit other processing activities to continue before an ongoing task could be finished. These characteristics are very important if we want to handle thousands of simultaneous requests. Most of the servers written in Java or C use multithreading. They process every request in a new thread. Ryan decided to try something different—a single-threaded architecture. In other words, all the requests that come to the server are processed by a single thread. This may sound like a nonscalable solution, but Node.js is definitely scalable. We just have to run different Node.js processes and use a load balancer that distributes the requests between them. Ryan needed something that is event-loop-based and which works fast. As he pointed out in one of his presentations, big companies such as Google, Apple, and Microsoft invest a lot of time in developing high performance JavaScript engines. They have become faster and faster every year. There, event-loop architecture is implemented. JavaScript has become really popular in recent years. The community and the hundreds of thousands of developers who are ready to contribute made Ryan think about using JavaScript. Here is a diagram of the Node.js architecture: In general, Node.js is made up of three things: V8 is Google's JavaScript engine that is used in the Chrome web browser (https://developers.google.com/v8/) A thread pool is the part that handles the file input/output operations. All the blocking system calls are executed here (http://software.schmorp.de/pkg/libeio.html) The event loop library (http://software.schmorp.de/pkg/libev.html) On top of these three blocks, we have several bindings that expose low-level interfaces. The rest of Node.js is written in JavaScript. Almost all the APIs that we see as built-in modules and which are present in the documentation, are written in JavaScript. Installing Node.js A fast and easy way to install Node.js is by visiting and downloading the appropriate installer for your operating system. For OS X and Windows users, the installer provides a nice, easy-to-use interface. For developers that use Linux as an operating system, Node.js is available in the APT package manager. The following commands will set up Node.js and Node Package Manager (NPM): sudo apt-get updatesudo apt-get install nodejssudo apt-get install npm Running Node.js server Node.js is a command-line tool. After installing it, the node command will be available on our terminal. The node command accepts several arguments, but the most important one is the file that contains our JavaScript. 
Let's create a file called server.js and put the following code inside: var http = require('http');http.createServer(function (req, res) {   res.writeHead(200, {'Content-Type': 'text/plain'});   res.end('Hello Worldn');}).listen(9000, '127.0.0.1');console.log('Server running at http://127.0.0.1:9000/'); If you run node ./server.js in your console, you will have the Node.js server running. It listens for incoming requests at localhost (127.0.0.1) on port 9000. The very first line of the preceding code requires the built-in http module. In Node.js, we have the require global function that provides the mechanism to use external modules. We will see how to define our own modules in a bit. After that, the scripts continue with the createServer and listen methods on the http module. In this case, the API of the module is designed in such a way that we can chain these two methods like in jQuery. The first one (createServer) accepts a function that is also known as a callback, which is called every time a new request comes to the server. The second one makes the server listen. The result that we will get in a browser is as follows: Defining and using modules JavaScript as a language does not have mechanisms to define real classes. In fact, everything in JavaScript is an object. We normally inherit properties and functions from one object to another. Thankfully, Node.js adopts the concepts defined by CommonJS—a project that specifies an ecosystem for JavaScript. We encapsulate logic in modules. Every module is defined in its own file. Let's illustrate how everything works with a simple example. Let's say that we have a module that represents this book and we save it in a file called book.js: // book.jsexports.name = 'Node.js by example';exports.read = function() {   console.log('I am reading ' + exports.name);} We defined a public property and a public function. Now, we will use require to access them: // script.jsvar book = require('./book.js');console.log('Name: ' + book.name);book.read(); We will now create another file named script.js. To test our code, we will run node ./script.js. The result in the terminal looks like this: Along with exports, we also have module.exports available. There is a difference between the two. Look at the following pseudocode. It illustrates how Node.js constructs our modules: var module = { exports: {} };var exports = module.exports;// our codereturn module.exports; So, in the end, module.exports is returned and this is what require produces. We should be careful because if at some point we apply a value directly to exports or module.exports, we may not receive what we need. Like at the end of the following snippet, we set a function as a value and that function is exposed to the outside world: exports.name = 'Node.js by example';exports.read = function() {   console.log('Iam reading ' + exports.name);}module.exports = function() { ... } In this case, we do not have an access to .name and .read. If we try to execute node ./script.js again, we will get the following output: To avoid such issues, we should stick to one of the two options—exports or module.exports—but make sure that we do not have both. We should also keep in mind that by default, require caches the object that is returned. So, if we need two different instances, we should export a function. 
Here is a version of the book class that provides API methods to rate the books and that do not work properly: // book.jsvar ratePoints = 0;exports.rate = function(points) {   ratePoints = points;}exports.getPoints = function() {   return ratePoints;} Let's create two instances and rate the books with different points value: // script.jsvar bookA = require('./book.js');var bookB = require('./book.js');bookA.rate(10);bookB.rate(20);console.log(bookA.getPoints(), bookB.getPoints()); The logical response should be 10 20, but we got 20 20. This is why it is a common practice to export a function that produces a different object every time: // book.jsmodule.exports = function() {   var ratePoints = 0;   return {     rate: function(points) {         ratePoints = points;     },     getPoints: function() {         return ratePoints;     }   }} Now, we should also have require('./book.js')() because require returns a function and not an object anymore. Managing and distributing packages Once we understand the idea of require and exports, we should start thinking about grouping our logic into building blocks. In the Node.js world, these blocks are called modules (or packages). One of the reasons behind the popularity of Node.js is its package management. Node.js normally comes with two executables—node and npm. NPM is a command-line tool that downloads and uploads Node.js packages. The official site, , acts as a central registry. When we create a package via the npm command, we store it there so that every other developer may use it. Creating a module Every module should live in its own directory, which also contains a metadata file called package.json. In this file, we have set at least two properties—name and version: {   "name": "my-awesome-nodejs-module",   "version": "0.0.1"} We can place whatever code we like in the same directory. Once we publish the module to the NPM registry and someone installs it, he/she will get the same files. For example, let's add an index.js file so that we have two files in the package: // index.jsconsole.log('Hello, this is my awesome Node.js module!'); Our module does only one thing—it displays a simple message to the console. Now, to upload the modules, we need to navigate to the directory containing the package.json file and execute npm publish. This is the result that we should see: We are ready. Now our little module is listed in the Node.js package manager's site and everyone is able to download it. Using modules In general, there are three ways to use the modules that are already created. All three ways involve the package manager: We may install a specific module manually. Let's say that we have a folder called project. We open the folder and run the following: npm install my-awesome-nodejs-module The manager automatically downloads the latest version of the module and puts it in a folder called node_modules. If we want to use it, we do not need to reference the exact path. By default, Node.js checks the node_modules folder before requiring something. So, just require('my-awesome-nodejs-module') will be enough. The installation of modules globally is a common practice, especially if we talk about command-line tools made with Node.js. It has become an easy-to-use technology to develop such tools. The little module that we created is not made as a command-line program, but we can still install it globally by running the following code: npm install my-awesome-nodejs-module -g Note the -g flag at the end. 
This is how we tell the manager that we want this module to be a global one. When the process finishes, we do not have a node_modules directory. The my-awesome-nodejs-module folder is stored in another place on our system. To be able to use it, we have to add another property to package.json, but we'll talk more about this in the next section. The resolving of dependencies is one of the key features of the package manager of Node.js. Every module can have as many dependencies as you want. These dependences are nothing but other Node.js modules that were uploaded to the registry. All we have to do is list the needed packages in the package.json file: {    "name": "another-module",    "version": "0.0.1",    "dependencies": {        "my-awesome-nodejs-module": "0.0.1"      } } Now we don't have to specify the module explicitly and we can simply execute npm install to install our dependencies. The manager reads the package.json file and saves our module again in the node_modules directory. It is good to use this technique because we may add several dependencies and install them at once. It also makes our module transferable and self-documented. There is no need to explain to other programmers what our module is made up of. Updating our module Let's transform our module into a command-line tool. Once we do this, users will have a my-awesome-nodejs-module command available in their terminals. There are two changes in the package.json file that we have to make: {   "name": "my-awesome-nodejs-module",   "version": "0.0.2",   "bin": "index.js"} A new bin property is added. It points to the entry point of our application. We have a really simple example and only one file—index.js. The other change that we have to make is to update the version property. In Node.js, the version of the module plays important role. If we look back, we will see that while describing dependencies in the package.json file, we pointed out the exact version. This ensures that in the future, we will get the same module with the same APIs. Every number from the version property means something. The package manager uses Semantic Versioning 2.0.0 (http://semver.org/). Its format is MAJOR.MINOR.PATCH. So, we as developers should increment the following: MAJOR number if we make incompatible API changes MINOR number if we add new functions/features in a backwards-compatible manner PATCH number if we have bug fixes Sometimes, we may see a version like 2.12.*. This means that the developer is interested in using the exact MAJOR and MINOR version, but he/she agrees that there may be bug fixes in the future. It's also possible to use values like >=1.2.7 to match any equal-or-greater version, for example, 1.2.7, 1.2.8, or 2.5.3. We updated our package.json file. The next step is to send the changes to the registry. This could be done again with npm publish in the directory that holds the JSON file. The result will be similar. We will see the new 0.0.2 version number on the screen: Just after this, we may run npm install my-awesome-nodejs-module -g and the new version of the module will be installed on our machine. The difference is that now we have the my-awesome-nodejs-module command available and if you run it, it displays the message written in the index.js file: Introducing built-in modules Node.js is considered a technology that you can use to write backend applications. As such, we need to perform various tasks. Thankfully, we have a bunch of helpful built-in modules at our disposal. 
Creating a server with the HTTP module We already used the HTTP module. It's perhaps the most important one for web development because it starts a server that listens on a particular port: var http = require('http');http.createServer(function (req, res) {   res.writeHead(200, {'Content-Type': 'text/plain'});   res.end('Hello Worldn');}).listen(9000, '127.0.0.1');console.log('Server running at http://127.0.0.1:9000/'); We have a createServer method that returns a new web server object. In most cases, we run the listen method. If needed, there is close, which stops the server from accepting new connections. The callback function that we pass always accepts the request (req) and response (res) objects. We can use the first one to retrieve information about incoming request, such as, GET or POST parameters. Reading and writing to files The module that is responsible for the read and write processes is called fs (it is derived from filesystem). Here is a simple example that illustrates how to write data to a file: var fs = require('fs');fs.writeFile('data.txt', 'Hello world!', function (err) {   if(err) { throw err; }   console.log('It is saved!');}); Most of the API functions have synchronous versions. The preceding script could be written with writeFileSync, as follows: fs.writeFileSync('data.txt', 'Hello world!'); However, the usage of the synchronous versions of the functions in this module blocks the event loop. This means that while operating with the filesystem, our JavaScript code is paused. Therefore, it is a best practice with Node to use asynchronous versions of methods wherever possible. The reading of the file is almost the same. We should use the readFile method in the following way: fs.readFile('data.txt', function(err, data) {   if (err) throw err;   console.log(data.toString());}); Working with events The observer design pattern is widely used in the world of JavaScript. This is where the objects in our system subscribe to the changes happening in other objects. Node.js has a built-in module to manage events. Here is a simple example: var events = require('events'); var eventEmitter = new events.EventEmitter(); var somethingHappen = function() {    console.log('Something happen!'); } eventEmitter .on('something-happen', somethingHappen) .emit('something-happen'); The eventEmitter object is the object that we subscribed to. We did this with the help of the on method. The emit function fires the event and the somethingHappen handler is executed. The events module provides the necessary functionality, but we need to use it in our own classes. Let's get the book idea from the previous section and make it work with events. Once someone rates the book, we will dispatch an event in the following manner: // book.js var util = require("util"); var events = require("events"); var Class = function() { }; util.inherits(Class, events.EventEmitter); Class.prototype.ratePoints = 0; Class.prototype.rate = function(points) {    ratePoints = points;    this.emit('rated'); }; Class.prototype.getPoints = function() {    return ratePoints; } module.exports = Class; We want to inherit the behavior of the EventEmitter object. The easiest way to achieve this in Node.js is by using the utility module (util) and its inherits method. The defined class could be used like this: var BookClass = require('./book.js'); var book = new BookClass(); book.on('rated', function() {    console.log('Rated with ' + book.getPoints()); }); book.rate(10); We again used the on method to subscribe to the rated event. 
The book class displays that message once we set the points. The terminal then shows the Rated with 10 text. Managing child processes There are some things that we can't do with Node.js. We need to use external programs for the same. The good news is that we can execute shell commands from within a Node.js script. For example, let's say that we want to list the files in the current directory. The file system APIs do provide methods for that, but it would be nice if we could get the output of the ls command: // exec.js var exec = require('child_process').exec; exec('ls -l', function(error, stdout, stderr) {    console.log('stdout: ' + stdout);    console.log('stderr: ' + stderr);    if (error !== null) {        console.log('exec error: ' + error);    } }); The module that we used is called child_process. Its exec method accepts the desired command as a string and a callback. The stdout item is the output of the command. If we want to process the errors (if any), we may use the error object or the stderr buffer data. The preceding code produces the following screenshot: Along with the exec method, we have spawn. It's a bit different and really interesting. Imagine that we have a command that not only does its job, but also outputs the result. For example, git push may take a few seconds and it may send messages to the console continuously. In such cases, spawn is a good variant because we get an access to a stream: var spawn = require('child_process').spawn; var command = spawn('git', ['push', 'origin', 'master']); command.stdout.on('data', function (data) {    console.log('stdout: ' + data); }); command.stderr.on('data', function (data) {    console.log('stderr: ' + data); }); command.on('close', function (code) {    console.log('child process exited with code ' + code); }); Here, stdout and stderr are streams. They dispatch events and if we subscribe to these events, we will get the exact output of the command as it was produced. In the preceding example, we run git push origin master and sent the full command responses to the console. Summary Node.js is used by many companies nowadays. This proves that it is mature enough to work in a production environment. In this article, we saw what the fundamentals of this technology are. We covered some of the commonly used cases. Resources for Article: Further resources on this subject: AngularJS Project [article] Exploring streams [article] Getting Started with NW.js [article]
Read more
  • 0
  • 0
  • 5816

article-image-getting-started-nwjs
Packt
21 May 2015
19 min read
Save for later

Getting Started with NW.js

Packt
21 May 2015
19 min read
In this article by Alessandro Benoit, author of the book NW.js Essentials, we will learn that until a while ago, developing a desktop application that was compatible with the most common operating systems required an enormous amount of expertise, different programming languages, and logics for each platform. (For more resources related to this topic, see here.) Yet, for a while now, the evolution of web technologies has brought to our browsers many web applications that have nothing to envy from their desktop alternative. Just think of Google apps such as Gmail and Calendar, which, for many, have definitely replaced the need for a local mail client. All of this has been made possible thanks to the amazing potential of the latest implementations of the Browser Web API combined with the incredible flexibility and speed of the latest server technologies. Although we live in a world increasingly interconnected and dependent on the Internet, there is still the need for developing desktop applications for a number of reasons: To overcome the lack of vertical applications based on web technologies To implement software solutions where data security is essential and cannot be compromised by exposing data on the Internet To make up for any lack of connectivity, even temporary Simply because operating systems are still locally installed Once it's established that we cannot completely get rid of desktop applications and that their implementation on different platforms requires an often prohibitive learning curve, it comes naturally to ask: why not make desktop applications out of the very same technologies used in web development? The answer, or at least one of the answers, is NW.js! NW.js doesn't need any introduction. With more than 20,000 stars on GitHub (in the top four hottest C++ projects of the repository-hosting service) NW.js is definitely one of the most promising projects to create desktop applications with web technologies. Paraphrasing the description on GitHub, NW.js is a web app runtime that allows the browser DOM to access Node.js modules directly. Node.js is responsible for hardware and operating system interaction, while the browser serves the graphic interface and implements all the functionalities typical of web applications. Clearly, the use of the two technologies may overlap; for example, if we were to make an asynchronous call to the API of an online service, we could use either a Node.js HTTP client or an XMLHttpRequest Ajax call inside the browser. Without going into technical details, in order to create desktop applications with NW.js, all you need is a decent understanding of Node.js and some expertise in developing HTML5 web apps. In this article, we are going to dissect the topic dwelling on these points: A brief technical digression on how NW.js works An analysis of the pros and cons in order to determine use scenarios Downloading and installing NW.js Development tools Making your first, simple "Hello World" application Important notes about NW.js (also known as Node-Webkit) and io.js Before January 2015, since the project was born, NW.js was known as Node-Webkit. Moreover, with Node.js getting a little sluggish, much to the concern of V8 JavaScript engine updates, from version 0.12.0, NW.js is not based on Node.js but on io.js, an npm-compatible platform originally based on Node.js. For the sake of simplicity, we will keep referring to Node.js even when talking about io.js as long as this does not affect a proper comprehension of the subject. 
NW.js under the hood As we stated in the introduction, NW.js, made by Roger Wang of Intel's Open Source Technology Center (Shanghai office) in 2011, is a web app runtime based on Node.js and the Chromium open source browser project. To understand how it works, we must first analyze its two components: Node.js is an efficient JavaScript runtime written in C++ and based on theV8 JavaScript engine developed by Google. Residing in the operating system's application layer, Node.js can access hardware, filesystems, and networking functionalities, enabling its use in a wide range of fields, from the implementation of web servers to the creation of control software for robots. (As we stated in the introduction, NW.js has replaced Node.js with io.js from version 0.12.0.) WebKit is a layout engine that allows the rendering of web pages starting from the DOM, a tree of objects representing the web page. NW.js is actually not directly based on WebKit but on Blink, a fork of WebKit developed specifically for the Chromium open source browser project and based on the V8 JavaScript engine as is the case with Node.js. Since the browser, for security reasons, cannot access the application layer and since Node.js lacks a graphical interface, Roger Wang had the insight of combining the two technologies by creating NW.js. The following is a simple diagram that shows how Node.js has been combined with WebKit in order to give NW.js applications access to both the GUI and the operating system: In order to integrate the two systems, which, despite speaking the same language, are very different, a couple of tricks have been adopted. In the first place, since they are both event-driven (following a logic of action/reaction rather than a stream of operations), the event processing has been unified. Secondly, the Node context was injected into WebKit so that it can access it. The amazing thing about it is that you'll be able to program all of your applications' logic in JavaScript with no concerns about where Node.js ends and WebKit begins. Today, NW.js has reached version 0.12.0 and, although still young, is one of the most promising web app runtimes to develop desktop applications adopting web technologies. Features and drawbacks of NW.js Let's check some of the features that characterize NW.js: NW.js allows us to realize modern desktop applications using HTML5, CSS3, JS, WebGL, and the full potential of Node.js, including the use of third-party modules The Native UI API allows you to implement native lookalike applications with the support of menus, clipboards, tray icons, and file binding Since Node.js and WebKit run within the same thread, NW.js has excellent performance With NW.js, it is incredibly easy to port existing web applications to desktop applications Thanks to the CLI and the presence of third-party tools, it's really easy to debug, package, and deploy applications on Microsoft Windows, Mac OS, and Linux However, all that glitters is not gold. There are some cons to consider when developing an application with NW.js: Size of the application: Since a copy of NW.js (70-90 MB) must be distributed along with each application, the size of the application makes it quite expensive compared to native applications. Anyway, if you're concerned about download times, compressing NW.js for distribution will save you about half the size. 
Difficulties in distributing your application through Mac App Store: In this article, it will not be discussed (just do a search on Google), but even if the procedure is rather complex, you can distribute your NW.js application through Mac App Store. At the moment, it is not possible to deploy a NW.js application on Windows Store due to the different architecture of .appx applications. Missing support for iOS or Android: Unlike other SDKs and libraries, at the moment, it is not possible to deploy an NW.js application on iOS or Android, and it does not seem to be possible to do so in the near future. However, the portability of the HTML, JavaScript, and CSS code that can be distributed on other platforms with tools such as PhoneGap or TideSDK should be considered. Unfortunately, this is not true for all of the features implemented using Node.js. Stability: Finally, the platform is still quite young and not bug-free. NW.js – usage scenarios The flexibility and good performance of NW.js allows its use in countless scenarios, but, for convenience, I'm going to report only a few notable ones: Development tools Implementation of the GUI around existing CLI tools Multimedia applications Web services clients Video games The choice of development platform for a new project clearly depends only on the developer; for the overall aim of confronting facts, it may be useful to consider some specific scenarios where the use of NW.js might not be recommended: When developing for a specific platform, graphic coherence is essential, and, perhaps, it is necessary to distribute the application through a store If the performance factor limits the use of the preceding technologies If the application does a massive use of the features provided by the application layer via Node.js and it has to be distributed to mobile devices Popular NW.js applications After summarizing the pros and cons of NW.js, let's not forget the real strength of the platform—the many applications built on top of NW.js that have already been distributed. We list a few that are worth noting: Wunderlist for Windows 7: This is a to-do list / schedule management app used by millions Cellist: This is an HTTP debugging proxy available on Mac App Store Game Dev Tycoon: This is one of the first NW.js games that puts you in the shoes of a 1980s game developer Intel® XDK: This is an HTML5 cross-platform solution that enables developers to write web and hybrid apps Downloading and installing NW.js Installing NW.js is pretty simple, but there are many ways to do it. One of the easiest ways is probably to run npm install nw from your terminal, but for the educational purposes, we're going to manually download and install it in order to properly understand how it works. You can find all the download links on the project website at http://nwjs.io/ or in the Downloads section on the GitHub project page at https://github.com/nwjs/nw.js/; from here, download the package that fits your operating system. For example, as I'm writing this article, Node-Webkit is at version 0.12.0, and my operating system is Mac OS X Yosemite 10.10 running on a 64-bit MacBook Pro; so, I'm going to download the nwjs-v0.12.0-osx-x64.zip file. Packages for Mac and Windows are zipped, while those for Linux are in the tar.gz format. Decompress the files and proceed, depending on your operating system, as follows. 
Installing NW.js on Mac OS X Inside the archive, we're going to find three files: Credits.html: This contains credits and licenses of all the dependencies of NW.js nwjs.app: This is the actual NW.js executable nwjc: This is a CLI tool used to compile your source code in order to protect it Before v0.12.0, the filename of nwjc was nwsnapshot. Currently, the only file that interests us is nwjs.app (the extension might not be displayed depending on the OS configuration). All we have to do is copy this file in the /Applications folder—your main applications folder. If you'd rather install NW.js using Homebrew Cask, you can simply enter the following command in your terminal: $ brew cask install nw If you are using Homebrew Cask to install NW.js, keep in mind that the Cask repository might not be updated and that the nwjs.app file will be copied in ~/Applications, while a symlink will be created in the /Applications folder. Installing NW.js on Microsoft Windows Inside the Microsoft Windows NW.js package, we will find the following files: credits.html: This contains the credits and licenses of all NW.js dependencies d3dcompiler_47.dll: This is the Direct3D library ffmpegsumo.dll: This is a media library to be included in order to use the <video> and <audio> tags icudtl.dat: This is an important network library libEGL.dll: This is the WebGL and GPU acceleration libGLESv2.dll: This is the WebGL and GPU acceleration locales/: This is the languages folder nw.exe: This is the actual NW.js executable nw.pak: This is an important JS library pdf.dll: This library is used by the web engine for printing nwjc.exe: This is a CLI tool to compile your source code in order to protect it Some of the files in the folder will be omitted during the final distribution of our application, but for development purposes, we are simply going to copy the whole content of the folder to C:/Tools/nwjs. Installing NW.js on Linux On Linux, the procedure can be more complex depending on the distribution you use. First, copy the downloaded archive into your home folder if you have not already done so, and then open the terminal and type the following command to unpack the archive (change the version accordingly to the one downloaded): $ gzip -dc nwjs-v0.12.0-linux-x64.tar.gz | tar xf - Now, rename the newly created folder in nwjs with the following command: $ mv ~/nwjs-v0.12.0-linux-x64 ~/nwjs Inside the nwjs folder, we will find the following files: credits.html: This contains the credits and licenses of all the dependencies of NW.js icudtl.dat This is an important network library libffmpegsumo.so: This is a media library to be included in order to use the <video> and <audio> tags locales/: This is a languages folder nw: This is the actual NW.js executable nw.pak: This is an important JS library nwjc: This is a CLI tool to compile your source code in order to protect it Open the folder inside the terminal and try to run NW.js by typing the following: $ cd nwjs$ ./nw If you get the following error, you are probably using a version of Ubuntu later than 13.04, Fedora later than 18, or another Linux distribution that uses libudev.so.1 instead of libudev.so.0: otherwise, you're good to go to the next step: error while loading shared libraries: libudev.so.0: cannot open shared object file: No such file or directory Until NW.js is updated to support libudev.so.1, there are several solutions to solve the problem. 
For me, the easiest solution is to type the following terminal command inside the directory containing nw: $ sed -i 's/udev.so.0/udev.so.1/g' nw This will replace the string related to libudev, within the application code, with the new version. The process may take a while, so wait for the terminal to return the cursor before attempting to enter the following: $ ./nw Eventually, the NW.js window should open properly. Development tools As you'll make use of third-party modules of Node.js, you're going to need npm in order to download and install all the dependencies; so, Node.js (http://nodejs.org/) or io.js (https://iojs.org/) must be obviously installed in your development environment. I know you cannot wait to write your first application, but before you start, I would like to introduce you to Sublime Text 2. It is a simple but sophisticated IDE, which, thanks to the support for custom build scripts, allows you to run (and debug) NW.js applications from inside the editor itself. If I wasn't convincing and you'd rather keep using your favorite IDE, you can skip to the next section; otherwise, follow these steps to install and configure Sublime Text 2: Download and install Sublime Text 2 for your platform from http://www.sublimetext.com/. Open it and from the top menu, navigate to Tools | Build System | New Build System. A new edit screen will open; paste the following code depending on your platform: On Mac OS X: {"cmd": ["nwjs", "--enable-logging",     "${project_path:${file_path}}"],"working_dir": "${project_path:${file_path}}","path": "/Applications/nwjs.app/Contents/MacOS/"} On Microsoft Windows: {"cmd": ["nw.exe", "--enable-logging",     "${project_path:${file_path}}"],"working_dir": "${project_path:${file_path}}","path": "C:/Tools/nwjs/","shell": true} On Linux: {"cmd": ["nw", "--enable-logging",     "${project_path:${file_path}}"],"working_dir": "${project_path:${file_path}}","path": "/home/userName/nwjs/"} Type Ctrl + S (Cmd + S on Mac) and save the file as nw-js.sublime-build. Perfect! Now you are ready to run your applications directly from the IDE. There are a lot of packages, such as SublimeLinter, LiveReload, and Node.js code completion, available to Sublime Text 2. In order to install them, you have to install Package Control first. Just open https://sublime.wbond.net/installation and follow the instructions. Writing and running your first "Hello World" app Finally, we are ready to write our first simple application. We're going to revisit the usual "Hello World" application by making use of a Node.js module for markdown parsing. "Markdown is a plain text formatting syntax designed so that it can be converted to HTML and many other formats using a tool by the same name."                                                                                                              – Wikipedia Let's create a Hello World folder and open it in Sublime Text 2 or in your favorite IDE. Now open a new package.json file and type in the following JSON code: {"name": "nw-hello-world","main": "index.html","dependencies": {   "markdown": "0.5.x"}} The package.json manifest file is essential for distribution as it determines many of the window properties and primary information about the application. Moreover, during the development process, you'll be able to declare all of the dependencies. In this specific case, we are going to assign the application name, the main file, and obviously our dependency, the markdown module, written by Dominic Baggott. 
If you so wish, you can create the package.json manifest file using the npm init command from the terminal as you're probably used to already when creating npm packages. Once you've saved the package.json file, create an index.html file that will be used as the main application file and type in the following code: <!DOCTYPE html><html><head>   <title>Hello World!</title></head><body>   <script>   <!--Here goes your code-->   </script></body></html> As you can see, it's a very common HTML5 boilerplate. Inside the script tag, let's add the following: var markdown = require("markdown").markdown,   div = document.createElement("div"),   content = "#Hello World!n" +   "We are using **io.js** " +   "version *" + process.version + "*"; div.innerHTML = markdown.toHTML(content);document.body.appendChild(div); What we do here is require the markdown module and then parse the content variable through it. To keep it as simple as possible, I've been using Vanilla JavaScript to output the parsed HTML to the screen. In the highlighted line of code, you may have noticed that we are using process.version, a property that is a part of the Node.js context. If you try to open index.html in a browser, you'd get the Reference Error: require is not defined error as Node.js has not been injected into the WebKit process. Once you have saved the index.html file, all that is left is to install the dependencies by running the following command from the terminal inside the project folder: $ npm install And we are ready to run our first application! Running NW.js applications on Sublime Text 2 If you opted for Sublime Text 2 and followed the procedure in the development tools section, simply navigate to Project | Save Project As and save the hello-world.sublime-project file inside the project folder. Now, in the top menu, navigate to Tools | Build System and select nw-js. Finally, press Ctrl + B (or Cmd + B on Mac) to run the program. If you have opted for a different IDE, just follow the upcoming steps depending on your operating system. Running NW.js applications on Microsoft Windows Open the command prompt and type: C:> c:Toolsnwjsnw.exe c:pathtotheproject On Microsoft Windows, you can also drag the folder containing package.json to nw.exe in order to open it. Running NW.js applications on Mac OS Open the terminal and type: $ /Applications/nwjs.app/Contents/MacOS/nwjs /path/to/the/project/ Or, if running NW.js applications inside the directory containing package.json, type: $ /Applications/nwjs.app/Contents/MacOS/nwjs. As you can see in Mac OS X, the NW.js kit's executable binary is in a hidden directory within the .app file. Running NW.js applications on Linux Open the terminal and type: $ ~/nwjs/nw /path/to/the/project/ Or, if running NW.js applications inside the directory containing package.json, type: $ ~/nwjs/nw . Running the application, you may notice that a few errors are thrown depending on your platform. As I stated in the pros and cons section, NW.js is still young, so that's quite normal, and probably we're talking about minor issues. However, you can search in the NW.js GitHub issues page in order to check whether they've already been reported; otherwise, open a new issue—your help would be much appreciated. Now, regardless of the operating system, a window similar to the following one should appear: As illustrated, the process.version object variable has been printed properly as Node.js has correctly been injected and can be accessed from the DOM. 
Perhaps, the result is a little different than what you expected since the top navigation bar of Chromium is visible. Do not worry! You can get rid of the navigation bar at any time simply by adding the window.toolbar = false parameter to the manifest file, but for now, it's important that the bar is visible in order to debug the application. Summary In this article, you discovered how NW.js works under the hood, the recommended tools for development, a few usage scenarios of the library, and eventually, how to run your first, simple application using third-party modules of Node.js. I really hope I haven't bored you too much with the theoretical concepts underlying the functioning of NW.js; I really did my best to keep it short.
Read more
  • 0
  • 0
  • 21053

article-image-introducing-web-components
Packt
19 May 2015
16 min read
Save for later

Introducing Web Components

Packt
19 May 2015
16 min read
In this article by Sandeep Kumar Patel, author of the book Learning Web Component Development, we will learn about the web component specification in detail. Web component is changing the web application development process. It comes with standard and technical features, such as templates, custom elements, Shadow DOM, and HTML Imports. The main topics that we will cover in this article about web component specification are as follows: What are web components? Benefits and challenges of web components The web component architecture Template element HTML Import (For more resources related to this topic, see here.) What are web components? Web components are a W3C specification to build a standalone component for web applications. It helps developers leverage the development process to build reusable and reliable widgets. A web application can be developed in various ways, such as page focus development and navigation-based development, where the developer writes the code based on the requirement. All of these approaches fulfil the present needs of the application, but may fail in the reusability perspective. This problem leads to component-based development. Benefits and challenges of web components There are many benefits of web components: A web component can be used in multiple applications. It provides interoperability between frameworks, developing the web component ecosystem. This makes it reusable. A web component has a template that can be used to put the entire markup separately, making it more maintainable. As web components are developed using HTML, CSS, and JavaScript, it can run on different browsers. This makes it platform independent. Shadow DOM provides encapsulation mechanism to style, script, and HTML markup. This encapsulation mechanism provides private scope and prevents the content of the component being affected by the external document. Equally, some of the challenges for a web component include: Implementation: The W3C web component specification is very new to the browser technology and not completely implemented by the browsers. Shared resource: A web component has its own scoped resources. There may be cases where some of the resources between the components are common. Performance: Increase in the number of web components takes more time to get used inside the DOM. Polyfill size: The polyfill are a workaround for a feature that is not currently implemented by the browsers. These polyfill files have a large memory foot print. SEO: As the HTML markup present inside the template is inert, it creates problems in the search engine for the indexing of web pages. The web component architecture The W3C web component specification has four main building blocks for component development. Web component development is made possible by template, HTML Imports, Shadow DOM, and custom elements and decorators. However, decorators do not have a proper specification at present, which results in the four pillars of web component paradigm. The following diagram shows the building blocks of web component: These four pieces of technology power a web component that can be reusable across the application. In the coming section, we will explore these features in detail and understand how they help us in web component development. Template element The HTML <template> element contains the HTML markup, style, and script, which can be used multiple times. The templating process is nothing new to a web developer. 
Handlebars, Mustache, and Dust are the templating libraries that are already present and heavily used for web application development. To streamline this process of template use, W3C web component specification has included the <template> element. This template element is very new to web development, so it lacks features compared to the templating libraries such as Handlebars.js that are present in the market. In the near future, it will be equipped with new features, but, for now, let's explore the present template specification. Template element detail The HTML <template> element is an HTMLTemplateElement interface. The interface definition language (IDL) definition of the template element is listed in the following code: interface HTMLTemplateElement : HTMLElement {readonly attribute DocumentFragment content;}; The preceding code is written in IDL language. This IDL language is used by the W3C for writing specification. Browsers that support HTML Import must implement the aforementioned IDL. The details of the preceding code are listed here: HTMLTemplateElement: This is the template interface and extends the HTMLElement class. content: This is the only attribute of the HTML template element. It returns the content of the template and is read-only in nature. DocumentFragment: This is a return type of the content attribute. DocumentFragment is a lightweight version of the document and does not have a parent. To find out more about DocumentFargment, use the following link: https://developer.mozilla.org/en/docs/Web/API/DocumentFragment Template feature detection The HTML <template> element is very new to web application development and not completely implemented by all browsers. Before implementing the template element, we need to check the browser support. The JavaScript code for template support in a browser is listed in the following code: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: template support</title></head><body><h1 id="message"></h1><script>var isTemplateSupported = function () {var template = document.createElement("template");return 'content' in template;};var isSupported = isTemplateSupported(),message = document.getElementById("message");if (isSupported) {message.innerHTML = "Template element is supported by thebrowser.";} else {message.innerHTML = "Template element is not supported bythe browser.";}</script></body></html> In the preceding code, the isTemplateSupported method checks the content property present inside the template element. If the content attribute is present inside the template element, this method returns either true or false. If the template element is supported by the browser, the h1 element will show the support message. The browser that is used to run the preceding code is Chrome 39 release. The output of the preceding code is shown in following screenshot: The preceding screenshot shows that the browser used for development is supporting the HTML template element. There is also a great online tool called "Can I Use for checking support for the template element in the current browser. To check out the template support in the browser, use the following link: http://caniuse.com/#feat=template The following screenshot shows the current status of the support for the template element in the browsers using the Can I Use online tool: Inert template The HTML content inside the template element is inert in nature until it is activated. 
The inertness of template content contributes to increasing the performance of the web application. The following code demonstrates the inertness of the template content: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: A inert template content example.</title></head><body><div id="message"></div><template id="aTemplate"><img id="profileImage"src="http://www.gravatar.com/avatar/c6e6c57a2173fcbf2afdd5fe6786e92f.png"><script>alert("This is a script.");</script></template><script>(function(){var imageElement =document.getElementById("profileImage"),messageElement = document.getElementById("message");messageElement.innerHTML = "IMG element "+imageElement;})();</script></body></html> In the preceding code, a template contains an image element with the src attribute, pointing to a Gravatar profile image, and an inline JavaScript alert method. On page load, the document.getElementById method is looking for an HTML element with the #profileImage ID. The output of the preceding code is shown in the following screenshot: The preceding screenshot shows that the script is not able to find the HTML element with the profileImage ID and renders null in the browser. From the preceding screenshot it is evident that the content of the template is inert in nature. Activating a template By default, the content of the <template> element is inert and are not part of the DOM. The two different ways that can be used to activate the nodes are as follows: Cloning a node Importing a node Cloning a node The cloneNode method can be used to duplicate a node. The syntax for the cloneNode method is listed as follows: <Node> <target node>.cloneNode(<Boolean parameter>) The details of the preceding code syntax are listed here: This method can be applied on a node that needs to be cloned. The return type of this method is Node. The input parameter for this method is of the Boolean type and represents a type of cloning. There are 2 different types of cloning, listed as follows: Deep cloning: In deep cloning, the children of the targeted node also get copied. To implement deep cloning, the Boolean input parameter to cloneNode method needs to be true. Shallow cloning: In shallow cloning, only the targeted node is copied without the children. To implement shallow cloning the Boolean input parameter to cloneNode method needs to be false. The following code shows the use of the cloneNode method to copy the content of a template, having the h1 element with some text: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: Activating template using cloneNode method</title></head><body><div id="container"></div><template id="aTemplate"><h1>Template is activated using cloneNode method.</h1></template><script>var aTemplate = document.querySelector("#aTemplate"),container = document.getElementById("container"),templateContent = aTemplate.content,activeContent = templateContent.cloneNode(true);container.appendChild(activeContent);</script></body></html> In the preceding code, the template element has the aTemplate ID and is referenced using the querySelector method. The HTML markup content inside the template is then retrieved using a content property and saved in a templateContent variable. The cloneNode method is then used for deep cloning to get the activated node that is later appended to a div element. 
The following screenshot shows the output of the preceding code: To find out more about the cloneNode method visit: https://developer.mozilla.org/en-US/docs/Web/API/Node.cloneNode Importing a node The importNode method is another way of activating the template content. The syntax for the aforementioned method is listed in the following code: <Node> document.importNode(<target node>,<Boolean parameter>) The details of the preceding code syntax are listed as follows: This method returns a copy of the node from an external document. This method takes two input parameters. The first parameter is the target node that needs to be copied. The second parameter is a Boolean flag and represents the way the target node is cloned. If the Boolean flag is false, the importNode method makes a shallow copy, and for a true value, it makes a deep copy. The following code shows the use of the importNode method to copy the content of a template containing an h1 element with some text: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: Activating template using importNode method</title></head><body><div id="container"></div><template id="aTemplate"><h1>Template is activated using importNode method.</h1></template><script>var aTemplate = document.querySelector("#aTemplate"),container = document.getElementById("container"),templateContent = aTemplate.content,activeContent = document.importNode(templateContent,true);container.appendChild(activeContent);</script></body></html> In the preceding code, the template element has the aTemplate ID and is referenced using the querySelector method. The HTML markup content inside the template is then retrieved using the content property and saved in the templateContent variable. The importNode method is then used for deep cloning to get the activated node that is later appended to a div element. The following screenshot shows the output of the preceding code: To find out more about the importNode method, visit: http://mdn.io/importNode HTML Import The HTML Import is another important piece of technology of the W3C web component specification. It provides a way to include another HTML document present in a file with the current document. HTML Imports provide an alternate solution to the Iframe element, and are also great for resource bundling. The syntax of the HTML Imports is listed as follows: <link rel="import" href="fileName.html"> The details of the preceding syntax are listed here: The HTML file can be imported using the <link> tag and the rel attribute with import as the value. The href string points to the external HTML file that needs to be included in the current document. The HTML import element is implemented by the HTMLElementLink class. The IDL definition of HTML Import is listed in the following code: partial interface LinkImport {readonly attribute Document? import;};HTMLLinkElement implements LinkImport; The preceding code shows IDL for the HTML Import where the parent interface is LinkImport which has the readonly attribute import. The HTMLLinkElement class implements the LinkImport parent interface. The browser that supports HTML Import must implement the preceding IDL. HTML Import feature detection The HTML Import is new to the browser and may not be supported by all browsers. To check the support of the HTML Import in the browser, we need to check for the import property that is present inside a <link> element. 
The code to check the HTML import support is as follows: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: HTML import support</title></head><body><h1 id="message"></h1><script>var isImportSupported = function () {var link = document.createElement("link");return 'import' in link;};var isSupported = isImportSupported(),message = document.getElementById("message");if (isSupported) {message.innerHTML = "Import is supported by the browser.";} else {message.innerHTML = "Import is not supported by thebrowser.";}</script></body></html> The preceding code has a isImportSupported function, which returns the Boolean value for HTML import support in the current browser. The function creates a <link> element and then checks the existence of an import attribute using the in operator. The following screenshot shows the output of the preceding code: The preceding screenshot shows that the import is supported by the current browser as the isImportSupported method returns true. The Can I Use tool can also be utilized for checking support for the HTML Import in the current browser. To check out the template support in the browser, use the following link: http://caniuse.com/#feat=imports The following screenshot shows the current status of support for the HTML Import in browsers using the Can I Use online tool: Accessing the HTML Import document The HTML Import includes the external document to the current page. We can access the external document content using the import property of the link element. In this section, we will learn how to use the import property to refer to the external document. The message.html file is an external HTML file document that needs to be imported. The content of the message.html file is as follows: <h1>This is from another HTML file document.</h1> The following code shows the HTML document where the message.html file is loaded and referenced by the import property: <!DOCTYPE html><html><head lang="en"><link rel="import" href="message.html"></head><body><script>(function(){var externalDocument =document.querySelector('link[rel="import"]').import;headerElement = externalDocument.querySelector('h1')document.body.appendChild(headerElement.cloneNode(true));})();</script></body></html> The details of the preceding code are listed here: In the header section, the <link> element is importing the HTML document present inside the message.html file. In the body section, an inline <script> element using the document.querySelector method is referencing the link elements having the rel attribute with the import value. Once the link element is located, the content of this external document is copied using the import property to the externalDocument variable. The header h1 element inside the external document is then located using a quesrySelector method and saved to the headerElement variable. The header element is then deep copied using the cloneNode method and appended to the body element of the current document. The following screenshot shows the output of the preceding code: HTML Import events The HTML <link> element with the import attribute supports two event handlers. These two events are listed "as follows: load: This event is fired when the external HTML file is imported successfully onto the current page. A JavaScript function can be attached to the onload attribute, which can be executed on a successful load of the external HTML file. error: This event is fired when the external HTML file is not loaded or found(HTTP code 404 not found). 
A JavaScript function can be attached to the onerror attribute, which is executed if an error occurs while importing the external HTML file. The following code shows the use of these two event types while importing the message.html file to the current page: <!DOCTYPE html><html><head lang="en"><script async>function handleSuccess(e) {/* import loaded successfully */ var targetLink = e.target, externalDocument = targetLink.import, headerElement = externalDocument.querySelector('h1'), clonedHeaderElement = headerElement.cloneNode(true); document.body.appendChild(clonedHeaderElement);}function handleError(e) {/* error in load */ alert("error in import");}</script><link rel="import" href="message.html" onload="handleSuccess(event)" onerror="handleError(event)"></head><body></body></html> The details of the preceding code are listed here: handleSuccess: This method is attached to the onload attribute and is executed on the successful load of message.html in the current document. The handleSuccess method imports the document present inside the message.html file, then it finds the h1 element and makes a deep copy of it. The cloned h1 element then gets appended to the body element. handleError: This method is attached to the onerror attribute of the <link> element. This method will be executed if the message.html file is not found. As the message.html file is imported successfully, the handleSuccess method gets executed and the header element h1 is rendered in the browser. The following screenshot shows the output of the preceding code: Summary In this article, we learned about the web component specification. We also explored the building blocks of web components, such as HTML Imports and templates. Resources for Article: Further resources on this subject: Learning D3.js Mapping [Article] Machine Learning [Article] Angular 2.0 [Article]


Building a Basic Express Site

Packt
12 May 2015
34 min read
In this article by Ben Augarten, Marc Kuo, Eric Lin, Aidha Shaikh, Fabiano Pereira Soriani, Geoffrey Tisserand, Chiqing Zhang, Kan Zhang, authors of the book Express.js Blueprints, we will see how Node.js uses Google Chrome's JavaScript engine, V8, to execute code. Node.js is single-threaded and event-driven. It uses non-blocking I/O to squeeze every ounce of processing power out of the CPU. Express builds on top of Node.js, providing all of the tools necessary to develop robust web applications with node. In addition, by utilizing Express, one gains access to a host of open source software to help solve common pain points in development. The framework is unopinionated, meaning it does not guide you one way or the other in terms of implementation or interface. Because it is unopinionated, the developer has more control and can use the framework to accomplish nearly any task; however, the power Express offers is easily abused. In this book, you will learn how to use the framework in the right way by exploring the following different styles of an application: Setting up Express for a static site Local user authentication OAuth with passport Profile pages Testing (For more resources related to this topic, see here.) Setting up Express for a static site To get our feet wet, we'll first go over how to respond to basic HTTP requests. In this example, we will handle several GET requests, responding first with plaintext and then with static HTML. However, before we get started, you must install two essential tools: node and npm, which is the node package manager. Navigate to https://nodejs.org/download/ to install node and npm. Saying Hello, World in Express For those unfamiliar with Express, we will start with a basic example—Hello World! We'll start with an empty directory. As with any Node.js project, we will run the following code to generate our package.json file, which keeps track of metadata about the project, such as dependencies, scripts, licenses, and even where the code is hosted: $ npm init The package.json file keeps track of all of our dependencies so that we don't have versioning issues, don't have to include dependencies with our code, and can deploy fearlessly. You will be prompted with a few questions. Choose the defaults for all except the entry point, which you should set to server.js. There are many generators out there that can help you generate new Express applications, but we'll create the skeleton this time around. Let's install Express. To install a module, we use npm to install the package. We use the --save flag to tell npm to add the dependency to our package.json file; that way, we don't need to commit our dependencies to source control. We can just install them based on the contents of the package.json file (npm makes this easy): $ npm install --save express We'll be using Express v4.4.0 throughout this book. Warning: Express v4.x is not backwards compatible with the versions before it. You can create a new file server.js as follows: var express = require('express'); var app = express();   app.get('/', function(req, res, next) { res.send('Hello, World!'); });   app.listen(3000); console.log('Express started on port 3000'); This file is the entry point for our application. It is here that we generate an application, register routes, and finally listen for incoming requests on port 3000. The require('express') method returns a generator of applications.
We can continually create as many applications as we want; in this case, we only created one, which we assigned to the variable app. Next, we register a GET route that listens for GET requests on the server root, and when requested, sends the string 'Hello, World' to the client. Express has methods for all of the HTTP verbs, so we could have also done app.post, app.put, app.delete, or even app.all, which responds to all HTTP verbs. Finally, we start the app listening on port 3000, then log to standard out. It's finally time to start our server and make sure everything works as expected. $ node server.js We can validate that everything is working by navigating to http://localhost:3000 in our browser or curl -v localhost:3000 in your terminal. Jade templating We are now going to extract the HTML we send to the client into a separate template. After all, it would be quite difficult to render full HTML pages simply by using res.send. To accomplish this, we will use a templating language frequently used in conjunction with Express -- Jade. There are many templating languages that you can use with Express. We chose Jade because it greatly simplifies writing HTML and was created by the same developer as the Express framework. $ npm install --save jade After installing Jade, we're going to have to add the following code to server.js: app.set('view engine', 'jade'); app.set('views', __dirname + '/views');   app.get('/', function(req, res, next) { res.render('index'); }); The preceding code sets the default view engine for Express—sort of like telling Express that in the future it should assume that, unless otherwise specified, templates are in the Jade templating language. Calling app.set sets a key-value pair for Express internals. You can think of this as a sort of application-wide configuration. We could call app.get('view engine') to retrieve our set value at any time. We also specify the folder that Express should look into to find view files. That means we should create a views directory in our application and add a file, index.jade, to it. Alternatively, if you want to include many different template types, you could execute the following: app.engine('jade', require('jade').__express); app.engine('html', require('ejs').__express); app.get('/html', function(req, res, next) { res.render('index.html'); });   app.get('/jade', function(req, res, next) { res.render('index.jade'); }); Here, we set custom template rendering based on the extension of the template we want to render. We use the Jade renderer for .jade extensions and the ejs renderer for .html extensions, and expose both of our index files by different routes. This is useful if you choose one templating option and later want to switch to a new one in an incremental way. You can refer to the source for the most basic of templates. Local user authentication The majority of applications require user accounts. Some applications only allow authentication through third parties, but not all users are interested in authenticating through third parties for privacy reasons, so it is important to include a local option. Here, we will go over best practices when implementing local user authentication in an Express app. We'll be using MongoDB to store our users and Mongoose as an ODM (Object Document Mapper). Then, we'll leverage passport to simplify the session handling and provide a unified view of authentication.
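Before diving into the user model, it may help to see roughly how these pieces could be wired together in server.js. The following is a minimal sketch, not taken from the book's source; the database name, session secret, and exact middleware options are assumptions for illustration:

var express = require('express');
var mongoose = require('mongoose');
var bodyParser = require('body-parser');
var session = require('express-session');
var passport = require('passport');

var app = express();

// Connect Mongoose to a local MongoDB instance (the database name is an assumption)
mongoose.connect('mongodb://localhost/express_blueprints');

// Parse form bodies so req.body is populated for registration and login posts
app.use(bodyParser.urlencoded({ extended: false }));

// Sessions are required for passport's persistent login support
app.use(session({
  secret: 'replace-with-a-real-secret', // keep real secrets out of source control
  resave: false,
  saveUninitialized: false
}));

// Wire up passport's middleware; serialization is configured later in passport.js
app.use(passport.initialize());
app.use(passport.session());

app.listen(3000);

With a skeleton along these lines in place, the user model and passport configuration described next live in their own modules and are simply required from server.js.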
Downloading the example code You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you. User object modeling We will leverage passportjs to handle user authentication. Passport centralizes all of the authentication logic and provides convenient ways to authenticate locally in addition to third parties, such as Twitter, Google, GitHub, and so on. First, install passport and the local authentication strategy as follows: $ npm install --save passport passport-local In our first pass, we will implement a local authentication strategy, which means that users will be able to register locally for an account. We start by defining a user model using Mongoose. Mongoose provides a way to define schemas for objects that we want to store in MongoDB and then provides a convenient way to map between stored records in the database and an in-memory representation. Mongoose also provides convenient syntax to make many MongoDB queries and perform CRUD operations on models. Our user model will only have an e-mail, password, and timestamp for now. Before getting started, we need to install Mongoose: $ npm install --save mongoose bcrypt validator Now we define the schema for our user in models/user.js as follows: var mongoose = require('mongoose');   var userSchema = new mongoose.Schema({ email: {    type: String,    required: true,    unique: true }, password: {    type: String,    required: true }, created_at: {    type: Date,    default: Date.now } });   userSchema.pre('save', function(next) { if (!this.isModified('password')) {    return next(); } this.password = User.encryptPassword(this.password); next(); }); Here, we create a schema that describes our users. Mongoose has convenient ways to describe the required and unique fields as well as the type of data that each property should hold. Mongoose does all the validations required under the hood. We don't require many user fields for our first boilerplate application—e-mail, password, and timestamp to get us started. We also use Mongoose middleware to rehash a user's password if and when they decide to change it. Mongoose exposes several hooks to run user-defined callbacks. In our example, we define a callback to be invoked before Mongoose saves a model. That way, every time a user is saved, we'll check to see whether their password was changed. Without this middleware, it would be possible to store a user's password in plaintext, which is not only a security vulnerability but would break authentication. Mongoose supports two kinds of middleware – serial and parallel. Parallel middleware can run asynchronous functions and gets an additional callback to invoke; you'll learn more about Mongoose middleware later in this book. Now, we want to add validations to make sure that our data is correct. We'll use the validator library to accomplish this, as follows: var validator = require('validator');   var User = mongoose.model('User', userSchema);   User.schema.path('email').validate(function(email) { return validator.isEmail(email); });   User.schema.path('password').validate(function(password) { return validator.isLength(password, 6); });   module.exports = User; We added validations for e-mail and password length using a library called validator, which provides a lot of convenient validators for different types of fields.
Validator has validations based on length, URL, int, upper case; essentially, anything you would want to validate (and don't forget to validate all user input!). We also added a host of helper functions regarding registration and authentication, as well as encrypting passwords, that you can find in models/user.js. We added these to the user model to help encapsulate the variety of interactions we want using the abstraction of a user. For more information on Mongoose, see http://mongoosejs.com/. You can find more on passportjs at http://passportjs.org/. This lays out the beginning of a design pattern called MVC—model, view, controller. The basic idea is that you encapsulate separate concerns in different objects: the model code knows about the database, storage, and querying; the controller code knows about routing and requests/responses; and the view code knows what to render for users. Introducing Express middleware Passport is authentication middleware that can be used with Express applications. Before diving into passport, we should go over Express middleware. Express is a Connect-based framework, which means it uses Connect middleware. Connect internally has a stack of functions that handle requests. When a request comes in, the first function in the stack is given the request and response objects along with the next() function. The next() function, when called, delegates to the next function in the middleware stack. Additionally, you can specify a path for your middleware, so it is only called for certain paths. Express lets you add middleware to an application using the app.use() function. In fact, the HTTP handlers we already wrote are a special kind of middleware. Internally, Express has one level of middleware for the router, which delegates to the appropriate handler. Middleware is extraordinarily useful for logging, serving static files, error handling, and more. In fact, passport utilizes middleware for authentication. Before anything else happens, passport looks for a cookie in the request, finds metadata, and then loads the user from the database, adds it to req.user, and then continues down the middleware stack. Setting up passport Before we can make full use of passport, we need to tell it how to do a few important things. First, we need to instruct passport how to serialize a user to a session. Then, we need to deserialize the user from the session information. Finally, we need to tell passport how to tell if a given e-mail/password combination represents a valid user, as given in the following: // passport.js var passport = require('passport'); var LocalStrategy = require('passport-local').Strategy; var User = require('mongoose').model('User');   passport.serializeUser(function(user, done) { done(null, user.id); });   passport.deserializeUser(function(id, done) { User.findById(id, done); }); Here, we tell passport that when we serialize a user, we only need that user's id. Then, when we want to deserialize a user from session data, we just look up the user by their ID! This is used in passport's middleware: after the request is finished, we take req.user and serialize their ID to our persistent session. When we first get a request, we take the ID stored in our session, retrieve the record from the database, and populate the request object with a user property.
All of this functionality is provided transparently by passport, as long as we provide definitions for these two functions as given in the following: function authFail(done) { done(null, false, { message: 'incorrect email/password combination' }); }   passport.use(new LocalStrategy(function(email, password, done) { User.findOne({    email: email }, function(err, user) {    if (err) return done(err);    if (!user) {      return authFail(done);    }    if (!user.validPassword(password)) {      return authFail(done);    }    return done(null, user); }); })); We tell passport how to authenticate a user locally. We create a new LocalStrategy() function, which, when given an e-mail and password, will try to lookup a user by e-mail. We can do this because we required the e-mail field to be unique, so there should only be one user. If there is no user, we return an error. If there is a user, but they provided an invalid password, we still return an error. If there is a user and they provided the correct password, then we tell passport that the authentication request was a success by calling the done callback with the valid user. Registering users Now, we add routes for registration, both a view with a basic form and backend logic to create a user. First, we will create a user controller. Up until now, we have thrown our routes in our server.js file, but this is generally bad practice. What we want to do is have separate controllers for each kind of route that we want. We have seen the model portion of MVC. Now it's time to take a look at controllers. Our user controller will have all the routes that manipulate the user model. Let's create a new file in a new directory, controllers/user.js: // controllers/user.js var User = require('mongoose').model('User');   module.exports.showRegistrationForm = function(req, res, next) { res.render('register'); };   module.exports.createUser = function(req, res, next) { User.register(req.body.email, req.body.password, function(err, user) {    if (err) return next(err);    req.login(user, function(err) {      if (err) return next(err);      res.redirect('/');    }); }); }; Note that the User model takes care of the validations and registration logic; we just provide callback. Doing this helps consolidate the error handling and generally makes the registration logic easier to understand. If the registration was successful, we call req.login, a function added by passport, which creates a new session for that user and that user will be available as req.user on subsequent requests. Finally, we register the routes. At this point, we also extract the routes we previously added to server.js to their own file. Let's create a new file called routes.js as follows: // routes.js app.get('/users/register', userRoutes.showRegistrationForm); app.post('/users/register', userRoutes.createUser); Now we have a file dedicated to associating controller handlers with actual paths that users can access. This is generally good practice because now we have a place to come visit and see all of our defined routes. It also helps unclutter our server.js file, which should be exclusively devoted to server configuration. For details, as well as the registration templates used, see the preceding code. Authenticating users We have already done most of the work required to authenticate users (or rather, passport has). Really, all we need to do is set up routes for authentication and a form to allow users to enter their credentials. 
First, we'll add handlers to our user controller: // controllers/user.js module.exports.showLoginForm = function(req, res, next) { res.render('login'); };   module.exports.createSession = passport.authenticate('local', { successRedirect: '/', failureRedirect: '/login' }); Let's deconstruct what's happening in our login post. We create a handler that is the result of calling passport.authenticate('local', …). This tells passport that the handler uses the local authentication strategy. So, when someone hits that route, passport will delegate to our LocalStrategy. If they provided a valid e-mail/password combination, our LocalStrategy will give passport the now authenticated user, and passport will redirect the user to the server root. If the e-mail/password combination was unsuccessful, passport will redirect the user to /login so they can try again. Then, we will bind these callbacks to routes in routes.js: app.get('/users/login', userRoutes.showLoginForm); app.post('/users/login', userRoutes.createSession); At this point, we should be able to register an account and login with those same credentials. (see tag 0.2 for where we are right now). OAuth with passport Now we will add support for logging into our application using Twitter, Google, and GitHub. This functionality is useful if users don't want to register a separate account for your application. For these users, allowing OAuth through these providers will increase conversions and generally make for an easier registration process for users. Adding OAuth to user model Before adding OAuth, we need to keep track of several additional properties on our user model. We keep track of these properties to make sure we can look up user accounts provided there is information to ensure we don't allow duplicate accounts and allow users to link multiple third-party accounts by using the following code: var userSchema = new mongoose.Schema({ email: {    type: String,    required: true,    unique: true }, password: {    type: String, }, created_at: {    type: Date,    default: Date.now }, twitter: String, google: String, github: String, profile: {    name: { type: String, default: '' },    gender: { type: String, default: '' },    location: { type: String, default: '' },    website: { type: String, default: '' },    picture: { type: String, default: '' } }, }); First, we add a property for each provider, in which we will store a unique identifier that the provider gives us when they authorize with that provider. Next, we will store an array of tokens, so we can conveniently access a list of providers that are linked to this account; this is useful if you ever want to let a user register through one and then link to others for viral marketing or extra user information. Finally, we keep track of some demographic information about the user that the providers give to us so we can provide a better experience for our users. Getting API tokens Now, we need to go to the appropriate third parties and register our application to receive application keys and secret tokens. We will add these to our configuration. We will use separate tokens for development and production purposes (for obvious reasons!). For security reasons, we will only have our production tokens as environment variables on our final deploy server, not committed to version control. 
I'll wait while you navigate to the third-party websites and add their tokens to your configuration as follows: // config.js twitter: {    consumerKey: process.env.TWITTER_KEY || 'VRE4lt1y0W3yWTpChzJHcAaVf',    consumerSecret: process.env.TWITTER_SECRET || 'TOA4rNzv9Cn8IwrOi6MOmyV894hyaJks6393V6cyLdtmFfkWqe',    callbackURL: '/auth/twitter/callback' }, google: {    clientID: process.env.GOOGLE_ID || '627474771522-uskkhdsevat3rn15kgrqt62bdft15cpu.apps.googleusercontent.com',    clientSecret: process.env.GOOGLE_SECRET || 'FwVkn76DKx_0BBaIAmRb6mjB',    callbackURL: '/auth/google/callback' }, github: {    clientID: process.env.GITHUB_ID || '81b233b3394179bfe2bc',    clientSecret: process.env.GITHUB_SECRET || 'de0322c0aa32eafaa84440ca6877ac5be9db9ca6',    callbackURL: '/auth/github/callback' } Of course, you should never commit your development keys publicly either. Be sure to either not commit this file or to use private source control. The best idea is to only have secrets live on machines ephemerally (usually as environment variables). You especially should not use the keys that I provided here! Third-party registration and login Now we need to install and implement the various third-party registration strategies. To install third-party registration strategies run the following command: npm install --save passport-twitter passport-google-oAuth passport-github Most of these are extraordinarily similar, so I will only show the TwitterStrategy, as follows: passport.use(new TwitterStrategy(config.twitter, function(req, accessToken, tokenSecret, profile, done) { User.findOne({ twitter: profile.id }, function(err, existingUser) {      if (existingUser) return done(null, existingUser);      var user = new User();      // Twitter will not provide an email address. Period.      // But a person's twitter username is guaranteed to be unique      // so we can "fake" a twitter email address as follows:      // username@twitter.mydomain.com user.email = profile.username + "@twitter." + config.domain + ".com";      user.twitter = profile.id;      user.tokens.push({ kind: 'twitter', accessToken: accessToken, tokenSecret: tokenSecret });      user.profile.name = profile.displayName;      user.profile.location = profile._json.location;      user.profile.picture = profile._json.profile_image_url;      user.save(function(err) {        done(err, user);      });    }); })); Here, I included one example of how we would do this. First, we pass a new TwitterStrategy to passport. The TwitterStrategy takes our Twitter keys and callback information and a callback is used to make sure we can register the user with that information. If the user is already registered, then it's a no-op; otherwise we save their information and pass along the error and/or successfully saved user to the callback. For the others, refer to the source. Profile pages It is finally time to add profile pages for each of our users. To do so, we're going to discuss more about Express routing and how to pass request-specific data to Jade templates. Often times when writing a server, you want to capture some portion of the URL to use in the controller; this could be a user id, username, or anything! We'll use Express's ability to capture URL parts to get the id of the user whose profile page was requested. URL params Express, like any good web framework, supports extracting data from URL parts. 
For example, you can do the following: app.get('/users/:id', function(req, res, next) { console.log(req.params.id); }); In the preceding example, we will print whatever comes after /users/ in the request URL. This allows an easy way to specify per-user routes, or routes that only make sense in the context of a specific user, that is, a profile page only makes sense when you specify a specific user. We will use this kind of routing to implement our profile page. For now, we want to make sure that only the logged-in user can see their own profile page (we can change this functionality later): app.get('/users/:id', function(req, res, next) { if (!req.user || (req.user.id != req.params.id)) {    return next('Not found'); } res.render('users/profile', { user: req.user.toJSON() }); }); Here, we check first that the user is signed in and that the requested user's id is the same as the logged-in user's id. If it isn't, then we return an error. If it is, then we render the users/profile.jade template with req.user as the data. Profile templates We already looked at models and controllers at length, but our templates have been underwhelming. Finally, we'll show how to write some basic Jade templates. This section will serve as a brief introduction to the Jade templating language, but does not try to be comprehensive. The code for Profile templates is as follows: html body    h1      =user.email    h2      =user.created_at    - for (var prop in user.profile)      if user.profile[prop]        h4          =prop + "=" + user.profile[prop] Notably, because in the controller we passed in the user to the view, we can access the variable user and it refers to the logged-in user! We can execute arbitrary JavaScript to render into the template by prefixing it with =. In these blocks, we can do anything we would normally do, including string concatenation, method invocation, and so on. Similarly, we can include JavaScript code that is not intended to be written as HTML by prefixing it with -, like we did with the for loop. This basic template prints out the user's e-mail, the created_at timestamp, as well as all of the properties in their profile, if any. For a more in-depth look at Jade, please see http://jade-lang.com/reference/. Testing Testing is essential for any application. I will not dwell on the whys, but instead assume that you are angry with me for skipping this topic in the previous sections. Testing Express applications tends to be relatively straightforward and painless. The general format is that we make fake requests and then make certain assertions about the responses. We could also implement finer-grained unit tests for more complex logic, but up until now almost everything we did is straightforward enough to be tested on a per route basis. Additionally, testing at the API level provides a more realistic view of how real customers will be interacting with your website and makes tests less brittle in the face of refactoring code. Introducing Mocha Mocha is a simple, flexible test runner. First, I would suggest installing Mocha globally so you can easily run tests from the command line as follows: $ npm install --save-dev -g mocha The --save-dev option saves mocha as a development dependency, meaning we don't have to install Mocha on our production servers. Mocha is just a test runner. We also need an assertion library.
There are a variety of solutions, but should.js syntax, written by the same person as Express and Mocha, gives a clean syntax to make assertions: $ npm install --save-dev should The should.js syntax provides BDD assertions, such as 'hello'.should.equal('hello') and [1,2].should.have.length(2). We can start with a Hello World test example by creating a test directory with a single file, hello-world.js, as shown in the following code: var should = require('should');   describe('The World', function() { it('should say hello', function() {    'Hello, World'.should.equal('Hello, World'); }); it('should say hello asynchronously!', function(done) {    setTimeout(function() {      'Hello, World'.should.equal('Hello, World');      done();    }, 300); }); }); We have two different tests both in the same namespace, The World. The first test is an example of a synchronous test. Mocha executes the function we give to it, sees that no exception gets thrown and the test passes. If, instead, we accept a done argument in our callback, as we do in the second example, Mocha will intelligently wait until we invoke the callback before checking the validity of our test. For the most part, we will use the second version, in which we must explicitly invoke the done argument to finish our test because it makes more sense to test Express applications. Now, if we go back to the command line, we should be able to run Mocha (or node_modules/.bin/mocha if you didn't install it globally) and see that both of the tests we wrote pass! Testing API endpoints Now that we have a basic understanding of how to run tests using Mocha and make assertions with should syntax, we can apply it to test local user registration. First, we need to introduce another npm module that will help us test our server programmatically and make assertions about what kind of responses we expect. The library is called supertest: $ npm install --save-dev supertest The library makes testing Express applications a breeze and provides chainable assertions. Let's take a look at an example usage to test our create user route, as shown in the following code: var should = require('should'),    request = require('supertest'),    app = require('../server').app,    User = require('mongoose').model('User');   describe('Users', function() { before(function(done) {    User.remove({}, done); }); describe('registration', function() {    it('should register valid user', function(done) {      request(app)        .post('/users/register')       .send({          email: "test@example.com",          password: "hello world"        })        .expect(302)        .end(function(err, res) {          res.text.should.containEql("Redirecting to /");          done(err);        });    }); }); }); First, notice that we used two namespaces: Users and registration. Now, before we run any tests, we remove all users from the database. This is useful to ensure we know where we're starting the tests. This will delete all of your saved users though, so it's useful to use a different database in the test environment. Node detects the environment by looking at the NODE_ENV environment variable. Typically it is test, development, staging, or production. We can do so by changing the database URL in our configuration file to use a different local database when in a test environment and then run Mocha tests with NODE_ENV=test mocha. Now, on to the interesting bits! Supertest exposes a chainable API to make requests and assertions about responses. To make a request, we use request(app).
From there, we specify the HTTP method and path. Then, we can specify a JSON body to send to the server; in this case, an example user registration form. On registration, we expect a redirect, which is a 302 response. If that assertion fails, then the err argument in our callback will be populated, and the test will fail when we use done(err). Additionally, we validate that we were redirected to the route we expect, the server root /. Automate builds and deploys All of this development is relatively worthless without a smooth process to build and deploy your application. Fortunately, the node community has written a variety of task runners. Among these are Grunt and Gulp, two of the most popular task runners. Both work seamlessly with Express and provide a set of utilities for us to use, including concatenating and uglifying JavaScript, compiling sass/less, and reloading the server on local file changes. We'll focus on Grunt, for simplicity. Introducing the Gruntfile Grunt itself is a simple task runner, but its extensibility and plugin architecture lets you install third-party scripts to run in predefined tasks. To give us an idea of how we might use Grunt, we're going to write our css in sass and then use Grunt to compile sass to css. Through this example, we'll explore the different ideas that Grunt introduces. First, you need to install cli globally to install the plugin that compiles sass to css: $ npm install -g grunt-cli $ npm install --save grunt grunt-contrib-sass Now we need to create Gruntfile.js, which contains instructions for all of the tasks and build targets that we need. To do this perform the following: // Gruntfile.js module.exports = function(grunt) { grunt.loadNpmTasks('grunt-contrib-sass'); grunt.initConfig({    sass: {      dist: {        files: [{          expand: true,          cwd: "public/styles",          src: ["**.scss"],          dest: "dist/styles",          ext: ".css"        }]      }    } }); } Let's go over the major parts. Right at the top, we require the plugin we will use, grunt-contrib-sass. This tells grunt that we are going to configure a task called sass. In our definition of the task sass, we specify a target, dist, which is commonly used for tasks that produce production files (minified, concatenated, and so on). In that task, we build our file list dynamically, telling Grunt to look in /public/styles/ recursively for all .scss files, then compile them all to the same paths in /dist/styles. It is useful to have two parallel static directories, one for development and one for production, so we don't have to look at minified code in development. We can invoke this target by executing grunt sass or grunt sass:dist. It is worth noting that we don't explicitly concatenate the files in this task, but if we use @imports in our main sass file, the compiler will concatenate everything for us. We can also configure Grunt to run our test suite. To do this, let's add another plugin -- npm install --save-dev grunt-mocha-test. Now we have to add the following code to our Gruntfile.js file: grunt.loadNpmTasks('grunt-mocha-test'); grunt.registerTask('test', 'mochaTest'); ...   mochaTest: {    test: {      src: ["test/**.js"]    } } Here, the task is called mochaTest and we register a new task called test that simply delegates to the mochaTest task. This way, it is easier to remember how to run tests. Similarly, we could have specified a list of tasks to run if we passed an array of strings as the second argument to registerTask. 
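For instance, a combined task that compiles sass and then runs the test suite might look like the following one-liner; the task name build and the ordering are an assumption for illustration rather than part of the chapter's Gruntfile:

// Register a task that simply delegates to a list of other tasks, run in order
grunt.registerTask('build', ['sass:dist', 'mochaTest']);

Running grunt build from the command line would then compile the stylesheets and execute the Mocha suite in one step.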
This is a sampling of what can be accomplished with Grunt. For an example of a more robust Gruntfile, check out the source. Continuous integration with Travis Travis CI provides free continuous integration for open source projects as well as paid options for closed source applications. It uses a git hook to automatically test your application after every push. This is useful to ensure no regression was introduced. Also, there could be dependency problems only revealed in CI that local development masks; Travis is the first line of defense for these bugs. It takes your source, runs npm install to install the dependencies specified in package.json, and then runs npm test to run your test suite. Travis accepts a configuration file called .travis.yml. It typically looks like this: language: node_js node_js: - "0.11" - "0.10" - "0.8" services: - mongodb We can specify the versions of node that we want to test against as well as the services that we rely on (specifically MongoDB). Now we have to update our test command in package.json to run grunt test. Finally, we have to set up a webhook for the repository in question. We can do this on Travis by enabling the repository. Now we just have to push our changes and Travis will make sure all the tests pass! Travis is extremely flexible and you can use it to accomplish most tasks related to continuous integration, including automatically deploying successful builds. Deploying Node.js applications One of the easiest ways to deploy Node.js applications is to utilize Heroku, a platform-as-a-service provider. Heroku has its own toolbelt to create and deploy Heroku apps from your machine. Before getting started with Heroku, you will need to install its toolbelt. Please go to https://toolbelt.heroku.com/ to download the Heroku toolbelt. Once installed, you can log in to Heroku or register via the web UI and then run heroku login. Heroku uses a special file, called the Procfile, which specifies exactly how to run your application. Our Procfile looks like this: web: node server.js Extraordinarily simple: in order to run the web server, just run node server.js. In order to verify that our Procfile is correct, we can run the following locally: $ foreman start Foreman looks at the Procfile and uses that to try to start our server. Once that runs successfully, we need to create a new application and then deploy our application to Heroku. Be sure to commit the Procfile to version control: $ heroku create $ git push heroku master Heroku will create a new application and URL in Heroku, as well as a git remote repository named heroku. Pushing to that remote actually triggers a deploy of your code. If you do all of this, unfortunately your application will not work. We don't have a Mongo instance for our application to talk to! First we have to request MongoDB from Heroku: $ heroku addons:add mongolab // don't worry, it's free This spins up a shared MongoDB instance and gives our application an environment variable named MONGOLAB_URI, which we should use as our MongoDB connect URI. We need to change our configuration file to reflect these changes. In our configuration file, in production, for our database URL, we should look at the environment variable MONGOLAB_URI. Also, be sure that Express is listening on process.env.PORT || 3000, or else you will receive strange errors. With all of that set up, we can commit our changes and push the changes once again to Heroku. Hopefully, this time it works!
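For reference, the configuration change described above might look something like the following sketch; the file layout and property names are assumptions for illustration, but the two environment variables are the ones Heroku and the MongoLab add-on actually provide:

// config.js - production settings read from the environment
module.exports = {
  db: process.env.MONGOLAB_URI || 'mongodb://localhost/express_blueprints',
  port: process.env.PORT || 3000
};

// server.js - consume the values above
var express = require('express');
var mongoose = require('mongoose');
var config = require('./config');

var app = express();
mongoose.connect(config.db);
// ... routes and middleware are registered here as before ...
app.listen(config.port);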
To view the application logs for debugging purposes, just use the Heroku toolbelt: $ heroku logs One last thing about deploying Express applications: sometimes applications crash; software isn't perfect. We should anticipate crashes and have our application respond accordingly (by restarting itself). There are many server monitoring tools, including pm2 and forever. We use forever because of its simplicity. $ npm install --save forever Then, we update our Procfile to reflect our use of forever: // Procfile web: node_modules/.bin/forever server.js Now, forever will automatically restart our application if it crashes for any strange reason. You can also set up Travis to automatically push successful builds to your server, but that goes beyond the deployment we will do in this book. Summary In this article, we got our feet wet in the world of node and using the Express framework. We went over everything from Hello World and MVC to testing and deployments. You should feel comfortable using basic Express APIs, but also feel empowered to own a Node.js application across the entire stack. Resources for Article: Further resources on this subject: Testing a UI Using WebDriverJS [article] Applications of WebRTC [article] Amazon Web Services [article]


Learning Informatica PowerCenter 9.x

Packt
08 May 2015
3 min read
Informatica Corporation (Informatica), a multi-million dollar company established in February 1993, is an independent provider of enterprise data integration and data quality software and services. Informatica PowerCenter is the most widely used tool of Informatica across the globe for various data integration processes. The Informatica PowerCenter tool helps integrate data from almost any business system in almost any format. This flexibility of PowerCenter to handle almost any data makes it the most widely used tool in the data integration world. (For more resources related to this topic, see here.) Key features Learn the functionalities of each component in the Informatica PowerCenter tool and deploy them to accomplish executive reporting using logical data stores Learn the core features of the Informatica PowerCenter tool along with its administration and architectural aspects Develop skills to extract data and efficiently utilize it with the help of the world's most widely used integration tool, and make a promising career in Informatica PowerCenter Difference in approach The simple thought behind this book is to put together all the essential ingredients of Informatica, starting from basic things, such as downloads, extraction, and installation, to working on client tools and high-level aspects, such as scheduling, migration, and so on. There are multiple blogs available across the Internet that talk about the Informatica tool, but none presents end-to-end answers. We have tried to put up all the steps and processes in a systematic manner to help you easily start with the learning. In this book, you will get a step-by-step procedure for every aspect of the Informatica PowerCenter tool. While writing this book, the author has kept in mind the importance of live, practical exposure to the graphical interface of the tool for the audience and hence, you will notice a lot of screenshots illustrating the steps to help you understand and follow the process. The chapters are arranged in such a way that all the aspects of the Informatica PowerCenter tool are covered, and they are in a proper flow in order to achieve the functionality. Here is a gist regarding the significant aspects of the book: Installation of Informatica and the information regarding the administrator console of the PowerCenter tool The basic and advanced topics of the Designer Screen Implementation of the different types of Slowly Changing Dimension Understanding of the Workflow Manager Monitoring the code Implementation of mapping using different types of transformations Classification of transformations Usage of Repository Manager Required skills Before you make your mind up about learning Informatica, it is always recommended that you have a basic understanding of SQL and Unix. Though these are not mandatory and you can easily use 90 percent of the Informatica PowerCenter tool without knowledge of these, the confidence to work in real-time SQL and Unix projects is a must-have in your kitty. People who know SQL will easily understand that ETL tools are nothing but a graphical representation of SQL. Unix is utilized in Informatica PowerCenter with the scripting aspect, which makes your life easy in some scenarios. Summary Informatica PowerCenter has emerged as one of the most useful ETL tools employed to build enterprise data warehouses. The PowerCenter tool can make your life easy and can offer you a great career path if learnt properly. This book will thereby help you get a know-how of PowerCenter.
Resources for Article: Further resources on this subject: Transition to Readshift [article] Cloudera Hadoop and HP Vertica [article] Learning to Fly with Force.com [article]


Frontend SOA: Taming the beast of frontend web development

Wesley Cho
08 May 2015
6 min read
Frontend web development is a difficult domain for creating scalable applications. There are many challenges when it comes to architecture, such as how to best organize HTML, CSS, and JavaScript files, or how to create build tooling to allow an optimal development & production environment. In addition, complexity has increased measurably. Templating & routing have been transplanted to the concern of frontend web engineers as a result of the push towards single page applications (SPAs). A wealth of frameworks can be found as listed on todomvc.com. AngularJS is one that rose to prominence almost two years ago on the back of declarative HTML, strong testability, and two-way data binding, but even now it is seeing some churn due to Angular 2.0 breaking backwards compatibility completely and the rise of React, which is Facebook's new view layer bringing the idea of a virtual DOM for performance optimization not previously seen in frontend web architecture. Angular 2.0 itself is also looking like a juggernaut, with decoupled components that harken to purer JavaScript, & it is already boasting of performance gains of roughly 5x compared to Angular 1.x. With this much churn, frontend web apps have become difficult to architect for the long term. This requires us to take a step back and think about the direction of browsers. The Future of Browsers We know that ECMAScript 6 (ES6) is already making its headway into browsers - ES6 greatly changes how JavaScript is structured with a proper module system, and adds a lot of syntactical sugar. Web Components are also going to change how we build our views as well. Instead of: .home-view { ... } We will be writing: <template id="home-view"> <style> … </style> <my-navbar></my-navbar> <my-content></my-content> <script> … </script> </template> <home-view></home-view> <script> var proto = Object.create(HTMLElement.prototype); proto.createdCallback = function () { var root = this.createShadowRoot(); var template = document.querySelector('#home-view'); var clone = document.importNode(template.content, true); root.appendChild(clone); }; document.registerElement('home-view', { prototype: proto }); </script> This is drastically different from how we build components now. In addition, libraries & frameworks are already being built with this in mind. Angular 2 is using annotations provided by Traceur, Google's ES6 + ES7 to ES5 transpiler, to provide syntactical sugar for creating one-way bindings to the DOM and to DOM events. React and Ember also have plans to integrate Web Components into their workflows. Aurelia is already structured in a way to take advantage of it when it drops. What can we do to future-proof ourselves for when these technologies drop? Solution For starters, it is important to realize that creating HTML and CSS is relatively cheap compared to managing a complex JavaScript codebase built on top of a framework or library. Frontend web development is seeing architecture pains that have already been solved in other domains, except it has the additional problem of the standard challenge of integrating UI into that structure. This seems to suggest that the solution is to create a frontend service-oriented architecture (SOA) where most of the heavy logic is offloaded to pure JavaScript with only utility library additions (e.g., Underscore/Lodash). This would allow us to choose view layers with relative ease, and move fast in case a particular view library/framework turns out not to meet requirements.
It also prevents the endemic problem of having to rewrite whole codebases due to having to swap out libraries/frameworks. For example, consider this sample Angular controller (a similarly contrived example can be created using other pieces of tech as well): angular.module('DemoApp') .controller('DemoCtrl', function ($scope, $http) { $scope.getItems = function () { $http.get('/items/') .then(function (response) { $scope.items = response.data.items; $scope.$emit('items:received', $scope.items); }); }; }); This sample controller has a method getItems that fetches items, updates the model, and then emits the information so that parent views have access to that change. This is ugly because it hardcodes application structure hierarchy and mixes it with server query logic, which is a separate concern. In addition, it also mixes the usage of Angular's internals into the application code, tying some pure abstract logic heavily in with the framework's internals. It is not all that uncommon to see developers make these simple architecture mistakes. With the proper module system that ES6 brings, this simplifies to (items.js): import {fetch} from 'fetch'; export class items { getAll() { return fetch.get('/items') .then(function (response) { return response.json(); }); } }; And demoCtrl.js: import {BaseCtrl} from './baseCtrl.js'; import {items} from './items'; export class DemoCtrl extends BaseCtrl { constructor() { super(); } getItems() { let self = this; return items.getAll() .then(function (items) { self.items = items; return items; }); } }; And main.js: import {items} from './items'; import {DemoCtrl} from './DemoCtrl'; angular.module('DemoApp', []) .factory('items', items) .controller('DemoCtrl', DemoCtrl); If you want to use anything from $scope, you can modify the usage of DemoCtrl straight in the controller definition and just instantiate it inside the function. With promises, which are also available natively in ES6, you can chain upon them in the implementation of DemoCtrl in the Angular code base. The kicker about this approach is that this can also be done currently in ES5, and is not limited to using Angular - it applies equally as well with any other library or framework, such as Backbone, Ember, and React! It also allows you to churn out very testable code. I recommend this as a best practice for architecting complex frontend web apps - the only caveat is if the other aspects of engineering prevent this from being a possibility, such as business requirements on time and the people resources available. This approach allows us to tame the beast of maintaining & scaling frontend web apps while still being able to adapt quickly to the constantly changing landscape. About this author Wesley Cho is a senior frontend engineer at Jiff (http://www.jiff.com/). He has contributed features & bug fixes and reported numerous issues to numerous libraries in the Angular ecosystem, including AngularJS, Ionic, UI Bootstrap, and UI Router, as well as authored several libraries.

AngularJS Web Application Development Cookbook

Packt
08 May 2015
2 min read
Architect performant applications and implement best practices in AngularJS. Packed with easy-to-follow recipes, this practical guide will show you how to unleash the full might of the AngularJS framework. Skip straight to practical solutions and quick, functional answers to your problems without hand-holding or slogging through the basics. (For more resources related to this topic, see here.) Some highlights include: Architecting recursive directives Extensively customizing your search filter Custom routing attributes Animating ngRepeat Animating ngInclude, ngView, and ngIf Animating ngSwitch Animating ngClass, and class attributes Animating ngShow, and ngHide The goal of this text is to have you walk away from reading about an AngularJS concept armed with a solid understanding of how it works, insight into the best ways to wield it in real-world applications, and annotated code examples to get you started. Why you should buy this book A collection of recipes demonstrating optimal organization, scaleable architecture, and best practices for use in small and large-scale production applications. Each recipe contains complete, functioning examples and detailed explanations on how and why they are organized and built that way, as well as alternative design choices for different situations. The author of this book is a full stack developer at DoorDash (YC S13), where he joined as the first engineer. He led their adoption of AngularJS, and he also focuses on the infrastructural, predictive, and data projects within the company. Matt has a degree in Computer Engineering from the University of Illinois at Urbana-Champaign. He is the author of the video series Learning AngularJS, available through O'Reilly Media. Previously, he worked as an engineer at several educational technology start-ups. Almost every example in this book has been added to JSFiddle, with the links provided in the book. This allows you to merely visit a URL in order to test and modify the code with no setup of any kind, on any major browser and on any major operating system. Resources for Article:  Further resources on this subject: Working with Live Data and AngularJS [article] Angular Zen [article] AngularJS Project [article]


NodeJS: Building a Maintainable Codebase

Benjamin Reed
06 May 2015
8 min read
NodeJS has become the most anticipated web development technology since Ruby on Rails. This is not an introduction to Node. First, you must realize that NodeJS is not a direct competitor to Rails or Django. Instead, Node is a collection of libraries that allow JavaScript to run on the v8 runtime. Node powers many tools, and some of the tools have nothing to do with a scaling web application. For instance, GitHub’s Atom editor is built on top of Node. Its web application frameworks, like Express, are the competitors. This article can apply to all environments using Node. Second, Node is designed under the asynchronous ideology. Not all of the operations in Node are asynchronous. Many libraries offer synchronous and asynchronous options. A Node developer must decipher the best operation for his or her needs. Third, you should have a solid understanding of the concept of a callback in Node. Over the course of two weeks, a team attempted to refactor a Rails app to be an Express application. We loved the concepts behind Node, and we truly believed that all we needed was a barebones framework. We transferred our controller logic over to Express routes in a weekend. As a beginning team, I will analyze some of the pitfalls that we came across. Hopefully, this will help you identify strategies to tackle Node with your team. First, attempt to structure callbacks and avoid anonymous functions. As we added more and more logic, we added more and more callbacks. Everything was beautifully asynchronous, and our code would successfully run. However, we soon found ourselves debugging an anonymous function nested inside of other anonymous functions. In other words, the codebase was incredibly difficult to follow. Anyone starting out with Node could potentially notice the novice “spaghetti code.” Here’s a simple example of nested callbacks: router.put('/:id', function(req, res) { console.log("attempt to update bathroom"); models.User.find({ where: {id: req.param('id')} }).success(function (user) { var raw_cell = req.param('cell') ? req.param('cell') : user.cell; var raw_email = req.param('email') ? req.param('email') : user.email; var raw_username = req.param('username') ? req.param('username') : user.username; var raw_digest = req.param('digest') ? req.param('digest') : user.digest; user.cell = raw_cell; user.email = raw_email; user.username = raw_username; user.digest = raw_digest; user.updated_on = new Date(); user.save().success(function () { res.json(user); }).error(function () { res.json({"status": "error"}); }); }) .error(function() { res.json({"status": "error"}); }) }); Notice that there are many success and error callbacks. Locating a specific callback is not difficult if the whitespace is perfect or the developer can count closing brackets back up to the destination. However, this is pretty nasty to any newcomer. And this illegibility will only increase as the application becomes more complex. A developer may get this response: {"status": "error"} Where did this response come from? Did the ORM fail to update the object? Did it fail to find the object in the first place? A developer could add descriptions to the json in the chained error callbacks, but there has to be a better way. Let’s extract some of the callbacks into separate methods: router.put('/:id', function(req, res) { var id = req.param('id'); var query = { where: {id: id} }; // search for user models.User.find(query).success(function (user) { // parse req parameters var raw_cell = req.param('cell') ? 
Next, do what makes your team comfortable and productive. At the same time, do not compromise the integrity of the project. There are numerous posts that encourage certain styles over others. There are also extensive posts on the subject of CoffeeScript. If you aren't aware, CoffeeScript is a language with some added syntactic flavor that compiles to JavaScript. Our team was primarily Ruby developers, and it definitely appealed to us. When we migrated some of the project over to CoffeeScript, we found that our code was a lot shorter and appeared more legible. GitHub uses CoffeeScript for the Atom text editor to this day, and the Rails community has openly embraced it. The majority of Node module documentation uses JavaScript, however, so CoffeeScript developers will have to become acquainted with translating between the two. There are some problems with CoffeeScript being ES6 ready, and there are some modules that are clearly not meant to be utilized from CoffeeScript. Still, CoffeeScript is an open source project, and it appears to have a good backbone and a stable community. If your developers are more comfortable with it, utilize it.

When it comes to open source projects, everyone tends to trust them. In their purest form, open source projects are absolutely beautiful. They make the lives of all developers better, and nobody has to reinvent the wheel unless they choose to. Obviously, both Node and CoffeeScript are open source. However, the ecosystem is very young, and it is dangerous to assume that any package you find on NPM is stable. For us, the problem occurred when we searched for an ORM. We truly missed ActiveRecord, and we assumed that other projects would work similarly. We tried several solutions, and none of them interacted the way we wanted. Besides expressing our entire schema in a JavaScript format, we found relations to be a bit of a hack. Settling on one, we ran our server, and our database cleared out. That's fine in development, but we struggled to find a way to get it into production. We needed more documentation. Also, the module was not designed with CoffeeScript in mind; we practically needed to revert to JavaScript.
In contrast, the Node community has openly embraced some NoSQL databases, such as MongoDB, and they are definitely worth considering. Either way, make sure that your team's dependencies are very well documented. There should be written documentation for each exposed object, function, and so on.

To sum everything up, this article comes down to two fundamental things learned in any computer science class: write modular code and document everything. Do your research on Node and find a style that is legible for your team and any newcomers. A NodeJS project can only be maintained if the developers utilizing the framework recognize the importance of the project in the future. If your code is messy now, it will only become messier. If you cannot find necessary information in a module's documentation, you will probably miss other information when there is a problem in production. Don't take shortcuts. A Node application can only be as good as its developers and dependencies.

About the Author

Benjamin Reed began Computer Science classes at a nearby university in Nashville during his sophomore year in high school. Since then, he has become an advocate for open source. He is now pursuing degrees in Computer Science and Mathematics full time. The Ruby community has intrigued him, and he openly expresses support for the Rails framework. When asked, he believes that studying Rails has led him to some of the best practices and, ultimately, has made him a better programmer. iOS development is one of his hobbies, and he enjoys scouting out new projects on GitHub. On GitHub, he's appropriately named @codeblooded. On Twitter, he's @benreedDev.

article-image-firebase
Packt
04 May 2015
8 min read
Save for later

Using Firebase: Learn how and why to use Firebase

Packt
04 May 2015
8 min read
In this article by Manoj Waikar, author of the book Data-oriented Development with AngularJS, we will get a brief description of various types of persistence mechanisms, local versus hosted databases, what Firebase is, why to use it, and the different use cases where Firebase can be useful. (For more resources related to this topic, see here.)

We can write web applications by using the frameworks of our choice—be it server-side MVC frameworks, client-side MVC frameworks, or some combination of these. We can also use a persistence store (a database) of our choice—be it an RDBMS or a more modern NoSQL store. However, making our applications real time (meaning, if you are viewing a page and data related to that page gets updated, then the page should be updated or at least you should get a notification to refresh the page) is not a trivial task, and we have to start thinking about push notifications and whatnot. This does not happen with Firebase.

Persistence

One of the very early decisions a developer or a team has to make when building any production-quality application is the choice of a persistent storage mechanism. Until a few years ago, this choice, more often than not, boiled down to a relational database such as Oracle, SQL Server, or PostgreSQL. However, the rise of NoSQL solutions—document-oriented databases such as MongoDB (http://www.mongodb.org/) and CouchDB (http://couchdb.apache.org/), key-value stores such as Redis (http://redis.io/) and Riak (http://basho.com/riak/), and the graph database Neo4j (http://www.neo4j.org/)—has widened the choice for us. Please check the Wikipedia page on NoSQL (http://en.wikipedia.org/wiki/NoSQL) solutions for a detailed list of various NoSQL solutions, including their classification and performance characteristics.

There is one more buzzword that everyone must have already heard of: Cloud, the short form for cloud computing. Cloud computing briefly means that shared resources (or software) are provided to consumers on a paid/free basis over a network (typically, the Internet). So, we now have the luxury of choosing our preferred RDBMS or NoSQL database as a hosted solution. Consequently, we have one more choice to make—whether to install the database locally (on our own machine or inside the corporate network) or use a hosted solution (in the cloud). As with everything else, there are pros and cons to each of the approaches. The pros of a local database are fast access and a one-time buying cost (if it's not an open source database), and the cons include the initial setup time. If you have to evaluate another database, then you'll have to install that database as well. The pros of a hosted solution are ease of use and minimal initial setup time, and the cons are the need for a reliable Internet connection, cost (again, if it's not a free option), and so on. Considering the preceding pros and cons, it's a safe bet to use a hosted solution when you are still evaluating different databases and only decide later between a local or a hosted solution, when you've finally zeroed in on your database of choice.

What is Firebase?

So, where does Firebase fit into all of this? Firebase is a NoSQL database that stores data as simple JSON documents. We can, therefore, compare it to other document-oriented databases such as CouchDB (which also stores data as JSON) or MongoDB (which stores data in the BSON, which stands for binary JSON, format).
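For instance, a small chat application might keep all of its data in a single JSON tree similar to the following sketch (the structure and values are invented purely for illustration):

{
  "rooms": {
    "general": {
      "name": "General",
      "messages": {
        "-m1": { "author": "alice", "text": "Hello!" },
        "-m2": { "author": "bob", "text": "Hi Alice." }
      }
    }
  }
}

Every node in such a tree is addressable by a URL path (for example, /rooms/general/messages), which is what makes the REST-style access described later in this article possible.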
Although Firebase is a database with a RESTful API, it's also a real-time database, which means that the data is synchronized between different clients and with the backend server almost instantaneously. This implies that if the underlying data is changed by one of the clients, it gets streamed in real time to every connected client; hence, all the other clients automatically get updated with the newest set of data (without anyone having to refresh these clients manually).

So, to summarize, Firebase is an API and a cloud service that gives us a real-time and scalable (NoSQL) backend. It has libraries for most server-side languages/frameworks such as Node.js, Java, Python, PHP, Ruby, and Clojure. It has official libraries for Node.js and Java and unofficial third-party libraries for Python, Ruby, and PHP. It also has libraries for most of the leading client-side frameworks such as AngularJS, Backbone, Ember, and React, and for mobile platforms such as iOS and Android.

Firebase – benefits and why use it?

Firebase offers us the following benefits:

It is a cloud service (a hosted solution), so there isn't any setup involved.
Data is stored as native JSON, so what you store is what you see (on the frontend, fetched through a REST API)—WYSIWYS.
Data is safe because Firebase requires 2048-bit SSL encryption for all data transfers.
Data is replicated and backed up to multiple secure locations, so there are minimal chances of data loss.
When data changes, apps update instantly across devices.
Our apps can work offline—as soon as we get connectivity, the data is synchronized instantly.

Firebase gives us lightning-fast data synchronization. So, combined with AngularJS, it gives us three-way data binding between HTML, JavaScript, and our backend (data). With two-way data binding, whenever our (JavaScript) model changes, the view (HTML) updates itself and vice versa. But, with three-way data binding, even when the data in our database changes, our JavaScript model gets updated, and consequently, the view gets updated as well. Last but not least, it has libraries for the most popular server-side languages/frameworks (such as Node.js, Ruby, Java, and Python) as well as the popular client-side frameworks (such as Backbone, Ember, and React), including AngularJS. The Firebase binding for AngularJS is called AngularFire (https://www.firebase.com/docs/web/libraries/angular/).

Firebase use cases

Now that you've read how Firebase makes it easy to write applications that update in real time, you might still be wondering what kinds of applications are most suited for use with Firebase. As often happens in the enterprise world, either you are not at liberty to choose all the components of your stack, or you might have an existing application and you just have to add some new features to it. So, let's study the three main scenarios where Firebase can be a good fit for your needs.

Apps with Firebase as the only backend

This scenario is feasible if:

You are writing a brand-new application or rewriting an existing one from scratch
You don't have to integrate with legacy systems or other third-party services
Your app doesn't need to do heavy data processing or it doesn't have complex user authentication requirements

In such scenarios, Firebase is the only backend store you'll need, and all dynamic content and user data can be stored and retrieved from it.
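As a rough sketch of what this looks like in code with the Firebase JavaScript client of that era (the application URL, path, and data are made up for illustration), reading and writing such a backend takes only a few lines:

// reference a location in the JSON tree
var ref = new Firebase('https://my-app.firebaseio.com/rooms/general/messages');

// write: push() appends a new child with an auto-generated key
ref.push({ author: 'alice', text: 'Hello!' });

// read: the 'value' event fires with the current data and again on every change
ref.on('value', function (snapshot) {
  console.log(snapshot.val());
});

The same ref.on() callback is how every connected client receives the real-time updates described earlier: no polling or manual refresh is involved.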
Existing apps with some features powered by Firebase This scenario is feasible if you already have a site and want to add some real-time capabilities to it without touching other parts of the system. For example, you have a working website and just want to add chat capabilities, or maybe, you want to add a comment feed that updates in real time or you have to show some real-time notifications to your users. In this case, the clients can connect to your existing server (for existing features) and they can connect to Firebase for the newly added real-time capabilities. So, you can use Firebase together with the existing server. Both client and server code powered by Firebase In some use cases, there might be computationally intensive code that can't be run on the client. In situations like these, Firebase can act as an intermediary between the server and your clients. So, the server talks to the clients by manipulating data in Firebase. The server can connect to Firebase using either the Node.js library (for Node.js-based server-side applications) or through the REST API (for other server-side languages). Similarly, the server can listen to the data changes made by the clients and can respond appropriately. For example, the client can place tasks in a queue that the server will process later. One or more servers can then pick these tasks from the queue and do the required processing (as per their availability) and then place the results back in Firebase so that the clients can read them. Firebase is the API for your product You might not have realized by now (but you will once you see some examples) that as soon as we start saving data in Firebase, the REST API keeps building side-by-side for free because of the way data is stored as a JSON tree and is associated on different URLs. Think for a moment if you had a relational database as your persistence store; you would then need to specially write REST APIs (which are obviously preferable to old RPC-style web services) by using the framework available for your programming language to let external teams or customers get access to data. Then, if you wanted to support different platforms, you would need to provide libraries for all those platforms whereas Firebase already provides real-time SDKs for JavaScript, Objective-C, and Java. So, Firebase is not just a real-time persistence store, but it doubles up as an API layer too. Summary In this article, we learned a brief description about Firebase is, why to use it, and different use cases where Firebase can be useful. Resources for Article: Further resources on this subject: AngularJS Performance [article] An introduction to testing AngularJS directives [article] Our App and Tool Stack [article]

article-image-angular-20
Packt
30 Apr 2015
12 min read
Save for later

Angular 2.0

Packt
30 Apr 2015
12 min read
Angular 2.0 was officially announced at the ng-europe conference in October 2014. Angular 2.0 is not an incremental update to the previous version; it is a complete rewrite of the entire framework and will include major changes. In this article by Mohammad Wadood Majid, coauthor of the book Mastering AngularJS for .NET Developers, we will learn about the following topics:

Why Angular 2.0
Design and features of Angular 2.0
AtScript
Routing solution
Dependency injection
Annotations
Instance scope
Child injector
Data binding and templating

(For more resources related to this topic, see here.)

Why Angular 2.0

AngularJS is one of the most popular open source frameworks available for client-side web application development. Over the last few years, AngularJS's adoption and community support have been remarkable. The current AngularJS Version 1.3 is stable and used by many developers; there are over 1600 applications inside Google that use AngularJS 1.2 or 1.3. In the last few years, the Web has changed significantly. In the past, it was very difficult to build a cross-browser application; today's browsers are more consistent in their DOM implementations, and the Web will continue to change. Angular 2.0 will address the following concerns:

Mobile: Angular 2.0 will focus on mobile application development.
Modular: Different modules will be removed from the core of AngularJS, which will result in better performance. Angular 2.0 will provide us the ability to pick only the module parts we need.
Modern: Angular 2.0 will include ECMAScript 6 (ES6). ECMAScript is a scripting language standard developed by Ecma International. It is widely used in client-side scripting, such as JavaScript, JScript, and ActionScript on the Web.
Performance: AngularJS was developed around 5 years ago, and it was not originally aimed at developers; it started as a tool that let designers quickly create persistent HTML forms. However, over time, it has been used to build much more complex applications. The Angular 1.x team worked over the years to make changes to the current design, allowing it to stay relevant for modern web applications. However, there are limits to how far the current AngularJS framework can be improved. A number of these limits are related to the performance of the current binding and template infrastructure. In order to fix these problems, a new infrastructure is required.

Modern browsers already support some of the features of ES6, and the remaining pieces are expected to become available in 2015. With these new features, developers will be able to describe their own views (the template element) and package them for distribution to other developers (HTML imports). When all these new features are available in all browsers, developers will be able to create as many reusable components as required to resolve common problems. However, most frameworks, such as AngularJS 1.x, are not prepared for this; the data binding of the AngularJS 1.x framework works on the assumption of a small number of known HTML elements. In order to take advantage of the new components, an implementation in Angular is required.
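To make the "Modern" point concrete, here is a small, framework-free sketch of the ES6 features mentioned above: modules, classes, and template strings (the file names and the Greeter class are invented for illustration):

// greeter.js
export class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return `Hello, ${this.name}!`;
  }
}

// app.js
import {Greeter} from './greeter';

var greeter = new Greeter('Angular 2.0');
console.log(greeter.greet());

This import/class syntax is exactly what the AtScript examples in the next section build on, with type and metadata annotations layered on top.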
Design and features of AngularJS 2.0

The current AngularJS framework design is an amalgamation of the changing Web and the general computing landscape; however, it still needs some changes. The current Angular 1.x framework cannot work with the new web components, as it lends itself to mobile applications and pushes its own module and class API against the standards. To answer these issues, the AngularJS team is coming up with the AngularJS 2.0 framework. AngularJS 2.0 is a reimagining of AngularJS 1.x for the modern web browser. The following are the changes in Angular 2.0:

AtScript

AtScript is a language used to develop AngularJS 2.0; it is a superset of ES6. It is compiled to ES5 code by the Traceur compiler, and it uses TypeScript-style type syntax to generate runtime type assertions instead of compile-time checks. However, developers will still be able to use plain JavaScript (ES5) instead of AtScript to write AngularJS 2.0 applications. The following is an example of AtScript code:

import {Component} from 'angular';
import {Server} from './server';

@Component({selector: 'test'})
export class MyNewComponent {
  constructor(server:Server) {
    this.server = server;
  }
}

In the preceding code, the import and the class come from ES6. The constructor has a server parameter that specifies a type. In AtScript, this type is used to generate a runtime type assertion, and the reference is stored so that the dependency injection framework can use it. The @Component annotation is a metadata annotation. When we decorate some code with @Component, the compiler generates code that instantiates the annotation and stores it in a known location, so that it can be accessed by the AngularJS 2.0 framework.

Routing solution

In AngularJS 1.x, routing was designed to handle a few simple cases; as the framework grew, more features were added to it. AngularJS 2.0 includes the following basic routing features, but it will still be extensible:

JSON route configuration
Optional convention over configuration
Static, parameterized, and splat route patterns
URL resolver
Query string
Push state or hash change
Navigation model
Document title updates
404 route handling
Location service
History manipulation
Child router
Screen activate: canActivate, activate, deactivate

Dependency Injection

One of the main features of AngularJS 1.x was Dependency Injection (DI). It is very easy to use DI and follow a divide-and-conquer approach to software development; complex problems can be broken into smaller pieces, and applications developed in this way can be assembled at runtime with the use of DI. However, there are a few issues with the AngularJS 1.x implementation. First, the DI implementation was tied up with minification: DI depended on parsing parameter names from functions, and whenever those names were changed (for example, by a minifier), they no longer matched the services, controllers, and other components. Second, it lacks some of the more advanced features available in server-side DI containers for .NET and Java. The two missing features in question are control over instance scope and child injectors.

Annotations

With the use of AtScript in the AngularJS 2.0 framework, a way to associate metadata with any function was introduced. The metadata format used by AtScript is robust in the face of minification and is easy to write by hand in plain ES5.

The instance scope

In the AngularJS 1.x framework, all the instances in the DI container were singletons, and the same is the case with AngularJS 2.0 by default. However, to get a different behavior, we need to use services, providers, constants, and so on. The following code can be used to create a new instance every time the type is injected.
It becomes even more useful if you create your own scope identifiers for use in combination with child injectors, as shown:

@TransientScope
export class MyClass {…}

The child injector

The child injector is a major new feature in AngularJS 2.0. A child injector inherits from its parent; however, it has the ability to override the parent's bindings at the child level. Using this new feature, certain types of objects in the application can be automatically overridden in various scopes. For example, when a route has child routes, each child route creates its own child injector. This allows each route to inherit from its parent routes or to override those services during different navigation scenarios.

Data binding and templating

Data binding and templates are considered a single unit while developing an application. In other words, data binding and templates go hand in hand while writing an application with the AngularJS framework. When we bind the DOM, the HTML is handed to the template compiler. The compiler goes across the HTML to find any directives, binding expressions, event handlers, and so on. All of this data is extracted from the DOM into data structures that can be used to instantiate the template. During this phase, some processing is done on the data, for example, parsing the binding expressions. Every node that contains instructions is tagged with a class so that the result of this processing is cached and the work does not need to be repeated.

Dynamic loading

Dynamic loading was missing in AngularJS 1.x; it is very hard to add new directives or controllers at runtime. However, dynamic loading has been added to Angular 2.0. Whenever a template is compiled, the compiler is provided not only with a template, but also with a component definition. The component definition contains the metadata of directives, filters, and so on. This ensures that the necessary dependencies are loaded before the template gets processed by the compiler.

Directives

Directives in the AngularJS framework are meant to extend the HTML. In AngularJS 1.x, the Directive Definition Object (DDO) is used to create directives. In AngularJS 2.0, directives are made simpler. There are three types of directives in AngularJS 2.0:

The component directive: This is a combination of a view and a controller used to create custom components. It can be used as an HTML element, and a router can map routes to components.
The decorator directive: Use this directive to decorate an HTML element with additional behavior, such as ng-show.
The template directive: This directive transforms HTML into a reusable template. The directive developer can control how the template is instantiated and inserted into the DOM; examples are ng-if and ng-repeat.

In AngularJS 2.0, there is no standalone controller; the component contains both the view and the controller, where the view is HTML and the controller is JavaScript. The developer creates a class with some annotations, as shown in the following code:

@Component({
  selector: 'divTabContainer',
  directives: [NgRepeat]
})
export class TabContainer {
  constructor(panes:Query<Pane>) {
    this.panes = panes;
  }
  select(selectedPane:Pane) {…}
}

In the preceding code, the controller of the component is a class. The dependencies are injected automatically into the constructor because the child injectors are used.
The component can get access to any service up the DOM hierarchy, as well as to services local to its own element. As can be seen in the preceding code, a Query is injected. This is a special collection that is automatically synchronized with the child elements and lets us know when anything is added or removed.

Templates

In the preceding section, we created the divTabContainer component directive using AngularJS 2.0. The following code shows how to use this directive in the DOM:

<template>
  <div class="border">
    <div class="tabs">
      <div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)">
        <img [src]="pane.icon"><span>${pane.name}</span>
      </div>
    </div>
    <content>
    </content>
  </div>
</template>

As you can see in the preceding code, in the <img [src]="pane.icon"><span>${pane.name}</span> image tag, the src attribute is surrounded with [], which tells us that the attribute holds a binding expression. When we see ${}, it means that there is an expression that should be interpolated into the content. These bindings are unidirectional, from the model or controller to the view. In the <div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)"> part of the template, notice that ng-repeat is a template directive; it is written with a | followed by the word pane, where pane is the local variable for each iteration. (^click) indicates that there is an event handler, where ^ means that the handler is not attached directly to the DOM node; rather, we let the event bubble up and it will be handled at the document level.

In the following code example, we will compare the code of the AngularJS 1.x framework and AngularJS 2.0; let's create a hello world example for this demonstration. The following code shows how this is written in the AngularJS 1.x framework:

var module = angular.module("example", []);
module.controller("FormExample", function() {
  this.username = "World";
});

<div ng-controller="FormExample as ctrl">
  <input ng-model="ctrl.username"> Hello {{ctrl.username}}!
</div>

The following code shows the same example written for the AngularJS 2.0 framework:

@Component({
  selector: 'form-example'
})
@Template({
  // we are binding the input element to the control object
  // defined in the component's class
  inline: '<input [control]="username">Hello {{username.value}}!',
  directives: [forms]
})
class FormExample {
  constructor() {
    this.username = new Control('World');
  }
}

In the preceding code example, TypeScript 1.5, which supports metadata annotations, is used. However, the same code can also be written in ES5/ES6 JavaScript. More information on annotations can be found in the annotation guide at https://docs.google.com/document/d/1uhs-a41dp2z0NLs-QiXYY-rqLGhgjmTf4iwBad2myzY/edit#heading=h.qbaubqkoiqds. Here are some of the explanations behind this form design:

Form behavior cannot be unit tested without compiling the associated template, because certain parts of the application behavior are contained in the template.
We want to enable dynamically generated, data-driven forms in AngularJS 2.0; this exists in AngularJS 1.x, but it is not easy.
The difficulty in reasoning about your template statically arises because the ng-model directive was built using a generic two-way data binding.
An atomic form that can easily be validated or reverted to its original state is required, which is missing from AngularJS 1.x.
Although AngularJS 2.0 uses an extra level of indirection, it grants major benefits. The control object decouples form behavior from the template, so that you can test it in isolation. Tests are simpler to write and faster to execute. Summary In this article, we introduced the Angular 2.0 framework; it may not be a major update to the previous version, but it is a complete rewrite of the entire framework and will include breaking changes. We also talked about certain AngularJS 2.0 changes. AngularJS 2.0 will hopefully be released by the end of 2015. Resources for Article: Further resources on this subject: Setting Up The Rig [article] AngularJS Project [article] Working with Live Data and AngularJS [article]
article-image-less-external-applications-and-frameworks
Packt
30 Apr 2015
11 min read
Save for later

Less with External Applications and Frameworks

Packt
30 Apr 2015
11 min read
In this article by Bass Jobsen, author of the book Less Web Development Essentials - Second Edition, we will cover the following topics:

WordPress and Less
Using Less with the Play framework, AngularJS, Meteor, and Rails

(For more resources related to this topic, see here.)

WordPress and Less

Nowadays, WordPress is not only used for weblogs; it can also be used as a content management system for building a website. The WordPress system, written in PHP, has been split into the core system, plugins, and themes. The plugins add additional functionality to the system, and the themes handle the look and feel of a website built with WordPress. The plugins work independently of each other and are also independent of the theme; the theme, in turn, does not depend on the plugins. WordPress themes define the global CSS for a website, but every plugin can also add its own CSS code. WordPress theme developers can use Less to compile the CSS code of the themes and the plugins.

Using the Sage theme by Roots with Less

Sage is a WordPress starter theme. You can use it to build your own theme. The theme is based on HTML5 Boilerplate (http://html5boilerplate.com/) and Bootstrap. Visit the Sage theme website at https://roots.io/sage/. Sage can also be completely built using Gulp; more information about how to use Gulp and Bower for WordPress development can be found at https://roots.io/sage/docs/theme-development/. After downloading Sage, the Less files can be found at assets/styles/. These files include Bootstrap's Less files. The assets/styles/main.less file imports the main Bootstrap Less file, bootstrap.less. Now, you can edit main.less to customize your theme. You will have to rebuild the Sage theme after the changes you make. You can use all of Bootstrap's variables to customize your build.
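To give an idea of what such a Gulp-based Less build looks like, here is a minimal sketch using the gulp-less plugin. This is not Sage's actual gulpfile; the task name and output path are assumptions for illustration only:

// gulpfile.js
var gulp = require('gulp');
var less = require('gulp-less');

// compile assets/styles/main.less (and everything it imports) into a CSS file
gulp.task('styles', function () {
  return gulp.src('assets/styles/main.less')
    .pipe(less())
    .pipe(gulp.dest('dist/styles'));
});

Running gulp styles from the project root would then regenerate the compiled CSS after you edit main.less or any of the Bootstrap variables it imports.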
JBST with a built-in Less compiler

JBST is also a WordPress starter theme. JBST is intended to be used with the so-called child themes. More information about the WordPress child themes can be found at https://codex.wordpress.org/Child_Themes. After installing JBST, you will find a Less compiler under Appearance in your Dashboard pane, as shown in the following screenshot:

JBST's built-in Less compiler in the WordPress Dashboard

The built-in Less compiler can be used to fully customize your website using Less. Bootstrap also forms the skeleton of JBST, and the default settings are gathered by the a11y bootstrap theme mentioned earlier. JBST's Less compiler can be used in the following different ways:

First, the compiler accepts any custom-written Less (and CSS) code. For instance, to change the color of the h1 elements, you should simply edit and recompile the code as follows:

h1 {color: red;}

Secondly, you can edit Bootstrap's variables and (re)use Bootstrap's mixins. To set the background color of the navbar component and add a custom button, you can use the code block mentioned here in the Less compiler:

@navbar-default-bg: blue;
.btn-colored {
  .button-variant(blue;red;green);
}

Thirdly, you can set JBST's built-in Less variables as follows:

@footer_bg_color: black;

Lastly, JBST has its own set of mixins. To set a custom font, you can edit the code as shown here:

.include-custom-font(@family: arial, @font-path, @path: @custom-font-dir, @weight: normal, @style: normal);

In the preceding code, the parameters mentioned were used to set the font name (@family) and the path name to the font files (@path/@font-path). The @weight and @style parameters set the font's properties. For more information, visit https://github.com/bassjobsen/Boilerplate-JBST-Child-Theme.

More Less code blocks can also be added to a special file (wpless2css/wpless2css.less or less/custom.less); these files will give you the option to add, for example, a library of prebuilt mixins. After adding the library using this file, the mixins can also be used with the built-in compiler.

The Semantic UI WordPress theme

The Semantic UI, as discussed earlier, offers its own WordPress plugin. The plugin can be downloaded from https://github.com/ProjectCleverWeb/Semantic-UI-WordPress. After installing and activating this theme, you can use your website directly with the Semantic UI. With the default setting, your website will look like the following screenshot:

Website built with the Semantic UI WordPress theme

WordPress plugins and Less

As discussed earlier, the WordPress plugins have their own CSS. This CSS will be added to the page like a normal style sheet, as shown here:

<link rel='stylesheet' id='plugin-name' href='//domain/wp-content/plugin-name/plugin-name.css?ver=2.1.2' type='text/css' media='all' />

Unless a plugin provides the Less files for its CSS code, it will not be easy to manage its styles with Less.

The WP Less to CSS plugin

The WP Less to CSS plugin, which can be found at http://wordpress.org/plugins/wp-less-to-css/, offers the possibility of styling your WordPress website with Less. As seen earlier, you can enter the Less code along with the built-in compiler of JBST. This code will then be compiled into the website's CSS. This plugin compiles Less with the PHP Less compiler, Less.php.

Using Less with the Play framework

The Play framework helps you in building lightweight and scalable web applications by using Java or Scala. It will be interesting to learn how to integrate Less with the workflow of the Play framework. You can install the Play framework from https://www.playframework.com/. To learn more about the Play framework, you can also read Learning Play! Framework 2, Andy Petrella, Packt Publishing. To read Petrella's book, visit https://www.packtpub.com/web-development/learning-play-framework-2.

To run the Play framework, you need JDK 6 or later. The easiest way to install the Play framework is by using the Typesafe activator tool. After installing the activator tool, you can run the following command:

> activator new my-first-app play-scala

The preceding command will install a new app in the my-first-app directory. Using the play-java option instead of the play-scala option in the preceding command will lead to the installation of a Java-based app. Later on, you can add the Scala code in a Java app or the Java code in a Scala app. After installing a new app with the activator command, you can run it by using the following commands:

cd my-first-app
activator run

Now, you can find your app at http://localhost:9000. To enable the Less compilation, you should simply add the sbt-less plugin to your plugins.sbt file as follows:

addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6")

After enabling the plugin, you can edit the build.sbt file so as to configure Less. You should save the Less files into app/assets/stylesheets/. Note that each file in app/assets/stylesheets/ will compile into a separate CSS file.
The CSS files will be saved in public/stylesheets/ and should be called in your templates with the HTML code shown here:

<link rel="stylesheet" href="@routes.Assets.at("stylesheets/main.css")">

In case you are using a library with more files imported into the main file, you can define filters in the build.sbt file. The filters for these so-called partial source files can look like the following code:

includeFilter in (Assets, LessKeys.less) := "*.less"
excludeFilter in (Assets, LessKeys.less) := "_*.less"

The preceding filters ensure that the files starting with an underscore are not compiled into CSS.

Using Bootstrap with the Play framework

Bootstrap is a CSS framework. Bootstrap's Less code includes many files. Keeping your code up to date by using partials, as described in the preceding section, will not work well. Alternatively, you can use WebJars with Play for this purpose. To enable the Bootstrap WebJar, you should add the code shown here to your build.sbt file:

libraryDependencies += "org.webjars" % "bootstrap" % "3.3.2"

When using the Bootstrap WebJar, you can import Bootstrap into your project as follows:

@import "lib/bootstrap/less/bootstrap.less";

AngularJS and Less

AngularJS is a structural framework for dynamic web apps. It extends the HTML syntax, and this enables you to create dynamic web views. Of course, you can use AngularJS with Less. You can read more about AngularJS at https://angularjs.org/. The HTML code shown here gives you an example of what repeating HTML elements with AngularJS looks like:

<!doctype html>
<html ng-app>
<head>
  <title>My Angular App</title>
</head>
<body>
  <ul>
    <li ng-repeat="item in [1,2,3]">{{ item }}</li>
  </ul>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.12/angular.min.js"></script>
</body>
</html>

This code should make your page look like the following screenshot:

Repeating the HTML elements with AngularJS

The ngBoilerplate system

The ngBoilerplate system is an easy way to start a project with AngularJS. The project comes with a directory structure for your application and a Grunt build process, including a Less task and other useful libraries. To start your project, you should simply run the following commands on your console:

> git clone git://github.com/ngbp/ngbp
> cd ngbp
> sudo npm -g install grunt-cli karma bower
> npm install
> bower install
> grunt watch

And then, open ///path/to/ngbp/build/index.html in your browser. After installing ngBoilerplate, you can write the Less code into src/less/main.less. By default, only src/less/main.less will be compiled into CSS; other libraries and other code should be imported into this file.

Meteor and Less

Meteor is a complete open-source platform for building web and mobile apps in pure JavaScript. Meteor focuses on fast development. You can publish your apps for free on Meteor's servers. Meteor is available for Linux and OS X, and you can also install it on Windows. Installing Meteor is as simple as running the following command on your console:

> curl https://install.meteor.com | /bin/sh

You should install the Less package for compiling the CSS code of the app with Less. You can install the Less package by running the command shown here:

> meteor add less

Note that the Less package compiles every file with the .less extension into CSS. For each file with the .less extension, a separate CSS file is created.
When you use partial Less files that should only be imported (with the @import directive) and not compiled into CSS themselves, you should give these partials the .import.less extension. When using CSS frameworks or libraries with many partials, renaming the files by adding the .import.less extension will hinder you in updating your code. Also, running postprocess tasks for the CSS code is not always possible.

Many packages for Meteor are available at https://atmospherejs.com/. Some of these packages can help you solve the issue with partials mentioned earlier. To use Bootstrap, you can use the meteor-bootstrap package. The meteor-bootstrap package can be found at https://github.com/Nemo64/meteor-bootstrap and requires the installation of the Less package. Other packages provide you with postprocess tasks, such as autoprefixing your code.

Ruby on Rails and Less

Ruby on Rails, or Rails for short, is a web application development framework written in the Ruby language. Those who want to start developing with Ruby on Rails can read the Getting Started with Rails guide, which can be found at http://guides.rubyonrails.org/getting_started.html. In this section, you can read how to integrate Less into a Ruby on Rails app.

After installing the tools and components required for starting with Rails, you can launch a new application by running the following command on your console:

> rails new blog

Now, you should integrate Less with Rails. You can use less-rails (https://github.com/metaskills/less-rails) to bring Less to Rails. Open the Gemfile file, comment out the sass-rails gem, and add the less-rails gem, as shown here:

#gem 'sass-rails', '~> 5.0'
gem 'less-rails' # Less
gem 'therubyracer' # Ruby

Then, create a controller called welcome with an action called index by running the following command:

> bin/rails generate controller welcome index

The preceding command will generate app/views/welcome/index.html.erb. Open app/views/welcome/index.html.erb and make sure that it contains the HTML code shown here:

<h1>Welcome#index</h1>
<p>Find me in app/views/welcome/index.html.erb</p>

The next step is to create a file, app/assets/stylesheets/welcome.css.less, with the Less code. The Less code in app/assets/stylesheets/welcome.css.less looks as follows:

@color: red;
h1 { color: @color; }

Now, start a web server with the following command:

> bin/rails server

Finally, you can visit the application at http://localhost:3000/. The application should look like the example shown here:

The Rails app

Summary

In this article, you learned how to use Less with WordPress, Play, Meteor, AngularJS, and Ruby on Rails.

Resources for Article:

Further resources on this subject:

Media Queries with Less [article]
Bootstrap 3 and other applications [article]
Getting Started with Bootstrap [article]

article-image-how-to-integrate-social-media-into-wordpress-website
Packt
29 Apr 2015
6 min read
Save for later

How to integrate social media with your WordPress website

Packt
29 Apr 2015
6 min read
In this article by Karol Krol, the author of the WordPress 4.x Complete, we will look at how we can integrate our website with social media. We will list some more ways in which you can make your site social media friendly, and also see why you'd want to do that in the first place. Let's start with the why. In this day and age, social media is one of the main drivers of traffic for many sites. Even if you just want to share your content with friends and family, or you have some serious business plans regarding your site, you need to have at least some level of social media integration. Even if you install just simple social media share buttons, you will effectively encourage your visitors to pass on your content to their followers, thus expanding your reach and making your content more popular. (For more resources related to this topic, see here.) Making your blog social media friendly There are a handful of ways to make your site social media friendly. The most common approaches are as follows: Social media share buttons, which allow your visitors to share your content with their friends and followers Social media APIs integration, which make your content look better on social media (design wise) Automatic content distribution to social media Social media metrics tracking Let's discuss these one by one. Setting up social media share buttons There are hundreds of social media plugins available out there that allow you to display a basic set of social media buttons on your site. The one I advise you to use is called Social Share Starter (http://bit.ly/sss-plugin). Its main advantage is that it's optimized to work on new and low-traffic sites, and doesn't show any negative social proof when displaying the buttons and their share numbers. Setting up social media APIs' integration The next step worth taking to make your content appear more attractive on social media is to integrate it with some social media APIs; particularly that of Twitter. What exactly their API is and how it works isn't very relevant for the WordPress discussion we're having here. So instead, let's just focus on what the outcome of integrating your site with this API is. Here's what a standard tweet mentioning a website usually looks like (please notice the overall design, not the text contents): Here's a different tweet, mentioning an article from a site that has Twitter's (Twitter Cards) API enabled: This looks much better. Luckily, having this level of Twitter integration is quite easy. All you need is a plugin called JM Twitter Cards (available at https://wordpress.org/plugins/jm-twitter-cards/). After installing and activating it, you will be guided through the process of setting everything up and approving your site with Twitter (mandatory step). Setting up automatic content distribution to social media The idea behind automatic social media distribution of your content is that you don't have to remember to do so manually whenever you publish a new post. Instead of copying and pasting the URL address of your new post by hand to each individual social media platform, you can have this done automatically. This can be done in many ways, but let's discuss the two most usable ones, the Jetpack and Revive Old Post plugins. The Jetpack plugin The Jetpack plugin is available at https://wordpress.org/plugins/jetpack/. One of Jetpack's modules is called Publicize. You can activate it by navigating to the Jetpack | Settings section of the wp-admin. 
After doing so, you will be able to go to Settings | Sharing and integrate your site with one of the six available social media platforms: After going through the process of authorizing the plugin with each service, your site will be fully capable of posting each of your new posts to social media automatically. The Revive Old Post plugin The Revive Old Post plugin is available at https://revive.social/plugins/revive-old-post. While the Jetpack plugin takes the newest posts on your site and distributes them to your various social media accounts, the Revive Old Post plugin does the same with your archived posts, ultimately giving them a new life. Hence the name Revive Old Post. After downloading and activating this plugin, go to its section in the wp-admin Revive Old Post. Then, switch to the Accounts tab. There, you can enable the plugin to work with your social media accounts by clicking on the authorization buttons: Then, go to the General settings tab and handle the time intervals and other details of how you want the plugin to work with your social media accounts. When you're done, just click on the SAVE button. At this point, the plugin will start operating automatically and distribute your random archived posts to your social media accounts. Note that it's probably a good idea not to share things too often if you don't want to anger your followers and make them unfollow you. For that reason, I wouldn't advise posting more than once a day. Setting up social media metrics tracking The final element in our social media integration puzzle is setting up some kind of tracking mechanism that would tell us how popular our content is on social media (in terms of shares). Granted, you can do this manually by going to each of your posts and checking their share numbers individually (provided you have the Social Share Starter plugin installed). However, there's a quicker method, and it involves another plugin. This one is called Social Metrics Tracker and you can get it at https://wordpress.org/plugins/social-metrics-tracker/. In short, this plugin collects social share data from a number of platforms and then displays them to you in a single readable dashboard view. After you install and activate the plugin, you'll need to give it a couple of minutes for it to crawl through your social media accounts and get the data. Soon after that, you will be able to visit the plugin's dashboard by going to the Social Metrics section in the wp-admin: For some webhosts and setups, this plugin might end up consuming too much of the server's resources. If this happens, consider activating it only occasionally to check your results and then deactivate it again. Doing this even once a week will still give you a great overview of how well your content is performing on social media. This closes our short guide on how to integrate your WordPress site with social media. I'll admit that we're just scratching the surface here and that there's a lot more that can be done. There are new social media plugins being released literally every week. That being said, the methods described here are more than enough to make your WordPress site social media friendly and enable you to share your content effectively with your friends, family, and audience. Summary Here, we talked about social media integration, tools, and plugins that can make your life a lot easier as an online content publisher. 
Resources for Article: Further resources on this subject: FAQs on WordPress 3 [article] Creating Blog Content in WordPress [article] Customizing WordPress Settings for SEO [article]

article-image-recording-your-first-test
Packt
24 Apr 2015
17 min read
Save for later

Recording Your First Test

Packt
24 Apr 2015
17 min read
JMeter comes with a built-in test script recorder, also referred to as a proxy server (http://en.wikipedia.org/wiki/Proxy_server), to aid you in recording test plans. The test script recorder, once configured, watches your actions as you perform operations on a website, creates test sample objects for them, and eventually stores them in your test plan, which is a JMX file. In addition, JMeter gives you the option to create test plans manually, but this is mostly impractical for recording nontrivial testing scenarios. You will save a whole lot of time using the proxy recorder, as you will be seeing in a bit. So without further ado, in this article by Bayo Erinle, author of Performance Testing with JMeter - Second Edition, let's record our first test! For this, we will record the browsing of JMeter's own official website as a user will normally do. For the proxy server to be able to watch your actions, it will need to be configured. This entails two steps: Setting up the HTTP(S) Test Script Recorder within JMeter. Setting the browser to use the proxy. (For more resources related to this topic, see here.) Configuring the JMeter HTTP(S) Test Script Recorder The first step is to configure the proxy server in JMeter. To do this, we perform the following steps: Start JMeter. Add a thread group, as follows: Right-click on Test Plan and navigate to Add | Threads (User) | Thread Group. Add the HTTP(S) Test Script Recorder element, as follows: Right-click on WorkBench and navigate to Add | Non-Test Elements | HTTP(S) Test Script Recorder. Change the port to 7000 (1) (under Global Settings). You can use a different port, if you choose to. What is important is to choose a port that is not currently used by an existing process on the machine. The default is 8080. Under the Test plan content section, choose the option Test Plan > Thread Group (2) from the Target Controller drop-down. This allows the recorded actions to be targeted to the thread group we created in step 2. Under the Test plan content section, choose the option Put each group in a new transaction controller (3) from the Grouping drop-down. This allows you to group a series of requests constituting a page load. We will see more on this topic later. Click on Add suggested Excludes (under URL Patterns to Exclude). This instructs the proxy server to bypass recording requests of a series of elements that are not relevant to test execution. These include JavaScript files, stylesheets, and images. Thankfully, JMeter provides a handy button that excludes the often excluded elements. Click on the Start button at the bottom of the HTTP(S) Test Script Recorder component. Accept the Root CA certificate by clicking on the OK button. With these settings, the proxy server will start on port 7000, and monitor all requests going through that port and record them to a test plan using the default recording controller. For details, refer to the following screenshot: Configuring the JMeter HTTP(S) Test Script Recorder   In older versions of JMeter (before version 2.10), the now HTTP(S) Test Script Recorder was referred to as HTTP Proxy Server. While we have configured the HTTP(S) Test Script Recorder manually, the newer versions of JMeter (version 2.10 and later) come with prebundled templates that make commonly performed tasks, such as this, a lot easier. Using the bundled recorder template, we can set up the script recorder with just a few button clicks. To do this, click on the Templates…(1) button right next to the New file button on the toolbar. 
Then select Select Template as Recording (2). Change the port to your desired port (for example, 7000) and click on the Create (3) button. Refer to the following screenshot:

Configuring the JMeter HTTP(S) Test Script Recorder through the template Recorder

Setting up your browser to use the proxy server

There are several ways to set up the browser of your choice to use the proxy server. We'll go over two of the most common ways, starting with my personal favorite, which is using a browser extension.

Using a browser extension

Google Chrome and Firefox have vibrant browser plugin ecosystems that allow you to extend the capabilities of your browser with each plugin that you choose. For setting up a proxy, I really like FoxyProxy (http://getfoxyproxy.org/). It is a neat add-on to the browser that allows you to set up various proxy settings and toggle between them on the fly, without having to mess around with the system settings on the machine. It really makes the work hassle free. Thankfully, FoxyProxy has a plugin for Internet Explorer, Chrome, and Firefox. If you are using any of these, you are lucky! Go ahead and grab it!

Changing the machine system settings

For those who would rather configure the proxy natively on their operating system, we have provided the following steps for Windows and Mac OS.

On Windows OS, perform the following steps for configuring a proxy:

Click on Start, then click on Control Panel.
Click on Network and Internet.
Click on Internet Options.
In the Internet Options dialog box, click on the Connections tab.
Click on the Local Area Network (LAN) Settings button.
To enable the use of a proxy server, select the checkbox for Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections), as shown in the following screenshot.
In the proxy Address box, enter localhost as the IP address.
In the Port number text box, enter 7000 (to match the port you set up for your JMeter proxy earlier).
If you want to bypass the proxy server for local IP addresses, select the Bypass proxy server for local addresses checkbox.
Click on OK to complete the proxy configuration process.

Manually setting proxy on Windows 7

On Mac OS, perform the following steps to configure a proxy:

Go to System Preferences.
Click on Network.
Click on the Advanced… button.
Go to the Proxies tab.
Select the Web Proxy (HTTP) checkbox.
Under Web Proxy Server, enter localhost.
For the port, enter 7000 (to match the port you set up for your JMeter proxy earlier).
Do the same for Secure Web Proxy (HTTPS).
Click on OK.

Manually setting proxy on Mac OS

For all other systems, please consult the related operating system documentation. Now that all of that is out of the way and the connections have been made, let's get to recording using the following steps:

Point your browser to http://jmeter.apache.org/.
Click on the Changes link under About.
Click on the User Manual link under Documentation.
Stop the HTTP(S) Test Script Recorder by clicking on the Stop button, so that it doesn't record any more activities.

If you have done everything correctly, your actions will be recorded under the test plan. Refer to the following screenshot for details. Congratulations! You have just recorded your first test plan. Admittedly, we have just scratched the surface of recording test plans, but we are off to a good start.
Recording your first scenario

Running your first recorded scenario

We can go right ahead and replay or run our recorded scenario now, but before that let's add a listener or two to give us feedback on the results of the execution. There is no limit to the number of listeners we can attach to a test plan, but we will often use only one or two. For our test plan, let's add three listeners for illustrative purposes: a Graph Results listener, a View Results Tree listener, and an Aggregate Report listener. Each listener gathers a different kind of metric that can help analyze performance test results:

1. Right-click on Test Plan and navigate to Add | Listener | View Results Tree.
2. Right-click on Test Plan and navigate to Add | Listener | Aggregate Report.
3. Right-click on Test Plan and navigate to Add | Listener | Graph Results.

Just so we can see more interesting data, let's change some settings at the thread group level, as follows:

1. Click on Thread Group.
2. Under Thread Properties, set the values as follows:
   Number of Threads (users): 10
   Ramp-Up Period (in seconds): 15
   Loop Count: 30

This will set our test plan up to run for ten users, with all users starting their test within 15 seconds, and have each user perform the recorded scenario 30 times. Before we can proceed with test execution, save the test plan by clicking on the save icon. Once saved, click on the start icon (the green play icon on the menu) and watch the test run. As the test runs, you can click on the Graph Results listener (or either of the other two) and watch results gathering in real time. This is one of the many features of JMeter.

From the Aggregate Report listener, we can deduce that there were 600 requests made to the changes link and the user manual link. Also, we can see that most users (90% Line) got very good responses, below 200 milliseconds, for both. In addition, we see what the throughput is per second for the various links, and that there were no errors during our test run.

Results as seen through this Aggregate Report listener

Looking at the View Results Tree listener, we can examine each recorded request individually and, if any had failed, see the reasons for the failure. This can be valuable information to developers or system engineers in diagnosing the root cause of such errors.

Results as seen via the View Results Tree Listener

The Graph Results listener also gives a pictorial representation of what is seen in the View Results Tree listener in the preceding screenshot. If you click on it as the test goes on, you will see the graph get drawn in real time as the requests come in. The graph is largely self-explanatory, with lines representing the average, median, deviation, and throughput. The Average, Median, and Deviation lines show the average, median, and deviation of the number of samples per minute, respectively, while the Throughput line shows the average rate of network packets delivered over the network for our test run in bits per minute. Please consult a resource such as Wikipedia for a more detailed explanation of the precise meanings of these terms. The graph is also interactive, and you can check or uncheck any of the data series that are relevant or irrelevant to you. For example, we mostly care about the average and throughput. Uncheck Data, Median, and Deviation and you will see that only the data plots for Average and Throughput remain. Refer to the following screenshot for details.

Results as seen through this Graph Results Listener

With our little recorded scenario, you saw some major components that constitute a JMeter test plan.
Let's record another scenario, this time using another application that will allow us to enter form values.

Excilys Bank case study

We'll borrow a website created by the wonderful folks at Excilys, a company focused on delivering skills and services in IT (http://www.excilys.com/). It's a light banking web application created for illustrative purposes. Let's start a new test plan, set up the test script recorder like we did previously, and start recording with the following steps:

1. Point your browser to http://excilysbank.aws.af.cm/public/login.html.
2. Enter the username and password in the login form, as follows:
   Username: user1
   Password: password1
3. Click on the PERSONNAL CHECKING link.
4. Click on the Transfers tab.
5. Click on My Accounts.
6. Click on the Joint Checking link.
7. Click on the Transfers tab.
8. Click on the Cards tab.
9. Click on the Operations tab.
10. Click on the Log out button.
11. Stop the proxy server by clicking on the Stop button.

This concludes our recorded scenario. At this point, we can add listeners for gathering the results of our execution and then replay the recorded scenario as we did earlier. If we do, we will be in for a surprise (that is, if we don't use the bundled recorder template). We will have several failed requests after login, since we have not included the component that manages the sessions and cookies needed to successfully replay this scenario. Thankfully, JMeter has such a component and it is called HTTP Cookie Manager. This seemingly simple, yet powerful component helps maintain an active session through HTTP cookies once our client has established a connection with the server after login. It ensures that a cookie is stored upon successful authentication and passed around for subsequent requests, hence allowing those to go through. Each JMeter thread (that is, user) has its own cookie storage area. That is vital, since you won't want a user gaining access to the site under another user's identity. This becomes more apparent when we test websites requiring authentication and authorization (like the one we just recorded) for multiple users.

Let's add this to our test plan by right-clicking on Test Plan and navigating to Add | Config Element | HTTP Cookie Manager. Once added, we can now successfully run our test plan. At this point, we can simulate more load by increasing the number of threads at the thread group level. Let's go ahead and do that. If executed, the test plan will now pass, but this is not realistic. We have essentially just emulated one user, repeating the scenario five times. All threads use the credentials of user1, meaning that all threads log in to the system as user1. That is not what we want. To make the test realistic, what we want is each thread authenticating as a different user of the application. In reality, your bank creates a unique user for you, and only you or your spouse will be privileged to see your account details. Your neighbor down the street, if he used the same bank, won't get access to your account (at least we hope not!). So with that in mind, let's tweak the test to accommodate such a scenario.

Parameterizing the script

We begin by adding a CSV Data Set Config component (Test Plan | Add | Config Element | CSV Data Set Config) to our test plan. Since it is expensive to generate unique random values at runtime, due to high CPU and memory consumption, it is advisable to define the test data upfront.
The CSV Data Set Config component is used to read lines from a file and split them into variables that can then be used to feed input into the test plan. JMeter gives you a choice for the placement of this component within the test plan. You would normally add the component at the HTTP request level of the request that needs values fed from it. In our case, this will be the login HTTP request, where the username and password are entered. Another is to add it at the thread group level, that is, as a direct child of the thread group. If a particular dataset is applied to only a thread group, it makes sense to add it at this level. The third place where this component can be placed is at the Test Plan root level. If a dataset applies to all running threads, then it makes sense to add it at the root level. In our opinion, this also makes your test plans more readable and maintainable, as it is easier to see what is going on when inspecting or troubleshooting a test plan since this component can easily be seen at the root level rather than being deeply nested at other levels. So for our scenario, let's add this at the Test Plan root level. You can always move the components around using drag and drop even after adding them to the test plan. CSV Data Set Config Once added, the Filename entry is all that is needed if you have included headers in the input file. For example, if the input file is defined as follows: user, password, account_id user1, password1, 1 If the Variable Names field is left blank, then JMeter will use the first line of the input file as the variable names for the parameters. In cases where headers are not included, the variable names can be entered here. The other interesting setting here is Sharing mode. By default, this defaults to All threads, meaning all running threads will use the same set of data. So in cases where you have two threads running, Thread1 will use the first line as input data, while Thread2 will use the second line. If the number of running threads exceeds the input data then entries will be reused from the top of the file, provided that Recycle on EOF is set to True (the default). The other options for sharing modes include Current thread group and Current thread. Use the former for cases where the dataset is specific for a certain thread group and the latter for cases where the dataset is specific to each thread. The other properties of the component are self-explanatory and additional information can be found in JMeter's online user guide. Now that the component is added, we need to parameterize the login HTTP request with the variable names defined in our file (or the csvconfig component) so that the values can be dynamically bound during test execution. We do this by changing the value of the username to ${user} and password to ${password}, respectively, on the HTTP login request. The values between the ${} match the headers defined in the input file or the values specified in the Variable Names entry of the CSV Data Set Config component. Binding parameter values for HTTP requests We can now run our test plan and it should work as earlier, only this time the values are dynamically bound through the configuration we have set up. So far, we have run for a single user. Let's increase the thread group properties and run for ten users, with a ramp-up of 30 seconds, for one iteration. Now let's rerun our test. 
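For the ten-user run to exercise ten distinct users, the input file needs at least ten data rows (or Recycle on EOF left at its default of True). The following is purely an illustrative sketch of such a file; the usernames, passwords, and account IDs shown are assumptions made for the example and must be replaced with credentials and account IDs that actually exist in the application under test:

user, password, account_id
user1, password1, 1
user2, password2, 2
user3, password3, 3
user4, password4, 4
user5, password5, 5
user6, password6, 6
user7, password7, 7
user8, password8, 8
user9, password9, 9
user10, password10, 10

With Sharing mode left at All threads, each JMeter thread picks up the next available line, so every simulated user logs in with a different set of credentials.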
Examining the test results, we notice that some requests failed with a status code of 403 (http://en.wikipedia.org/wiki/HTTP_403), which is an access denied error. This is because we are trying to access an account that does not belong to the logged-in user. In our sample, all users made a request for the same recorded account, which only one user (user1) is allowed to see. You can trace this by adding a View Results Tree listener to the test plan and rerunning the test. If you closely examine some of the HTTP requests in the Request tab of the View Results Tree listener, you'll notice requests such as the following:

/private/bank/account/ACC1/operations.html
/private/bank/account/ACC1/year/2013/month/1/page/0/operations.json
…

Observant readers will have noticed that our input data file also contains an account_id column. We can leverage this column to parameterize all requests containing account numbers, so that each logged-in user picks the right account. To do this, consider the following line:

/private/bank/account/ACC1/operations.html

Change this to the following:

/private/bank/account/ACC${account_id}/operations.html

Now, consider the following line:

/private/bank/account/ACC1/year/2013/month/1/page/0/operations.json

Change this to the following:

/private/bank/account/ACC${account_id}/year/2013/month/1/page/0/operations.json

Go ahead and make similar changes to all such recorded requests. Once completed, we can rerun our test plan and, this time, things are logically correct and will work fine. You can also verify that all works as expected after the test execution by examining the View Results Tree listener, clicking on some account request URLs, and changing the response display from Text to HTML; you should see accounts other than ACC1.

Summary

We have covered quite a lot in this article. You learned how to configure JMeter and our browsers to help record test plans. In addition, you learned about some built-in components that can help us feed data into our test plan and/or extract data from server responses.

Resources for Article:

Further resources on this subject:

Execution of Test Plans [article]
Performance Testing Fundamentals [article]
Data Acquisition and Mapping [article]
Using Mock Objects to Test Interactions

Packt
23 Apr 2015
25 min read
In this article by Siddharta Govindaraj, author of the book Test-Driven Python Development, we will look at the Event class. The Event class is very simple: receivers can register with the event to be notified when the event occurs. When the event fires, all the receivers are notified of the event. (For more resources related to this topic, see here.) A more detailed description is as follows: Event classes have a connect method, which takes a method or function to be called when the event fires When the fire method is called, all the registered callbacks are called with the same parameters that are passed to the fire method Writing tests for the connect method is fairly straightforward—we just need to check that the receivers are being stored properly. But, how do we write the tests for the fire method? This method does not change any state or store any value that we can assert on. The main responsibility of this method is to call other methods. How do we test that this is being done correctly? This is where mock objects come into the picture. Unlike ordinary unit tests that assert on object state, mock objects are used to test that the interactions between multiple objects occurs as it should. Hand writing a simple mock To start with, let us look at the code for the Event class so that we can understand what the tests need to do. The following code is in the file event.py in the source directory: class Event:    """A generic class that provides signal/slot functionality"""      def __init__(self):        self.listeners = []      def connect(self, listener):        self.listeners.append(listener)      def fire(self, *args, **kwargs):        for listener in self.listeners:            listener(*args, **kwargs) The way this code works is fairly simple. Classes that want to get notified of the event should call the connect method and pass a function. This will register the function for the event. Then, when the event is fired using the fire method, all the registered functions will be notified of the event. The following is a walk-through of how this class is used: >>> def handle_event(num): ...   print("I got number {0}".format(num)) ... >>> event = Event() >>> event.connect(handle_event) >>> event.fire(3) I got number 3 >>> event.fire(10) I got number 10 As you can see, every time the fire method is called, all the functions that registered with the connect method get called with the given parameters. So, how do we test the fire method? The walk-through above gives a hint. What we need to do is to create a function, register it using the connect method, and then verify that the method got notified when the fire method was called. The following is one way to write such a test: import unittest from ..event import Event   class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        called = False        def listener():            nonlocal called            called = True          event = Event()        event.connect(listener)        event.fire()        self.assertTrue(called) Put this code into the test_event.py file in the tests folder and run the test. The test should pass. The following is what we are doing: First, we create a variable named called and set it to False. Next, we create a dummy function. When the function is called, it sets called to True. Finally, we connect the dummy function to the event and fire the event. 
If the dummy function was successfully called when the event was fired, then the called variable would be changed to True, and we assert that the variable is indeed what we expected. The dummy function we created above is an example of a mock. A mock is simply an object that is substituted for a real object in the test case. The mock then records some information such as whether it was called, what parameters were passed, and so on, and we can then assert that the mock was called as expected. Talking about parameters, we should write a test that checks that the parameters are being passed correctly. The following is one such test:    def test_a_listener_is_passed_right_parameters(self):        params = ()        def listener(*args, **kwargs):            nonlocal params            params = (args, kwargs)        event = Event()        event.connect(listener)        event.fire(5, shape="square")        self.assertEquals(((5, ), {"shape":"square"}), params) This test is the same as the previous one, except that it saves the parameters that are then used in the assert to verify that they were passed properly. At this point, we can see some repetition coming up in the way we set up the mock function and then save some information about the call. We can extract this code into a separate class as follows: class Mock:    def __init__(self):        self.called = False        self.params = ()      def __call__(self, *args, **kwargs):        self.called = True        self.params = (args, kwargs) Once we do this, we can use our Mock class in our tests as follows: class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        listener = Mock()        event = Event()        event.connect(listener)        event.fire()        self.assertTrue(listener.called)      def test_a_listener_is_passed_right_parameters(self):        listener = Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        self.assertEquals(((5, ), {"shape": "square"}),             listener.params) What we have just done is to create a simple mocking class that is quite lightweight and good for simple uses. However, there are often times when we need much more advanced functionality, such as mocking a series of calls or checking the order of specific calls. Fortunately, Python has us covered with the unittest.mock module that is supplied as a part of the standard library. Using the Python mocking framework The unittest.mock module provided by Python is an extremely powerful mocking framework, yet at the same time it is very easy to use. Let us redo our tests using this library. First, we need to import the mock module at the top of our file as follows: from unittest import mock Next, we rewrite our first test as follows: class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire()        self.assertTrue(listener.called) The only change that we've made is to replace our own custom Mock class with the mock.Mock class provided by Python. That is it. With that single line change, our test is now using the inbuilt mocking class. The unittest.mock.Mock class is the core of the Python mocking framework. All we need to do is to instantiate the class and pass it in where it is required. The mock will record if it was called in the called instance variable. How do we check that the right parameters were passed? 
Let us look at the rewrite of the second test as follows:    def test_a_listener_is_passed_right_parameters(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        listener.assert_called_with(5, shape="square") The mock object automatically records the parameters that were passed in. We can assert on the parameters by using the assert_called_with method on the mock object. The method will raise an assertion error if the parameters don't match what was expected. In case we are not interested in testing the parameters (maybe we just want to check that the method was called), then we can pass the value mock.ANY. This value will match any parameter passed. There is a subtle difference in the way normal assertions are called compared to assertions on mocks. Normal assertions are defined as a part of the unittest.Testcase class. Since our tests inherit from that class, we call the assertions on self, for example, self.assertEquals. On the other hand, the mock assertion methods are a part of the mock object, so you call them on the mock object, for example, listener.assert_called_with. Mock objects have the following four assertions available out of the box: assert_called_with: This method asserts that the last call was made with the given parameters assert_called_once_with: This assertion checks that the method was called exactly once and was with the given parameters assert_any_call: This checks that the given call was made at some point during the execution assert_has_calls: This assertion checks that a list of calls occurred The four assertions are very subtly different, and that shows up when the mock has been called more than one. The assert_called_with method only checks the last call, so if there was more than one call, then the previous calls will not be asserted. The assert_any_call method will check if a call with the given parameters occurred anytime during execution. The assert_called_once_with assertion asserts for a single call, so if the mock was called more than once during execution, then this assert would fail. The assert_has_calls assertion can be used to assert that a set of calls with the given parameters occurred. Note that there might have been more calls than what we checked for in the assertion, but the assertion would still pass as long as the given calls are present. Let us take a closer look at the assert_has_calls assertion. Here is how we can write the same test using this assertion:    def test_a_listener_is_passed_right_parameters(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        listener.assert_has_calls([mock.call(5, shape="square")]) The mocking framework internally uses _Call objects to record calls. The mock.call function is a helper to create these objects. We just call it with the expected parameters to create the required call objects. We can then use these objects in the assert_has_calls assertion to assert that the expected call occurred. This method is useful when the mock was called multiple times and we want to assert only some of the calls. Mocking objects While testing the Event class, we only needed to mock out single functions. A more common use of mocking is to mock a class. 
Take a look at the implementation of the Alert class in the following: class Alert:    """Maps a Rule to an Action, and triggers the action if the rule    matches on any stock update"""      def __init__(self, description, rule, action):        self.description = description        self.rule = rule        self.action = action      def connect(self, exchange):        self.exchange = exchange        dependent_stocks = self.rule.depends_on()        for stock in dependent_stocks:            exchange[stock].updated.connect(self.check_rule)      def check_rule(self, stock):        if self.rule.matches(self.exchange):            self.action.execute(self.description) Let's break down how this class works as follows: The Alert class takes a Rule and an Action in the initializer. When the connect method is called, it takes all the dependent stocks and connects to their updated event. The updated event is an instance of the Event class that we saw earlier. Each Stock class has an instance of this event, and it is fired whenever a new update is made to that stock. The listener for this event is the self.check_rule method of the Alert class. In this method, the alert checks if the new update caused a rule to be matched. If the rule matched, it calls the execute method on the Action. Otherwise, nothing happens. This class has a few requirements, as shown in the following, that need to be met. Each of these needs to be made into a unit test. If a stock is updated, the class should check if the rule matches If the rule matches, then the corresponding action should be executed If the rule doesn't match, then nothing happens There are a number of different ways in which we could test this; let us go through some of the options. The first option is not to use mocks at all. We could create a rule, hook it up to a test action, and then update the stock and verify that the action was executed. The following is what such a test would look like: import unittest from datetime import datetime from unittest import mock   from ..alert import Alert from ..rule import PriceRule from ..stock import Stock   class TestAction:    executed = False      def execute(self, description):        self.executed = True   class AlertTest(unittest.TestCase):    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = PriceRule("GOOG", lambda stock: stock.price > 10)       action = TestAction()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        self.assertTrue(action.executed) This is the most straightforward option, but it requires a bit of code to set up and there is the TestAction that we need to create just for the test case. Instead of creating a test action, we could instead replace it with a mock action. We can then simply assert on the mock that it got executed. The following code shows this variation of the test case:    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = PriceRule("GOOG", lambda stock: stock.price > 10)        action = mock.MagicMock()       alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") A couple of observations about this test: If you notice, alert is not the usual Mock object that we have been using so far, but a MagicMock object. 
A MagicMock object is like a Mock object but it has special support for Python's magic methods which are present on all classes, such as __str__, hasattr. If we don't use MagicMock, we may sometimes get errors or strange behavior if the code uses any of these methods. The following example illustrates the difference: >>> from unittest import mock >>> mock_1 = mock.Mock() >>> mock_2 = mock.MagicMock() >>> len(mock_1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object of type 'Mock' has no len() >>> len(mock_2) 0 >>>  In general, we will be using MagicMock in most places where we need to mock a class. Using Mock is a good option when we need to mock stand alone functions, or in rare situations where we specifically don't want a default implementation for the magic methods. The other observation about the test is the way methods are handled. In the test above, we created a mock action object, but we didn't specify anywhere that this mock class should contain an execute method and how it should behave. In fact, we don't need to. When a method or attribute is accessed on a mock object, Python conveniently creates a mock method and adds it to the mock class. Therefore, when the Alert class calls the execute method on our mock action object, that method is added to our mock action. We can then check that the method was called by asserting on action.execute.called. The downside of Python's behavior of automatically creating mock methods when they are accessed is that a typo or change in interface can go unnoticed. For example, suppose we rename the execute method in all the Action classes to run. But if we run our test cases, it still passes. Why does it pass? Because the Alert class calls the execute method, and the test only checks that the execute method was called, which it was. The test does not know that the name of the method has been changed in all the real Action implementations and that the Alert class will not work when integrated with the actual actions. To avoid this problem, Python supports using another class or object as a specification. When a specification is given, the mock object only creates the methods that are present in the specification. All other method or attribute accesses will raise an error. Specifications are passed to the mock at initialization time via the spec parameter. Both the Mock as well as MagicMock classes support setting a specification. The following code example shows the difference when a spec parameter is set compared to a default Mock object: >>> from unittest import mock >>> class PrintAction: ...     def run(self, description): ...         print("{0} was executed".format(description)) ...   >>> mock_1 = mock.Mock() >>> mock_1.execute("sample alert") # Does not give an error <Mock name='mock.execute()' id='54481752'>   >>> mock_2 = mock.Mock(spec=PrintAction) >>> mock_2.execute("sample alert") # Gives an error Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 557, in __getattr__    raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'execute' Notice in the above example that mock_1 goes ahead and executes the execute method without any error, even though the method has been renamed in the PrintAction. On the other hand, by giving a spec, the method call to the nonexistent execute method raises an exception. 
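One more property of specced mocks is worth noting, although the tests in this article do not depend on it: a mock created with a spec will also pass isinstance checks against the spec class, which helps when the code under test performs explicit type checks. The following is a quick illustration along the same lines as the previous snippet:

>>> from unittest import mock
>>> class PrintAction:
...     def run(self, description):
...         print("{0} was executed".format(description))
...
>>> specced = mock.Mock(spec=PrintAction)
>>> isinstance(specced, PrintAction)   # the spec sets the mock's __class__
True
>>> plain = mock.Mock()
>>> isinstance(plain, PrintAction)     # a plain mock is not an instance
False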
Mocking return values The second variant above showed how we could use a mock Action class in the test instead of a real one. In the same way, we can also use a mock rule instead of creating a PriceRule in the test. The alert calls the rule to see whether the new stock update caused the rule to be matched. What the alert does depends on whether the rule returned True or False. All the mocks we've created so far have not had to return a value. We were just interested in whether the right call was made or not. If we mock the rule, then we will have to configure it to return the right value for the test. Fortunately, Python makes that very simple to do. All we have to do is to set the return value as a parameter in the constructor to the mock object as follows: >>> matches = mock.Mock(return_value=True) >>> matches() True >>> matches(4) True >>> matches(4, "abcd") True As we can see above, the mock just blindly returns the set value, irrespective of the parameters. Even the type or number of parameters is not considered. We can use the same procedure to set the return value of a method in a mock object as follows: >>> rule = mock.MagicMock() >>> rule.matches = mock.Mock(return_value=True) >>> rule.matches() True >>>  There is another way to set the return value, which is very convenient when dealing with methods in mock objects. Each mock object has a return_value attribute. We simply set this attribute to the return value and every call to the mock will return that value, as shown in the following: >>> from unittest import mock >>> rule = mock.MagicMock() >>> rule.matches.return_value = True >>> rule.matches() True >>>  In the example above, the moment we access rule.matches, Python automatically creates a mock matches object and puts it in the rule object. This allows us to directly set the return value in one statement without having to create a mock for the matches method. Now that we've seen how to set the return value, we can go ahead and change our test to use a mocked rule object, as shown in the following:    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = mock.MagicMock(spec=PriceRule)        rule.matches.return_value = True        rule.depends_on.return_value = {"GOOG"}        action = mock.MagicMock()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") There are two calls that the Alert makes to the rule: one to the depends_on method and the other to the matches method. We set the return value for both of them and the test passes. In case no return value is explicitly set for a call, the default return value is to return a new mock object. The mock object is different for each method that is called, but is consistent for a particular method. This means if the same method is called multiple times, the same mock object will be returned each time. Mocking side effects Finally, we come to the Stock class. This is the final dependency of the Alert class. We're currently creating Stock objects in our test, but we could replace it with a mock object just like we did for the Action and PriceRule classes. The Stock class is again slightly different in behavior from the other two mock objects. The update method doesn't just return a value—it's primary behavior in this test is to trigger the updated event. Only if this event is triggered will the rule check occur. 
In order to do this, we must tell our mock stock class to fire the event when the update event is called. Mock objects have a side_effect attribute to enable us to do just this. There are many reasons we might want to set a side effect. Some of them are as follows: We may want to call another method, like in the case of the Stock class, which needs to fire the event when the update method is called. To raise an exception: this is particularly useful when testing error situations. Some errors such as a network timeout might be very difficult to simulate, and it is better to test using a mock that simply raises the appropriate exception. To return multiple values: these may be different values each time the mock is called, or specific values, depending on the parameters passed. Setting the side effect is just like setting the return value. The only difference is that the side effect is a lambda function. When the mock is executed, the parameters are passed to the lambda function and the lambda is executed. The following is how we would use this with a mocked out Stock class:    def test_action_is_executed_when_rule_matches(self):        goog = mock.MagicMock(spec=Stock)        goog.updated = Event()        goog.update.side_effect = lambda date, value:                goog.updated.fire(self)        exchange = {"GOOG": goog}      rule = mock.MagicMock(spec=PriceRule)        rule.matches.return_value = True        rule.depends_on.return_value = {"GOOG"}        action = mock.MagicMock()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)         exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") So what is going on in that test? First, we create a mock of the Stock class instead of using the real one. Next, we add in the updated event. We need to do this because the Stock class creates the attribute at runtime in the __init__ scope. Because the attribute is set dynamically, MagicMock does not pick up the attribute from the spec parameter. We are setting an actual Event object here. We could set it as a mock as well, but it is probably overkill to do that. Finally, we set the side effect for the update method in the mock stock object. The lambda takes the two parameters that the method does. In this particular example, we just want to fire the event, so the parameters aren't used in the lambda. In other cases, we might want to perform different actions based on the values of the parameters. Setting the side_effect attribute allows us to do that. Just like with the return_value attribute, the side_effect attribute can also be set in the constructor. Run the test and it should pass. The side_effect attribute can also be set to an exception or a list. If it is set to an exception, then the given exception will be raised when the mock is called, as shown in the following: >>> m = mock.Mock() >>> m.side_effect = Exception() >>> m() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 885, in __call__    return _mock_self._mock_call(*args, **kwargs) File "C:Python34libunittestmock.py", line 941, in _mock_call    raise effect Exception If it is set to a list, then the mock will return the next element of the list each time it is called. 
This is a good way to mock a function that has to return different values each time it is called, as shown in the following: >>> m = mock.Mock() >>> m.side_effect = [1, 2, 3] >>> m() 1 >>> m() 2 >>> m() 3 >>> m() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 885, in __call__    return _mock_self._mock_call(*args, **kwargs) File "C:Python34libunittestmock.py", line 944, in _mock_call    result = next(effect) StopIteration As we have seen, the mocking framework's method of handling side effects using the side_effect attribute is very simple, yet quite powerful. How much mocking is too much? In the previous few sections, we've seen the same test written with different levels of mocking. We started off with a test that didn't use any mocks at all, and subsequently mocked out each of the dependencies one by one. Which one of these solutions is the best? As with many things, this is a point of personal preference. A purist would probably choose to mock out all dependencies. My personal preference is to use real objects when they are small and self-contained. I would not have mocked out the Stock class. This is because mocks generally require some configuration with return values or side effects, and this configuration can clutter the test and make it less readable. For small, self-contained classes, it is simpler to just use the real object. At the other end of the spectrum, classes that might interact with external systems, or that take a lot of memory, or are slow are good candidates for mocking out. Additionally, objects that require a lot of dependencies on other object to initialize are candidates for mocking. With mocks, you just create an object, pass it in, and assert on parts that you are interested in checking. You don't have to create an entirely valid object. Even here there are alternatives to mocking. For example, when dealing with a database, it is common to mock out the database calls and hardcode a return value into the mock. This is because the database might be on another server, and accessing it makes the tests slow and unreliable. However, instead of mocks, another option could be to use a fast in-memory database for the tests. This allows us to use a live database instead of a mocked out database. Which approach is better depends on the situation. Mocks versus stubs versus fakes versus spies We've been talking about mocks so far, but we've been a little loose on the terminology. Technically, everything we've talked about falls under the category of a test double. A test double is some sort of fake object that we use to stand in for a real object in a test case. Mocks are a specific kind of test double that record information about calls that have been made to it, so that we can assert on them later. Stubs are just an empty do-nothing kind of object or method. They are used when we don't care about some functionality in the test. For example, imagine we have a method that performs a calculation and then sends an e-mail. If we are testing the calculation logic, we might just replace the e-mail sending method with an empty do-nothing method in the test case so that no e-mails are sent out while the test is running. Fakes are a replacement of one object or system with a simpler one that facilitates easier testing. Using an in-memory database instead of the real one, or the way we created a dummy TestAction earlier in this article would be examples of fakes. Finally, spies are objects that are like middlemen. 
Like mocks, they record the calls so that we can assert on them later, but after recording, they continue execution to the original code. Spies are different from the other three in the sense that they do not replace any functionality. After recording the call, the real code is still executed. Spies sit in the middle and do not cause any change in execution pattern. Summary In this article, you looked at how to use mocks to test interactions between objects. You saw how to hand write our own mocks, followed by using the mocking framework provided in the Python standard library. Resources for Article: Further resources on this subject: Analyzing a Complex Dataset [article] Solving problems – closest good restaurant [article] Importing Dynamic Data [article]
Constructing Common UI Widgets

Packt
22 Apr 2015
21 min read
One of the biggest features that draws developers to Ext JS is the vast array of UI widgets available out of the box. The ease with which they can be integrated with each other and the attractive and consistent visuals each of them offers is also a big attraction. No other framework can compete on this front, and this is a huge reason Ext JS leads the field of large-scale web applications. In this article by Stuart Ashworth and Andrew Duncan by authors of the book, Ext JS Essentials, we will look at how UI widgets fit into the framework's structure, how they interact with each other, and how we can retrieve and reference them. We will then delve under the surface and investigate the lifecycle of a component and the stages it will go through during the lifetime of an application. (For more resources related to this topic, see here.) Anatomy of a UI widget Every UI element in Ext JS extends from the base component class Ext.Component. This class is responsible for rendering UI elements to the HTML document. They are generally sized and positioned by layouts used by their parent components and participate in the automatic component lifecycle process. You can imagine an instance of Ext.Component as a single section of the user interface in a similar way that you might think of a DOM element when building traditional web interfaces. Each subclass of Ext.Component builds upon this simple fact and is responsible for generating more complex HTML structures or combining multiple Ext.Components to create a more complex interface. Ext.Component classes, however, can't contain other Ext.Components. To combine components, one must use the Ext.container.Container class, which itself extends from Ext.Component. This class allows multiple components to be rendered inside it and have their size and positioning managed by the framework's layout classes. Components and HTML Creating and manipulating UIs using components requires a slightly different way of thinking than you may be used to when creating interactive websites with libraries such as jQuery. The Ext.Component class provides a layer of abstraction from the underlying HTML and allows us to encapsulate additional logic to build and manipulate this HTML. This concept is different from the way other libraries allow you to manipulate UI elements and provides a hurdle for new developers to get over. The Ext.Component class generates HTML for us, which we rarely need to interact with directly; instead, we manipulate the configuration and properties of the component. The following code and screenshot show the HTML generated by a simple Ext.Component instance: var simpleComponent = Ext.create('Ext.Component', { html   : 'Ext JS Essentials!', renderTo: Ext.getBody() }); As you can see, a simple <DIV> tag is created, which is given some CSS classes and an autogenerated ID, and has the HTML config displayed inside it. This generated HTML is created and managed by the Ext.dom.Element class, which wraps a DOM element and its children, offering us numerous helper methods to interrogate and manipulate it. After it is rendered, each Ext.Component instance has the element instance stored in its el property. You can then use this property to manipulate the underlying HTML that represents the component. As mentioned earlier, the el property won't be populated until the component has been rendered to the DOM. You should put logic dependent on altering the raw HTML of the component in an afterrender event listener or override the afterRender method. 
The following example shows how you can manipulate the underlying HTML once the component has been rendered. It will set the background color of the element to red: Ext.create('Ext.Component', { html     : 'Ext JS Essentials!', renderTo : Ext.getBody(), listeners: {    afterrender: function(comp) {      comp.el.setStyle('background-color', 'red');    } } }); It is important to understand that digging into and updating the HTML and CSS that Ext JS creates for you is a dangerous game to play and can result in unexpected results when the framework tries to update things itself. There is usually a framework way to achieve the manipulations you want to include, which we recommend you use first. We always advise new developers to try not to fight the framework too much when starting out. Instead, we encourage them to follow its conventions and patterns, rather than having to wrestle it to do things in the way they may have previously done when developing traditional websites and web apps. The component lifecycle When a component is created, it follows a lifecycle process that is important to understand, so as to have an awareness of the order in which things happen. By understanding this sequence of events, you will have a much better idea of where your logic will fit and ensure you have control over your components at the right points. The creation lifecycle The following process is followed when a new component is instantiated and rendered to the document by adding it to an existing container. When a component is shown explicitly (for example, without adding to a parent, such as a floating component) some additional steps are included. These have been denoted with a * in the following process. constructor First, the class' constructor function is executed, which triggers all of the other steps in turn. By overriding this function, we can add any setup code required for the component. Config options processed The next thing to be handled is the config options that are present in the class. This involves each option's apply and update methods being called, if they exist, meaning the values are available via the getter from now onwards. initComponent The initComponent method is now called and is generally used to apply configurations to the class and perform any initialization logic. render Once added to a container, or when the show method is called, the component is rendered to the document. boxready At this stage, the component is rendered and has been laid out by its parent's layout class, and is ready at its initial size. This event will only happen once on the component's first layout. activate (*) If the component is a floating item, then the activate event will fire, showing that the component is the active one on the screen. This will also fire when the component is brought back to focus, for example, in a Tab panel when a tab is selected. show (*) Similar to the previous step, the show event will fire when the component is finally visible on screen. The destruction process When we are removing a component from the Viewport and want to destroy it, it will follow a destruction sequence that we can use to ensure things are cleaned up sufficiently, so as to avoid memory leaks and so on. The framework takes care of the majority of this cleanup for us, but it is important that we tidy up any additional things we instantiate. hide (*) When a component is manually hidden (using the hide method), this event will fire and any additional hide logic can be included here. 
deactivate (*)

Similar to the activate step, this is fired when the component becomes inactive. As with the activate step, this will happen when floating and nested components are hidden and are no longer the items under focus.

destroy

This is the final step in the teardown process and is where the component and its internal properties and objects are cleaned up. At this stage, it is best to remove event handlers, destroy subclasses, and ensure any other references are released.

Component Queries

Ext JS boasts a powerful system to retrieve references to components called Component Queries. This is a CSS/XPath style query syntax that lets us target broad sets or specific components within our application. For example, within our controller, we may want to find a button with the text "Save" within a component of type MyForm. In this section, we will demonstrate the Component Query syntax and how it can be used to select components. We will also go into detail about how it can be used within Ext.container.Container classes to scope selections.

xtypes

Before we dive in, it is important to understand the concept of xtypes in Ext JS. An xtype is a shorthand name for an Ext.Component that allows us to identify its declarative component configuration objects. For example, we can create a new Ext.Component as a child of an Ext.container.Container using an xtype with the following code:

Ext.create('Ext.Container', {
    items: [
        {
            xtype: 'component',
            html : 'My Component!'
        }
    ]
});

Using xtypes allows you to lazily instantiate components when required, rather than having them all created upfront. Common component xtypes include:

Classes                    xtypes
Ext.tab.Panel              tabpanel
Ext.container.Container    container
Ext.grid.Panel             gridpanel
Ext.Button                 button

xtypes form the basis of our Component Query syntax in the same way that element types (for example, div, p, span, and so on) do for CSS selectors. We will use these heavily in the following examples.
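Custom components register their own xtype through the alias config when the class is defined. The following sketch is not taken from the book's code (the class and widget names here are invented purely for illustration), but it shows how an alias of the form widget.name makes a new xtype available for lazy instantiation inside an items array:

Ext.define('MyApp.view.InfoPanel', {
    extend: 'Ext.Component',
    alias : 'widget.infopanel', // registers the 'infopanel' xtype
    html  : 'Some reusable content'
});

Ext.create('Ext.container.Container', {
    renderTo: Ext.getBody(),
    items   : [
        {
            xtype: 'infopanel' // instantiated lazily by the parent container
        }
    ]
});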
Sample component structure We will use the following sample component structure—a panel with a child tab panel, form, and buttons—to perform our example queries on: var panel = Ext.create('Ext.panel.Panel', { height : 500, width : 500, renderTo: Ext.getBody(), layout: {    type : 'vbox',    align: 'stretch' }, items : [    {      xtype : 'tabpanel',      itemId: 'mainTabPanel',      flex : 1,      items : [        {          xtype : 'panel',          title : 'Users',          itemId: 'usersPanel',          layout: {            type : 'vbox',            align: 'stretch'            },            tbar : [              {                xtype : 'button',                text : 'Edit',                itemId: 'editButton'                }              ],              items : [                {                  xtype : 'form',                  border : 0,                  items : [                  {                      xtype : 'textfield',                      fieldLabel: 'Name',                      allowBlank: false                    },                    {                      xtype : 'textfield',                      fieldLabel: 'Email',                      allowBlank: false                    }                  ],                  buttons: [                    {                      xtype : 'button',                      text : 'Save',                      action: 'saveUser'                    }                  ]                },                {                  xtype : 'grid',                  flex : 1,                  border : 0,                  columns: [                    {                     header : 'Name',                      dataIndex: 'Name',                      flex : 1                    },                    {                      header : 'Email',                      dataIndex: 'Email'                    }                   ],                  store : Ext.create('Ext.data.Store', {                    fields: [                      'Name',                      'Email'                    ],                    data : [                      {                        Name : 'Joe Bloggs',                        Email: 'joe@example.com'                      },                      {                        Name : 'Jane Doe',                        Email: 'jane@example.com'                      }                    ]                  })                }              ]            }          ]        },        {          xtype : 'component',          itemId : 'footerComponent',          html : 'Footer Information',          extraOptions: {            option1: 'test',            option2: 'test'          },          height : 40        }      ]    }); Queries with Ext.ComponentQuery The Ext.ComponentQuery class is used to perform Component Queries, with the query method primarily used. This method accepts two parameters: a query string and an optional Ext.container.Container instance to use as the root of the selection (that is, only components below this one in the hierarchy will be returned). The method will return an array of components or an empty array if none are found. We will work through a number of scenarios and use Component Queries to find a specific set of components. Finding components based on xtype As we have seen, we use xtypes like element types in CSS selectors. 
We can select all the Ext.panel.Panel instances using their xtype, panel:

var panels = Ext.ComponentQuery.query('panel');

We can also add the concept of hierarchy by including a second xtype separated by a space. The following code will select all Ext.Button instances that are descendants (at any level) of an Ext.panel.Panel class:

var buttons = Ext.ComponentQuery.query('panel button');

We could also use the > character to limit it to buttons that are direct descendants of a panel:

var directDescendantButtons = Ext.ComponentQuery.query('panel > button');

Finding components based on attributes

It is simple to select a component based on the value of a property. We use the XPath syntax to specify the attribute and the value. The following code will select buttons with an action attribute of saveUser:

var saveButtons = Ext.ComponentQuery.query('button[action="saveUser"]');

Finding components based on itemIds

ItemIds are commonly used to retrieve components, and they are specially optimized for performance within the ComponentQuery class. They should be unique only within their parent container, and not globally unique like the id config. To select a component based on itemId, we prefix the itemId with a # symbol:

var usersPanel = Ext.ComponentQuery.query('#usersPanel');

Finding components based on member functions

It is also possible to identify matching components based on the result of a function of that component. For example, we can select all text fields whose values are valid (that is, when a call to the isValid method returns true):

var validFields = Ext.ComponentQuery.query('form > textfield{isValid()}');

Scoped Component Queries

All of our previous examples will search the entire component tree to find matches, but often we may want to keep our searches local to a specific container and its descendants. This can help reduce the complexity of the query and improve performance, as fewer components have to be processed. Ext.Containers have three handy methods to do this: up, down, and query. We will take each of these in turn and explain their features.

up

This method accepts a selector and will traverse up the hierarchy to find a single matching parent component. This can be useful to find the grid panel that a button belongs to, so an action can be taken on it:

var grid = button.up('gridpanel');

down

This returns the first descendant component that matches the given selector:

var firstButton = grid.down('button');

query

The query method performs much like Ext.ComponentQuery.query, but is automatically scoped to the current container. This means that it will search all descendant components of the current container and return all matching ones as an array:

var allButtons = grid.query('button');

Hierarchical data with trees

Now that we know and understand components, their lifecycle, and how to retrieve references to them, we will move on to more specific UI widgets. The tree panel component allows us to display hierarchical data in a way that reflects the data's structure and relationships. In our application, we are going to use a tree panel to represent our navigation structure to allow users to see how the different areas of the app are linked and structured.

Binding to a data source

Like all other data-bound components, tree panels must be bound to a data store; in this particular case, it must be an Ext.data.TreeStore instance or subclass, as it takes advantage of the extra features added to this specialist store class.
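The tree panel in the next section binds to its store by storeId, and the Navigation TreeStore itself is not shown in this excerpt. As a rough sketch of what such a store could look like (the structure, fields, and data below are assumptions for illustration, not the book's actual BizDash code), consider the following:

Ext.define('BizDash.store.Navigation', {
    extend : 'Ext.data.TreeStore',
    storeId: 'Navigation', // matches the store config used by the tree panel
    fields : ['Label'],    // the field displayed by the tree's treecolumn
    root   : {
        expanded: true,
        children: [
            { Label: 'Dashboard', leaf: true },
            {
                Label   : 'Administration',
                expanded: true,
                children: [
                    { Label: 'Users', leaf: true }
                ]
            }
        ]
    }
});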
We will make use of the BizDash.store.Navigation TreeStore to bind to our tree panel. Defining a tree panel The tree panel is defined in the Ext.tree.Panel class (which has an xtype of treepanel), which we will extend to create a custom class called BizDash.view.navigation.NavigationTree: Ext.define('BizDash.view.navigation.NavigationTree', { extend: 'Ext.tree.Panel', alias: 'widget.navigation-NavigationTree', store : 'Navigation', columns: [    {      xtype : 'treecolumn',      text : 'Navigation',      dataIndex: 'Label',      flex : 1    } ], rootVisible: false, useArrows : true }); We configure the tree to be bound to our TreeStore by using its storeId, in this case, Navigation. A tree panel is a subclass of the Ext.panel.Table class (similar to the Ext.grid.Panel class), which means it must have a columns configuration present. This tells the component what values to display as part of the tree. In a simple, traditional tree, we might only have one column showing the item and its children; however, we can define multiple columns and display additional fields in each row. This would be useful if we were displaying, for example, files and folders and wanted to have additional columns to display the file type and file size of each item. In our example, we are only going to have one column, displaying the Label field. We do this by using the treecolumn xtype, which is responsible for rendering the tree's navigation elements. Without defining treecolumn, the component won't display correctly. The treecolumn xtype's configuration allows us to define which of the attached data model's fields to use (dataIndex), the column's header text (text), and the fact that the column should fill the horizontal space. Additionally, we set the rootVisible to false, so the data's root is hidden, as it has no real meaning other than holding the rest of the data together. Finally, we set useArrows to true, so the items with children use an arrow instead of the +/- icon. Summary In this article, we have learnt how Ext JS' components fit together and the lifecycle that they follow when created and destroyed. We covered the component lifecycle and Component Queries. Resources for Article: Further resources on this subject: So, what is Ext JS? [article] Function passing [article] Static Data Management [article]