Exploring the Strategy Behavioral Design Pattern in Node.js

Expert Network
02 Jun 2021
10 min read
A design pattern is a reusable solution to a recurring problem. The term is really broad in its definition and can span multiple domains of an application. However, the term is often associated with a well-known set of object-oriented patterns that were popularized in the 90s by the book Design Patterns: Elements of Reusable Object-Oriented Software (Pearson Education) by the almost legendary Gang of Four (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.

This article is an excerpt from the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino, a comprehensive guide for learning proven patterns, techniques, and tricks to take full advantage of the Node.js platform.

In this article, we'll look at the behavior of components in software design. We'll learn how to combine objects and how to define the way they communicate so that the behavior of the resulting structure becomes extensible, modular, reusable, and adaptable. After introducing all the behavioral design patterns, we will dive deep into the details of the Strategy pattern. Now, it's time to roll up your sleeves and get your hands dirty with some behavioral design patterns.

Types of Behavioral Design Patterns

- The Strategy pattern allows us to extract the common parts of a family of closely related components into a component called the context, and lets us define strategy objects that the context can use to implement specific behaviors.
- The State pattern is a variation of the Strategy pattern where the strategies are used to model the behavior of a component under different states.
- The Template pattern, instead, can be considered the "static" version of the Strategy pattern, where the different specific behaviors are implemented as subclasses of the template class, which models the common parts of the algorithm.
- The Iterator pattern provides us with a common interface to iterate over a collection. It has now become a core pattern in Node.js: JavaScript offers native support for it (with the iterator and iterable protocols). Iterators can be used as an alternative to complex async iteration patterns and even to Node.js streams.
- The Middleware pattern allows us to define a modular chain of processing steps. This is a very distinctive pattern born from within the Node.js ecosystem. It can be used to preprocess and postprocess data and requests.
- The Command pattern materializes the information required to execute a routine, allowing such information to be easily transferred, stored, and processed.

The Strategy Pattern

The Strategy pattern enables an object, called the context, to support variations in its logic by extracting the variable parts into separate, interchangeable objects called strategies. The context implements the common logic of a family of algorithms, while a strategy implements the mutable parts, allowing the context to adapt its behavior depending on different factors, such as an input value, a system configuration, or user preferences. Strategies are usually part of a family of solutions, and all of them implement the same interface expected by the context. The following figure shows the situation we just described:

Figure 1: General structure of the Strategy pattern

Figure 1 shows how the context object can plug different strategies into its structure as if they were replaceable parts of a piece of machinery. Imagine a car; its tires can be considered its strategy for adapting to different road conditions.
We can fit winter tires to go on snowy roads thanks to their studs, while we can decide to fit high-performance tires for traveling mainly on motorways for a long trip. On the one hand, we don't want to change the entire car for this to be possible, and on the other, we don't want a car with eight wheels so that it can go on every possible road.

The Strategy pattern is particularly useful in all those situations where supporting variations in the behavior of a component requires complex conditional logic (lots of if...else or switch statements) or mixing different components of the same family.

Imagine an object called Order that represents an online order on an e-commerce website. The object has a method called pay() that, as the name says, finalizes the order and transfers the funds from the user to the online store. To support different payment systems, we have a couple of options:

- Use an if...else statement in the pay() method to complete the operation based on the chosen payment option
- Delegate the logic of the payment to a strategy object that implements the logic for the specific payment gateway selected by the user

In the first solution, our Order object cannot support other payment methods unless its code is modified. Also, this can become quite complex when the number of payment options grows. Instead, using the Strategy pattern enables the Order object to support a virtually unlimited number of payment methods and keeps its scope limited to only managing the details of the user, the purchased items, and the relative price, while delegating the job of completing the payment to another object. Let's now demonstrate this pattern with a simple, realistic example.

Multi-format configuration objects

Let's consider an object called Config that holds a set of configuration parameters used by an application, such as the database URL, the listening port of the server, and so on. The Config object should provide a simple interface to access these parameters, but also a way to import and export the configuration using persistent storage, such as a file. We want to be able to support different formats to store the configuration, for example, JSON, INI, or YAML. By applying what we learned about the Strategy pattern, we can immediately identify the variable part of the Config object, which is the functionality that allows us to serialize and deserialize the configuration. This is going to be our strategy.

Creating a new module

Let's create a new module called config.js, and let's define the generic part of our configuration manager:

```javascript
import { promises as fs } from 'fs'
import objectPath from 'object-path'

export class Config {
  constructor (formatStrategy) {                           // (1)
    this.data = {}
    this.formatStrategy = formatStrategy
  }

  get (configPath) {                                       // (2)
    return objectPath.get(this.data, configPath)
  }

  set (configPath, value) {                                // (2)
    return objectPath.set(this.data, configPath, value)
  }

  async load (filePath) {                                  // (3)
    console.log(`Deserializing from ${filePath}`)
    this.data = this.formatStrategy.deserialize(
      await fs.readFile(filePath, 'utf-8')
    )
  }

  async save (filePath) {                                  // (3)
    console.log(`Serializing to ${filePath}`)
    await fs.writeFile(filePath,
      this.formatStrategy.serialize(this.data))
  }
}
```

This is what's happening in the preceding code:

1. In the constructor, we create an instance variable called data to hold the configuration data.
   Then we also store formatStrategy, which represents the component that we will use to parse and serialize the data.
2. We provide two methods, set() and get(), to access the configuration properties using a dotted path notation (for example, property.subProperty) by leveraging a library called object-path (nodejsdp.link/object-path).
3. The load() and save() methods are where we delegate, respectively, the deserialization and serialization of the data to our strategy. This is where the logic of the Config class is altered based on the formatStrategy passed as an input in the constructor.

As we can see, this very simple and neat design allows the Config object to seamlessly support different file formats when loading and saving its data. The best part is that the logic to support those various formats is not hardcoded anywhere, so the Config class can adapt without any modification to virtually any file format, given the right strategy.

Creating format strategies

To demonstrate this characteristic, let's now create a couple of format strategies in a file called strategies.js. Let's start with a strategy for parsing and serializing data using the INI file format, which is a widely used configuration format (more info about it here: nodejsdp.link/ini-format). For the task, we will use an npm package called ini (nodejsdp.link/ini):

```javascript
import ini from 'ini'

export const iniStrategy = {
  deserialize: data => ini.parse(data),
  serialize: data => ini.stringify(data)
}
```

Nothing really complicated! Our strategy simply implements the agreed interface, so that it can be used by the Config object. Similarly, the next strategy that we are going to create allows us to support the JSON file format, widely used in JavaScript and in the web development ecosystem in general:

```javascript
export const jsonStrategy = {
  deserialize: data => JSON.parse(data),
  serialize: data => JSON.stringify(data, null, '  ')
}
```

Now, to show you how everything comes together, let's create a file named index.js, and let's try to load and save a sample configuration using different formats:

```javascript
import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

async function main () {
  const iniConfig = new Config(iniStrategy)
  await iniConfig.load('samples/conf.ini')
  iniConfig.set('book.nodejs', 'design patterns')
  await iniConfig.save('samples/conf_mod.ini')

  const jsonConfig = new Config(jsonStrategy)
  await jsonConfig.load('samples/conf.json')
  jsonConfig.set('book.nodejs', 'design patterns')
  await jsonConfig.save('samples/conf_mod.json')
}

main()
```

Our test module reveals the core properties of the Strategy pattern. We defined only one Config class, which implements the common parts of our configuration manager; then, by using different strategies for serializing and deserializing data, we created different Config instances supporting different file formats.

The example we've just seen shows only one of the possible alternatives that we had for selecting a strategy. Other valid approaches might have been the following:

- Creating two different strategy families: one for deserialization and the other for serialization. This would have allowed reading from one format and saving to another.
- Dynamically selecting the strategy: depending on the extension of the file provided, the Config object could have maintained a map extension → strategy and used it to select the right algorithm for the given extension (see the sketch after this list).
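As a rough illustration of that second approach, a small factory could own the extension map and hand back a ready-made Config. This is a minimal sketch of our own, not code from the book; the createConfig() helper and the map are purely illustrative:

```javascript
import { extname } from 'path'
import { Config } from './config.js'
import { iniStrategy, jsonStrategy } from './strategies.js'

// Hypothetical factory: pick the strategy based on the file extension
const strategies = new Map([
  ['.ini', iniStrategy],
  ['.json', jsonStrategy]
])

export function createConfig (filePath) {
  const strategy = strategies.get(extname(filePath))
  if (!strategy) {
    throw new Error(`Unsupported config format: ${extname(filePath)}`)
  }
  return new Config(strategy)
}
```

With a factory like this, callers no longer need to know which strategy matches which format, and supporting a new format becomes a single new entry in the map.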
As we can see, we have several options for selecting the strategy to use, and the right one depends only on your requirements and the tradeoff between features and simplicity that you want to obtain. Furthermore, the implementation of the pattern itself can vary a lot as well. For example, in its simplest form, the context and the strategy can both be simple functions:

```javascript
function context(strategy) {...}
```

Even though this may seem insignificant, it should not be underestimated in a programming language such as JavaScript, where functions are first-class citizens and are used as much as fully-fledged objects. Between all these variations, though, what does not change is the idea behind the pattern; as always, the implementation can change slightly, but the core concepts that drive the pattern are always the same.

Summary

In this article, we dove deep into the details of the Strategy pattern, one of the behavioral design patterns in Node.js. Learn more in the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino.

About the Authors

Mario Casciaro is a software engineer and entrepreneur. Mario worked at IBM for a number of years, first in Rome, then in the Dublin Software Lab. He currently splits his time between Var7 Technologies (his own software company) and his role as lead engineer at D4H Technologies, where he creates software for emergency response teams.

Luciano Mammino wrote his first line of code at the age of 12 on his father's old i386. Since then, he has never stopped coding. He is currently working at FabFitFun as a principal software engineer, where he builds microservices to serve millions of users every day.
Installing jQuery

Packt
04 Jun 2015
25 min read
In this article by Alex Libby, author of the book Mastering jQuery, we will examine some of the options available to help develop your skills even further.

Local or CDN, I wonder…? Which version…? Do I support old IE…? Installing jQuery is a thankless task that has to be done countless times by any developer—it is easy to imagine that person asking some of these questions. It is easy to see why most people go with the option of using a Content Delivery Network (CDN) link, but there is more to installing jQuery than taking the easy route! There are more options available, where we can be really specific about what we need to use. Throughout this article, we'll cover a number of topics, which include:

- Downloading and installing jQuery
- Customizing jQuery downloads
- Building from Git
- Using other sources to install jQuery
- Adding source map support
- Working with Modernizr as a fallback

Intrigued? Let's get started.

Downloading and installing jQuery

As with all projects that require the use of jQuery, we must start somewhere—no doubt you've downloaded and installed jQuery a thousand times; let's just quickly recap to bring ourselves up to speed. If we browse to http://www.jquery.com/download, we can download jQuery using one of two methods: downloading the compressed production version or the uncompressed development version. If we don't need to support old IE (IE6, 7, and 8), then we can choose the 2.x branch. If, however, you still have some diehards who can't (or don't want to) upgrade, then the 1.x branch must be used instead. To include jQuery, we just need to add this link to our page:

```html
<script src="http://code.jquery.com/jquery-X.X.X.js"></script>
```

Here, X.X.X marks the version number of jQuery or the Migrate plugin that is being used in the page.

Conventional wisdom states that the jQuery script (and this includes the Migrate plugin too) should be added in the <head> tag, although there are valid arguments for adding it as the last statement before the closing <body> tag; placing it there may help speed up loading times to your site. This argument is not set in stone; there may be instances where placing it in the <head> tag is necessary, and this choice should be left to the developer's requirements. My personal preference is to place it in the <head> tag, as it provides a clean separation of the script (and the CSS) code from the main markup in the body of the page, particularly on lighter sites. I have even seen some developers argue that there is little perceived difference whether jQuery is added at the top or at the bottom; some systems, such as WordPress, include jQuery in the <head> section too, so either will work. The key here, though, is that if you are perceiving slowness, then move your scripts to just before the closing </body> tag, which is considered a better practice.

Using jQuery in a development capacity

A useful point to note at this stage is that best practice recommends that CDN links should not be used within a development capacity; instead, the uncompressed files should be downloaded and referenced locally. Once the site is complete and ready to be uploaded, then CDN links can be used.

Adding the jQuery Migrate plugin

If you've used any version of jQuery prior to 1.9, then it is worth adding the jQuery Migrate plugin to your pages.
The jQuery Core team made some significant changes to jQuery from this version; the Migrate plugin will temporarily restore the functionality until such time that the old code can be updated or replaced. The plugin adds three properties and a method to the jQuery object, which we can use to control its behavior:

| Property or Method | Comments |
| --- | --- |
| jQuery.migrateWarnings | An array of string warning messages that have been generated by the code on the page, in the order in which they were generated. Messages appear in the array only once, even if the condition has occurred multiple times, unless jQuery.migrateReset() is called. |
| jQuery.migrateMute | Set this property to true in order to prevent console warnings from being generated in the debugging version. If this property is set, the jQuery.migrateWarnings array is still maintained, which allows programmatic inspection without console output. |
| jQuery.migrateTrace | Set this property to false if you want warnings but don't want traces to appear on the console. |
| jQuery.migrateReset() | This method clears the jQuery.migrateWarnings array and "forgets" the list of messages that have been seen already. |

Adding the plugin is equally simple—all you need to do is add a link similar to this, where X.X.X represents the version number of the plugin that is used:

```html
<script src="http://code.jquery.com/jquery-migrate-X.X.X.js"></script>
```

If you want to learn more about the plugin and obtain the source code, then it is available for download from https://github.com/jquery/jquery-migrate.

Using a CDN

We can equally use a CDN link to provide our jQuery library—the principal link is provided by MaxCDN for the jQuery team, with the current version available at http://code.jquery.com. We can, of course, use CDN links from some alternative sources, if preferred—a reminder of these is as follows:

- Google (https://developers.google.com/speed/libraries/devguide#jquery)
- Microsoft (http://www.asp.net/ajaxlibrary/cdn.ashx#jQuery_Releases_on_the_CDN_0)
- CDNJS (http://cdnjs.com/libraries/jquery/)
- jsDelivr (http://www.jsdelivr.com/#%!jquery)

Don't forget, though, that if needed, we can always save a copy of the file provided on a CDN locally and reference that instead. The jQuery CDN will always have the latest version, although it may take a couple of days for updates to appear via the other links.

Using other sources to install jQuery

Right. Okay, let's move on and develop some code! "What's next?" I hear you ask. Aha! If you thought downloading and installing jQuery from the main site was the only way to do this, then you are wrong! After all, this is about mastering jQuery, so you didn't think I would only talk about something that I am sure you are already familiar with, right? Yes, there are more options available to us to install jQuery than simply using the CDN or main download page. Let's begin by taking a look at using Node. Each demo is based on Windows, as this is the author's preferred platform; alternatives are given, where possible, for other platforms.

Using Node.js to install jQuery

So far, we've seen how to download and reference jQuery, which is to use the download from the main jQuery site or via a CDN. The downside of this method is the manual work required to keep our versions of jQuery up to date! Instead, we can use a package manager to help manage our assets. Node.js is one such system.
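The payoff is that, once saved (pass --save on older versions of npm), the dependency is recorded in the project's package.json, so upgrading jQuery later is a single npm update jquery rather than another manual download. A minimal sketch of such a file follows; the project name and version range here are our own illustration, not from the book:

```json
{
  "name": "my-site",
  "version": "1.0.0",
  "dependencies": {
    "jquery": "^2.1.0"
  }
}
```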
Let's take a look at the steps that need to be performed in order to get jQuery installed:

1. We first need to install Node.js—head over to http://www.nodejs.org in order to download the package for your chosen platform; accept all the defaults when working through the wizard (for Mac and PC).
2. Next, fire up a Node Command Prompt and then change to your project folder. In the prompt, enter this command:

   npm install jquery

3. Node will fetch and install jQuery—it displays a confirmation message when the installation is complete. You can then reference jQuery by using this link: <name of drive>:\website\node_modules\jquery\dist\jquery.min.js

Node is now installed and ready for use—although we've installed it in a folder locally, in reality, we will most likely install it within a subfolder of our local web server. For example, if we're running WampServer, we can install it, then copy it into the /wamp/www/js folder, and reference it using http://localhost/js/jquery.min.js. If you want to take a look at the source of the jQuery Node Package Manager (npm) package, then check out https://www.npmjs.org/package/jquery.

Using Node to install jQuery makes our work simpler, but at a cost. Node.js (and its package manager, npm) is primarily aimed at installing and managing JavaScript components and expects packages to follow the CommonJS standard. The downside of this is that there is no scope to manage any of the other assets that are often used within websites, such as fonts, images, CSS files, or even HTML pages. "Why would this be an issue?" I hear you ask. Simple: why make life hard for ourselves when we can manage all of these assets automatically and still use Node?

Installing jQuery using Bower

A relatively new addition to the library is the support for installation using Bower—based on Node, it's a package manager that takes care of fetching and installing packages from over the Internet. It is designed to be far more flexible about managing the handling of multiple types of assets (such as images, fonts, and CSS files) and does not interfere with how these components are used within a page (unlike Node). For the purpose of this demo, I will assume that you have already installed it; if not, you will need to do so before continuing with the following steps:

1. Bring up the Node Command Prompt, change to the drive where you want to install jQuery, and enter this command:

   bower install jquery

This will download and install the script, displaying confirmation of the version installed when it has completed. By default, Bower will install jQuery in its bower_components folder on your PC. Within bower_components/jquery/dist/, we will find an uncompressed version, a compressed release, and a source map file. We can then reference jQuery in our script using this line:

```html
<script src="/bower_components/jquery/jquery.js"></script>
```

We can take this further, though. If we don't want to install the extra files that come with a Bower installation by default, we can simply enter this in a Command Prompt instead to install just the minified version 2.1.0 of jQuery:

   bower install http://code.jquery.com/jquery-2.1.0.min.js

Now, we can be really clever at this point; as Bower uses Node's JSON files to control what should be installed, we can use this to be really selective and set Bower to install additional components at the same time.
Let's take a look and see how this will work—in the following example, we'll use Bower to install jQuery 2.1 and 1.11 (the latter to provide support for IE6-8):

1. In the Node Command Prompt, enter the following command:

   bower init

2. This will prompt you for answers to a series of questions, at which point you can either fill out information or press Enter to accept the defaults.
3. Look in the project folder; you should find a bower.json file within. Open it in your favorite text editor and then alter the code as shown here:

```json
{
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "dependencies": {
    "jquery-legacy": "jquery#1.11.1",
    "jquery-modern": "jquery#2.1.0"
  }
}
```

At this point, you have a bower.json file that is ready for use. Bower is built on top of Git, so in order to install jQuery using your file, you would normally need to publish it to the Bower repository. Instead, you can install an additional Bower package, which will allow you to install your custom package without the need to publish it to the Bower repository:

1. In the Node Command Prompt window, enter the following at the prompt:

   npm install -g bower-installer

2. When the installation is complete, change to your project folder and then enter this command line:

   bower-installer

3. The bower-installer command will now download and install both versions of jQuery.

At this stage, you now have jQuery installed using Bower. You're free to upgrade or remove jQuery using the normal Bower process at some point in the future. If you want to learn more about how to use Bower, there are plenty of references online; https://www.openshift.com/blogs/day-1-bower-manage-your-client-side-dependencies is a good example of a tutorial that will help you get accustomed to using Bower. In addition, there is a useful article that discusses both Bower and Node, available at http://tech.pro/tutorial/1190/package-managers-an-introductory-guide-for-the-uninitiated-front-end-developer.

Bower isn't the only way to install jQuery though—while we can use it to install multiple versions of jQuery, for example, we're still limited to installing the entire jQuery library. We can improve on this by referencing only the elements we need within the library. Thanks to some extensive work undertaken by the jQuery Core team, we can use the Asynchronous Module Definition (AMD) approach to reference only those modules that are needed within our website or online application.

Using the AMD approach to load jQuery

In most instances, when using jQuery, developers are likely to simply include a reference to the main library in their code. There is nothing wrong with this per se, but it loads a lot of extra code that is surplus to our requirements. A more efficient method, although one that takes a little effort in getting used to, is to use the AMD approach. In a nutshell, the jQuery team has made the library more modular; this allows you to use a loader such as require.js to load individual modules when needed. It's not suitable for every approach, particularly if you are a heavy user of different parts of the library. However, for those instances where you only need a limited number of modules, this is a perfect route to take. Let's work through a simple example to see what it looks like in practice. Before we start, we need one additional item—the code uses the Fira Sans regular custom font, which is available from Font Squirrel at http://www.fontsquirrel.com/fonts/fira-sans.
Let's make a start using the following steps:

1. The Fira Sans font doesn't come in a web format by default, so we need to convert the font to use the web font format. Go ahead and upload the FiraSans-Regular.otf file to Font Squirrel's web font generator at http://www.fontsquirrel.com/tools/webfont-generator. When prompted, save the converted file to your project folder in a subfolder called fonts.
2. We need to install jQuery and RequireJS into our project folder, so fire up a Node.js Command Prompt and change to the project folder. Next, enter these commands one by one, pressing Enter after each:

   bower install jquery
   bower install requirejs

3. We need to extract a copy of the amd.html and amd.css files—the HTML file contains some simple markup along with a link to require.js; the amd.css file contains some basic styling that we will use in our demo.
4. We now need to add in this code block, immediately below the link for require.js—this handles the calls to jQuery and RequireJS, where we're calling in both jQuery and Sizzle, the selector engine for jQuery:

```html
<script>
  require.config({
    paths: {
      "jquery": "bower_components/jquery/src",
      "sizzle": "bower_components/jquery/src/sizzle/dist/sizzle"
    }
  });
  require(["js/app"]);
</script>
```

5. Now that jQuery has been defined, we need to call in the relevant modules. In a new file, go ahead and add the following code, saving it as app.js in a subfolder marked js within our project folder:

```javascript
define(["jquery/core/init", "jquery/attributes/classes"], function ($) {
  $("div").addClass("decoration");
});
```

We used app.js as the filename to tie in with the require(["js/app"]); reference in the code. If all went well, you should see the class applied when previewing the results of our work in a browser.

Although we've only worked with a simple example here, it's enough to demonstrate how easy it is to call only those modules we need to use in our code rather than the entire jQuery library. True, we still have to provide a link to the library, but this is only to tell our code where to find it; our module code weighs in at 29 KB (10 KB when gzipped), against 242 KB for the uncompressed version of the full library! Now, there may be instances where simply referencing modules using this method isn't the right approach; this may apply if you need to reference lots of different modules regularly. A better alternative is to build a custom version of the jQuery library that contains only the modules we need, with the rest removed during the build. It's a little more involved but worth the effort—let's take a look at what is involved in the process.

Customizing the downloads of jQuery from Git

If we feel so inclined, we can really push the boat out and build a custom version of jQuery using the JavaScript task runner, Grunt. The process is relatively straightforward but involves a few steps; it will certainly help if you have some prior familiarity with Git! The demo assumes that you have already installed Node.js—if you haven't, then you will need to do this first before continuing with the exercise. Okay, let's make a start by performing the following steps:

1. You first need to install Grunt if it isn't already present on your system—bring up the Node.js Command Prompt and enter this command:

   npm install -g grunt-cli

2. Next, install Git—for this, browse to http://msysgit.github.io/ in order to download the package. Double-click on the setup file to launch the wizard; accepting all the defaults is sufficient for our needs.
If you want more information on how to install Git, head over and take a look at https://github.com/msysgit/msysgit/wiki/InstallMSysGit for more details.

3. Once Git is installed, clone the jQuery repository (for example, with git clone https://github.com/jquery/jquery.git), change to the jquery folder from within the Command Prompt, and enter this command to download and install the dependencies needed to build jQuery:

   npm install

4. The final stage of the build process is to build the library into the file we all know and love; from the same Command Prompt, enter this command:

   grunt

5. Browse to the jquery folder—within this will be a folder called dist, which contains our custom build of jQuery, ready for use.

If there are modules within the library that we don't need, we can run a custom build. We can set the Grunt task to remove these when building the library, leaving in those that are needed for our project. For a complete list of all the modules that we can exclude, see https://github.com/jquery/jquery#modules. For example, to remove AJAX support from our build, we can run this command in place of step 4, as shown previously:

   grunt custom:-ajax

This results in a saving of around 30 KB on the original raw version. The JavaScript and map files can now be incorporated into our projects in the usual way. For a detailed tutorial on the build process, this article by Dan Wellman is worth a read (https://www.packtpub.com/books/content/building-custom-version-jquery).

Using a GUI as an alternative

There is an online GUI available that performs much the same task, without the need to install Git or Grunt. It's available at http://projects.jga.me/jquery-builder/, although it is worth noting that it hasn't been updated for a while! Okay, so we have jQuery installed; let's take a look at one more useful feature that will help in the event of debugging errors in our code. Support for source maps has been available within jQuery since version 1.9. Let's take a look at how they work and see a simple example in action.

Adding source map support

Imagine a scenario, if you will, where you've created a killer site, which is running well, until you start getting complaints about problems with some of the jQuery-based functionality that is used on the site. Sounds familiar? Using an uncompressed version of jQuery on a production site is not an option; instead, we can use source maps. Simply put, these map a compressed version of jQuery against the relevant line in the original source. Historically, source maps have given developers a lot of heartache when implementing them, to the extent that the jQuery team had to revert to disabling their automatic use! For best effect, it is recommended that you use a local web server, such as WAMP (PC) or MAMP (Mac), to view this demo, and that you use Chrome as your browser. Source maps are not difficult to implement; let's run through how you can implement them:

1. Extract a copy of the sourcemap folder and save it to your project area locally.
2. Press Ctrl + Shift + I to bring up the Developer Tools in Chrome.
3. Click on Sources, then double-click on the sourcemap.html file in the code window, and finally click on line 17.
4. Now, run the demo in Chrome—we will see it paused; revert back to the developer toolbar, where line 17 is highlighted.
5. The relevant calls to the jQuery library are shown on the right-hand side of the screen. If we double-click on the n.event.dispatch entry on the right, Chrome refreshes the toolbar and displays the original source line (highlighted) from the jQuery library.

It is well worth spending the time to get to know source maps—all the latest browsers support them, including IE11. Even though we've only used a simple example here, it doesn't matter, as the principle is exactly the same no matter how much code is used in the site. For a more in-depth tutorial that covers all the browsers, it is worth heading over to http://blogs.msdn.com/b/davrous/archive/2014/08/22/enhance-your-javascript-debugging-life-thanks-to-the-source-map-support-available-in-ie11-chrome-opera-amp-firefox.aspx.

Adding support for source maps

In the demo we've just previewed, source map support had already been added to the library. It is worth noting, though, that source maps are not included with the current versions of jQuery by default. If you need to download a more recent version or add support for the first time, then follow these steps:

1. Source maps can be downloaded from the main site using http://code.jquery.com/jquery-X.X.X.min.map, where X.X.X represents the version number of jQuery being used.
2. Open a copy of the minified version of the library and then add this line at the end of the file:

   //# sourceMappingURL=jquery.min.map

3. Save it and then store it in the JavaScript folder of your project. Make sure you have copies of both the compressed and uncompressed versions of the library within the same folder.

Let's move on and look at one more critical part of loading jQuery: if, for some unknown reason, jQuery becomes completely unavailable, then we can add a fallback position to our site that allows graceful degradation. It's a small but crucial part of any site and presents a better user experience than your site simply falling over!

Working with Modernizr as a fallback

A best practice when working with jQuery is to ensure that a fallback is provided for the library, should the primary version not be available. (Yes, it's irritating when it happens, but it can happen!) Typically, we might use a little JavaScript, such as the window.jQuery || document.write() check shown in the best practice suggestions later in this article. This would work perfectly well but doesn't provide a graceful fallback. Instead, we can use Modernizr to perform the check for us and provide a graceful degradation if all fails. Modernizr is a feature detection library for HTML5/CSS3, which can be used to provide a standardized fallback mechanism in the event of a functionality not being available. You can learn more at http://www.modernizr.com.

As an example, the code might look like this at the end of our website page. We first try to load jQuery using the CDN link, falling back to a local copy if that hasn't worked, or an alternative if both fail:

```html
<body>
  <script src="js/modernizr.js"></script>
  <script type="text/javascript">
    Modernizr.load([{
      load: 'http://code.jquery.com/jquery-2.1.1.min.js',
      complete: function () {
        // Confirm if jQuery was loaded using the CDN link;
        // if not, fall back to the local version
        if ( !window.jQuery ) {
          Modernizr.load('js/jquery-latest.min.js');
        }
      }
    },
    // This script would wait until the fallback is loaded, before loading
    { load: 'jquery-example.js' }]);
  </script>
</body>
```

In this way, we can ensure that jQuery either loads locally or from the CDN link—if all else fails, then we can at least make a graceful exit.
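What might that graceful exit look like in practice? One common approach is to ship a static notice that jQuery hides as soon as it loads, so the message only ever shows when every load attempt has failed. The following is purely our own illustration; the element ID and message are invented for the sketch:

```html
<div id="no-jquery-notice">
  Some interactive features are temporarily unavailable.
</div>
<script>
  // If jQuery made it in (from the CDN or the local fallback),
  // remove the notice; otherwise it stays visible as a graceful exit.
  if (window.jQuery) {
    jQuery('#no-jquery-notice').hide();
  }
</script>
```

Combined with a little CSS, this degrades far more gracefully than a page that silently stops working.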
Best practices for loading jQuery

So far, we've examined several ways of loading jQuery into our pages, over and above the usual route of downloading the library locally or using a CDN link in our code. Now that we have it installed, it's a good opportunity to cover some of the best practices we should try to incorporate into our pages when loading jQuery:

- Always try to use a CDN to include jQuery on your production site. We can take advantage of the high availability and low latency offered by CDN services; the library may already be precached too, avoiding the need to download it again.
- Try to implement a fallback to your locally hosted library of the same version. If CDN links become unavailable (and they are not 100 percent infallible), then the local version will kick in automatically until the CDN link becomes available again:

```html
<script type="text/javascript" src="//code.jquery.com/jquery-1.11.1.min.js"></script>
<script>window.jQuery || document.write('<script src="js/jquery-1.11.1.min.js"><\/script>')</script>
```

  Note that although this works equally well as using Modernizr, it doesn't provide a graceful fallback if both versions of jQuery should become unavailable. Although one hopes never to be in this position, at least we can use CSS to provide a graceful exit!
- Use protocol-relative/protocol-independent URLs; the browser will automatically determine which protocol to use. If HTTPS is not available, then it will fall back to HTTP. If you look carefully at the code in the previous point, it shows a perfect example of a protocol-independent URL, with the call to jQuery from the main jQuery Core site.
- If possible, keep all your JavaScript and jQuery inclusions at the bottom of your page—scripts block the rendering of the rest of the page until they have been fully rendered.
- Use the jQuery 2.x branch, unless you need to support IE6-8; in this case, use jQuery 1.x instead—do not load multiple jQuery versions.
- If you load jQuery using a CDN link, always specify the complete version number you want to load, such as jquery-1.11.1.min.js.
- If you are using other libraries, such as Prototype, MooTools, Zepto, and so on, that use the $ sign as well, try not to use $ to call jQuery functions and simply use jQuery instead. You can return control of $ back to the other library with a call to the $.noConflict() function.
- For advanced browser feature detection, use Modernizr.

It is worth noting that there may be instances where it isn't always possible to follow best practices; circumstances may dictate that we need to make allowances for requirements where best practices can't be used. However, this should be kept to a minimum where possible; one might argue that there are flaws in our design if most of the code doesn't follow best practices!

Summary

If you thought that the only methods to include jQuery were via a manual download or using a CDN link, then hopefully this article has opened your eyes to some alternatives—let's take a moment to recap what we have learned. We kicked off with a customary look at how most developers are likely to include jQuery before quickly moving on to look at other sources. We started with a look at how to use Node, before turning our attention to using the Bower package manager. Next, we had a look at how we can reference individual modules within jQuery using the AMD approach. We then moved on and turned our attention to creating custom builds of the library using Git.
We then covered how we can use source maps to debug our code, with a look at enabling support for them within Google's Chrome browser. To round out our journey of loading jQuery, we saw what might happen if we can't load jQuery at all and how we can get around this by using Modernizr to allow our pages to degrade gracefully. We then finished the article with some of the best practices that we can follow when referencing jQuery.
Working with Forms using REST API

Packt
11 Jul 2016
21 min read
WordPress, an ever-improving content management system, is now moving toward becoming a full-fledged application framework, which brings up the necessity for new APIs. The WordPress REST API has been created to provide the necessary, reliable APIs. The plugin provides an easy-to-use REST API, available via HTTP, that grabs your site's data in the JSON format and retrieves it. The WordPress REST API is now at its second version and has brought a few core differences compared to its previous one, including route registration via functions, endpoints that take a single parameter, and all built-in endpoints using a common controller.

In this article by Sufyan bin Uzayr, author of the book Learning WordPress REST API, you'll learn how to write a functional plugin to create and edit posts using the latest version of the WordPress REST API. This article will also cover how to work efficiently with data to update your page dynamically based on results. This tutorial serves as a basis and introduction to processing form data using the REST API and AJAX, not as a redo of the WordPress post editor or a frontend editing plugin. The REST API's first task is to make your WordPress-powered websites more dynamic, and for this precise reason, I have created a thorough tutorial that takes you through this process step by step. After you understand how the framework works, you will be able to implement it on your own sites, thus making them more dynamic.

Fundamentals

In this article, you will be doing something similar, but instead of using the WordPress HTTP API and PHP, you'll use jQuery's AJAX methods. All of the code for this project should go in its plugin file. Another important tip before starting is to have the required JavaScript client installed that uses the WordPress REST API. You will be using the JavaScript client to make it possible to authorize via the current user's cookies. As a note for this tip, you can actually substitute another authorization method, such as OAuth, if you find it suitable.

Setup the plugin

During the course of this tutorial, you'll only need one PHP and one JavaScript file. Nothing else is necessary for the creation of our plugin. We will start off by writing a simple PHP file that will do the following three key things for us:

- Enqueue the JavaScript file
- Localize a dynamically created JavaScript object into the DOM when you use the said file
- Create the HTML markup for our future form

All that is required of us is to have two functions and two hooks. To get this done, we will create a new folder in our plugin directory with one PHP file inside it. This will serve as the foundation for our future plugin. We will give the file a conventional name, such as my-rest-post-editor.php. Following is our starting PHP file with the necessary empty functions that we will be expanding in the next steps:

```php
<?php
/*
Plugin Name: My REST API Post Editor
*/

add_shortcode( 'my-post-editor', 'my_rest_post_editor_form' );
function my_rest_post_editor_form( ) {

}

add_action( 'wp_enqueue_scripts', 'my_rest_api_scripts' );
function my_rest_api_scripts() {

}
```

For this demonstration, notice that you're working only with the post title and post content. This means that in the form editor function, you only need the HTML for a simple form for those two fields.
Creating the form with HTML markup

As you can see, we are only working with the post title and post content. This makes it necessary to have only the HTML for a simple form for those two fields in the editor form function. The necessary code excerpt is as follows:

```php
function my_rest_post_editor_form( ) {
    $form = '
    <form id="editor">
        <input type="text" name="title" id="title" value="My title">
        <textarea id="content"></textarea>
        <input type="submit" value="Submit" id="submit">
    </form>
    <div id="results">
    </div>';
    return $form;
}
```

Our aim is to show this only to those users who are logged in on the site and have the ability to edit posts. We will wrap the variable containing the form in some conditional checks that allow us to fulfill that aim. These checks test whether the user is logged in, and if not, provide a link to the default WordPress login page. The code excerpt with the required function is as follows:

```php
function my_rest_post_editor_form( ) {
    $form = '
    <form id="editor">
        <input type="text" name="title" id="title" value="My title">
        <textarea id="content"></textarea>
        <input type="submit" value="Submit" id="submit">
    </form>
    <div id="results">
    </div>
    ';

    if ( is_user_logged_in() ) {
        if ( user_can( get_current_user_id(), 'edit_posts' ) ) {
            return $form;
        } else {
            return __( 'You do not have permissions to do this.', 'my-rest-post-editor' );
        }
    } else {
        return sprintf( '<a href="%1s" title="Login">%2s</a>',
            wp_login_url( get_permalink( get_queried_object_id() ) ),
            __( 'You must be logged in to do this, please click here to log in.', 'my-rest-post-editor' )
        );
    }
}
```

To avoid confusion, we do not want our page to be processed automatically or to cause a page reload upon submission, which is why our form has neither a method nor an action set. This is an important detail, because that is how we avoid the unnecessary automatic processing.

Enqueueing your JavaScript file

Another necessary thing to do is to enqueue your JavaScript file. This step is important because the enqueue mechanism provides a systematic and organized way of loading JavaScript files and styles. Using the wp_enqueue_script function, you tell WordPress when to load a script, where to load it, and what its dependencies are. By doing this, everyone utilizes the built-in JavaScript libraries that come bundled with WordPress rather than loading the same third-party script several times. Another big advantage is that it helps reduce the page load time and avoids potential code conflicts with other plugins. We use this method instead of the wrong approach of hardcoding scripts into the head section of our site, because that is how we avoid loading the same script twice when another plugin already includes it.

Once the enqueuing is done, we will localize an array of data into it, which you'll need to include in the JavaScript that is generated dynamically. This includes the base URL for the REST API, as that can change with a filter, mainly for security purposes. Our next step is to make this piece as useable and user-friendly as possible; for this, we will create both a failure and a success message in an array so that our strings are translation friendly. When done with this, you'll need to know the current user's ID and include that in the code as well. The result we have accomplished so far is owed to the wp_enqueue_script() and wp_localize_script() functions.
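As a quick aside, custom styles for the editor could be enqueued from the same wp_enqueue_scripts callback with wp_enqueue_style(). A minimal sketch follows; the handle and stylesheet name here are our own invention, not from the tutorial:

```php
// Hypothetical: enqueue a stylesheet shipped alongside the plugin,
// from within the same my_rest_api_scripts() callback
wp_enqueue_style(
    'my-api-post-editor',
    plugins_url( 'my-api-post-editor.css', __FILE__ )
);
```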
As the sketch above suggests, adding custom styles to the editor is achieved by using the wp_enqueue_style() function. While we have assessed the importance and functionality of wp_enqueue_script(), let's take a closer look at the other functions as well.

The wp_localize_script() function allows you to localize a registered script with data for a JavaScript variable. By this, we are offered a properly localized translation for any string used within our script. As WordPress currently offers its localization API only in PHP, this comes as a necessary measure. Though localization is the main use of the function, it can be used to make any data available to your script that you can usually only get from the server side of WordPress.

The wp_enqueue_style() function is the best solution for adding stylesheets within your WordPress plugins, as it handles all of the stylesheets that need to be added to the page and does it in one place. If you have two plugins using the same stylesheet and both of them use the same handle, then WordPress will only add the stylesheet to the page once. Calling wp_enqueue_style() adds your styles to a list of stylesheets to be added to the page when it is loaded; if a handle already exists, it will not add a new stylesheet to the list. The function is as follows:

```php
function my_rest_api_scripts() {
    wp_enqueue_script( 'my-api-post-editor',
        plugins_url( 'my-api-post-editor.js', __FILE__ ),
        array( 'jquery' ),
        false,
        true
    );

    wp_localize_script( 'my-api-post-editor', 'MY_POST_EDITOR', array(
            'root'           => esc_url_raw( rest_url() ),
            'nonce'          => wp_create_nonce( 'wp_json' ),
            'successMessage' => __( 'Post Creation Successful.', 'my-rest-post-editor' ),
            'failureMessage' => __( 'An error has occurred.', 'my-rest-post-editor' ),
            'userID'         => get_current_user_id(),
        )
    );
}
```

Note that the localized object is registered as MY_POST_EDITOR, matching the name the JavaScript uses throughout the rest of this tutorial. That is all the PHP you need, as everything else is handled via JavaScript. Creating a new page with the editor shortcode ([my-post-editor]) is what you should do next; then browse to that new page. If you've followed the instructions precisely, you should see the post editor form on that page. It will obviously not be functional just yet, not before we write some JavaScript to add functionality to it.

Issuing requests for creating posts

To create posts from our form, we will need to use a POST request, which we can make by using jQuery's AJAX method. This should be a familiar and very simple process for you; if you're not acquainted with it, you may want to look through the documentation and guides offered by jQuery themselves (http://api.jquery.com/jquery.ajax/). You will also need to create two things that may be new to you: the JSON array and the authorization header. In the following, we will walk through each of them in detail.

To create the JSON object for your AJAX request, you must first create a JavaScript array from the input and then use JSON.stringify() to convert it into JSON. The JSON.stringify() method converts a JavaScript value to a JSON string, replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.
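As a quick standalone illustration of that replacer behavior (this snippet is our own, not part of the plugin):

```javascript
var draft = { title: 'Hello', content: 'Some text', internalFlag: true };

// With a replacer array, only the listed properties are serialized
console.log( JSON.stringify( draft, [ 'title', 'content' ] ) );
// -> {"title":"Hello","content":"Some text"}
```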
The following code excerpt is the beginning of the JavaScript file and shows how to build the JSON array:

```javascript
(function($){
    $( '#editor' ).on( 'submit', function(e) {
        e.preventDefault();
        var title = $( '#title' ).val();
        var content = $( '#content' ).val();

        var JSONObj = {
            "title"       : title,
            "content_raw" : content,
            "status"      : 'publish'
        };

        var data = JSON.stringify(JSONObj);
        // ... (the AJAX request will go here)
    });
})(jQuery);
```

Before passing the variable data to the AJAX request, you first have to set the URL for the request. This step is as simple as appending wp/v2/posts to the root URL for the API, which is accessible via MY_POST_EDITOR.root:

```javascript
var url = MY_POST_EDITOR.root;
url = url + 'wp/v2/posts';
```

The AJAX request will look a lot like any other AJAX request you would make, with the sole exception of the authorization headers. Because of the REST API's JavaScript client, the only thing you are required to do is add a header to the request containing the nonce set in the MY_POST_EDITOR object. Another method that could work as an alternative is the OAuth authorization method. A nonce is an authorization token generated for a specific use, such as session authentication; in this context, nonce stands for "number used once" or "number once".

OAuth authorization method

The OAuth authorization method provides users with secure access to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing any user credentials. It is important to state that it has been designed to work with HTTP protocols, allowing an authorization server to issue access tokens to third-party clients. The third party then uses the access token to access the protected resources hosted on the server.

Using the nonce method to verify cookie authentication involves setting a request header with the name X-WP-Nonce, which will contain the said nonce value. You can then use the beforeSend function of the request to send the nonce. Following is what that looks like in the AJAX request:

```javascript
$.ajax({
    type: "POST",
    url: url,
    dataType: 'json',
    data: data,
    beforeSend: function( xhr ) {
        xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
    },
});
```

As you might have noticed, the only missing things are the functions that would display success and failure. These alerts can easily be created by using the messages that we localized into the script earlier. We will now output the result of the request as a simple JSON array so that we can see what it looks like.
Following is the complete code for the JavaScript to create a post editor that can now create new posts:

```javascript
(function($){
    $( '#editor' ).on( 'submit', function(e) {
        e.preventDefault();
        var title = $( '#title' ).val();
        var content = $( '#content' ).val();

        var JSONObj = {
            "title"       : title,
            "content_raw" : content,
            "status"      : 'publish'
        };

        var data = JSON.stringify(JSONObj);
        var url = MY_POST_EDITOR.root;
        url += 'wp/v2/posts';

        $.ajax({
            type: "POST",
            url: url,
            dataType: 'json',
            data: data,
            beforeSend: function( xhr ) {
                xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
            },
            success: function(response) {
                alert( MY_POST_EDITOR.successMessage );
                $( "#results" ).append( JSON.stringify( response ) );
            },
            failure: function( response ) {
                alert( MY_POST_EDITOR.failureMessage );
            }
        });
    });
})(jQuery);
```

This is how we can create a basic editor with the WP REST API. If you are logged in and the API is active, submitting the form should create a new post and then raise an alert telling you that the post has been created. The returned JSON object will then be placed into the #results container.

If you followed each and every step precisely, you should now have a basic editor ready. You may want to give it a try and see how it works for you. So far, we have created and set up a basic editor that allows you to create posts. In our next steps, we will go through the process of adding functionality to our plugin, which will enable us to edit existing posts.

Issuing requests for editing posts

In this section, we will go through the process of adding functionality to our editor so that we can edit existing posts. This part may be a little more detailed, mainly because the first part of our tutorial covered the basics and setup of the editor. To edit posts, we need the following two things:

- A list of posts by author, with all of the post titles and post content
- A new form field to hold the ID of the post you're editing

As you can understand, the list of posts by author and the form field lay the foundation for the post-editing functionality. To add that hidden field to your form, add the following HTML code:

```html
<input type="hidden" name="post-id" id="post-id" value="">
```

In this step, we need to get the value of that field when creating new posts. This is achieved by a few lines of code in the JavaScript function. This code then allows us to automatically change the URL, making it possible to edit the post with the given ID rather than creating a new one each time. This is easily achieved with a simple piece of code like the following:

```javascript
var postID = $( '#post-id' ).val();
if ( undefined !== postID ) {
    url += '/';
    url += postID;
}
```

As we move on, the preceding code will be placed before the AJAX section of the editor form processor. It is important to understand that the url variable in the AJAX function will have the ID of the post that you are editing only if the field has a value. In the case where no value is present for the field, a new post is created, identical to the process you were taken through previously. It is important to understand that to populate the said field, along with the post title and post content fields, you will be required to add a second form.
This form will let us retrieve all posts by the current user with a GET request. Based on the selection made in that form, we will populate the editor form for updating. In the PHP, add the second form, which will look similar to the following:

<form id="select-post">
    <select id="posts" name="posts">
    </select>
    <input type="submit" value="Select a Post to edit" id="choose-post">
</form>

The REST API will now be used to populate the options within the #posts select. To achieve that, we have to create a request for posts by the current user. We will form the URL for requesting posts by the current user, which works if you set the current user ID as part of the MY_POST_EDITOR object during the script setup. A function needs to be created to get posts by the current author and populate the select field. This is very similar to what we did when we made our posts update, yet it is much simpler. This function does not require any authentication, and given that you have already been through the process of creating a similar function, creating this one shouldn't be any hassle. The success function loops through the results and adds them to the post selector form as options for its select field, producing code like the following:

function getPostsByUser( defaultID ) {
    url += '?filter[author]=';
    url += MY_POST_EDITOR.userID;
    url += '&filter[per_page]=20';
    $.ajax({
        type: "GET",
        url: url,
        dataType: 'json',
        success: function(response) {
            $.each(response, function(i, val) {
                $( "#posts" ).append( new Option( val.title, val.ID ) );
            });
            if ( undefined != defaultID ) {
                $('[name=posts]').val( defaultID );
            }
        }
    });
}

You may notice that the function has a parameter called defaultID, but this shouldn't concern you just yet. The parameter, if defined, is used to set the default value of the select field; for now, we will ignore it. We will use this very same function, without the default value, and set it to run on document ready. This is achieved with a small piece of code like the following:

$( document ).ready( function() {
    getPostsByUser();
});

Having a list of posts by the current user isn't enough; you also have to get the title and the content of the selected post and push them into the form for further editing. Next, we need another GET request to run on submission of the post selector form. It should look something like this:

$( '#select-post' ).on( 'submit', function(e) {
    e.preventDefault();
    var ID = $( '#posts' ).val();
    var postURL = MY_POST_EDITOR.root;
    postURL += 'wp/v2/posts/';
    postURL += ID;
    $.ajax({
        type: "GET",
        url: postURL,
        dataType: 'json',
        success: function(post) {
            var title = post.title;
            var content = post.content;
            $( '#editor #title').val( title );
            $( '#editor #content').val( content );
            $( '#select-post #posts').val( ID );
        }
    });
});

We build a new URL of the form <json-url>wp/v2/posts/<post-id>, which is used to fetch the post data of any selected post.
This request takes the returned data and sets it as the value of the three fields in the editor form. Upon refreshing the page, you will be able to see all posts by the current user in the post selector. Selecting a post and submitting the form will do the following:

The content and title of the post that you selected will be visible in the editor, provided that you have followed the preceding steps correctly.
The hidden field for the post ID that you added will now be set.

Even though the content and title of the post are visible, we are still unable to edit the actual post, as the editor form's processing function has not yet been set up for this purpose. Also, at the moment, our content and title are displayed as raw JSON data; the method described next improves the success function for that request so that the title and content of the post display in the proper container, #results. To achieve this, you will need a function that updates the said container with the appropriate data. The code for this function looks like the following:

function results( val ) {
    $( "#results").empty();
    $( "#results" ).append( '<div class="post-title">' + val.title + '</div>' );
    $( "#results" ).append( '<div class="post-content">' + val.content + '</div>' );
}

The preceding code uses some very simple jQuery techniques, but it still serves as a proper introduction to updating page content with data from the REST API. There are countless ways to get more detailed or creative with this if you dive into the markup or start adding additional fields. That will always be an option for you if you're a more savvy developer, but as this is an introductory tutorial, we're trying not to make it extremely technical, which is why we'll stick to the provided example for now.

Moving forward, you can use it in your modified form processing function, which will look something like the following:

$( '#editor' ).on( 'submit', function(e) {
    e.preventDefault();
    var title = $( '#title' ).val();
    var content = $( '#content' ).val();
    console.log( content );
    var JSONObj = {
        "title": title,
        "content_raw": content,
        "status": 'publish'
    };
    var data = JSON.stringify(JSONObj);
    var postID = $( '#post-id').val();
    if ( postID ) {
        url += '/';
        url += postID;
    }
    $.ajax({
        type: "POST",
        url: url,
        dataType: 'json',
        data: data,
        beforeSend: function( xhr ) {
            xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce );
        },
        success: function(response) {
            alert( MY_POST_EDITOR.successMessage );
            getPostsByUser( response.ID );
            results( response );
        },
        failure: function( response ) {
            alert( MY_POST_EDITOR.failureMessage );
        }
    });
});

As you may have noticed, a few changes have been applied, and we will go through each of them: The first change is that the ID of the post being edited is now conditionally added to the URL. When no ID is present, the form still creates new posts by POSTing to the endpoint.
When a post ID is present, the form instead updates that post via posts/<post-id>. The second change concerns the success function. The new results() function is used to output the post title and content during the process of editing. We also rerun the getPostsByUser() function, passing the new post's ID so that posts automatically offer the editing functionality right after you create them.

Summary

With this, we have finished off this article, and if you have followed each step with precision, you should now have a simple yet functional plugin that can create and edit posts using the WordPress REST API. This article also covered techniques for working with data in order to update your page dynamically based on the available results. We will now progress toward more complicated actions with the REST API.

Resources for Article:

Further resources on this subject:

Implementing a Log-in screen using Ext JS [article]
Cluster Computing Using Scala [article]
Understanding PHP basics [article]

5 reasons you should learn Node.js

Richard Gall
22 Feb 2019
7 min read
Open source software in general, and JavaScript in particular, can seem like a place where boom and bust is the rule of law: rapid growth before everyone moves on to the next big thing. But Node.js is different. Although it certainly couldn't be described as new, and its growth hasn't been dramatic by any measure, over the last few years it has managed to push itself forward as one of the most widely used JavaScript tools on the planet. Do you want to learn Node.js? Popularity, however, can only tell you so much. The key question, if you're reading this, is whether you should learn Node.js. So, to help you decide if it's time to learn the JavaScript library, here's a list of the biggest reasons why you should start learning Node.js... Learn everything you need to know about Node.js with Packt's Node.js Complete Reference Guide Book Learning Path.

Node.js lets you write JavaScript on both client and server

Okay, let's get the obvious one out of the way first: Node.js is worth learning because it allows you to write JavaScript on the server. This has arguably transformed the way we think about JavaScript. Whereas in the past it was a language specifically written on the client, backed by the likes of PHP and Java, it's now a language that you can use across your application. Read next: The top 5 reasons Node.js could topple Java. This is important because it means teams can work much more efficiently together. Using different languages for backend and frontend is typically a major source of friction. Unless you have very good polyglot developers, a team is restricted to its core skills, while tooling is also more inflexible. If you're using JavaScript across the stack, it's easier to use a consistent toolchain. From a personal perspective, learning Node.js is a great starting point for full stack development. In essence, it's like an add-on that immediately expands what you can do with JavaScript. In terms of your career, then, it could well make you an invaluable asset to a development team. Read next: How is Node.js changing web development?

Node.js allows you to build complex and powerful applications without writing complex code

Another strong argument for Node.js is that it is built for performance. This is because of two important things: Node.js's asynchronous, event-driven architecture, and the fact that it uses the V8 JavaScript engine. The significance of this is that V8 is one of the fastest implementations of JavaScript, used to power many of Google's immensely popular in-browser products (like Gmail). Node.js is powerful because it employs an asynchronous paradigm for handling data between client and server. To clarify what this means, it's worth comparing it to the typical application server model that uses blocking I/O: there, the application has to handle each request sequentially, suspending threads until they can be processed. This adds complexity to an application and, of course, slows it down. In contrast, Node.js uses non-blocking I/O, in which a single thread can manage multiple requests; if a request can't be completed immediately, it is effectively 'withheld' as a promise, which means it can be executed later without holding up the rest of the application. This means Node.js can help you build applications of considerable complexity without adding to the complexity of your code.
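To make the contrast concrete, here is a minimal sketch of blocking versus non-blocking reads using Node's built-in fs module (the file names are just placeholders):

const fs = require('fs');

// Blocking: execution stops here until the whole file has been read.
const config = fs.readFileSync('config.json', 'utf8');
console.log('blocking read finished');

// Non-blocking: the read is handed off, and the callback runs once
// the data is ready; meanwhile, the thread is free to do other work.
fs.readFile('data.json', 'utf8', function (err, data) {
    if (err) throw err;
    console.log('non-blocking read finished');
});

console.log('this line runs before the non-blocking read completes');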
Node.js is well suited to building microservices

Microservices have become a rapidly growing architectural style that offers increased agility and flexibility over the traditional monolith. The advantages of microservices are well documented, and whether or not they're right for you now, it's likely that they're going to dominate the software landscape as the world moves away from monolithic architecture. This fact only serves to strengthen the argument that you should learn Node.js, because the library is so well suited to developing in this manner. This is because it encourages you to develop in a modular and focused manner, quite literally using specific modules to develop an application. This is distinct from, and almost at odds with, the monolithic approach to software architecture. At this point, it's probably worth highlighting that it's incredibly easy to package and publish the modules you build, thanks to npm (node package manager). So, even if you haven't yet worked with microservices, learning Node.js is a good way to prepare yourself for a future where they are going to become even more prevalent.

Node.js can be used for more than just web development

We know by now that Node.js is flexible. But it's important to recognise that its flexibility means it can be used for a wide range of different purposes. Yes, the library's community are predominantly building applications for the web, but it's also a useful tool for those working in ops or infrastructure. This is because Node.js is a great tool for developing other development tools. If you're someone working to support a team of developers, or, indeed, to help manage an entire distributed software infrastructure, it could be vital in empowering you to get creative and build your own support tools. Even more surprisingly, Node.js can be used in some IoT projects. As this post from 2016 suggests, the two things might not be quite such strange bedfellows.

Node.js is a robust project that won't be going anywhere

As I've already said, in the JavaScript world frameworks and tools can appear and disappear quickly. That means deciding what to learn, and, indeed, what to integrate into your stack, can feel like a bit of a gamble. However, you can be sure that Node.js is here to stay. There are a number of reasons for this. For starters, there's no other tool that brings JavaScript to the server. But more than that, with Google betting heavily on V8 - which is, as we've seen, such an important part of the project - you can be sure it's only going to go from strength to strength. It's also worth pointing out that Node.js went through a small crisis when io.js broke away from the main Node.js project. This feud was as much personal as it was technical, but the rift has since healed, and the Node.js Foundation now manages the whole project, helping to ensure that the software continually evolves alongside other relevant technological changes and that the needs of the developers who use it continue to be met.

Conclusion: spend some time exploring Node.js before you begin using it at work

That's just 5 reasons why you should learn Node.js. You could find more, but broadly speaking these all underline its importance in today's development world. If you're still not convinced, there's a caveat. If Node.js isn't yet right for you, don't assume that it's going to fix any technological or cultural issues that have been causing you headaches. It probably won't. In fact, you should probably tackle those challenges before deciding to use it.
But that all being said, even if you don’t think it’s the right time to use Node.js professionally, that doesn’t mean it isn’t worth learning. As you can see, it’s well worth your time. Who knows where it might take you? Ready to begin learning? Purchase Node.js Complete Reference Guide or read it for free with a subscription free trial.

Making a simple Web based SSH client using Node.js and Socket.io

Jakub Mandula
28 Oct 2015
7 min read
If you are reading this post, you probably know what SSH stands for. But just for the sake of formality, here we go: SSH stands for Secure Shell. It is a network protocol for secure access to the shell on a remote computer. You can do much more over SSH besides commanding your computer. Here you can find further information: http://en.wikipedia.org/wiki/Secure_Shell. In this post, we are going to create a very simple web terminal. And when I say simple, I mean it! However much you like colors, it will not support them, because the parsing is just beyond the scope of this post. If you want a good client-side terminal library, use term.js. It is made by the same guy who wrote pty.js, which we will be using. It is able to handle pretty much all key events and COLORS!!!!

Installation

I am going to assume you already have Node and npm installed. First we will install all of the npm packages we will be using:

npm install express pty.js socket.io

Express is a super cool web framework for Node. We are going to use it to serve our static files. I know it is a bit overkill, but I like Express. pty.js is where the magic will be happening. It forks processes into virtual pseudo terminals and provides bindings for communication. Socket.io is what we will use to transmit the data from the web browser to the server and back. It uses modern WebSockets, but provides fallbacks for backward compatibility. Anytime you want to create a real-time application, Socket.io is the way to go.

Planning

First things first, we need to think about what we want the program to do. We want the program to create an instance of a shell on the server (remote machine) and send all of the text to the browser. Back in the browser, we want to capture any user events and send them back to the server shell.

The WebSSH server

This is the code that will power the terminal forwarding. Open a new file named server.js and start by importing all of the libraries:

var express = require('express');
var https = require('https');
var http = require('http');
var fs = require('fs');
var pty = require('pty.js');

Set up Express:

// Setup the express app
var app = express();
// Static file serving
app.use("/", express.static("./"));

Next we are going to create the server.

// Creating an HTTP server
var server = http.createServer(app).listen(8080);

If you want to use HTTPS, which you probably will, you need to generate a key and certificate and import them as shown:

var options = {
    key: fs.readFileSync('keys/key.pem'),
    cert: fs.readFileSync('keys/cert.pem')
};

Then use the options object to create the actual server. Notice that this time we are using the https package.

// Create an HTTPS server
var server = https.createServer(options, app).listen(8080);

CAUTION: Even if you use HTTPS, do not use this example program on the Internet. You are not authenticating the client in any way and are thus providing a free open gate to your computer. Please make sure you only use this on your private network, protected by a firewall!!!

Now bind the socket.io instance to the server:

var io = require('socket.io')(server);

After this, we can set up the place where the magic happens.
// When a new socket connects
io.on('connection', function(socket){
    // Create terminal
    var term = pty.spawn('sh', [], {
        name: 'xterm-color',
        cols: 80,
        rows: 30,
        cwd: process.env.HOME,
        env: process.env
    });
    // Listen on the terminal for output and send it to the client
    term.on('data', function(data){
        socket.emit('output', data);
    });
    // Listen on the client and send any input to the terminal
    socket.on('input', function(data){
        term.write(data);
    });
    // When socket disconnects, destroy the terminal
    socket.on("disconnect", function(){
        term.destroy();
        console.log("bye");
    });
});

In this block, all we do is wait for new connections. When we get one, we spawn a new virtual terminal and start to pump the data from the terminal to the socket and vice versa. After the socket disconnects, we make sure to destroy the terminal. If you have noticed, I am using the simple sh shell. I did this mainly because I don't have a fancy prompt on it. Because we are not adding any parsing logic, my bash prompt would show up like this:

]0;piman@mothership: ~ _[01;32m✓ [33mpiman_[0m ↣ _[1;34m[~]_[37m$[0m - Eww!

But you may use any shell you like. This is all that we need on the server side. Save the file and close it.

Client side

The client side is going to be just a very simple HTML file. Start with a very simple HTML markup:

<!doctype html>
<html>
<head>
    <title>SSH Client</title>
    <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/1.3.5/socket.io.min.js"></script>
    <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
    <style>
        body {
            margin: 0;
            padding: 0;
        }
        .terminal {
            font-family: monospace;
            color: white;
            background: black;
        }
    </style>
</head>
<body>
    <h1>SSH</h1>
    <div class="terminal">
    </div>
    <script>
    </script>
</body>
</html>

I am downloading the client-side libraries jQuery and socket.io from cdnjs. All of the client code will be written in the script tag below the terminal div. Surprisingly, the code is very simple:

// Connect to the socket.io server
var socket = io.connect('http://localhost:8080');

// Wait for data from the server
socket.on('output', function (data) {
    // Insert some line breaks where they belong
    data = data.replace("\n", "<br>");
    data = data.replace("\r", "<br>");
    // Append the data to our terminal
    $('.terminal').append(data);
});

// Listen for user input and pass it to the server
$(document).on("keypress", function(e){
    var char = String.fromCharCode(e.which);
    socket.emit("input", char);
});

Notice that we do not have to explicitly append the text the client types to the terminal, mainly because the server echoes it back anyway. Now we are done! Run the server and open up the URL in your browser:

node server.js

You should see a small prompt and be able to start typing commands. You can now explore your machine from the browser! Remember that our web terminal does not support the Tab, Ctrl, Backspace, or Esc characters. Implementing this is your homework; the sketch at the end of this post is one possible starting point.

Conclusion

I hope you found this tutorial useful. You can apply the knowledge in any real-time application where communication with the server is critical. All the code is available here. Please note that if you'd like to use a browser terminal, I strongly recommend term.js. It supports colors and styles and all the basic keys, including Tab, Backspace, and so on. I use it in my PiDashboard project. It is much cleaner and less tedious than the example I have here. I can't wait to see what amazing apps you will invent based on this.
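For the homework above, here is a hedged starting point: keypress does not fire for keys such as Backspace, Tab, or Esc in most browsers, so one approach is to catch them with a separate keydown handler and send the matching control characters yourself. The key-code-to-character mapping below is an assumption you should verify against your shell:

// Send special keys via keydown, since keypress ignores them
$(document).on("keydown", function(e){
    var map = {
        8:  "\b",    // Backspace
        9:  "\t",    // Tab
        27: "\x1b"   // Escape
    };
    var char = map[e.which];
    if (char) {
        e.preventDefault(); // stop the browser from acting on the key
        socket.emit("input", char);
    }
});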
About the Author Jakub Mandula is a student interested in anything to do with technology, computers, mathematics or science.

Basic Website using Node.js and MySQL database

Packt
14 Jul 2016
5 min read
In this article by Fernando Monteiro, author of the book Node.JS 6.x Blueprints, we will understand some basic concepts of a Node.js application using a relational database (MySQL), and also look at some differences between the Object Document Mapper (ODM) used by MongoDB and the Object Relational Mapper (ORM) used by Sequelize and MySQL. For this, we will create a simple application and use the resources we have available, as Sequelize is a powerful middleware for the creation of models and database mapping. We will also use another template engine called Swig and demonstrate how we can add a template engine manually. (For more resources related to this topic, see here.)

Creating the baseline applications

The first step is to create another directory; I'll use the root folder. Create a folder called chapter-02. Open your terminal/shell in this folder and type the express command:

express --git

Note that we are using only the --git flag this time; we will use another template engine, but we will install it manually.

Installing the Swig template engine

The first thing to do is change the default Express template engine to Swig, a pretty simple, flexible, and stable template engine that also offers us a syntax very similar to Angular, denoting expressions just by using double curly brackets {{ variableName }}. More information about Swig can be found on the official website at: http://paularmstrong.github.io/swig/docs/. Open the package.json file and replace the jade line with the following:

"swig": "^1.4.2"

Open your terminal/shell in the project folder and type:

npm install

Before we proceed, let's make some adjustments to app.js; we need to add the swig module. Open app.js and add the following code, right after the var bodyParser = require('body-parser'); line:

var swig = require('swig');

Replace the default jade template engine line with the following code:

var swig = new swig.Swig();
app.engine('html', swig.renderFile);
app.set('view engine', 'html');

Refactoring the views folder

Let's change the views folder to the following new structure:

views
  pages/
  partials/

Remove the default jade files from views. Create a file called layout.html inside the pages folder and place the following code:

<!DOCTYPE html>
<html>
<head>
</head>
<body>
    {% block content %}
    {% endblock %}
</body>
</html>

Create an index.html inside the views/pages folder and place the following code:

{% extends 'layout.html' %}
{% block title %}{% endblock %}
{% block content %}
<h1>{{ title }}</h1>
Welcome to {{ title }}
{% endblock %}

Create an error.html page inside the views/pages folder and place the following code:

{% extends 'layout.html' %}
{% block title %}{% endblock %}
{% block content %}
<div class="container">
    <h1>{{ message }}</h1>
    <h2>{{ error.status }}</h2>
    <pre>{{ error.stack }}</pre>
</div>
{% endblock %}

We need to adjust the views path in app.js; replace the code on line 14 with the following:

// view engine setup
app.set('views', path.join(__dirname, 'views/pages'));

At this point, we have completed the first step towards our MVC application. In this example, we will use the MVC pattern in its full meaning: Model, View, Controller.

Creating the controllers folder

Create a folder called controllers inside the root project folder.
Create an index.js file inside the controllers folder and place the following code:

// Index controller
exports.show = function(req, res) {
    // Show index content
    res.render('index', {
        title: 'Express'
    });
};

Edit the app.js file and replace the original index route app.use('/', routes); with the following code:

app.get('/', index.show);

Add the controller path to app.js on line 9; replace the original code with the following:

// Inject index controller
var index = require('./controllers/index');

Now it's time to check that everything works as expected: we run the application and check the result. Type the following command in your terminal/shell:

npm start

Check the following URL: http://localhost:3000; you'll see the welcome message of the Express framework.

Removing the default routes folder

Remove the routes folder and its content. Remove the user route from app.js, after the index controller and on line 31.

Adding partials files for head and footer

Inside views/partials, create a new file called head.html and place the following code:

<meta charset="utf-8">
<title>{{ title }}</title>
<link rel='stylesheet' href='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/css/bootstrap.min.css'>
<link rel="stylesheet" href="/stylesheets/style.css">

Inside views/partials, create a file called footer.html and place the following code:

<script src='https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js'></script>
<script src='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/js/bootstrap.min.js'></script>

Now it is time to add the partials files to the layout.html page using the include tag. Open layout.html and add the following highlighted code:

<!DOCTYPE html>
<html>
<head>
    {% include "../partials/head.html" %}
</head>
<body>
    {% block content %}
    {% endblock %}
    {% include "../partials/footer.html" %}
</body>
</html>

Finally, we are prepared to continue with our project; this time, our directory structure looks like the following image:

Folder structure

Summary

In this article, we discussed the basic concepts of Node.js with a MySQL database, and we also saw how to swap out the default Express template engine and use another resource, the Swig template library, to build a basic website.

Resources for Article:

Further resources on this subject:

Exception Handling in MySQL for Python [article]
Python Scripting Essentials [article]
Splunk's Input Methods and Data Feeds [article]

Build a foodie bot with JavaScript

Gebin George
03 May 2018
7 min read
Today, we are going to build a chatbot that can search for restaurants based on user goals and preferences. This tutorial has been taken from Hands-On Chatbots and Conversational UI Development. Let us begin by building Node.js modules to get data from Zomato based on user preferences. Create a file called zomato.js. Add the request module to the Node.js libraries using the following command in the console:

> npm install request --save

In zomato.js, add the following code to begin with:

var request = require('request');

var baseURL = 'https://developers.zomato.com/api/v2.1/';
var apiKey = 'YOUR_API_KEY';

var categories = null;
var cuisines = null;

getCategories();
getCuisines(76);

Replace YOUR_API_KEY with your Zomato key. Let's build functions to get the list of categories and cuisines at startup. These queries need not be run when the user asks for a restaurant search, because this information is pretty much static:

function getCuisines(cityId){
    var options = {
        uri: baseURL + 'cuisines',
        headers: {
            'user-key': apiKey
        },
        qs: {'city_id': cityId},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log(body);
            cuisines = JSON.parse(body).cuisines;
        }
    }
    request(options, callback);
}

The preceding code will fetch the list of cuisines available in a particular city (identified by a Zomato city ID). Let us add the code for identifying the list of categories:

function getCategories(){
    var options = {
        uri: baseURL + 'categories',
        headers: {
            'user-key': apiKey
        },
        qs: {},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            categories = JSON.parse(body).categories;
        }
    }
    request(options, callback);
}

Now that we have the basic functions out of the way, let us code the restaurant search itself:

function getRestaurant(cuisine, location, category){
    var cuisineId = getCuisineId(cuisine);
    var categoryId = getCategoryId(category);
    var options = {
        uri: baseURL + 'locations',
        headers: {
            'user-key': apiKey
        },
        qs: {'query': location},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log(body);
            locationInfo = JSON.parse(body).location_suggestions;
            search(locationInfo[0], cuisineId, categoryId);
        }
    }
    request(options, callback);
}

function search(location, cuisineId, categoryId){
    var options = {
        uri: baseURL + 'search',
        headers: {
            'user-key': apiKey
        },
        qs: {'entity_id': location.entity_id,
             'entity_type': location.entity_type,
             'cuisines': [cuisineId],
             'categories': [categoryId]},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log('Found restaurants:')
            var results = JSON.parse(body).restaurants;
            console.log(results);
        }
    }
    request(options, callback);
}

The preceding code will look for restaurants in a given location, cuisine, and category. For instance, you can search for a list of Indian restaurants in Newington, Edinburgh that do delivery. Note that getRestaurant() calls two helper functions, getCuisineId() and getCategoryId(), which this excerpt does not define; a sketch of them follows below.
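These helpers simply map a cuisine or category name to the Zomato ID found in the lists fetched at startup. The exact shape of the response objects (cuisine_name, cuisine_id, and so on) is an assumption based on the Zomato v2.1 API responses parsed above, so treat this as a sketch and verify the field names against your own API output:

function getCuisineId(name){
    // Look up the cuisine ID by name in the list loaded by getCuisines()
    if (!cuisines || !name) return null;
    for (var i = 0; i < cuisines.length; i++) {
        var c = cuisines[i].cuisine;
        if (c.cuisine_name.toLowerCase() === name.toLowerCase()) {
            return c.cuisine_id;
        }
    }
    return null;
}

function getCategoryId(name){
    // Look up the category ID by name in the list loaded by getCategories()
    if (!categories || !name) return null;
    for (var i = 0; i < categories.length; i++) {
        var c = categories[i].categories;
        if (c.name.toLowerCase() === name.toLowerCase()) {
            return c.id;
        }
    }
    return null;
}

With the search code in place, we now need to integrate this with the chatbot code. Let us create a separate file called index.js.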
Let us begin with the basics:

var restify = require('restify');
var builder = require('botbuilder');
var request = require('request');

var baseURL = 'https://developers.zomato.com/api/v2.1/';
var apiKey = 'YOUR_API_KEY';

var categories = null;
var cuisines = null;

getCategories();
//setTimeout(function(){getCategoryId('Delivery')}, 10000);

getCuisines(76);
//setTimeout(function(){getCuisineId('European')}, 10000);

// Setup Restify Server
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, function () {
    console.log('%s listening to %s', server.name, server.url);
});

// Create chat connector for communicating with
// the Bot Framework Service
var connector = new builder.ChatConnector({
    appId: process.env.MICROSOFT_APP_ID,
    appPassword: process.env.MICROSOFT_APP_PASSWORD
});

// Listen for messages from users
server.post('/foodiebot', connector.listen());

Add the bot dialog code to carry out the restaurant search. Let us design the bot to ask for cuisine, category, and location before proceeding to the restaurant search:

var bot = new builder.UniversalBot(connector, [
    function (session) {
        session.send("Hi there! Hungry? Looking for a restaurant?");
        session.send("Say 'search restaurant' to start searching.");
        session.endDialog();
    }
]);

// Search for a restaurant
bot.dialog('searchRestaurant', [
    function (session) {
        session.send('Ok. Searching for a restaurant!');
        builder.Prompts.text(session, 'Where?');
    },
    function (session, results) {
        session.conversationData.searchLocation = results.response;
        builder.Prompts.text(session, 'Cuisine? Indian, Italian, or anything else?');
    },
    function (session, results) {
        session.conversationData.searchCuisine = results.response;
        builder.Prompts.text(session, 'Delivery or Dine-in?');
    },
    function (session, results) {
        session.conversationData.searchCategory = results.response;
        session.send('Ok. Looking for restaurants..');
        getRestaurant(session.conversationData.searchCuisine,
                      session.conversationData.searchLocation,
                      session.conversationData.searchCategory,
                      session);
    }
])
.triggerAction({
    matches: /^search restaurant$/i,
    confirmPrompt: 'Your restaurant search task will be abandoned. Are you sure?'
});

Notice that we are calling the getRestaurant() function with four parameters. Three of these are ones that we have already defined: cuisine, location, and category. To these, we have to add another: session. This passes the session pointer that can be used to send messages to the emulator when the data is ready.
Notice how this changes the getRestaurant() and search() functions: function getRestaurant(cuisine, location, category, session){ var cuisineId = getCuisineId(cuisine); var categoryId = getCategoryId(category); var options = { uri: baseURL + 'locations', headers: { 'user-key': apiKey }, qs: {'query':location}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log(body); locationInfo = JSON.parse(body).location_suggestions; search(locationInfo[0], cuisineId, categoryId, session); } } request(options,callback); } function search(location, cuisineId, categoryId, session){ var options = { uri: baseURL + 'search', headers: { 'user-key': apiKey }, qs: {'entity_id': location.entity_id, 'entity_type': location.entity_type, 'cuisines': [cuisineId], 'category': categoryId}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log('Found restaurants:') console.log(body); //var results = JSON.parse(body).restaurants; //console.log(results); var resultsCount = JSON.parse(body).results_found; console.log('Found:' + resultsCount); session.send('I have found ' + resultsCount + ' restaurants for you!'); session.endDialog(); } } request(options,callback); } Once the results are obtained, the bot responds using session.send() and ends the dialog: Now that we have the results, let's present them in a more visually appealing way using cards. To do this, we need a function that can take the results of the search and turn them into an array of cards: function presentInCards(session, results){ var msg = new builder.Message(session); msg.attachmentLayout(builder.AttachmentLayout.carousel) var heroCardArray = []; var l = results.length; if (results.length > 10){ l = 10; } for (var i = 0; i < l; i++){ var r = results[i].restaurant; var herocard = new builder.HeroCard(session) .title(r.name) .subtitle(r.location.address) .text(r.user_rating.aggregate_rating) .images([builder.CardImage.create(session, r.thumb)]) .buttons([ builder.CardAction.imBack(session, "book_table:" + r.id, "Book a table") ]); heroCardArray.push(herocard); } msg.attachments(heroCardArray); return msg; } And we call this function from the search() function: function search(location, cuisineId, categoryId, session){ var options = { uri: baseURL + 'search', headers: { 'user-key': apiKey }, qs: {'entity_id': location.entity_id, 'entity_type': location.entity_type, 'cuisines': [cuisineId], 'category': categoryId}, method: 'GET' } var callback = function(error, response, body) { if (error) { console.log('Error sending messages: ', error) } else if (response.body.error) { console.log('Error: ', response.body.error) } else { console.log('Found restaurants:') console.log(body); var results = JSON.parse(body).restaurants; var msg = presentInCards(session, results); session.send(msg); session.endDialog(); } } request(options,callback); } Here is how it looks: We saw how to build a restaurant search bot, that gives you restaurant suggestions as per your preference. If you found our post useful check out Chatbots and Conversational UI Development. Top 4 chatbot development frameworks for developers How to create a conversational assistant using Python My friend, the robot: Artificial Intelligence needs Emotional Intelligence    

Building Scalable Microservices

Packt
18 Jan 2017
33 min read
In this article by Vikram Murugesan, the author of the book Microservices Deployment Cookbook, we will see a brief introduction to concept of the microservices. (For more resources related to this topic, see here.) Writing microservices with Spring Boot Now that our project is ready, let's look at how to write our microservice. There are several Java-based frameworks that let you create microservices. One of the most popular frameworks from the Spring ecosystem is the Spring Boot framework. In this article, we will look at how to create a simple microservice application using Spring Boot. Getting ready Any application requires an entry point to start the application. For Java-based applications, you can write a class that has the main method and run that class as a Java application. Similarly, Spring Boot requires a simple Java class with the main method to run it as a Spring Boot application (microservice). Before you start writing your Spring Boot microservice, you will also require some Maven dependencies in your pom.xml file. How to do it… Create a Java class called com.packt.microservices.geolocation.GeoLocationApplication.java and give it an empty main method: package com.packt.microservices.geolocation; public class GeoLocationApplication { public static void main(String[] args) { // left empty intentionally } } Now that we have our basic template project, let's make our project a child project of Spring Boot's spring-boot-starter-parent pom module. This module has a lot of prerequisite configurations in its pom.xml file, thereby reducing the amount of boilerplate code in our pom.xml file. At the time of writing this, 1.3.6.RELEASE was the most recent version: <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.3.6.RELEASE</version> </parent> After this step, you might want to run a Maven update on your project as you have added a new parent module. If you see any warnings about the version of the maven-compiler plugin, you can either ignore it or just remove the <version>3.5.1</version> element. If you remove the version element, please perform a Maven update afterward. Spring Boot has the ability to enable or disable Spring modules such as Spring MVC, Spring Data, and Spring Caching. In our use case, we will be creating some REST APIs to consume the geolocation information of the users. So we will need Spring MVC. Add the following dependencies to your pom.xml file: <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> </dependencies> We also need to expose the APIs using web servers such as Tomcat, Jetty, or Undertow. Spring Boot has an in-memory Tomcat server that starts up as soon as you start your Spring Boot application. So we already have an in-memory Tomcat server that we could utilize. Now let's modify the GeoLocationApplication.java class to make it a Spring Boot application: package com.packt.microservices.geolocation; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class GeoLocationApplication { public static void main(String[] args) { SpringApplication.run(GeoLocationApplication.class, args); } } As you can see, we have added an annotation, @SpringBootApplication, to our class. 
The @SpringBootApplication annotation reduces the number of lines of code written by adding the following three annotations implicitly: @Configuration @ComponentScan @EnableAutoConfiguration If you are familiar with Spring, you will already know what the first two annotations do. @EnableAutoConfiguration is the only annotation that is part of Spring Boot. The AutoConfiguration package has an intelligent mechanism that guesses the configuration of your application and automatically configures the beans that you will likely need in your code. You can also see that we have added one more line to the main method, which actually tells Spring Boot the class that will be used to start this application. In our case, it is GeoLocationApplication.class. If you would like to add more initialization logic to your application, such as setting up the database or setting up your cache, feel free to add it here. Now that our Spring Boot application is all set to run, let's see how to run our microservice. Right-click on GeoLocationApplication.java from Package Explorer, select Run As, and then select Spring Boot App. You can also choose Java Application instead of Spring Boot App. Both the options ultimately do the same thing. You should see something like this on your STS console: If you look closely at the console logs, you will notice that Tomcat is being started on port number 8080. In order to make sure our Tomcat server is listening, let's run a simple curl command. cURL is a command-line utility available on most Unix and Mac systems. For Windows, use tools such as Cygwin or even Postman. Postman is a Google Chrome extension that gives you the ability to send and receive HTTP requests. For simplicity, we will use cURL. Execute the following command on your terminal: curl http://localhost:8080 This should give us an output like this: {"timestamp":1467420963000,"status":404,"error":"Not Found","message":"No message available","path":"/"} This error message is being produced by Spring. This verifies that our Spring Boot microservice is ready to start building on with more features. There are more configurations that are needed for Spring Boot, which we will perform later in this article along with Spring MVC. Writing microservices with WildFly Swarm WildFly Swarm is a J2EE application packaging framework from RedHat that utilizes the in-memory Undertow server to deploy microservices. In this article, we will create the same GeoLocation API using WildFly Swarm and JAX-RS. To avoid confusion and dependency conflicts in our project, we will create the WildFly Swarm microservice as its own Maven project. This article is just here to help you get started on WildFly Swarm. When you are building your production-level application, it is your choice to either use Spring Boot, WildFly Swarm, Dropwizard, or SparkJava based on your needs. Getting ready Similar to how we created the Spring Boot Maven project, create a Maven WAR module with the groupId com.packt.microservices and name/artifactId geolocation-wildfly. Feel free to use either your IDE or the command line. Be aware that some IDEs complain about a missing web.xml file. We will see how to fix that in the next section. How to do it… Before we set up the WildFly Swarm project, we have to fix the missing web.xml error. The error message says that Maven expects to see a web.xml file in your project as it is a WAR module, but this file is missing in your project. In order to fix this, we have to add and configure maven-war-plugin. 
Add the following code snippet to your pom.xml file's project section:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-war-plugin</artifactId>
            <version>2.6</version>
            <configuration>
                <failOnMissingWebXml>false</failOnMissingWebXml>
            </configuration>
        </plugin>
    </plugins>
</build>

After adding the snippet, save your pom.xml file and perform a Maven update. Also, if you see that your project is using a Java version other than 1.8, correct that and again perform a Maven update for the changes to take effect. Now, let's add the dependencies required for this project. As we know we will be exposing our APIs, we have to add the JAX-RS library. JAX-RS is the standard JSR-compliant API for creating RESTful web services. JBoss has its own version of JAX-RS. So let's add that dependency to the pom.xml file:

<dependencies>
    <dependency>
        <groupId>org.jboss.spec.javax.ws.rs</groupId>
        <artifactId>jboss-jaxrs-api_2.0_spec</artifactId>
        <version>1.0.0.Final</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

The one thing that you have to note here is the provided scope. The provided scope in general means that this JAR need not be bundled with the final artifact when it is built. Usually, dependencies with provided scope will be available to your application either via your web server or application server. In this case, when WildFly Swarm bundles your app and runs it on the in-memory Undertow server, your server will already have this dependency. The next step toward creating the GeoLocation API using WildFly Swarm is creating the domain object. Use the com.packt.microservices.geolocation.GeoLocation.java file. Now that we have the domain object, there are two classes that you need to create in order to write your first JAX-RS web service. The first of those is the Application class. The Application class in JAX-RS is used to define the various components that you will be using in your application. It can also hold some metadata about your application, such as the basePath (or ApplicationPath) to all resources listed in this Application class. In this case, we are going to use /geolocation as our basePath. Let's see how that looks:

package com.packt.microservices.geolocation;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/geolocation")
public class GeoLocationApplication extends Application {

    public GeoLocationApplication() {}
}

There are two things to note in this class; one is the Application class and the other is the @ApplicationPath annotation—both of which we've already talked about. Now let's move on to the resource class, which is responsible for exposing the APIs. If you are familiar with Spring MVC, you can compare Resource classes to Controllers. They are responsible for defining the API for any specific resource. The annotations are slightly different from those of Spring MVC. Let's create a new resource class called com.packt.microservices.geolocation.GeoLocationResource.java that exposes a simple GET API:

package com.packt.microservices.geolocation;

import java.util.ArrayList;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/")
public class GeoLocationResource {

    @GET
    @Produces("application/json")
    public List<GeoLocation> findAll() {
        return new ArrayList<>();
    }
}

All three annotations, @GET, @Path, and @Produces, are pretty self-explanatory.
Before we start writing the APIs and the service class, let's test the application from the command line to make sure it works as expected. With the current implementation, any GET request sent to the /geolocation URL should return an empty JSON array. So far, we have created the RESTful APIs using JAX-RS. It's just another JAX-RS project: In order to make it a microservice using Wildfly Swarm, all you have to do is add the wildfly-swarm-plugin to the Maven pom.xml file. This plugin will be tied to the package phase of the build so that whenever the package goal is triggered, the plugin will create an uber JAR with all required dependencies. An uber JAR is just a fat JAR that has all dependencies bundled inside itself. It also deploys our application in an in-memory Undertow server. Add the following snippet to the plugins section of the pom.xml file: <plugin> <groupId>org.wildfly.swarm</groupId> <artifactId>wildfly-swarm-plugin</artifactId> <version>1.0.0.Final</version> <executions> <execution> <id>package</id> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> Now execute the mvn clean package command from the project's root directory, and wait for the Maven build to be successful. If you look at the logs, you can see that wildfly-swarm-plugin will create the uber JAR, which has all its dependencies. You should see something like this in your console logs: After the build is successful, you will find two artifacts in the target directory of your project. The geolocation-wildfly-0.0.1-SNAPSHOT.war file is the final WAR created by the maven-war-plugin. The geolocation-wildfly-0.0.1-SNAPSHOT-swarm.jar file is the uber JAR created by the wildfly-swarm-plugin. Execute the following command in the same terminal to start your microservice: java –jar target/geolocation-wildfly-0.0.1-SNAPSHOT-swarm.jar After executing this command, you will see that Undertow has started on port number 8080, exposing the geolocation resource we created. You will see something like this: Execute the following cURL command in a separate terminal window to make sure our API is exposed. The response of the command should be [], indicating there are no geolocations: curl http://localhost:8080/geolocation Now let's build the service class and finish the APIs that we started. For simplicity purposes, we are going to store the geolocations in a collection in the service class itself. In a real-time scenario, you will be writing repository classes or DAOs that talk to the database that holds your geolocations. Get the com.packt.microservices.geolocation.GeoLocationService.java interface. We'll use the same interface here. Create a new class called com.packt.microservices.geolocation.GeoLocationServiceImpl.java that extends the GeoLocationService interface: package com.packt.microservices.geolocation; import java.util.ArrayList; import java.util.Collections; import java.util.List; public class GeoLocationServiceImpl implements GeoLocationService { private static List<GeoLocation> geolocations = new ArrayList<>(); @Override public GeoLocation create(GeoLocation geolocation) { geolocations.add(geolocation); return geolocation; } @Override public List<GeoLocation> findAll() { return Collections.unmodifiableList(geolocations); } } Now that our service classes are implemented, let's finish building the APIs. We already have a very basic stubbed-out GET API. Let's just introduce the service class to the resource class and call the findAll method. 
Similarly, let's use the service's create method for POST API calls. Add the following snippet to GeoLocationResource.java: private GeoLocationService service = new GeoLocationServiceImpl(); @GET @Produces("application/json") public List<GeoLocation> findAll() { return service.findAll(); } @POST @Produces("application/json") @Consumes("application/json") public GeoLocation create(GeoLocation geolocation) { return service.create(geolocation); } We are now ready to test our application. Go ahead and build your application. After the build is successful, run your microservice: let's try to create two geolocations using the POST API and later try to retrieve them using the GET method. Execute the following cURL commands in your terminal one by one: curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation This should give you something like the following output (pretty-printed for readability): { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 9.568012, "longitude": 77.962444}' http://localhost:8080/geolocation This command should give you an output similar to the following (pretty-printed for readability): { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } To verify whether your entities were stored correctly, execute the following cURL command: curl http://localhost:8080/geolocation This should give you an output like this (pretty-printed for readability): [ { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }, { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } ] Whatever we have seen so far will give you a head start in building microservices with WildFly Swarm. Of course, there are tons of features that WildFly Swarm offers. Feel free to try them out based on your application needs. I strongly recommend going through the WildFly Swarm documentation for any advanced usages. Writing microservices with Dropwizard Dropwizard is a collection of libraries that help you build powerful applications quickly and easily. The libraries vary from Jackson, Jersey, Jetty, and so on. You can take a look at the full list of libraries on their website. This ecosystem of libraries that help you build powerful applications could be utilized to create microservices as well. As we saw earlier, it utilizes Jetty to expose its services. In this article, we will create the same GeoLocation API using Dropwizard and Jersey. To avoid confusion and dependency conflicts in our project, we will create the Dropwizard microservice as its own Maven project. This article is just here to help you get started with Dropwizard. When you are building your production-level application, it is your choice to either use Spring Boot, WildFly Swarm, Dropwizard, or SparkJava based on your needs. Getting ready Similar to how we created other Maven projects,  create a Maven JAR module with the groupId com.packt.microservices and name/artifactId geolocation-dropwizard. Feel free to use either your IDE or the command line. 
After the project is created, if you see that your project is using a Java version other than 1.8, perform a Maven update for the change to take effect.

How to do it…

The first thing that you will need is the dropwizard-core Maven dependency. Add the following snippet to your project's pom.xml file:

<dependencies>
    <dependency>
        <groupId>io.dropwizard</groupId>
        <artifactId>dropwizard-core</artifactId>
        <version>0.9.3</version>
    </dependency>
</dependencies>

Guess what? This is the only dependency you will need to spin up a simple Jersey-based Dropwizard microservice. Before we start configuring Dropwizard, we have to create the domain object, service class, and resource class:

com.packt.microservices.geolocation.GeoLocation.java
com.packt.microservices.geolocation.GeoLocationService.java
com.packt.microservices.geolocation.GeoLocationServiceImpl.java
com.packt.microservices.geolocation.GeoLocationResource.java

Let's see what each of these classes does. The GeoLocation.java class is our domain object that holds the geolocation information. The GeoLocationService.java class defines our interface, which is then implemented by the GeoLocationServiceImpl.java class. If you take a look at the GeoLocationServiceImpl.java class, we are using a simple collection to store the GeoLocation domain objects. In a real-time scenario, you will be persisting these objects in a database. But to keep it simple, we will not go that far. To be consistent with the previous examples, let's change the path of GeoLocationResource to /geolocation. To do so, replace @Path("/") with @Path("/geolocation") on line number 11 of the GeoLocationResource.java class. We have now created the service classes, domain object, and resource class. Let's configure Dropwizard. In order to make your project a microservice, you have to do two things:

Create a Dropwizard configuration class. This is used to store any meta-information or resource information that your application will need during runtime, such as DB connection, Jetty server, logging, and metrics configurations. These configurations are ideally stored in a YAML file, which will then be mapped to your Configuration class using Jackson. In this application, we are not going to use the YAML configuration as it is out of scope for this article. If you would like to know more about configuring Dropwizard, refer to their Getting Started documentation page at http://www.dropwizard.io/0.7.1/docs/getting-started.html. Let's create an empty Configuration class called GeoLocationConfiguration.java:

package com.packt.microservices.geolocation;

import io.dropwizard.Configuration;

public class GeoLocationConfiguration extends Configuration {
}

The YAML configuration file has a lot to offer. Take a look at a sample YAML file from Dropwizard's Getting Started documentation page to learn more. The name of the YAML file is usually derived from the name of your microservice. The microservice name is usually identified by the return value of the overridden method public String getName() in your Application class.
Now let's create the GeoLocationApplication.java application class:

package com.packt.microservices.geolocation;
import io.dropwizard.Application;
import io.dropwizard.setup.Environment;
public class GeoLocationApplication extends Application<GeoLocationConfiguration> {
  public static void main(String[] args) throws Exception {
    new GeoLocationApplication().run(args);
  }
  @Override
  public void run(GeoLocationConfiguration config, Environment env) throws Exception {
    env.jersey().register(new GeoLocationResource());
  }
}

There are a lot of things going on here. Let's look at them one by one. Firstly, this class extends Application with the GeoLocationConfiguration generic. This makes an instance of your GeoLocationConfiguration.java class available, giving you access to all the properties you have defined in your YAML file, mapped onto the Configuration class. The next one is the run method. The run method takes two arguments: your configuration and environment. The Environment instance is a wrapper around other library-specific objects such as MetricsRegistry, HealthCheckRegistry, and JerseyEnvironment. For example, we could register our Jersey resources using the JerseyEnvironment instance. The env.jersey().register(new GeoLocationResource()) line does exactly that. The main method is pretty straightforward. All it does is call the run method. Before we can start the microservice, we have to configure this project to create a runnable uber JAR. Uber JARs are just fat JARs that bundle their dependencies in themselves. For this purpose, we will be using the maven-shade-plugin. Add the following snippet to the build section of the pom.xml file. If this is your first plugin, you might want to wrap it in a <plugins> element under <build>:

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.3</version>
<configuration>
<createDependencyReducedPom>true</createDependencyReducedPom>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>com.packt.microservices.geolocation.GeoLocationApplication</mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>

The previous snippet does the following: It creates a runnable uber JAR that has a reduced pom.xml file that does not include the dependencies that are added to the uber JAR. To learn more about this property, take a look at the documentation of maven-shade-plugin. It utilizes com.packt.microservices.geolocation.GeoLocationApplication as the class whose main method will be invoked when this JAR is executed. This is done by updating the MANIFEST file. It excludes all signatures from signed JARs. This is required to avoid security errors. Now that our project is properly configured, let's try to build and run it from the command line. To build the project, execute mvn clean package from the project's root directory in your terminal. This will create your final JAR in the target directory.
Execute the following command to start your microservice: java -jar target/geolocation-dropwizard-0.0.1-SNAPSHOT.jar server The server argument instructs Dropwizard to start the Jetty server. After you issue the command, you should be able to see that Dropwizard has started the in-memory Jetty server on port 8080. If you see any warnings about health checks, ignore them. Your console logs should look something like this: We are now ready to test our application. Let's try to create two geolocations using the POST API and later try to retrieve them using the GET method. Execute the following cURL commands in your terminal one by one: curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation This should give you an output similar to the following (pretty-printed for readability): { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 9.568012, "longitude": 77.962444}' http://localhost:8080/geolocation This should give you an output like this (pretty-printed for readability): { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } To verify whether your entities were stored correctly, execute the following cURL command: curl http://localhost:8080/geolocation It should give you an output similar to the following (pretty-printed for readability): [ { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }, { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } ] Excellent! You have created your first microservice with Dropwizard. Dropwizard offers more than what we have seen so far. Some of it is out of scope for this article. I believe the metrics API that Dropwizard uses could be used in any type of application. Writing your Dockerfile So far in this article, we have seen how to package our application and how to install Docker. Now that we have our JAR artifact and Docker set up, let's see how to Dockerize our microservice application using Docker. Getting ready In order to Dockerize our application, we will have to tell Docker how our image is going to look. This is exactly the purpose of a Dockerfile. A Dockerfile has its own syntax (or Dockerfile instructions) and will be used by Docker to create images. Throughout this article, we will try to understand some of the most commonly used Dockerfile instructions as we write our Dockerfile for the geolocation tracker microservice. How to do it… First, open your STS IDE and create a new file called Dockerfile in the geolocation project. The first line of the Dockerfile is always the FROM instruction followed by the base image that you would like to create your image from. There are thousands of images on Docker Hub to choose from. In our case, we would need something that already has Java installed on it. There are some images that are official, meaning they are well documented and maintained. Docker Official Repositories are very well documented, and they follow best practices and standards. Docker has its own team to maintain these repositories. 
This helps keep the repositories clean and makes it easier for the user to choose the right one. To read more about Docker Official Repositories, take a look at https://docs.docker.com/docker-hub/official_repos/ We will be using the Java official repository. To find the official repository, go to hub.docker.com and search for java. You have to choose the one that says official. At the time of writing this, the Java image documentation says it will soon be deprecated in favor of the openjdk image. So the first line of our Dockerfile will look like this:

FROM openjdk:8

As you can see, we have used version (or tag) 8 for our image. If you are wondering what type of operating system this image uses, take a look at the Dockerfile of this image, which you can get from the Docker Hub page. Docker images are usually tagged with the version of the software they are written for. That way, it is easy for users to pick from. The next step is creating a directory for our project where we will store our JAR artifact. Add this as your next line:

RUN mkdir -p /opt/packt/geolocation

This is a simple Unix command that creates the /opt/packt/geolocation directory. The -p flag instructs it to create the intermediate directories if they don't exist. Now let's create an instruction that will add the JAR file that was created on your local machine into the container at /opt/packt/geolocation:

ADD target/geolocation-0.0.1-SNAPSHOT.jar /opt/packt/geolocation/

As you can see, we are picking up the uber JAR from the target directory and dropping it into the /opt/packt/geolocation directory of the container. Take a look at the / at the end of the target path. That says that the JAR has to be copied into the directory. Before we can start the application, there is one thing we have to do, that is, expose the ports that we would like to be mapped to the Docker host ports. In our case, the in-memory Tomcat instance is running on port 8080. In order to be able to map port 8080 of our container to any port on our Docker host, we have to expose it first. For that, we will use the EXPOSE instruction. Add the following line to your Dockerfile:

EXPOSE 8080

Now that we are ready to start the app, let's go ahead and tell Docker how to start a container for this image. For that, we will use the CMD instruction:

CMD ["java", "-jar", "/opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar"]

There are two things we have to note here. One is the way we are starting the application, and the other is how the command is broken down into comma-separated strings. First, let's talk about how we start the application. You might be wondering why we haven't used the mvn spring-boot:run command to start the application. Keep in mind that this command will be executed inside the container, and our container does not have Maven installed, only OpenJDK 8. If you would like to use the maven command, take that as an exercise, and try to install Maven in your container and use the mvn command to start the application. Now that we know we have Java installed, we are issuing a very simple java -jar command to run the JAR. In fact, the Spring Boot Maven plugin internally issues the same command. The next thing is how the command has been broken down into comma-separated strings. This is the standard the CMD instruction follows: whatever command you would like to run when the container starts, break it down into comma-separated strings, splitting on whitespace.
Your final Dockerfile should look something like this:

FROM openjdk:8
RUN mkdir -p /opt/packt/geolocation
ADD target/geolocation-0.0.1-SNAPSHOT.jar /opt/packt/geolocation/
EXPOSE 8080
CMD ["java", "-jar", "/opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar"]

This Dockerfile is one of the simplest implementations. Dockerfiles can sometimes get bigger due to the fact that you need a lot of customizations to your image. In such cases, it is a good idea to break it down into multiple images that can be reused and maintained separately. There are some best practices to follow whenever you create your own Dockerfile and image. Though we haven't covered them here, as they are out of the scope of this article, you should still take a look at them and follow them. To learn more about the various Dockerfile instructions, go to https://docs.docker.com/engine/reference/builder/. Building your Docker image We created the Dockerfile, which will be used in this article to create an image for our microservice. If you are wondering why we would need an image, it is the only way we can ship our software to any system. Once you have your image created and uploaded to a common repository, it will be easier to pull your image from any location. Getting ready Before you jump right into it, it might be a good idea to get yourself familiar with some of the most commonly used Docker commands. In this article, we will use the build command. Take a look at this URL to understand the other commands: https://docs.docker.com/engine/reference/commandline/#/image-commands. After familiarizing yourself with the commands, open up a new terminal, and change your directory to the root of the geolocation project. Make sure your docker-machine instance is running. If it is not running, use the docker-machine start command to run your docker-machine instance:

docker-machine start default

If you have to configure your shell for the default Docker machine, go ahead and execute the following command:

eval $(docker-machine env default)

How to do it… From the terminal, issue the following docker build command:

docker build -t packt/geolocation .

We'll try to understand the command later. For now, let's see what happens after you issue the preceding command. You should see Docker downloading the openjdk image from Docker Hub. Once the image has been downloaded, you will see that Docker executes each and every instruction provided in the Dockerfile. When the last instruction has been processed, you will see a message saying Successfully built. This says that your image has been successfully built. Now let's try to understand the command. There are three things to note here: The first thing is the docker build command itself. The docker build command is used to build a Docker image from a Dockerfile. It needs at least one input, which is usually the location of the Dockerfile (more precisely, the build context directory). Dockerfiles can be named something other than Dockerfile and can be referred to using the -f option of the docker build command. An instance of this being used is when teams have different Dockerfiles for different build environments, for example, using DockerfileDev for the dev environment, DockerfileStaging for the staging environment, and DockerfileProd for the production environment. It is still encouraged as best practice to use other Docker options in order to keep the same Dockerfile for all environments. The second thing is the -t option. The -t option takes the name of the repo and a tag.
In our case, we have not mentioned the tag, so by default it will pick up latest as the tag. If you look at the repo name, it is different from the official openjdk image name. It has two parts: packt and geolocation. It is always a good practice to put the Docker Hub account name followed by the actual image name as the name of your repo. For now, we will use packt as our account name; later, we will see how to create our own Docker Hub account and use that account name here. The third thing is the dot at the end. The dot operator says that the Dockerfile is located in the current directory, or the present working directory to be more precise. Let's go ahead and verify whether our image was created. In order to do that, issue the following command on your terminal:

docker images

The docker images command is used to list all the images available in your Docker host. After issuing the command, you should see something like this: As you can see, the newly built image is listed as packt/geolocation in your Docker host. The tag for this image is latest, as we did not specify any. The image ID uniquely identifies your image. Note the size of the image. It is a few megabytes bigger than the openjdk:8 image. That is most probably because of the size of our executable uber JAR inside the image. Now that we know how to build an image using an existing Dockerfile, we are at the end of this section. This was just a very quick intro to the docker build command. There are more options that you can provide to the command, such as limits on CPUs and memory. To learn more about the docker build command, take a look at this page: https://docs.docker.com/engine/reference/commandline/build/ Running your microservice as a Docker container We successfully created our Docker image in the Docker host. Keep in mind that if you are using Windows or Mac, your Docker host is the VirtualBox VM and not your local computer. In this article, we will look at how to spin off a container for the newly created image. Getting ready To spin off a new container for our packt/geolocation image, we will use the docker run command. This command is used to run any command inside your container, given the image. Open your terminal and go to the root of the geolocation project. If you have to start your Docker machine instance, do so using the docker-machine start command, and set the environment using the docker-machine env command. How to do it… Go ahead and issue the following command on your terminal:

docker run packt/geolocation

Right after you run the command, you should see something like this: Yay! We can see that our microservice is running as a Docker container. But wait, there is more to it. Let's see how we can access our microservice's in-memory Tomcat instance. To check whether our app is up and running, open a new terminal instance and execute the following cURL command in that shell:

curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation

Did you get an error message like this?

curl: (7) Failed to connect to localhost port 8080: Connection refused

Let's try to understand what happened here. Why would we get a connection refused error when our microservice logs clearly say that it is running on port 8080?
Yes, you guessed it right: the microservice is not running on your local computer; it is actually running inside the container, which in turn is running inside your Docker host. Here, your Docker host is the VirtualBox VM called default. So we have to replace localhost with the IP of the container. But getting the IP of the container is not straightforward. That is the reason we are going to map port 8080 of the container to the same port on the VM. This mapping will make sure that any request made to port 8080 on the VM is forwarded to port 8080 of the container. Now go to the shell that is currently running your container, and stop your container. Usually, Ctrl + C will do the job. After your container is stopped, issue the following command:

docker run -p 8080:8080 packt/geolocation

The -p option does the port mapping from Docker host to container. The port number to the left of the colon indicates the port number of the Docker host, and the port number to the right of the colon indicates that of the container. In our case, both of them are the same. After you execute the previous command, you should see the same logs that you saw before. We are not done yet. We still have to find the IP that we have to use to hit our RESTful endpoint. The IP that we have to use is the IP of our Docker Machine VM. To find the IP of the docker-machine instance, execute the following command in a new terminal instance:

docker-machine ip default

This should give you the IP of the VM. Let's say the IP that you received was 192.168.99.100. Now, replace localhost in your cURL command with this IP, and execute the cURL command again:

curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://192.168.99.100:8080/geolocation

This should give you an output similar to the following (pretty-printed for readability):

{ "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }

This confirms that you are able to access your microservice from the outside. Take a moment to understand how the port mapping is done. The following figure shows how your machine, VM, and container are orchestrated: Summary We looked at an example of a geolocation tracker application to see how it can be broken down into smaller, manageable services. Next, we saw how to create the GeoLocationTracker service using the Spring Boot framework. Resources for Article: Further resources on this subject: Domain-Driven Design [article] Breaking into Microservices Architecture [article] A capability model for microservices [article]

NativeScript: What is it, and how to set it up
Amey Varangaonkar
09 May 2018
In this tutorial, we introduce you to the NativeScript library, which allows you to create and deploy a web application on a mobile device and use it like a mobile app, rather than as a web or a hybrid application. [box type="shadow" align="" class="" width=""]The following excerpt is taken from the book TypeScript 2.x By Example written by Sachin Ohri. This book presents essential techniques to leverage the power of TypeScript 2.x to build efficient web applications.[/box] What is NativeScript? NativeScript is the open source framework for building native Android and iOS applications with web technologies. This means we can develop native mobile applications with JavaScript, TypeScript, and/or Angular. It is based on the thinking of write once and run everywhere. Applications developed with NativeScript are pure mobile apps when compared to applications developed with technologies such as PhoneGap. As they are native mobile applications, we can use all the richness of the mobile platform and provide the performance associated with that. We use native APIs and use native controls to render, which allows us to create more sophisticated applications compared to a hybrid approach. Hybrid applications do not provide the same level of flexibility or performance because they are hosted on a separate framework and do not get to interact with low-level mobile APIs directly. The best part is that it does not require us to learn a new programming language, unlike developing an iOS-based application, for which you need to know Objective C or Swift. So, we can use our existing skills to develop mobile applications. NativeScript design NativeScript is a runtime that sits on top of the native mobile operating system and uses the JavaScript Virtual Machine (JVM) V8 on Android and JavaScriptCore on iOS. Having access to these platforms allows NativeScript to expose a unified API system for developers, which is then converted into the native API at runtime. This translation between the JavaScript APIs and the native platform APIs is possible through reflection, which NativeScript uses to create its own set of interfaces. Another advantage of using JavaScript by NativeScript is its independence from specific editors. You can use any of your favorite editors to develop a NativeScript application, and you will have access to all the native APIs rather than using Xcode for iOS-based apps and Android Studio for Android-based apps. Architecture The following is a high-level diagram of NativeScript and its interaction with the mobile platform: As we can see, the runtime is responsible for converting JavaScript application code to the native platform code. It has various components that work together to convert and call the native APIs. Because NativeScript uses JVM and JavaScriptCore, it has access to all the latest ECMAScript language specifications for development, which allows us to use the latest ES6 feature set. One of the main components that we need to understand in NativeScript design is modules. Modules The NativeScript team made sure that the platform was developed in a modular fashion, much like plugins, which allow us to include only the modules that we need in our development. These modules provide us with the abstraction of native APIs and allow us to write code that work on both platforms. It has separate APIs for each logical functionality. For example, if you want to use SQLite for your storage needs, there is a package for that; if you want to use a filesystem, there is a package for that. 
Let's take one example to see how these modules help us write consistent code for a multiplatform environment. If you want to access a filesystem on the native platform using NativeScript, you will write code similar to what you see in the following code snippet:

var filesystem = require("file-system");
new filesystem.file(path)

This code is written in pure JavaScript; it first gets a reference to the file-system module and then, using the module's API, calls a file method. This code, when executed by the NativeScript runtime, first checks the platform it is running on and then converts the code accordingly, as shown in the following code snippets. The Android version of the code will be as follows:

new java.io.File(path)

The iOS version of the code will be as follows:

var fileManager = NSFileManager.defaultManager();
fileManager.createFileAtPathContentsAttributes(path);

If you have worked on any of the mobile platforms before, you will recognize this code as using the native filesystem API to access the file path. NativeScript versus web applications Until now, we have been mentioning that we can use our web technologies to write mobile applications with the help of NativeScript. So, can we write a pure web application and use the NativeScript runtime to turn it into a mobile application? Yes and no. Yes, we can, and we will see with our application that we can use the same code base to write with NativeScript. No, because not all components of web applications can be directly used. NativeScript allows us to use our existing JavaScript/TypeScript and CSS skills for developing the business logic and the design for our application. But because the native platforms are not web-based and do not have a DOM, we cannot use HTML as the template for our applications. Although you will see that the extension of our template files will be HTML, the element tags will be somewhat different. To give you a brief example, it does not have UI elements such as <div> or <span>, but has elements such as <StackLayout> and <DockLayout>, which allow us to arrange our UI components. Another thing to note here is that these UI elements are then converted into native elements based on the platform. So, if we use the <Button> control in NativeScript, it will get converted into android.widget.Button on the Android platform and UIButton on iOS. Setting up your NativeScript environment NativeScript provides very good documentation about installing and setting up your development environment. You can find the documentation at https://docs.nativescript.org/angular/start/quick-setup. We will briefly go through the setup process here, but recommend that you go through the documentation to understand the process. NativeScript CLI The best way to use NativeScript is through the NativeScript CLI. You can install it from npm using the following command:

npm install -g nativescript

This command will install the NativeScript library in your global scope. To confirm that the installation has been successful, you can try running the following command from the command-line window:

tns

The tns command is short for Telerik NativeScript, and running it will show the array of commands associated with the CLI. The NativeScript CLI comes with a host of commands to assist in our development, commands such as create, which helps us create a basic startup project, and deploy, which informs the NativeScript CLI to deploy the application to the device (the device can be a connected device or an emulator).
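For instance, scaffolding and running a brand-new project typically looks something like the following; the project name here is purely illustrative:

tns create geolocation-app
cd geolocation-app
tns run android

The run command builds the project and deploys it to a connected device or a running emulator.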
You can check all the commands available with the NativeScript CLI by using the help command as follows:

tns --help

Installing mobile platform dependencies To build native applications, we need to install the dependencies for those mobile platforms. It is important to remember that if we want to build a NativeScript application for iOS and run it on an iOS-compatible device, we need to use macOS; for building Android applications, we can use both Windows and macOS. NativeScript provides a single script each for Windows and macOS that takes care of installing all the tools and frameworks required. The script for Windows is as shown in the following code:

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://www.nativescript.org/setup/win'))"

The script for macOS is as shown in the following code:

ruby -e "$(curl -fsSL https://www.nativescript.org/setup/mac)"

It's important to note that these scripts require administrator-level privileges, so you may need to run them using the sudo command. The documentation also provides a step-by-step guide to installing all these dependencies manually; details can be found at https://docs.nativescript.org/start/ns-setup-win. Once you have installed all the packages, you can check whether the installation was successful by running the following command:

tns doctor

This command checks all the required prerequisites for building a NativeScript application, and if there are no issues identified, this command will return a success message, No issues were detected. Installing an Android Virtual Device Once you have installed all the dependencies, the next step is to install an Android emulator, which can be used for testing instead of connecting real devices. To be able to create an emulator, you need to have Android Studio on your machine. You can install Android Studio from https://developer.android.com/studio/index.html. Once you have installed Android Studio, you can check whether you have the correct Android SDK version. The NativeScript CLI needs Android SDK version 25 or higher; if you do not have the required Android SDK version, then you can install it either using the following command or using the Android Studio IDE:

"%ANDROID_HOME%\tools\bin\sdkmanager" "tools" "platform-tools" "platforms;android-25" "build-tools;25.0.2" "extras;android;m2repository" "extras;google;m2repository"

To install the Android emulator, we use Android Studio, the details of which can be found at https://docs.nativescript.org/tooling/android-virtual-devices. On macOS, we need to make sure we have Xcode installed, or else we will not be able to run iOS-based applications. Again, you can use the tns doctor command to check whether your installation was successful. And that's it! You have successfully installed and set up the NativeScript environment. Want to learn how to develop native web apps? We've got it covered. All you have to do is check out this book TypeScript 2.x By Example to create and deploy web apps as native apps in a step-by-step manner. Tools in TypeScript Introducing Object Oriented Programming with TypeScript Writing SOLID JavaScript code with TypeScript

HTML5 and the rise of modern JavaScript browser APIs [Tutorial]
Pavan Ramchandani
20 Jul 2018
The first draft of HTML5 arrived in 2008. HTML5, however, was so technologically advanced at the time that it was predicted it would not be ready till at least 2022! That prediction turned out to be incorrect, and here we are, with fully supported HTML5 and ES6/ES7/ES8-supported browsers. A lot of APIs used by HTML5 go hand in hand with JavaScript. Before looking at those APIs, let us understand a little about how JavaScript sees the web. This'll eventually put us in a strong position to understand various interesting, JavaScript-related things such as the Web Workers API. In this article, we will introduce you to the most popular web languages, HTML and JavaScript, and how they came together to become the default platform for building modern front-end web applications. This is an excerpt from the book, Learn ECMAScript - Second Edition, written by Mehul Mohan and Narayan Prusty. The HTML DOM The HTML DOM is a tree version of how the document looks. Here is a very simple example of an HTML document:

<!doctype HTML>
<html>
<head>
<title>Cool Stuff!</title>
</head>
<body>
<p>Awesome!</p>
</body>
</html>

Here's how its tree version will look: The previous diagram is just a rough representation of the DOM tree. The <html> element consists of <head> and <body>; furthermore, the <body> tag contains a <p> tag, whereas the <head> tag contains the <title> tag. Simple! JavaScript has access to the DOM directly, and can modify the connections between these nodes, add nodes, remove nodes, change contents, attach event listeners, and so on. What is the Document Object Model (DOM)? Simply put, the DOM is a way to represent HTML or XML documents as nodes. This makes it easier for other programming languages to connect to a DOM-following page and modify it accordingly. To be clear, the DOM is not a programming language. The DOM provides JavaScript with a way to interact with web pages. You can think of it as a standard. Every element is part of the DOM tree, which can be accessed and modified with APIs exposed to JavaScript. The DOM is not restricted to being accessed only by JavaScript. It is language-independent, and there are several modules available in various languages to parse the DOM (just like JavaScript), including PHP, Python, Java, and so on. As said previously, the DOM provides JavaScript with a way to interact with it. How? Well, accessing the DOM is as easy as accessing predefined objects in JavaScript: document. The DOM API specifies what you'll find inside the document object. The document object essentially gives JavaScript access to the DOM tree formed by your HTML document. If you notice, you cannot access any element at all without actually accessing the document object first. DOM methods/properties All HTML elements are objects in JavaScript. The most commonly used object is the document object. It has the whole DOM tree attached to it, and you can query it for elements. Let's look at some very common examples of these methods:

getElementById method
getElementsByTagName method
getElementsByClassName method
querySelector method
querySelectorAll method

By no means is this an exhaustive list of all methods available. However, this list should at least get you started with DOM manipulation. Use MDN as your reference for various other methods. Here's the link: https://developer.mozilla.org/en-US/docs/Web/API/Document#Methods. Modern JavaScript browser APIs HTML5 brought a lot of support for some awesome APIs in JavaScript, right from the start.
Although some APIs were released with HTML5 itself (such as the Canvas API), some were added later (such as the Fetch API). Let's see some of these APIs and how to use them with some code examples. Page Visibility API - is the user still on the page? The Page Visibility API allows developers to run specific code whenever the page the user is on goes in or out of focus. Imagine you run a game-hosting site and want to pause the game whenever the user loses focus on your tab. This is the way to go!

function pageChanged() {
  if (document.hidden) {
    console.log('User is on some other tab/out of focus') // line #1
  } else {
    console.log('Hurray! User returned') // line #2
  }
}
document.addEventListener("visibilitychange", pageChanged);

We're adding an event listener to the document; it fires whenever the page is changed. Sure, the pageChanged function gets an event object as well in the argument, but we can simply use the document.hidden property, which returns a Boolean value depending on the page's visibility at the time the code was called. You'll add your pause game code at line #1 and your resume game code at line #2. navigator.onLine API - the user's network status The navigator.onLine API tells you if the user is online or not. Imagine building a multiplayer game and you want the game to automatically pause if the user loses their internet connection. This is the way to go here!

function state(e) {
  if(navigator.onLine) {
    console.log('Cool we\'re up');
  } else {
    console.log('Uh! we\'re down!');
  }
}
window.addEventListener('offline', state);
window.addEventListener('online', state);

Here, we're attaching two event listeners to the window global; the browser will call the state function every time the user goes offline or online. We can then check the user's status with navigator.onLine, which returns a Boolean value: true if there's an internet connection, and false if there's not. Clipboard API - programmatically manipulating the clipboard The Clipboard API finally allows developers to copy to a user's clipboard without those nasty Adobe Flash plugin hacks that were not cross-browser/cross-device-friendly. Here's how you'll copy a selection to a user's clipboard:

<script>
function copy2Clipboard(text) {
  const textarea = document.createElement('textarea');
  textarea.value = text;
  document.body.appendChild(textarea);
  textarea.focus();
  textarea.setSelectionRange(0, text.length);
  document.execCommand('copy');
  document.body.removeChild(textarea);
}
</script>
<button onclick="copy2Clipboard('Something good!')">Click me!</button>

First of all, we need the user to actually click the button. Once the user clicks the button, we call a function that creates a textarea in the background using the document.createElement method. The script then sets the value of the textarea to the passed text (this is pretty good!) We then focus on that textarea and select all the contents inside it. Once the contents are selected, we execute a copy with document.execCommand('copy'); this copies the current selection in the document to the clipboard. Since, right now, the value inside the textarea is selected, it gets copied to the clipboard. Finally, we remove the textarea from the document so that it doesn't disrupt the document layout. You cannot trigger copy2Clipboard without user interaction.
I mean, obviously you can, but document.execCommand('copy') will not work if the event does not come from the user (click, double-click, and so on). This is a security implementation so that a user's clipboard is not messed around with by every website that they visit. The Canvas API - the web's drawing board HTML5 finally brought in support for <canvas>, a standard way to draw graphics on the web! Canvas can be used pretty much for everything related to graphics you can think of; from digitally signing with a pen, to creating 3D games on the web (3D games require WebGL knowledge, interested? - visit http://bit.ly/webgl-101). Let's look at the basics of the Canvas API with a simple example: <canvas id="canvas" width="100" height="100"></canvas> <script> const canvas = document.getElementById("canvas"); const ctx = canvas.getContext("2d"); ctx.moveTo(0,0); ctx.lineTo(100, 100); ctx.stroke(); </script> This renders the following: How does it do this? Firstly, document.getElementById('canvas') gives us the reference to the canvas on the document. Then we get the context of the canvas. This is a way to say what I want to do with the canvas. You could put a 3D value there, of course! That is indeed the case when you're doing 3D rendering with WebGL and canvas. Once we have a reference to our context, we can do a bunch of things and add methods provided by the API out-of-the-box. Here we moved the cursor to the (0, 0) coordinates. Then we drew a line till (100,100) (which is basically a diagonal on the square canvas). Then we called stroke to actually draw that on our canvas. Easy! Canvas is a wide topic and deserves a book of its own! If you're interested in developing awesome games and apps with Canvas, I recommend you start off with MDN docs: http://bit.ly/canvas-html5. The Fetch API - promise-based HTTP requests One of the coolest async APIs introduced in browsers is the Fetch API, which is the modern replacement for the XMLHttpRequest API. Have you ever found yourself using jQuery just for simplifying AJAX requests with $.ajax? If you have, then this is surely a golden API for you, as it is natively easier to code and read! However, fetch comes natively, hence, there are performance benefits. Let's see how it works: fetch(link) .then(data => { // do something with data }) .catch(err => { // do something with error }); Awesome! So fetch uses promises! If that's the case, we can combine it with async/await to make it look completely synchronous and easy to read! <img id="img1" alt="Mozilla logo" /> <img id="img2" alt="Google logo" /> const get2Images = async () => { const image1 = await fetch('https://cdn.mdn.mozilla.net/static/img/web-docs-sprite.22a6a085cf14.svg'); const image2 = await fetch('https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png'); console.log(image1); // gives us response as an object const blob1 = await image1.blob(); const blob2 = await image2.blob(); const url1 = URL.createObjectURL(blob1); const url2 = URL.createObjectURL(blob2); document.getElementById('img1').src = url1; document.getElementById('img2').src = url2; return 'complete'; } get2Images().then(status => console.log(status)); The line console.log(image1) will print the following: You can see the image1 response provides tons of information about the request. It has an interesting field body, which is actually a ReadableStream, and a byte stream of data that can be cast to a  Binary Large Object (BLOB) in our case. 
A blob object represents a file-like object of immutable and raw data. After getting the Response, we convert it into a blob object so that we can actually use it as an image. Here, fetch is actually fetching us the image directly so we can serve it to the user as a blob (without hot-linking it to the main website). Thus, this could be done on the server side, and blob data could be passed down a WebSocket or something similar. Fetch API customization The Fetch API is highly customizable. You can even include your own headers in the request. Suppose you've got a site where only authenticated users with a valid token can access an image. Here's how you'll add a custom header to your request: const headers = new Headers(); headers.append("Allow-Secret-Access", "yeah-because-my-token-is-1337"); const config = { method: 'POST', headers }; const req = new Request('http://myawesomewebsite.awesometld/secretimage.jpg', config); fetch(req) .then(img => img.blob()) .then(blob => myImageTag.src = URL.createObjectURL(blob)); Here, we added a custom header to our Request and then created something called a Request object (an object that has information about our Request). The first parameter, that is, http://myawesomewebsite.awesometld/secretimage.jpg, is the URL and the second is the configuration. Here are some other configuration options: Credentials: Used to pass cookies in a Cross-Origin Resource Sharing (CORS)-enabled server on cross-domain requests. Method: Specifies request methods (GET, POST, HEAD, and so on). Headers: Headers associated with the request. Integrity: A security feature that consists of a (possibly) SHA-256 representation of the file you're requesting, in order to verify whether the request has been tampered with (data is modified) or not. Probably not a lot to worry about unless you're building something on a very large scale and not on HTTPS. Redirect: Redirect can have three values: Follow: Will follow the URL redirects Error: Will throw an error if the URL redirects Manual: Doesn't follow redirect but returns a filtered response that wraps the redirect response Referrer: the URL that appears as a referrer header in the HTTP request. Accessing and modifying history with the history API You can access a user's history to some level and modify it according to your needs using the history API. It consists of the length and state properties: console.log(history, history.length, history.state); The output is as follows: {length: 4, scrollRestoration: "auto", state: null} 4 null In your case, the length could obviously be different depending on how many pages you've visited from that particular tab. history.state can contain anything you like (we'll come to its use case soon). Before looking at some handy history methods, let us take a look at the window.onpopstate event. Handling window.onpopstate events The window.onpopstate event is fired automatically by the browser when a user navigates between history states that a developer has set. This event is important to handle when you push to history object and then later retrieve information whenever the user presses the back/forward button of the browser. Here's how we'll program a simple popstate event: window.addEventListener('popstate', e => { console.log(e.state); // state data of history (remember history.state ?) }) Now we'll discuss some methods associated with the history object. Modifying history - the history.go(distance) method history.go(x) is equivalent to the user clicking his forward button x times in the browser. 
However, you can specify the distance to move, that is, history.go(5). This is equivalent to the user hitting the forward button in the browser five times. Similarly, you can specify negative values to make it move backward. Specifying 0 or no value will simply refresh the page:

history.go(5); // forwards the browser 5 times
history.go(-1); // similar effect of clicking back button
history.go(0); // refreshes page
history.go(); // refreshes page

Jumping ahead - the history.forward() method This method is simply the equivalent of history.go(1). This is handy when you want to push the user forward to the page they navigated back from. One use case is a full-screen immersive web application where the screen has some minimal controls that play with the history behind the scenes:

if(awesomeButtonClicked && userWantsToMoveForward()) {
  history.forward()
}

Going back - the history.back() method This method is simply the equivalent of history.go(-1). A negative number makes the history go backwards. Again, this is just a simple (and numberless) way to go back to a page the user came from. Its application could be similar to the forward button, that is, creating a full-screen web app and providing the user with an interface to navigate by. Pushing on the history - history.pushState() This is really fun. You can change the browser URL without hitting the server with an HTTP request. If you run the following JS in your browser, your browser will change the path from whatever it is (domain.com/abc/egh) to /i_am_awesome (domain.com/i_am_awesome) without actually navigating to any page:

history.pushState({myName: "Mehul"}, "This is title of page", "/i_am_awesome");
history.pushState({page2: "Packt"}, "This is page2", "/page2_packt"); // <-- state is currently here

The History API doesn't care whether the page actually exists on the server or not. It'll just replace the URL as it is instructed. The popstate event, when triggered by the browser's back/forward buttons, will fire the function below, and we can program it like this:

window.onpopstate = e => {
  // when this is called, state is already updated.
  // e.state is the new state. It is null if it is the root state.
  if(e.state !== null) {
    console.log(e.state);
  } else {
    console.log("Root state");
  }
}

To try this, register the onpopstate handler first, then run the two lines of history.pushState from earlier. Then press your browser's back button. You should see something like {myName: "Mehul"}, which is the state associated with the previous entry. Press the back button one more time and you'll see the message Root state. pushState does not fire the popstate event. Only the browser's back/forward buttons do. Pushing on the history stack - history.replaceState() The history.replaceState() method is exactly like history.pushState(); the only difference is that it replaces the current entry with a new one. If you use history.pushState() and press the back button, you'll be directed to the page you came from. However, when you use history.replaceState() and you press the back button, you are not directed to the page you came from, because it has been replaced with the new entry on the stack. Here's an example of working with the replaceState method:

history.replaceState({myName: "Mehul"}, "This is title of page", "/i_am_awesome");

This replaces (instead of pushing) the current state with the new state.
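To see how pushState and popstate cooperate in practice, here is a minimal sketch of a toy client-side navigation helper; the route names and the render function are hypothetical:

function render(route) {
  // pretend to re-render the view for this route
  console.log('Rendering', route);
}

function navigate(route) {
  history.pushState({ route: route }, route, '/' + route);
  render(route);
}

window.onpopstate = e => {
  // e.state is null for the root entry, which we did not push ourselves
  render(e.state !== null ? e.state.route : 'home');
};

navigate('about');   // URL becomes /about without any server round trip
navigate('contact'); // URL becomes /contact; the back button now returns to /about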
Although using the History API directly in your code may not be beneficial to you right now, many frameworks and libraries such as React, under the hood, use the History API to create a seamless, reload-less, smooth experience for the end user. If you found this article useful, do check out the book Learn ECMAScript, Second Edition to learn the ECMAScript standards for designing quality web applications. What's new in ECMAScript 2018 (ES9)? 8 recipes to master Promises in ECMAScript 2018 Build a foodie bot with JavaScript

How to use Bootstrap grid system for responsive website design?
Savia Lobo
18 May 2018
Bootstrap Origins In 2011, Bootstrap was created by two Twitter employees (Mark Otto and Jacob Thornton) to address the issue of fragmentation of internal tools/platforms. Bootstrap aimed to provide consistency among different web applications that were internally developed at Twitter to reduce redundancy and increase adaptability and reusability. As digital creators, we should always aim to make our applications adaptable and reusable. This will help keep coherency between applications and speed up processes, as we won't need to create basic foundations over and over again. In today's tutorial, you will learn what Bootstrap is, how it relates to Responsive Web Design, and its importance to the web industry. When Twitter Blueprint was born, it provided a way to document and share common design patterns/assets within Twitter. This alone is an amazing feature that would make Bootstrap an extremely useful framework. With this, more internal developers began contributing towards the Bootstrap project as part of Hackathon week, and the project just exploded. Not long after, it was renamed "Bootstrap" as we know and love it today, and was released as an open source project to the community. A core team led by Mark and Jacob, along with a passionate and growing community of developers, helped to accelerate the growth of Bootstrap. In early 2012, after a lot of contributions from the core team and the community, Bootstrap 2 was born. It had come a long way from being a framework for providing internal consistency among Twitter tools. It was now a responsive framework using a 12-column grid system. It also provided inbuilt support for Glyphicons and a plethora of other new components. In 2013, Bootstrap 3 was released with a mobile-first approach to design and a fully redesigned set of components using the immensely popular flat design. This is the version many websites use today, and it is very suitable for most developers. Bootstrap 4 is the latest stable release. This article is an excerpt taken from the book 'Responsive Web Design by Example', written by Frahaan Hussain. Why use Bootstrap? You probably have a reasonable idea of why you would use Bootstrap for developing websites after reading its history, but there is more to it. Simply put, it provides the following: A responsive grid that follows modern design philosophies. Cross-browser compatibility, using Normalize.css to ensure elements render consistently across all browsers (which isn't a very easy task). You might be wondering why it's difficult. Simply put, there are several different browsers, each with a plethora of versions, which all render content differently. I've seen some browsers put a border around an image by default, whereas some browsers don't. This type of inconsistency will prove to be very bad for user experience. A plethora of polished UI components that let us, as developers, bring our creativity to life much more easily. These components usually allow a team to increase their development velocity since they start from a solid, tried and tested foundation. They not only provide good design, but they are usually implemented using best practices in terms of performance and accessibility. A very compact size with only a small footprint. Really fast to develop with, it doesn't get in the way like many other frameworks, but allows your creativity to shine through. Extremely easy to start using Bootstrap in your website. Bundles common JavaScript plugins such as jQuery. Excellent documentation.
Customizable, allowing you to remove any unnecessary features. An amazing community that is always ready, 24/7, to help. It's pretty clear now that Bootstrap is an amazing framework and that it will help provide consistency among our projects and aid cross-browser responsive design. But why use Bootstrap over other frameworks? There are endless responsive frameworks like Bootstrap out there, such as Foundation, W3.CSS, and Skeleton, to mention a few. Bootstrap, however, was one of the first responsive frameworks and is by far the most developed, with an ever-growing community. It has documentation online, both official and unofficial, and other frameworks aren't able to boast about their resources as much as Bootstrap can. Constantly updated, it is the right choice for any website developer. Also, most JavaScript frameworks, such as Angular and React, have bindings to Bootstrap components that will reduce the amount of code and time spent binding it with another framework. It can also be used with tools such as SASS to customize the components provided further. Bootstrap's grid system First, let's cover what a grid system is in general, regardless of the framework you choose to develop your amazing website on top of. Without using a framework, CSS would be used to implement the grid. However, a framework like Bootstrap handles all of the CSS side and provides us with easy-to-use classes. A responsive grid system is composed of two main elements:

Columns: These are the horizontal containers for storing content on a single row
Rows: These are top-level containers for storing columns

Your website will have at least one row, but it can have more. Each row can contain containers that span a set number of columns. For example, if the grid system had 100 columns, then a container that spans 50 would be half the width of the browser and/or parent element. Basics of Bootstrap Bootstrap's grid system consists of 12 columns that can be used to display content. Bootstrap also uses containers (methods for storing the website's content), rows, and columns to aid in the layout and alignment of the web page's content. All of these employ HTML classes for usage and will be explained very shortly. The purpose of these is as follows: Containers are used to group snippets of the website's content, and they, in turn, allow manipulation without disrupting the internal content's flow. There are two different types of containers:

.container: Used for a fixed width, which is set by Bootstrap
.container-fluid: Used for full width, to span the entire browser

Rows are used to horizontally group columns, which aids in lining up the site's content properly:

.row: There is only one type of row

Columns, mentioned previously, are a way of setting how wide content should be.
The following are the classes used for columns: .col-xs: Designed to display the content only on extra-small screens Max container width—none Triggered when the browser width is below 576px .col-sm: Designed to display the content only on small screens Max container width—540px Triggered when the browser width is above or equal to 576px and below 768px .col-md: Designed to display the content only on medium screens Max container width—720px Triggered when the browser width is above or equal to 768 and below 992px .col-lg: Designed to display the content only on large screens Max container width—960px Triggered when the browser width is above or equal to 992px and below 1200px .col-xl: Designed to display the content only on extra-large screens Max container width—1140px Triggered when the browser width is above or equal to 1200px .col: Designed to be triggered on all screen sizes To set a column's width, we simply append an integer ranging from 1 to 12 at the end of the class, like so: .col-6: Spans six columns on all screen sizes .col-md-6: Spans six columns only on extra-small screen sizes Later in this chapter, we will run through some examples of how to use these features and how they work together. Usage and examples To use the aforementioned features, the structure is as follows: div with container class div with row class div with column class Content div with column class Content div with column class Content div with column class Content div with row class div with column class Content div with column class Content div with column class Content div with column class Content div with column class Content div with column class Content The following examples may have some CSS styling applied; this does not affect their usage. Equal width columns example We will start off with a simple example that consists of one row and three equal columns on all screen sizes. The following code produces the aforementioned result: You may be scratching your head in regards to the column classes, as they have no numbers appended. This is an amazing feature that will come in useful very often. It allows us, as web developers, to add columns easily, without having to update the numbers, if the width of the columns is equal. In this example, there are three columns, which means the three divs equally span their thirds of the row. Multi-row, equal-width columns example Now let's extend the previous example to multiple rows: The following code produces the aforementioned result: As you can see, by adding a new row, the columns automatically go to the next row. This is extremely useful for grouping content together. Multi-row, equal-width columns without multiple rows example The title of this example may seem confusing, but you need to read it correctly. We will now cover creating multiple rows using only a single row class. This can be achieved with the help of a display utility class called w-100. The following code produces the aforementioned result: The example shows multiple row divs are not required for multiple rows. But the result isn't exactly identical, as there is no gap between the rows. This is useful for separating content that is still similar. For example, on a social network, it is common to have posts, and each post will contain information such as its date, title, description, and so on. Each post could be its own row, but within the post, the individual pieces of information could be separated using this class. 
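To make these patterns concrete, a minimal sketch of an equal-width grid with a w-100 break might look like the following; the placeholder text is illustrative:

<div class="container">
  <div class="row">
    <div class="col">Post title</div>
    <div class="col">Date</div>
    <div class="col">Author</div>
    <div class="w-100"></div>
    <div class="col">The description now starts on its own line</div>
  </div>
</div>

The three columns before the w-100 div share the first line equally, and the column after it drops to a new line inside the same row.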
Differently sized columns

Up until now, we have only created rows with equal-width columns. These are useful, but not as useful as being able to set individual sizes. As mentioned in the Bootstrap's grid system section, we can easily change a column's width by appending a number ranging from 1 to 12 to the end of the col class. Setting the explicit width of a column is very easy, but on its own it applies that width to all screen sizes. You may want it to apply only on certain screen sizes; the next section covers this.

Differently sized columns with screen size restrictions

Let's take the previous example and expand it so the sizes change responsively on differently sized screens: on extra-large screens the columns have explicit widths, while on all other screen sizes they appear as equal-width columns. Now we are beginning to use breakpoints, which provide a way of creating multiple layouts with minimal extra code to make full use of the available real estate.

Mixing and matching

We aren't restricted to choosing only one breakpoint; we are able to set breakpoints for all the available screen sizes, producing a different layout at each of the extra-small, small, medium, large, and extra-large widths. It isn't necessary for all divs to have the same breakpoints, or to have breakpoints at all.

Vertical alignment

The previous examples provide functionality for most use cases, but sometimes the need arises to align objects vertically. This could technically be done with empty divs, but that wouldn't be a very elegant solution. Instead, there are alignment classes to help with this: a row's columns can be vertically aligned in one of three positions (top, center, or bottom). We aren't restricted to aligning rows; we can just as easily align individual columns relative to each other.

Horizontal alignment

Just as we vertically aligned content in the previous section, it is equally easy to align content horizontally, positioning a row's columns at the start, center, or end of the row.

Column offsetting

The need may arise to position content with a slight offset. If the content isn't centered or at the start or end, this can become problematic, but using column offsetting we can overcome this issue. Simply add an offset class specifying the screen size to target and the number of grid columns the content should be offset by. A sketch covering sizing, alignment, and offsetting follows this section.

Grid wrap-up

The examples covered so far will suffice for most websites. There are more techniques for manipulating the grid, which can be found on Bootstrap's website. If you tried any of the examples, you may have noticed classes cascading from smaller screen-size classes to larger screen-size classes. This occurs when there are no explicit classes set for a certain screen size.

Bootstrap components

There is a plethora of amazing components provided with Bootstrap, saving you the time it would take to create them from scratch. There are components for dropdowns, buttons, images, and so much more.
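As with the earlier examples, the original snippets aren't reproduced here; this is a minimal sketch of the techniques described above, assuming Bootstrap 4's grid and flex utility classes:

```html
<!-- Explicit widths on all screen sizes: 6 + 3 + 3 of the 12 grid columns. -->
<div class="row">
  <div class="col-6">Half width</div>
  <div class="col-3">Quarter width</div>
  <div class="col-3">Quarter width</div>
</div>

<!-- Widths restricted to extra-large screens; equal widths everywhere else. -->
<div class="row">
  <div class="col col-xl-6">...</div>
  <div class="col col-xl-3">...</div>
  <div class="col col-xl-3">...</div>
</div>

<!-- Mixing and matching breakpoints on a single div. -->
<div class="row">
  <div class="col-12 col-sm-6 col-md-4 col-lg-3 col-xl-2">...</div>
</div>

<!-- Vertical alignment: a whole row, then a single column within it. -->
<div class="row align-items-center" style="height: 10rem;">
  <div class="col">Vertically centered</div>
  <div class="col align-self-end">Aligned to the bottom</div>
</div>

<!-- Horizontal alignment of columns within a row. -->
<div class="row justify-content-center">
  <div class="col-4">Horizontally centered</div>
</div>

<!-- Offsetting: pushed two grid columns to the right from medium screens up. -->
<div class="row">
  <div class="col-4 offset-md-2">Offset content</div>
</div>
```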
Their usage is very similar to that of the grid system: the same HTML elements we know and love are used with CSS classes to modify and display Bootstrap constructs. I won't go over every component that Bootstrap offers, as that would require an encyclopedia in itself, and many of the commonly used ones will be covered in future chapters through example projects. I would, however, recommend taking a look at some of the components on Bootstrap's website.

If you have found this post useful, do check out the book 'Responsive Web Design by Example' to build engaging responsive websites using frameworks like Bootstrap and upgrade your skills as a web designer.

Get ready for Bootstrap v4.1; Web developers to strap up their boots
Web Development with React and Bootstrap
Bootstrap 4 Objects, Components, Flexbox, and Layout

The seven deadly sins of web design

Guest Contributor
13 Mar 2019
7 min read
Just 30 days before the debut of "Captain Marvel," the latest cinematic offering from the successful and prolific Marvel Studios, a delightful and nostalgia-filled website was unveiled to promote the movie. Since the story of "Captain Marvel" is set in the 1990s, the brilliant minds in the marketing department of Marvel Studios decided to design a website with the right look and feel, which in this case meant using FrontPage and hosting on Angelfire. The "Captain Marvel" promo website is filled with the typography, iconography, glitter, and crudely animated GIFs you would expect from a 1990s creation, including a guestbook, hidden easter eggs, flaming borders, a hit counter, and even headers made with Microsoft WordArt.

(Image courtesy of Marvel)

The site is delightful not just for the dead-on nostalgia trip it provides to visitors, but also because it is very well developed. This is a site with a lot to explore, and it is clearly evident that the website developers met client demands while at the same time thinking about users. This site may look and feel like it was made during the GeoCities era, but it does not make any of the following seven mistakes:

Sin #1: Non-responsiveness

In 2019, it is simply inconceivable to think of a web development firm that neglects to make a responsive site. Since 2016, internet traffic flowing through mobile devices has been higher than the traffic originating from desktops and laptops. Current rates are about 53 percent smartphones and tablets versus 47 percent desktops, laptops, kiosks, and smart TVs. Failure to develop responsive websites means potentially alienating more than 50 percent of prospective visitors. As for the "Captain Marvel" website, it is amazingly responsive, which is remarkable when you consider that internet users in the 1990s barely dreamed of the day they would be able to access the web from handheld devices (mobile phones were yet to be mass distributed back then).

Sin #2: Way too much jargon

(Image courtesy of the Botanical Linguist)

Not all website developers have a good sense of readability, and this often shows when a completed project leaves visitors struggling to comprehend its content. We're talking about jargon. There's a lot of it online, not only in the usual places like the privacy policy and terms of service sections, but sometimes in the content too. Regardless of how jargon creeps onto your website, it should be rooted out. The "Captain Marvel" website features legal notices written by The Walt Disney Company, and they are very reader-friendly, with minimal jargon. The best way to handle jargon is to avoid it as much as possible, unless the developer has a good reason to include it.

Sin #3: A noticeable lack of content

No content means no message, and this is the reason 46 percent of visitors who land on B2B websites end up leaving without further exploration or interaction. Quality content that is relevant to the intention of a website is crucial for establishing credibility, and this goes beyond B2B websites. In the case of "Captain Marvel," the amount of content is reduced to match the retro sensibility, but there are enough photos, film trailers, character bios, and games to keep visitors entertained. Modern website development firms that provide full-service solutions can either provide or advise clients on the content they need to get started. Furthermore, they can also offer lessons on how to operate content management systems.
Sin #4: Making essential information hard to find

There was a time when the "mystery meat navigation" issue of website development was thought to have been eradicated through the judicious application of recommended practices, but then mobile apps came around. Even technology giant Google fell victim to mystery meat navigation with its 2016 release of Material Design, which introduced bottom navigation bars intended to offer a clearer alternative to hamburger menus. Unless there is a clever purpose behind prompting visitors to click or tap on a button, link, or page element that does not explain the next steps, mystery meat navigation should be avoided, particularly when it comes to essential information. When the 1990s "Captain Marvel" page loads, visitors can click or tap on labeled links to get information about the film, enjoy multimedia content, play games, interact with the guestbook, or get tickets. There is a mysterious old woman who pops up every now and then from the edges of the screen, but the reason behind this mysterious element is explained in the information section.

Sin #5: Website loads too slowly

(Image courtesy of Horton Marketing Solutions)

There is an anachronism on the "Captain Marvel" website that users who actually used Netscape in the 1990s will notice: all pages load very fast. This is one retro aspect that Marvel Studios decided not to include on this site, and it makes perfect sense. For a fast-loading site, a web design rule of thumb is to simplify, and this responsibility lies squarely with the developer. It stands to reason that the more "stuff" you have on a page (images, forms, videos, widgets, shiny things), the longer it takes the server to send over the site files and the longer it takes the browser to render them. Here are a few design best practices to keep in mind:

1. Make the site light: get rid of non-essential elements, especially if they are bandwidth-sucking images or video.
2. Compress your pages: it's easy with Gzip.
3. Split long pages into several shorter ones.
4. Write clean code that doesn't rely on external sources.
5. Optimize images.

For more web design tips that help your site load in the sub-three-second range, as Google expects in 2019, check out our article on current design trends.

Once you have design issues under control, investigate your web host. They aren't all created equal. Cheap, entry-level shared packages are notoriously slow and unpredictable, especially as your traffic increases. But even beyond that, the reality is that some companies spend money buying better, faster servers and don't overload them with too many clients. Some do. Recent testing from the review site HostingCanada.org checked load times across the leading providers and found variances from a 'meh' 2,850 ms all the way down to a speedy 226 ms. With pricing amongst credible competitors roughly equal, web developers should know which hosts are the fastest and point clients in that direction.

Sin #6: Outdated information

Functional and accurate information will always triumph over form. The "Captain Marvel" website is garish to look at by 2019 standards, but all the information is current. The film's theater release date is clearly displayed, and should something happen that would require this date to change, you can be sure that Marvel Studios will fire up FrontPage to promptly make the adjustment.

Sin #7: No clear call to action

Every website should compel visitors to do something.
Even if the purpose is to provide information, the call to action, or CTA, should encourage visitors to remember it and return for updates. The CTA should be as clear as the navigation elements; otherwise, the purpose of the visit is lost. Creating enticements is acceptable, but the CTA message should be explained nonetheless. In the case of "Captain Marvel," visitors can click on the "Get Tickets" link to be taken to a Fandango.com page with geolocation redirection for their region.

The Bottom Line

In the end, the seven mistakes listed herein are easy to avoid. Whenever developers run into clients whose instructions may result in one of these mistakes, proper explanations should be given.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation, as well as an active GitHub contributor.

7 Web design trends and predictions for 2019
How to create a web designer resume that lands you a Job
Will Grant's 10 commandments for effective UX Design

How To Get Started with Redux in React Native

Emilio Rodriguez
04 Apr 2016
5 min read
In mobile development there is a need for architectural frameworks, but complex frameworks designed for web environments may end up damaging the development process or even the performance of our app. Because of this, some time ago I decided to introduce into all of my React Native projects the leanest framework I have ever worked with: Redux.

Redux is basically a state container for JavaScript apps. It is 100 percent library-agnostic, so you can use it with React, Backbone, or any other view library. Moreover, it is really small and has no dependencies, which makes it an awesome tool for React Native projects.

Step 1: Install Redux in your React Native project.

Redux can be added as an npm dependency to your project. Just navigate to your project's main folder and type:

npm install --save redux react-redux

At the time this article was written, React Native still depended on React Redux 3.1.0, since later versions depend on React 0.14, which is not 100 percent compatible with React Native. Because of this, you will need to force version 3.1.0 as the one your project depends on.

Step 2: Set up a Redux-friendly folder structure.

Of course, setting up the folder structure for your project is totally up to every developer, but you need to take into account that you will be maintaining a number of actions, reducers, and components. Besides, it's also useful to keep a separate folder for your API and utility functions so these won't mix with your app's core functionality. With this in mind, my preferred folder structure under the src folder in any React Native project is:

src/
  actions/
  api/
  components/
  reducers/
  utils/

Step 3: Create your first action.

In this article we will implement a simple login feature to illustrate how to integrate Redux into React Native. A good place to start this implementation is the action: a basic function called from the component whenever we want the whole state of the app to change (that is, changing from the logged-out state into the logged-in state). To keep this example as concise as possible, we won't be making any API calls to a backend; only the pure Redux integration will be explained. Our action creator is a simple function returning an object (the action itself) with a type attribute expressing what happened in the app. No business logic should be placed here; our action creators should be really plain and descriptive.

Step 4: Create your first reducer.

Reducers are the ones in charge of updating the state of the app. Unlike Flux, Redux has only one store for the whole app, but it will be conveniently namespaced automatically by Redux once the reducers have been applied. In our example, the user reducer needs to be aware of when the user is logged in. Because of that, it needs to import the LOGIN_SUCCESS constant we defined in our actions before and export a default function, which will be called by Redux every time an action occurs in the app. Redux will automatically pass in the current state of the app and the action that occurred. It's up to the reducer to decide whether it needs to modify the state based on the action.type. That's why almost every reducer will be a function containing a switch statement, which modifies and returns the state based on what action occurred. It's important to state that Redux works with object references to identify when the state is changed. Because of this, the state should be cloned before any modification.
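The original snippets for steps 3 and 4 aren't reproduced in this excerpt; here is a minimal sketch of what they describe. LOGIN_SUCCESS and the overall shape come from the article; the file names and state fields are assumptions:

```javascript
// actions/user.js — the login action creator (step 3).
export const LOGIN_SUCCESS = 'LOGIN_SUCCESS';

export function login() {
  // No business logic here: the action only describes what happened.
  return { type: LOGIN_SUCCESS };
}

// reducers/user.js — the user reducer (step 4).
import { LOGIN_SUCCESS } from '../actions/user';

const initialState = { loggedIn: false };

export default function userReducers(state = initialState, action) {
  switch (action.type) {
    case LOGIN_SUCCESS:
      // Clone the state before modifying it: Redux compares object
      // references to detect changes.
      return Object.assign({}, state, { loggedIn: true });
    default:
      return state;
  }
}
```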
It's also interesting to know that the action passed to the reducers can contain attributes other than type. For example, in a more complex login flow, the user's first name and last name could be added to the action by the action creator and used by the reducer to update the state of the app.

Step 5: Create your component.

This step is almost pure React Native coding. We need a component to trigger the action and to respond to the change of state in the app. In our case it will be a simple View containing a button that disappears when logged in. This is a normal React Native component except for some pieces of Redux boilerplate:

The three import lines at the top require everything we need from Redux.
mapStateToProps and mapDispatchToProps are two functions bound to the component with connect: this tells Redux that the component needs to be passed a piece of the state (everything under userReducers) and all the actions available in the app.

Just by doing this, we have access to the login action (as used in onLoginButtonPress) and to the state of the app (as used in the !this.props.user.loggedIn statement).

Step 6: Glue it all together from your index.ios.js.

For Redux to apply its magic, some initialization should be done in the main file of your React Native project (index.ios.js). This is pure boilerplate and only done once: Redux needs to inject a store holding the app state into the app. To do so, it requires a Provider wrapping the whole app. This store is basically a combination of reducers. For this article we only need one reducer, but a full app will include many others, and each of them should be passed into the combineReducers function to be taken into account by Redux whenever an action is triggered. A consolidated sketch of these last two steps follows the author bio below.

About the Author

Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped him understand how deep and powerful this framework is.
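As referenced above, here is a minimal, hypothetical sketch of the component (step 5) and the store wiring (step 6). connect, Provider, combineReducers, userReducers, login, and onLoginButtonPress are named in the article; every other identifier is an assumption:

```javascript
// components/Login.js — the component described in step 5.
import React, { Component } from 'react';
import { View, Button } from 'react-native';
import { connect } from 'react-redux';
import { login } from '../actions/user';

class Login extends Component {
  onLoginButtonPress = () => {
    this.props.login(); // dispatches LOGIN_SUCCESS
  };

  render() {
    // The button disappears once the user is logged in.
    return (
      <View>
        {!this.props.user.loggedIn && (
          <Button title="Log in" onPress={this.onLoginButtonPress} />
        )}
      </View>
    );
  }
}

// Pass this component the piece of state it needs...
const mapStateToProps = (state) => ({ user: state.userReducers });
// ...and make the login action available as a prop.
const mapDispatchToProps = { login };

export default connect(mapStateToProps, mapDispatchToProps)(Login);

// index.ios.js — the one-time wiring described in step 6.
import { AppRegistry } from 'react-native';
import { createStore, combineReducers } from 'redux';
import { Provider } from 'react-redux';
import userReducers from './src/reducers/user';
import Login from './src/components/Login';

const store = createStore(combineReducers({ userReducers }));

const App = () => (
  <Provider store={store}>
    <Login />
  </Provider>
);

AppRegistry.registerComponent('ReduxExample', () => App);
```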

5 developers explain why they use Visual Studio Code [Sponsored by Microsoft]

Richard Gall
22 May 2019
7 min read
Visual Studio Code has quickly become one of the most popular text editors on the planet. While debate will continue to rage about the relative merits of every text editor, it's nevertheless true that Visual Studio Code is unique in that it is incredibly customizable: it can be as lightweight as a text editor or as feature-rich as an IDE. This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free from Microsoft here. Try Visual Studio Code yourself. Learn more here.

This means the range of developers using Visual Studio Code is incredibly diverse. Each one faces a unique set of challenges alongside their personal preferences. I spoke to a few of them about why they use Visual Studio Code and how they make it work for them.

"Visual Studio Code is streamlined and flexible"

Ben Sibley is the Founder of Complete Themes. He likes Visual Studio Code because it is relatively lightweight while also offering considerable flexibility.

"I love how streamlined and flexible Visual Studio Code is. Personally, I don't need a ton of functionality from my IDE, so I appreciate how simple the default configuration is. There's a very concise set of features built in, like the Git integration.

"I was using PHPStorm previously, and while it was really feature-rich, it was also overwhelming at times. VSC is faster, lighter, and with the extension market you can pick and choose which additional tools you need. And it's a popular enough editor that you can usually find a reliable and well-reviewed extension."

Read next: How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

"Visual Studio Code is the best in terms of extension ecosystem, language support and configuration"

Libby Horacek is a developer at Position Development. She has worked with several different code editors but struggled to find one that allowed her to effectively move between languages. For Libby, Visual Studio Code offered the right level of flexibility. She also explained how the team at Position Development have used VSC's Live Share feature, which allows developers to directly share and collaborate on code inside their editor.

"I currently use Visual Studio Code. I've tried a LOT of different editors. I'm a polyglot developer, so I need an editor that isn't just for one language. RubyMine is great for Ruby, and PyCharm is good for Python, but I don't want to switch editors every time I switch languages (sometimes multiple times a day). My main constraint is Haskell language support — there are plugins for most IDEs now, but some are better than others.

"For a long time I used Emacs just because I was able to steal a great configuration setup for it from a coworker, but a few months back it stopped working due to updates and I didn't want to acquire the Emacs expertise to fix it. So I tried IntelliJ, Visual Studio, Atom, Sublime Text, even Vim… but in the end I liked Visual Studio the best in terms of extension ecosystem, language support, and ease of use and configuration.

"My team also uses Visual Studio's Live Share for pairing. I haven't tried it personally but it looks like a great option for remote pairing. The only thing my coworkers have cautioned is that they encountered a bug with the "undo" functionality that wiped out most of a file they were working on.
Maybe that bug has been fixed by now, but as always, commit early and commit often!"

"As a JavaScript dev shop, we love that VSC is written in JavaScript"

Cody Swann is the CEO of Gunner Technology, a software development company that builds using JavaScript on AWS for both the public and private sector.

"All our developers here [at Gunner Technology] use VSC. We switched from Sublime about two years ago because Sublime started to feel slow and neglected. Before that, we used TextMate and abandoned it for the same reasons.

"As a JavaScript dev shop, we love that VSC is written in JavaScript. It makes it easier for us to write in-house extensions and such. Additionally, we love that Microsoft releases monthly updates and keeps improving performance."

Read next: Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

"The Visual Studio Code team pay close attention to the problems developers face"

Ajeet Dhaliwal is a software developer at Tesults. He explains that he has used several different IDEs and editors but came to Visual Studio Code after spending some time using Node.js and React on Brackets.

"I have used Visual Studio Code almost exclusively for the last couple of years. In the years prior to making this switch, the nature of my development work meant that I was broadly limited to using specific IDEs such as Visual Studio and Xcode. Then in 2014 I started to get into Node.js and was looking for a code editor that would be more suitable. I tried out a few and ultimately settled on Brackets.

"I used Brackets for a while but wasn't always happy with it. The most annoying issue was the way text was rendered on my Mac.

"Over time I started doing React work too, and every time I revisited VSC the improvements were impressive. It seemed to me that the developers were closely paying attention to the problems developers face; they were creating features I had never even thought I would need, and the extensions added highly useful features for Node.js and React dev work. The font rendering was not an issue either, so it became an inevitable switch."

"I have to context switch regularly - I expect my brain to be the slowest element, not the IDE"

Kyle Balnave is Senior Developer and Squad Manager at High Speed Training. Despite working with numerous editors and IDEs, he likes Visual Studio Code because it allows him to move between different contexts incredibly quickly. Put simply, it allows him to work faster than other IDEs do.

"I've used several different editors over the years. They generally fall under two categories:

Monolithic (I can do anything you'll ever want to do out of the box).
Modular (I do the basics but allow extensions to be added to do most of the rest).

"The former are IDEs like NetBeans, IntelliJ, and Visual Studio. In my experience they are slow to load and need a more powerful development machine to keep responsive. They have a huge range of functionality, but in everyday development I just need an intelligent code editor.

"The latter are IDEs like Eclipse, Visual Studio Code, and Atom. They load quickly, respond fast, and have a wide range of extensions that allow me to develop what I need. They sometimes fall short in their functionality, but I generally find this to be infrequent.

"Why do I use VSCode? Because it doesn't slow me down when I code. I have to context switch regularly, so I expect my own brain to be the slowest element, not the IDE.
Learn how to develop with Node.js on Azure by downloading Learning Node.js with Azure for free, courtesy of Microsoft.

Design a RESTful web API with Java [Tutorial]

Pavan Ramchandani
12 Jun 2018
12 min read
In today's tutorial, you will learn to design REST services. We will break down the key design considerations you need to make when building RESTful web APIs. In particular, we will focus on the core elements of the REST architecture style:

Resources and their identifiers
Interaction semantics for RESTful APIs (HTTP methods)
Representation of resources
Hypermedia controls

This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition. This book will help you build robust, scalable and secure RESTful web services, making use of the JAX-RS and Jersey framework extensions. Let's start by discussing the guidelines for identifying resources in a problem domain.

Richardson Maturity Model: Leonard Richardson developed a model to help with assessing a service's compliance with the REST architecture style. The model defines four levels of maturity, starting from level 0, with level 3 as the highest maturity level. The maturity levels are decided by considering the aforementioned principal elements of the REST architecture.

Identifying resources in the problem domain

The basic steps that you need to take while building a RESTful web API for a specific problem domain are:

Identify all possible objects in the problem domain. This can be done by identifying all the key nouns in the problem domain. For example, if you are building an application to manage employees in a department, the obvious nouns are department and employee.
The next step is to identify the objects that can be manipulated using CRUD operations. These objects can be classified as resources. Note that you should be careful while choosing resources. Based on the usage pattern, you can classify resources as top-level and nested resources (which are the children of a top-level resource). Also, there is no need to expose all resources for use by the client; expose only those resources that are required for implementing the business use case.

Transforming operations to HTTP methods

Once you have identified all resources, as the next step, you may want to map the operations defined on the resources to the appropriate HTTP methods. The most commonly used HTTP methods (verbs) in RESTful web APIs are POST, GET, PUT, and DELETE. Note that there is no one-to-one mapping between the CRUD operations defined on the resources and the HTTP methods. Understanding the concepts of idempotent and safe operations will help you use the correct HTTP method.

An operation is called idempotent if multiple identical requests produce the same result. Similarly, an idempotent RESTful web API will always produce the same result on the server irrespective of how many times the request is executed with the same parameters; however, the response may change between requests. An operation is called safe if it does not modify the state of the resources. Check out the following table:

Method   Idempotent  Safe
GET      Yes         Yes
OPTIONS  Yes         Yes
HEAD     Yes         Yes
POST     No          No
PATCH    No          No
PUT      Yes         No
DELETE   Yes         No

Here are some tips for identifying the most appropriate HTTP method for the operations that you want to perform on the resources:

GET: You can use this method for reading a representation of a resource from the server. According to the HTTP specification, GET is a safe operation, which means that it is only intended for retrieving data, not for making any state changes. As this is an idempotent operation, multiple identical GET requests will behave in the same manner.
A GET method can return the 200 OK HTTP response code on the successful retrieval of resources. If there is an error, it can return an appropriate status code, such as 404 NOT FOUND or 400 BAD REQUEST.

DELETE: You can use this method for deleting resources. On successful deletion, DELETE can return the 200 OK status code. According to the HTTP specification, DELETE is an idempotent operation. Note that when you call DELETE on the same resource for the second time, the server may return the 404 NOT FOUND status code, since it was already deleted, which is different from the response to the first request. The change in response for the second call is perfectly valid here. However, multiple DELETE calls on the same resource produce the same result (state) on the server.

PUT: According to the HTTP specification, this method is idempotent. When a client invokes the PUT method on a resource, the resource available at the given URL is completely replaced with the resource representation sent by the client. When a client uses the PUT request on a resource, it has to send all the available properties of the resource to the server, not just the partial data that was modified within the request. You can use PUT to create or update a resource if all attributes of the resource are available with the client. This makes sure that the server state does not change with multiple PUT requests. On the other hand, if you send partial resource content in a PUT request multiple times, there is a chance that some other clients might have updated some attributes that are not present in your request. In such cases, the server cannot guarantee that the state of the resource on the server will remain identical when the same request is repeated, which breaks the idempotency rule.

POST: This method is not idempotent. You can use the POST method to create or update resources when you do not know all the available attributes of a resource. For example, consider a scenario where the identifier field for an entity resource is generated at the server when the entity is persisted in the data store. You can use the POST method for creating such resources, as the client does not have an identifier attribute while issuing the request. Here is a simplified example that illustrates this scenario. In this example, the employeeID attribute is generated on the server:

POST hrapp/api/employees HTTP/1.1
Host: packtpub.com

{employee entity resource in JSON}

On the successful creation of a resource, it is recommended to return the 201 Created status and the location of the newly created resource. This allows the client to access the newly created resource later (with server-generated attributes). The sample response for the preceding example will look as follows:

201 Created
Location: hrapp/api/employees/1001

Best practice: Use caching only for idempotent and safe HTTP methods, as others have an impact on the state of the resources.

Understanding the difference between PUT and POST

A common question that you will encounter while designing a RESTful web API is: when should you use the PUT and POST methods? Here's the simplified answer:

You can use PUT for creating or updating a resource when the client has the full resource content available. In this case, all values are with the client and the server does not generate a value for any of the fields.
You will use POST for creating or updating a resource if the client has only partial resource content available.
Note that you lose idempotency support with POST. An idempotent method means that you can call the same API multiple times without changing the state. This is not true for the POST method; each POST method call may result in a server state change. PUT is idempotent, and POST is not. If you have strong customer demands, you can support both methods and let the client choose the suitable one on the basis of the use case.

Naming RESTful web resources

Resources are a fundamental concept in RESTful web services. A resource represents an entity that is accessible via the URI that you provide. The URI, which refers to a resource (and is known as a RESTful web API), should have a logically meaningful name. Having meaningful names improves the intuitiveness of the APIs and, thereby, their usability. Some of the widely followed recommendations for naming resources are shown here:

It is recommended that you use nouns to name both resources and the path segments that will appear in the resource URI. Avoid using verbs for naming resources and resource path segments. Using nouns to name a resource improves the readability of the corresponding RESTful web API, particularly when you are planning to release the API over the internet for the general public.

You should always use plural nouns to refer to a collection of resources. Make sure that you are not mixing up singular and plural nouns while forming the REST URIs. For instance, to get all departments, the resource URI must look like /departments. If you want to read a specific department from the collection, the URI becomes /departments/{id}. Following the convention, the URI for reading the details of the HR department identified by id=10 should look like /departments/10.

The following table illustrates how you can map the HTTP methods (verbs) to the operations defined for the departments resource:

Resource         GET                            POST                     PUT                         DELETE
/departments     Get all departments            Create a new department  Bulk update on departments  Delete all departments
/departments/10  Get the HR department (id=10)  Not allowed              Update the HR department    Delete the HR department

While naming resources, use specific names over generic names. For instance, to read all programmers' details of a software firm, it is preferable to have a resource URI of the form /programmers (which tells you about the type of resource) over the much more generic form /employees. This improves the intuitiveness of the APIs by clearly communicating the type of resources it deals with.

Keep the resource names that appear in the URI in lowercase to improve the readability of the resulting resource URI. Resource names may include hyphens; avoid using underscores and other punctuation.

If the entity resource is represented in the JSON format, field names used in the resource must conform to the following guidelines:

Use meaningful names for the properties.
Follow the camel case naming convention: the first letter of the name is in lowercase, for example, departmentName.
The first character must be a letter, an underscore (_), or a dollar sign ($), and the subsequent characters can be letters, digits, underscores, and/or dollar signs.
Avoid using the reserved JavaScript keywords.

If a resource is related to another resource(s), use a subresource to refer to the child resource. You can use the path parameter in the URI to connect a subresource to its base resource. For instance, the resource URI path to get all employees belonging to the HR department (with id=10) will look like /departments/10/employees.
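Since this excerpt comes from a book built on JAX-RS and Jersey, here is a minimal, hypothetical JAX-RS sketch of the verb-to-operation mapping shown in the preceding table. The class name, the in-memory map standing in for persistence, and the string-based JSON handling are illustrative assumptions, not code from the book:

```java
import java.net.URI;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("departments")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class DepartmentResource {

    // In-memory store standing in for a real persistence layer.
    private static final Map<Integer, String> DEPARTMENTS = new ConcurrentHashMap<>();
    private static final AtomicInteger ID_SEQUENCE = new AtomicInteger();

    @GET
    public Map<Integer, String> findAll() {          // safe and idempotent
        return DEPARTMENTS;
    }

    @POST
    public Response create(String departmentJson) {  // not idempotent: the server generates the id
        int id = ID_SEQUENCE.incrementAndGet();
        DEPARTMENTS.put(id, departmentJson);
        // Return 201 Created with the location of the new resource.
        return Response.created(URI.create("departments/" + id)).build();
    }

    @GET
    @Path("{id}")
    public String find(@PathParam("id") int id) {
        return DEPARTMENTS.get(id);
    }

    @PUT
    @Path("{id}")
    public String update(@PathParam("id") int id, String departmentJson) {
        DEPARTMENTS.put(id, departmentJson);         // idempotent: full replacement
        return departmentJson;
    }

    @DELETE
    @Path("{id}")
    public void delete(@PathParam("id") int id) {
        DEPARTMENTS.remove(id);
    }
}
```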
To get the details of the employee with id=200 in the HR department, you can use the following URI: /departments/10/employees/200. The resource path URI may contain plural nouns representing a collection of resources, followed by a singular resource identifier to return a specific resource item from the collection. This pattern can repeat in the URI, allowing you to drill down into a collection for reading a specific item. For instance, the following URI represents an employee resource identified by id=200 within the HR department: /departments/hr/employees/200.

Although the HTTP protocol does not place any limit on the length of the resource URI, it is recommended not to exceed 2,000 characters because of the restriction set by many popular browsers.

Best practice: Avoid using actions or verbs in the URI, as it refers to a resource.

Using HATEOAS in response representation

Hypermedia as the Engine of Application State (HATEOAS) refers to the use of hypermedia links in resource representations. This architectural style lets clients dynamically navigate to the desired resource by traversing the hypermedia links present in the response body. There is no universally accepted single format for representing links between two resources in JSON.

Hypertext Application Language

The Hypertext Application Language (HAL) is a promising proposal that sets the conventions for expressing hypermedia controls (such as links) with JSON or XML. Currently, this proposal is in the draft stage. It mainly describes two concepts for linking resources:

Embedded resources: This concept provides a way to embed another resource within the current one. In the JSON format, you will use the _embedded attribute to indicate the embedded resource.
Links: This concept provides links to associated resources. In the JSON format, you will use the _links attribute to link resources.

Here is the link to this proposal: http://tools.ietf.org/html/draft-kelly-json-hal-06. It defines the following properties for each resource link:

href: This property indicates the URI to the target resource representation
templated: This property is true if the URI value for href contains a PATH variable (template) inside it
title: This property is used for labeling the URI and for documentation purposes
hreflang: This property specifies the language of the target resource
name: This property is used for uniquely identifying a link

The following example demonstrates how you can use the HAL format for describing the department resource containing hyperlinks to the associated employee resources. This example uses JSON HAL for representing resources, which is represented using the application/hal+json media type:

GET /departments/10 HTTP/1.1
Host: packtpub.com
Accept: application/hal+json

HTTP/1.1 200 OK
Content-Type: application/hal+json

{
  "_links": {
    "self": { "href": "/departments/10" },
    "employees": { "href": "/departments/10/employees" },
    "employee": { "href": "/employees/{id}", "templated": true }
  },
  "_embedded": {
    "manager": {
      "_links": { "self": { "href": "/employees/1700" } },
      "firstName": "Chinmay",
      "lastName": "Jobinesh",
      "employeeId": "1700"
    }
  },
  "departmentId": 10,
  "departmentName": "Administration"
}

To summarize, we discussed the details of designing RESTful web APIs, including identifying the resources, using HTTP methods, and naming the web resources. Additionally, we were introduced to the Hypertext Application Language.
Read More:
Getting started with Django RESTful Web Services
Testing RESTful Web Services with Postman
Documenting RESTful Java web services using Swagger