
Structure of Applications

Packt
21 Apr 2015
21 min read
In this article by Colin Ramsay, author of the book Ext JS Application Development Blueprints, we will learn that one of the great things about imposing structure is that it automatically gives predictability: a kind of filing system in which we immediately know where a particular piece of code should live. The same applies to the files that make up your application. Certainly, we could put all of our files in the root of the website, mixing CSS, JavaScript, configuration, and HTML files in a long alphabetical list, but we'd be losing out on a number of opportunities to keep our application organized. In this article, we'll look at:

- Ideas to structure your code
- The layout of a typical Ext JS application
- Use of singletons, mixins, and inheritance
- Why global state is a bad thing

Structuring your application is like keeping your house in order. You'll know where to find your car keys, and you'll be prepared for unexpected guests.

Ideas for structure

One of the ways in which code is structured in large applications involves namespacing: the practice of dividing code up by naming identifiers. One namespace could contain everything relating to Ajax, whereas another could contain classes related to mathematics. Programming languages such as C# and Java even incorporate namespaces as a first-class language construct to help with code organization. Separating code into directories based on namespace becomes a sensible extension of this:

From left: Java's Platform API, Ext JS 5, and the .NET Framework

A namespace identifier is made up of one or more name tokens, such as "Java" or "Ext", "Ajax" or "Math", separated by a symbol, in most cases a full stop/period. The top-level name will be an overarching identifier for the whole package (such as "Ext") and will become less specific as names are added and you drill down into the code base.
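To make this concrete, here is a minimal sketch of a class declared inside a nested namespace; the CultivateCode.util.Logger name and its log method are invented for illustration. By convention, the class loader resolves the dotted name to a file path, so a class named like this would be expected to live at app/util/Logger.js:

```javascript
// The class name encodes its namespace; by convention the loader maps
// CultivateCode.util.Logger to the file app/util/Logger.js.
Ext.define('CultivateCode.util.Logger', {
    prefix: '[CultivateCode] ',

    // Write a tagged message to the console
    log: function(message) {
        console.log(this.prefix + message);
    }
});
```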
The Ext JS source code makes heavy use of this practice to partition UI components, utility classes, and all the other parts of the framework, so let's look at a real example. The GridPanel component is perhaps one of the most complicated in the framework; a large collection of classes contribute to features such as columns, cell editing, selection, and grouping. These work together to create a highly powerful UI widget. Take a look at the following files that make up GridPanel:

The Ext JS grid component's directory structure

The grid directory reflects the Ext.grid namespace. Likewise, the subdirectories are child namespaces, with the deepest namespace being Ext.grid.filters.filter. The main Panel and View classes, Ext.grid.Panel and Ext.grid.View respectively, are there in the main directory. Then, additional pieces of functionality, for example, the Column class and the various column subclasses, are further grouped together in their own subdirectories. We can also see a plugins directory, which contains a number of grid-specific plugins. Ext JS actually already has an Ext.plugins namespace. It contains classes to support the plugin infrastructure as well as plugins that are generic enough to apply across the entire framework. In the event of uncertainty regarding the best place in the code base for a plugin, we might mistakenly have put it in Ext.plugins. Instead, Ext JS follows best practice and creates a new, more specific namespace underneath Ext.grid.

Going back to the root of the Ext JS framework, we can see that there are only a few files at the top level. In general, these will be classes that are either responsible for orchestrating other parts of the framework (such as EventManager or StoreManager) or classes that are widely reused across the framework (such as Action or Component). Any more specific functionality should be namespaced in a suitably specific way. As a rule of thumb, you can take your inspiration from the organization of the Ext JS framework, though as a framework rather than a full-blown application, it's lacking some of the structural aspects we'll talk about shortly.

Getting to know your application

When generating an Ext JS application using Sencha Cmd, we end up with a code base that adheres to the concept of namespacing in class names and in the directory structure, as shown here:

The structure created with Sencha Cmd's "generate app" feature

We should be familiar with all of this, as it was already covered when we discussed MVVM in Ext JS. Having said that, there are some parts of this that are worth examining further to see whether they're being used to the full.

/overrides

This is a handy one to help us fall into a positive and predictable pattern. There are some cases where you need to override Ext JS functionality on a global level. Maybe you want to change the implementation of a low-level class (such as Ext.data.proxy.Proxy) to provide custom batching behavior for your application. Sometimes, you might even find a bug in Ext JS itself and use an override to hotfix it until the next point release. The overrides directory provides a logical place to put these changes: just mirror the directory structure and namespacing of the code you're overriding. This also provides us with a helpful rule, that is, subclasses go in /app and overrides go in /overrides.

/.sencha

This contains configuration information and build files used by Sencha Cmd. In general, I'd say try to avoid fiddling around in here too much until you know Sencha Cmd inside out, because there's a chance you'll end up with nasty conflicts if you try to upgrade to a newer version of Sencha Cmd.

bootstrap.js, bootstrap.json, and bootstrap.css

The Ext JS class system has powerful dependency management through the requires feature, which gives us the means to create a build that contains only the code that's in use. The bootstrap files contain information about the minimum CSS and JavaScript needed to run your application, as provided by the dependency system.

/packages

In a similar way to how Ruby has RubyGems and Node.js has npm, Sencha Cmd has the concept of packages: a bundle that can be pulled into your application from a local or remote source. This allows you to reuse and publish bundles of functionality (including CSS, images, and other resources) to reduce copy and paste of code and share your work with the Sencha community. This directory is empty until you configure packages to be used in your app.

/resources and SASS

SASS is a technology that aids in the creation of complex CSS by promoting reuse and bringing powerful features (such as mixins and functions) to your style sheets. Ext JS uses SASS for its theme files and encourages you to use it as well.

index.html

We know that index.html is the root HTML page of our application. It can be customized as you see fit (although it's rare that you'll need to).
There's one caveat, and it's written in a comment in the file already:

```html
<!-- The line below must be kept intact for Sencha Cmd to build your application -->
<script id="microloader" type="text/javascript" src="bootstrap.js"></script>
```

We know what bootstrap.js refers to (loading up our application and starting to fulfill its dependencies according to the current build). So, heed the comment and leave this script tag, well, alone!

/build and build.xml

The /build directory contains build artifacts (the files created when the build process is run). If you run a production build, then you'll get a directory inside /build called production, and you should use only these files when deploying. The build.xml file allows you to avoid tweaking some of the files in /.sencha when you want to add some extra functionality to a build process. If you want to do something before, during, or after the build, this is the place to do it.

app.js

This is the main JavaScript entry point to your application. The comments in this file advise avoiding editing it in order to allow Sencha Cmd to upgrade it in the future. The Application.js file at /app/Application.js can be edited without fear of conflicts and will enable you to do the majority of things you might need to do.

app.json

This contains configuration options related to Sencha Cmd and to booting your application. When we refer to the subject of this article as a JavaScript application, we need to remember that it's just a website composed of HTML, CSS, and JavaScript as well. However, when dealing with a large application that needs to target different environments, it's incredibly useful to augment this simplicity with tools that assist in the development process. At first, it may seem that the default application template contains a lot of cruft, but these files are the key to supporting the tools that will help you craft a solid product.

Cultivating your code

As you build your application, there will come a point at which you create a new class and yet it doesn't logically fit into the directory structure Sencha Cmd created for you. Let's look at a few examples.

I'm a lumberjack – let's go log in

Many applications have a centralized SessionManager to take care of the currently logged-in user, perform authentication operations, and set up persistent storage for session credentials. There's only one SessionManager in an application. A truncated version might look like this:

```javascript
/**
 * @class CultivateCode.SessionManager
 * Description
 */
Ext.define('CultivateCode.SessionManager', {
    singleton: true,
    loggedIn: false, // session flag read by isLoggedIn

    login: function(username, password) {
        // login impl
    },

    logout: function() {
        // logout impl
    },

    isLoggedIn: function() {
        return this.loggedIn;
    }
});
```

We create a singleton class. This class doesn't have to be instantiated using the new keyword. As per its class name, CultivateCode.SessionManager, it's a top-level class, and so it goes in the top-level directory. In a more complicated application, there could be a dedicated Session class too and some other ancillary code, so maybe we'd create the following structure:

The directory structure for our session namespace
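Wherever its supporting classes end up, consuming code always addresses the singleton through its full class name. A quick usage sketch (the credentials and calling context are invented for illustration):

```javascript
// No Ext.create and no new keyword: the singleton instance is
// addressed directly by its namespaced class name.
if (!CultivateCode.SessionManager.isLoggedIn()) {
    CultivateCode.SessionManager.login('colin', 'secret');
}
```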
What about user interface elements? There's an informal practice in the Ext JS community that helps here. Say we want to create an extension that shows the coordinates of the currently selected cell (similar to cell references in Excel). In this case, we'd create an ux directory—user experience or user extensions—and then go with the naming conventions of the Ext JS framework:

```javascript
Ext.define('CultivateCode.ux.grid.plugins.CoordViewer', {
    extend: 'Ext.plugin.Abstract',
    alias: 'plugin.coordviewer',

    mixins: {
        observable: 'Ext.util.Observable'
    },

    init: function(grid) {
        this.mon(grid.view, 'cellclick', this.onCellClick, this);
    },

    onCellClick: function(view, cell, colIdx, record, row, rowIdx, e) {
        var coords = Ext.String.format('Cell is at {0}, {1}', colIdx, rowIdx);
        Ext.Msg.alert('Coordinates', coords);
    }
});
```

It triggers when you click on a grid cell, and the corresponding directory structure follows directly from the namespace.
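Thanks to the alias, a grid can pull the plugin in via configuration alone. A small usage sketch (the store and column shown are invented for illustration):

```javascript
Ext.create('Ext.grid.Panel', {
    title: 'Notes',
    store: noteStore, // assume an existing store of Note records
    columns: [
        { text: 'Title', dataIndex: 'title', flex: 1 }
    ],
    // Resolved through the 'plugin.coordviewer' alias defined above
    plugins: ['coordviewer'],
    renderTo: Ext.getBody()
});
```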
You can probably see a pattern emerging already. We've mentioned before that organizing an application is often about setting things up to fall into a position of success. A positive pattern like this is a good sign that you're doing things right. We've got a predictable system that should enable us to create new classes without having to think too hard about where they're going to sit in our application.

Let's take a look at one more example, a mathematics helper class, which is a little less obvious. Again, we can look at the Ext JS framework itself for inspiration. There's an Ext.util namespace containing over 20 general classes that just don't fit anywhere else. So, in this case, let's create CultivateCode.util.Mathematics, which contains our specialized methods for numerical work:

```javascript
Ext.define('CultivateCode.util.Mathematics', {
    singleton: true,

    square: function(num) {
        return Math.pow(num, 2);
    },

    circumference: function(radius) {
        return 2 * Math.PI * radius;
    }
});
```

There is one caveat here and it's an important one. There's a real danger that, rather than thinking about the namespace for your code and its place in your application, a lot of stuff ends up under the utils namespace, thereby defeating the whole purpose. Take time to carefully check whether there's a more suitable location for your code before putting it in the utils bucket. This is particularly applicable if you're considering adding lots of code to a single class in the utils namespace. Looking again at Ext JS, there are lots of specialized namespaces (such as Ext.state or Ext.draw). If you were working with an application with lots of mathematics, perhaps you'd be better off with the following namespace and directory structure:

```javascript
Ext.define('CultivateCode.math.Combinatorics', {
    // implementation here!
});

Ext.define('CultivateCode.math.Geometry', {
    // implementation here!
});
```

The directory structure for the math namespace follows the same pattern, with a math directory containing Combinatorics.js and Geometry.js. This is another situation where there is no definitive right answer. It will come to you with experience and will depend entirely on the application you're building. Over time, putting together these high-level application building blocks will become second nature.

Money can't buy class

Now that we're learning where our classes belong, we need to make sure that we're actually using the right type of class. Here's the standard way of instantiating an Ext JS class:

```javascript
var geometry = Ext.create('MyApp.math.Geometry');
```

However, think about your code. Think how rare it is in Ext JS to actually manually invoke Ext.create. So, how else are the class instances created?

Singletons

A singleton is simply a class that has only one instance across the lifetime of your application. There are quite a number of singleton classes in the Ext JS framework. While the use of singletons in general is a contentious point in software architecture, they tend to be used fairly well in Ext JS. It could be that you prefer to implement the mathematical functions we discussed earlier as a singleton. For example, the following command could work:

```javascript
var area = CultivateCode.math.areaOfCircle(radius);
```

However, most developers would implement a circle class:

```javascript
var circle = Ext.create('CultivateCode.math.Circle', { radius: radius });
var area = circle.getArea();
```

This keeps the circle-related functionality partitioned off into the circle class. It also enables us to pass the circle variable around to other functions and classes for additional processing. On the other hand, look at Ext.Msg. Each of the methods here is fire and forget; there's never going to be anything on which to perform further actions. The same is true of Ext.Ajax. So, once more we find ourselves with a question that does not have a definitive answer. It depends entirely on the context. This is going to happen a lot, but it's a good thing! This article isn't going to teach you a list of facts and figures; it's going to teach you to think for yourself. Read other people's code and learn from experience. This isn't coding by numbers!

The other place you might find yourself reaching for the power of the singleton is when you're creating an overarching manager class (such as the inbuilt StoreManager or our previous SessionManager example). One of the objections about singletons is that they tend to be abused to store lots of global state and break down the separation of concerns we've set up in our code, as follows:

```javascript
Ext.define('CultivateCode.ux.grid.GridManager', {
    singleton: true,
    currentGrid: null,
    grids: [],

    add: function(grid) {
        this.grids.push(grid);
    },

    setCurrentGrid: function(grid) {
        this.currentGrid = grid;
    }
});
```

No one wants to see this sort of thing in a code base. It brings behavior and state to a high level in the application. In theory, any part of the code base could call this manager with unexpected results. Instead, we'd do something like this:

```javascript
Ext.define('CultivateCode.view.main.Main', {
    extend: 'CultivateCode.ux.GridContainer',

    currentGrid: null,
    grids: [],

    add: function(grid) {
        this.grids.push(grid);
    },

    setCurrentGrid: function(grid) {
        this.currentGrid = grid;
    }
});
```

We still have the same behavior (a way of collecting grids together), but now it's limited to a more contextually appropriate part of the application. Also, we're working with the MVVM system. We avoid global state and organize our code in a more correct manner. A win all round. As a general rule, if you can avoid using a singleton, do so. Otherwise, think very carefully to make sure that it's the right choice for your application and that a standard class wouldn't better fit your requirements. In the previous example, we could have taken the easy way out and used a manager singleton, but it would have been a poor choice that would compromise the structure of our code.

Mixins

We're used to the concept of inheriting from a superclass in Ext JS—a grid extends a panel to take on all of its functionality. Mixins provide a similar opportunity to reuse functionality, augmenting an existing class with a thin slice of behavior.
An Ext.Panel "is an" Ext.Component, but it also "has a" pinnable feature that provides a pin tool via the Ext.panel.Pinnable mixin. In your code, you should be looking at mixins to provide a feature, particularly in cases where this feature can be reused. In the next example, we'll create a UI mixin called shakeable, which provides a UI component with a shake method that draws the user's attention by rocking it from side to side: Ext.define('CultivateCode.util.Shakeable', {    mixinId: 'shakeable',      shake: function() {        var el = this.el,            box = el.getBox(),            left = box.x - (box.width / 3),            right = box.x + (box.width / 3),            end = box.x;          el.animate({            duration: 400,            keyframes: {                33: {                      x: left                },                66: {                    x: right                },                 100: {                    x: end                }            }        });    } }); We use the animate method (which itself is actually mixed in Ext.Element) to set up some animation keyframes to move the component's element first left, then right, then back to its original position. Here's a class that implements it: Ext.define('CultivateCode.ux.button.ShakingButton', {    extend: 'Ext.Button',    mixins: ['CultivateCode.util.Shakeable'],    xtype: 'shakingbutton' }); Also it's used like this: var btn = Ext.create('CultivateCode.ux.button.ShakingButton', {    text: 'Shake It!' }); btn.on('click', function(btn) {    btn.shake(); }); The button has taken on the new shake method provided by the mixin. Now, if we'd like a class to have the shakeable feature, we can reuse this mixin where necessary. In addition, mixins can simply be used to pull out the functionality of a class into logical chunks, rather than having a single file of many thousands of lines. Ext.Component is an example of this. In fact, most of its core functionality is found in classes that are mixed in Ext.Component. This is also helpful when navigating a code base. Methods that work together to build a feature can be grouped and set aside in a tidy little package. Let's take a look at a practical example of how an existing class could be refactored using a mixin. Here's the skeleton of the original: Ext.define('CultivateCode.ux.form.MetaPanel', {    extend: 'Ext.form.Panel',      initialize: function() {        this.callParent(arguments);        this.addPersistenceEvents();    },      loadRecord: function(model) {        this.buildItemsFromRecord(model);        this.callParent(arguments);    },      buildItemsFromRecord: function(model) {        // Implementation    },      buildFieldsetsFromRecord: function(model){        // Implementation    },      buildItemForField: function(field){        // Implementation    },      isStateAvailable: function(){        // Implementation    },      addPersistenceEvents: function(){      // Implementation    },      persistFieldOnChange: function(){        // Implementation    },      restorePersistedForm: function(){        // Implementation    },      clearPersistence: function(){        // Implementation    } }); This MetaPanel does two things that the normal FormPanel does not: It reads the Ext.data.Fields from an Ext.data.Model and automatically generates a form layout based on these fields. It can also generate field sets if the fields have the same group configuration value. 
- When the values of the form change, it persists them to localStorage so that the user can navigate away and resume completing the form later. This is useful for long forms.

In reality, implementing these features would probably require additional methods beyond the ones shown in the previous code skeleton. As the two extra features are clearly defined, it's easy enough to refactor this code to better describe our intent:

```javascript
Ext.define('CultivateCode.ux.form.MetaPanel', {
    extend: 'Ext.form.Panel',

    mixins: [
        // Contains methods:
        // - buildItemsFromRecord
        // - buildFieldsetsFromRecord
        // - buildItemForField
        'CultivateCode.ux.form.Builder',

        // - isStateAvailable
        // - addPersistenceEvents
        // - persistFieldOnChange
        // - restorePersistedForm
        // - clearPersistence
        'CultivateCode.ux.form.Persistence'
    ],

    initialize: function() {
        this.callParent(arguments);
        this.addPersistenceEvents();
    },

    loadRecord: function(model) {
        this.buildItemsFromRecord(model);
        this.callParent(arguments);
    }
});
```

We have a much shorter file, and the behavior we're including in this class is described a lot more concisely. Rather than seven or more method bodies that may span a couple of hundred lines of code, we have two mixin lines and the relevant methods extracted to a well-named mixin class.

Summary

This article showed how the various parts of an Ext JS application can be organized into a form that eases the development process.

Resources for Article:

Further resources on this subject:

- CreateJS – Performing Animation and Transforming Function [article]
- Good time management in CasperJS tests [article]
- The Login Page using Ext JS [article]

Third Party Libraries

Packt
21 Apr 2015
21 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, we will see that our TypeScript development environment would not amount to much if we were not able to reuse the myriad of existing JavaScript libraries, frameworks, and general goodness. However, in order to use a particular third party library with TypeScript, we will first need a matching definition file.

Soon after TypeScript was released, Boris Yankov set up a GitHub repository to house TypeScript definition files for third party JavaScript libraries. This repository, named DefinitelyTyped (https://github.com/borisyankov/DefinitelyTyped), quickly became very popular, and is currently the place to go for high-quality definition files. DefinitelyTyped currently has over 700 definition files, built up over time from hundreds of contributors from all over the world. If we were to measure the success of TypeScript within the JavaScript community, then the DefinitelyTyped repository would be a good indication of how well TypeScript has been adopted. Before you go ahead and try to write your own definition files, check the DefinitelyTyped repository to see if there is one already available.

In this article, we will have a closer look at using these definition files, and cover the following topics:

- Choosing a JavaScript framework
- Using TypeScript with Backbone
- Using TypeScript with Angular

Using third party libraries

In this section of the article, we will begin to explore some of the more popular third party JavaScript libraries, their declaration files, and how to write compatible TypeScript for each of these frameworks. We will compare Backbone and Angular, which are both frameworks for building rich client-side JavaScript applications. During our discussion, we will see that some frameworks are highly compliant with the TypeScript language and its features, some are partially compliant, and some have very low compliance.

Choosing a JavaScript framework

Choosing a JavaScript framework or library to develop Single Page Applications is a difficult and sometimes daunting task. It seems that there is a new framework appearing every other month, promising more and more functionality for less and less code. To help developers compare these frameworks and make an informed choice, Addy Osmani wrote an excellent article named Journey Through the JavaScript MVC Jungle (http://www.smashingmagazine.com/2012/07/27/journey-through-the-javascript-mvc-jungle/). In essence, his advice is simple: it's a personal choice, so try some frameworks out and see what best fits your needs, your programming mindset, and your existing skill set. The TodoMVC project (http://todomvc.com), which Addy started, does an excellent job of implementing the same application in a number of MV* JavaScript frameworks. This really is a reference site for digging into a fully working application, and comparing for yourself the coding techniques and styles of different frameworks.

Again, depending on the JavaScript library that you are using within TypeScript, you may need to write your TypeScript code in a specific way. Bear this in mind when choosing a framework: if it is difficult to use with TypeScript, then you may be better off looking at another framework with better integration. If it is easy and natural to work with the framework in TypeScript, then your productivity and overall development experience will be much better.
We will look at some of the popular JavaScript libraries, along with their declaration files, and see how to write compatible TypeScript. The key thing to remember is that TypeScript generates JavaScript, so if you are battling to use a third party library, then crack open the generated JavaScript and see what the JavaScript code looks like that TypeScript is emitting. If the generated JavaScript matches the JavaScript code samples in the library's documentation, then you are on the right track. If not, then you may need to modify your TypeScript until the compiled JavaScript starts matching up with the samples. When trying to write TypeScript code for a third party JavaScript framework, particularly if you are working off the JavaScript documentation, your initial foray may just be one of trial and error. Along the way, you may find that you need to write your TypeScript in a specific way in order to match the particular third party library. The rest of this article shows how different libraries require different ways of writing TypeScript.

Backbone

Backbone is a popular JavaScript library that gives structure to web applications by providing models, collections, and views, amongst other things. Backbone has been around since 2010 and has gained a very large following, with a wealth of commercial websites using the framework. According to Infoworld.com, Backbone has over 1,600 related projects on GitHub that rate over 3 stars, meaning that it has a vast ecosystem of extensions and related libraries. Let's take a quick look at Backbone written in TypeScript. To follow along with the code in your own project, you will need to install the following NuGet packages: backbone.js (currently at v1.1.2) and backbone.TypeScript.DefinitelyTyped (currently at version 1.2.3).

Using inheritance with Backbone

From the Backbone documentation, we find an example of creating a Backbone.Model in JavaScript as follows:

```javascript
var Note = Backbone.Model.extend(
    {
        initialize: function() {
            alert("Note Model JavaScript initialize");
        },
        author: function () { },
        coordinates: function () { },
        allowedToEdit: function(account) {
            return true;
        }
    }
);
```

This code shows a typical usage of Backbone in JavaScript. We start by creating a variable named Note that extends (or derives from) Backbone.Model. This can be seen with the Backbone.Model.extend syntax. The Backbone extend function uses JavaScript object notation to define an object within the outer curly braces { … }. In the preceding code, this object has four functions: initialize, author, coordinates, and allowedToEdit. According to the Backbone documentation, the initialize function will be called once a new instance of this class is created. The initialize function simply creates an alert to indicate that the function was called. The author and coordinates functions are blank at this stage, with only the allowedToEdit function actually doing something: return true. If we were to simply copy and paste the above JavaScript into a TypeScript file, we would generate the following compile error:

Build: 'Backbone.Model.extend' is inaccessible.

When working with a third party library and a definition file from DefinitelyTyped, our first port of call should be to see if the definition file may be in error. After all, the JavaScript documentation says that we should be able to use the extend method as shown, so why is this definition file causing an error?
If we open up the backbone.d.ts file, and then search to find the definition of the class Model, we will find the cause of the compilation error:

```typescript
class Model extends ModelBase {

    /**
     * Do not use, prefer TypeScript's extend functionality.
     **/
    private static extend(
        properties: any, classProperties?: any): any;
```

This declaration file snippet shows some of the definition of the Backbone Model class. Here, we can see that the extend function is defined as private static, and as such, it will not be available outside the Model class itself. This, however, seems contradictory to the JavaScript sample that we saw in the documentation. In the preceding comment on the extend function definition, we find the key to using Backbone in TypeScript: prefer TypeScript's extend functionality. This comment indicates that the declaration file for Backbone is built around TypeScript's extends keyword, thereby allowing us to use natural TypeScript inheritance syntax to create Backbone objects. The TypeScript equivalent to this code, therefore, must use the extends TypeScript keyword to derive a class from the base class Backbone.Model, as follows:

```typescript
class Note extends Backbone.Model {
    initialize() {
        alert("Note model TypeScript initialize");
    }
    author() { }
    coordinates() { }
    allowedToEdit(account) {
        return true;
    }
}
```

We are now creating a class definition named Note that extends the Backbone.Model base class. This class then has the functions initialize, author, coordinates, and allowedToEdit, similar to the previous JavaScript version. Our Backbone sample will now compile and run correctly. With either of these versions, we can create an instance of the Note object by including the following script within an HTML page:

```html
<script type="text/javascript">
    $(document).ready(function () {
        var note = new Note();
    });
</script>
```

This JavaScript sample simply waits for the jQuery document.ready event to be fired and then creates an instance of the Note class. As documented earlier, the initialize function will be called when an instance of the class is constructed, so we would see an alert box appear when we run this in a browser. All of Backbone's core objects are designed with inheritance in mind. This means that creating new Backbone collections, views, and routers will use the same extends syntax in TypeScript. Backbone, therefore, is a very good fit for TypeScript, because we can use natural TypeScript syntax for inheritance to create new Backbone objects.

Using interfaces

As Backbone allows us to use TypeScript inheritance to create objects, we can just as easily use TypeScript interfaces with any of our Backbone objects as well. Extracting an interface for the Note class above would be as follows:

```typescript
interface INoteInterface {
    initialize();
    author();
    coordinates();
    allowedToEdit(account: string);
}
```

We can now update our Note class definition to implement this interface as follows:

```typescript
class Note extends Backbone.Model implements INoteInterface {
    // existing code
}
```

Our class definition now implements the INoteInterface TypeScript interface. This simple change protects our code from being modified inadvertently, and also opens up the ability to work with core Backbone objects in standard object-oriented design patterns. We could, if we needed to, apply the Factory Pattern to return a particular type of Backbone Model, or any other Backbone object for that matter.
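As a quick illustration, here is a minimal factory sketch; the NoteFactory class and the ReadOnlyNote subclass are invented for this example, and are not part of Backbone or its declaration file:

```typescript
// A Note variant that denies editing; extends the Note class above.
class ReadOnlyNote extends Note {
    allowedToEdit(account: string) {
        return false;
    }
}

class NoteFactory {
    // Callers receive the interface type, so they depend on the
    // contract rather than on a concrete Backbone class.
    static createNote(readOnly: boolean): INoteInterface {
        if (readOnly) {
            return new ReadOnlyNote();
        }
        return new Note();
    }
}

var note = NoteFactory.createNote(true);
```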
Using generic syntax

The declaration file for Backbone has also added generic syntax to some class definitions. This brings with it further strong typing benefits when writing TypeScript code for Backbone. Backbone collections (surprise, surprise) house a collection of Backbone models, allowing us to define collections in TypeScript as follows:

```typescript
class NoteCollection extends Backbone.Collection<Note> {
    model = Note;
    //model: Note; // generates compile error
    //model: { new (): Note }; // ok
}
```

Here, we have a NoteCollection that derives from, or extends, a Backbone.Collection, but also uses generic syntax to constrain the collection to handle only objects of type Note. This means that any of the standard collection functions, such as at() or pluck(), will be strongly typed to return Note models, further enhancing our type safety and Intellisense. Note the syntax used to assign a type to the internal model property of the collection class on the second line. We cannot use the standard TypeScript syntax model: Note, as this causes a compile-time error. We need to assign the model property to the class definition, as seen with the model = Note syntax, or we can use the { new(): Note } syntax as seen on the last line.

Using ECMAScript 5

Backbone also allows us to use ECMAScript 5 capabilities to define getters and setters for Backbone.Model classes, as follows:

```typescript
interface ISimpleModel {
    Name: string;
    Id: number;
}

class SimpleModel extends Backbone.Model implements ISimpleModel {
    get Name() {
        return this.get('Name');
    }
    set Name(value: string) {
        this.set('Name', value);
    }
    get Id() {
        return this.get('Id');
    }
    set Id(value: number) {
        this.set('Id', value);
    }
}
```

In this snippet, we have defined an interface with two properties, named ISimpleModel. We then define a SimpleModel class that derives from Backbone.Model and also implements the ISimpleModel interface. We then have ES5 getters and setters for our Name and Id properties. Backbone uses class attributes to store model values, so our getters and setters simply call the underlying get and set methods of Backbone.Model.

Backbone TypeScript compatibility

Backbone allows us to use all of TypeScript's language features within our code. We can use classes, interfaces, inheritance, generics, and even ECMAScript 5 properties. All of our classes also derive from base Backbone objects. This makes Backbone a highly compatible library for building web applications with TypeScript.
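A short usage sketch shows why these ES5 properties are worth the boilerplate; property access reads naturally while still routing through Backbone's attribute store, so change tracking keeps working (the values here are invented for illustration):

```typescript
var model = new SimpleModel();

// Property assignment invokes the ES5 setter, which calls Backbone's
// set() internally, so 'change:Name' listeners still fire.
model.Name = 'Test Model';
model.Id = 2;

console.log(model.Name);        // 'Test Model', via the ES5 getter
console.log(model.get('Name')); // 'Test Model', via Backbone's own API
```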
Angular

AngularJS (or just Angular) is also a very popular JavaScript framework and is maintained by Google. Angular takes a completely different approach to building JavaScript SPAs, introducing an HTML syntax that the running Angular application understands. This provides the application with two-way data binding capabilities, which automatically synchronizes models, views, and the HTML page. Angular also provides a mechanism for Dependency Injection (DI), and uses services to provide data to your views and models.

The example provided in the tutorial shows the following JavaScript:

```typescript
var phonecatApp = angular.module('phonecatApp', []);

phonecatApp.controller('PhoneListCtrl', function ($scope) {
    $scope.phones = [
        {'name': 'Nexus S',
         'snippet': 'Fast just got faster with Nexus S.'},
        {'name': 'Motorola XOOM™ with Wi-Fi',
         'snippet': 'The Next, Next Generation tablet.'},
        {'name': 'MOTOROLA XOOM™',
         'snippet': 'The Next, Next Generation tablet.'}
    ];
});
```

This code snippet is typical of Angular JavaScript syntax. We start by creating a variable named phonecatApp and register this as an Angular module by calling the module function on the angular global instance. The first argument to the module function is a global name for the Angular module, and the empty array is a placeholder for other modules that will be injected via Angular's Dependency Injection routines. We then call the controller function on the newly created phonecatApp variable with two arguments. The first argument is the global name of the controller, and the second argument is a function that accepts a specially named Angular variable named $scope. Within this function, the code sets the phones object of the $scope variable to be an array of JSON objects, each with a name and snippet property.

If we continue reading through the tutorial, we find a unit test that shows how the PhoneListCtrl controller is used:

```typescript
describe('PhoneListCtrl', function(){
    it('should create "phones" model with 3 phones', function() {
        var scope = {},
            ctrl = new PhoneListCtrl(scope);

        expect(scope.phones.length).toBe(3);
    });
});
```

The first two lines of this code snippet use a global function called describe, and within this function, another function called it. These two functions are part of a unit testing framework named Jasmine. We declare a variable named scope to be an empty JavaScript object, and then a variable named ctrl that uses the new keyword to create an instance of our PhoneListCtrl class. The new PhoneListCtrl(scope) syntax shows that Angular is using the definition of the controller just like we would use a normal class in TypeScript. Building the same object in TypeScript would allow us to use TypeScript classes, as follows:

```typescript
var phonecatApp = angular.module('phonecatApp', []);

class PhoneListCtrl {
    constructor($scope) {
        $scope.phones = [
            { 'name': 'Nexus S',
              'snippet': 'Fast just got faster' },
            { 'name': 'Motorola',
              'snippet': 'Next generation tablet' },
            { 'name': 'Motorola Xoom',
              'snippet': 'Next, next generation tablet' }
        ];
    }
};
```

Our first line is the same as in our previous JavaScript sample. We then, however, use the TypeScript class syntax to create a class named PhoneListCtrl. By creating a TypeScript class, we can now use this class as shown in our Jasmine test code: ctrl = new PhoneListCtrl(scope).
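Because the class is now just a named constructor function, one way to wire it into the module (a sketch using the phonecatApp module created earlier) is to pass the class itself where the anonymous function used to go:

```typescript
// The TypeScript class stands in for the anonymous controller function.
phonecatApp.controller('PhoneListCtrl', PhoneListCtrl);
```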
The constructor function of our PhoneListCtrl class now acts as the anonymous function seen in the original JavaScript sample:

```typescript
phonecatApp.controller('PhoneListCtrl', function ($scope) {
    // this function is replaced by the constructor
});
```

Angular classes and $scope

Let's expand our PhoneListCtrl class a little further, and have a look at what it would look like when completed:

```typescript
class PhoneListCtrl {
    myScope: IScope;
    constructor($scope, $http: ng.IHttpService, Phone) {
        this.myScope = $scope;
        this.myScope.phones = Phone.query();
        $scope.orderProp = 'age';

        _.bindAll(this, 'GetPhonesSuccess');
    }
    GetPhonesSuccess(data: any) {
        this.myScope.phones = data;
    }
};
```

The first thing to note in this class is that we are defining a variable named myScope and storing the $scope argument that is passed in via the constructor into this internal variable. This is again because of JavaScript's scoping rules for this. Note the call to _.bindAll at the end of the constructor. This Underscore utility function will ensure that whenever the GetPhonesSuccess function is called, it will use the variable this in the context of the class instance, and not in the context of the calling code. The GetPhonesSuccess function uses the this.myScope variable within its implementation. This is why we needed to store the initial $scope argument in an internal variable.

Another thing we notice from this code is that the myScope variable is typed to an interface named IScope, which will need to be defined as follows:

```typescript
interface IScope {
    phones: IPhone[];
}

interface IPhone {
    age: number;
    id: string;
    imageUrl: string;
    name: string;
    snippet: string;
};
```

This IScope interface just contains an array of objects of type IPhone (pardon the unfortunate name of this interface – it can hold Android phones as well). What this means is that we don't have a standard interface or TypeScript type to use when dealing with $scope objects. By its nature, the $scope argument will change its type depending on when and where the Angular runtime calls it, hence our need to define an IScope interface and strongly type the myScope variable to this interface.

Another interesting thing to note on the constructor function of the PhoneListCtrl class is the type of the $http argument. It is set to be of type ng.IHttpService. This IHttpService interface is found in the declaration file for Angular. In order to use TypeScript with Angular variables such as $scope or $http, we need to find the matching interface within our declaration file before we can use any of the Angular functions available on these variables. The last point to note in this constructor code is the final argument, named Phone. It does not have a TypeScript type assigned to it, and so automatically becomes of type any. Let's take a quick look at the implementation of this Phone service, which is as follows:

```typescript
var phonecatServices =
    angular.module('phonecatServices', ['ngResource']);

phonecatServices.factory('Phone',
    [
        '$resource', ($resource) => {
            return $resource('phones/:phoneId.json', {}, {
                query: {
                    method: 'GET',
                    params: {
                        phoneId: 'phones'
                    },
                    isArray: true
                }
            });
        }
    ]
);
```

The first line of this code snippet again creates a global variable named phonecatServices, using the angular.module global function. We then call the factory function available on the phonecatServices variable in order to define our Phone resource. This factory function uses a string named 'Phone' to define the Phone resource, and then uses Angular's dependency injection syntax to inject a $resource object.
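We can, however, claw back some type safety on the consuming side. A small sketch (the IPhoneResource interface below is our own invention, not part of the Angular declaration file) types the injected service with just the members the controller actually uses, shown here as a revised version of the constructor:

```typescript
// A minimal, hand-rolled contract for the injected Phone resource.
interface IPhoneResource {
    query(): IPhone[];
}

class PhoneListCtrl {
    myScope: IScope;
    // Typing the Phone argument means query() is checked against IPhone[].
    constructor($scope, $http: ng.IHttpService, Phone: IPhoneResource) {
        this.myScope = $scope;
        this.myScope.phones = Phone.query();
    }
}
```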
Looking through this code, we can see that we cannot easily create standard TypeScript classes for Angular to use here. Nor can we use standard TypeScript interfaces or inheritance on this Angular service.

Angular TypeScript compatibility

When writing Angular code with TypeScript, we are able to use classes in certain instances, but must rely on the underlying Angular functions, such as module and factory, to define our objects in other cases. Also, when using standard Angular services, such as $http or $resource, we will need to specify the matching declaration file interface in order to use these services. We can therefore describe the Angular library as having medium compatibility with TypeScript.

Inheritance – Angular versus Backbone

Inheritance is a very powerful feature of object-oriented programming and is also a fundamental concept when using JavaScript frameworks. Using a Backbone controller or an Angular controller within each framework relies on certain characteristics, or functions, being available. Each framework implements inheritance in a different way. As JavaScript does not have the concept of classical inheritance, each framework needs to find a way to implement it, so that the framework can allow us to extend base classes and their functionality. In Backbone, this inheritance implementation is via the extend function of each Backbone object. The TypeScript extends keyword follows a similar implementation to Backbone, allowing the framework and the language to dovetail with each other. Angular, on the other hand, uses its own implementation of inheritance, and defines functions on the angular global namespace to create classes (that is, angular.module). We can also sometimes use the instance of an application (that is, <appName>.controller) to create modules or controllers. We have found, though, that Angular uses controllers in a very similar way to TypeScript classes, and we can therefore simply create standard TypeScript classes that will work within an Angular application.

So far, we have only skimmed the surface of both the Angular TypeScript syntax and the Backbone TypeScript syntax. The point of this exercise was to try and understand how TypeScript can be used within each of these two third party frameworks. Be sure to visit http://todomvc.com and have a look at the full source code for the Todo application written in TypeScript for both Angular and Backbone. They can be found on the Compile-to-JS tab in the example section. These running code samples, combined with the documentation on each of these sites, will prove to be an invaluable resource when trying to write TypeScript syntax with an external third party library such as Angular or Backbone.

Angular 2.0

The Microsoft TypeScript team and the Google Angular team have just completed a months-long partnership, and have announced that the upcoming release of Angular, named Angular 2.0, will be built using TypeScript. Originally, Angular 2.0 was going to use a new language named AtScript for Angular development. During the collaboration work between the Microsoft and Google teams, however, the features of AtScript that were needed for Angular 2.0 development have now been implemented within TypeScript.
This means that the Angular 2.0 library will be classed as highly compatible with TypeScript, once the Angular 2.0 library and the 1.5 edition of the TypeScript compiler are available.

Summary

In this article, we looked at different types of third party libraries and discussed how to integrate these libraries with TypeScript. We explored Backbone, which can be categorized as a highly compliant third party library, and Angular, which is a partially compliant library.

Resources for Article:

Further resources on this subject:

- Optimizing JavaScript for iOS Hybrid Apps [article]
- Introduction to TypeScript [article]
- Getting Ready with CoffeeScript [article]

Our First API in Go

Packt
14 Apr 2015
15 min read
This article is penned by Nathan Kozyra, the author of the book Mastering Go Web Services. It quickly introduces (or reintroduces) some core concepts related to Go setup and usage, as well as the http package.

If you spend any time developing applications on the Web (or off it, for that matter), it won't be long before you find yourself facing the prospect of interacting with a web service or an API. Whether it's a library that you need or another application's sandbox with which you have to interact, the world of development relies in no small part on the cooperation among dissonant applications, languages, and formats. That, after all, is why we have APIs to begin with: to allow standardized communication between any two given platforms. If you spend a long amount of time working on the Web, you'll encounter bad APIs. By bad, we mean APIs that are not all-inclusive, do not adhere to best practices and standards, are confusing semantically, or lack consistency. You'll encounter APIs that haphazardly use OAuth or simple HTTP authentication in some places and the opposite in others, or, more commonly, APIs that ignore the stated purposes of HTTP verbs.

Google's Go language is particularly well suited to servers. With its built-in HTTP serving, a simple method for XML and JSON encoding of data, high availability, and concurrency, it is the ideal platform for your API. We will cover the following topics in this article:

- Understanding requirements and dependencies
- Introducing the HTTP package

Understanding requirements and dependencies

Before we get too deep into the weeds in this article, it would be a good idea for us to examine the things that you will need to have installed.

Installing Go

It should go without saying that we will need to have the Go language installed. However, there are a few associated items that you will also need to install in order to do everything we do in this book. Go is available for Mac OS X, Windows, and most common Linux variants. You can download the binaries at http://golang.org/doc/install. On Linux, you can generally grab Go through your distribution's package manager. For example, you can grab it on Ubuntu with a simple apt-get install golang command. Something similar exists for most distributions. In addition to the core language, we'll also work a bit with the Google App Engine, and the best way to test with the App Engine is to install the Software Development Kit (SDK). This will allow us to test our applications locally prior to deploying them and simulate a lot of the functionality that is provided only on the App Engine. The App Engine SDK can be downloaded from https://developers.google.com/appengine/downloads. While we're obviously most interested in the Go SDK, you should also grab the Python SDK, as there are some minor dependencies that may not be available solely in the Go SDK.

Installing and using MySQL

We'll be using quite a few different databases and datastores to manage our test and real data, and MySQL will be one of the primary ones. We will use MySQL as a storage system for our users; their messages and their relationships will be stored in our larger application (we will discuss more about this in a bit). MySQL can be downloaded from http://dev.mysql.com/downloads/.
You can also grab it easily from a package manager on Linux/OS X as follows:

- Ubuntu: sudo apt-get install mysql-server mysql-client
- OS X with Homebrew: brew install mysql

Redis

Redis is the first of the two NoSQL datastores that we'll be using for a couple of different demonstrations, including caching data from our databases as well as the API output. If you're unfamiliar with NoSQL, we'll do some pretty simple introductions to results gathering using both Redis and Couchbase in our examples. If you know MySQL, Redis will at least feel similar, and you won't need the full knowledge base to be able to use the application in the fashion in which we'll use it for our purposes. Redis can be downloaded from http://redis.io/download, or installed on Linux/OS X using the following:

- Ubuntu: sudo apt-get install redis-server
- OS X with Homebrew: brew install redis

Couchbase

As mentioned earlier, Couchbase will be our second NoSQL solution that we'll use in various products, primarily to set short-lived or ephemeral key store lookups to avoid bottlenecks and as an experiment with in-memory caching. Unlike Redis, Couchbase uses simple REST commands to set and receive data, and everything exists in the JSON format. Couchbase can be downloaded from http://www.couchbase.com/download.

- For Ubuntu (deb), use the following command to install Couchbase: dpkg -i couchbase-server version.deb
- For OS X with Homebrew, use the following command: brew install https://github.com/couchbase/homebrew/raw/stable/Library/Formula/libcouchbase.rb

Nginx

Although Go comes with everything you need to run a highly concurrent, performant web server, we're going to experiment with wrapping a reverse proxy around our results. We'll do this primarily as a response to the real-world issues regarding availability and speed. Nginx is not available natively for Windows.

- For Ubuntu, use the following command to install Nginx: apt-get install nginx
- For OS X with Homebrew: brew install nginx

Apache JMeter

We'll utilize JMeter for benchmarking and tuning our API for performance. You have a bit of a choice here, as there are several stress-testing applications for simulating traffic. The two we'll touch on are JMeter and Apache's built-in Apache Benchmark (AB) platform. The latter is a stalwart in benchmarking but is a bit limited in what you can throw at your API, so JMeter is preferred. One of the things that we'll need to consider when building an API is its ability to stand up to heavy traffic (and introduce some mitigating actions when it cannot), so we'll need to know what our limits are. Apache JMeter can be downloaded from http://jmeter.apache.org/download_jmeter.cgi.

Using predefined datasets

While it's not entirely necessary to have our dummy dataset, you can save a lot of time as we build our social network by bringing it in, because it is full of users, posts, and images. By using this dataset, you can skip creating this data to test certain aspects of the API and API creation. Our dummy dataset can be downloaded at https://github.com/nkozyra/masteringwebservices.

Choosing an IDE

A choice of Integrated Development Environment (IDE) is one of the most personal choices a developer can make, and it's rare to find a developer who is not steadfastly passionate about their favorite. Nothing in this article will require one IDE over another; indeed, most of Go's strength in terms of compiling, formatting, and testing lies at the command-line level.
That said, we'd like to at least explore some of the more popular choices for editors and IDEs that exist for Go.

Eclipse

As one of the most popular and expansive IDEs available for any language, Eclipse is an obvious first mention. Most languages get their support in the form of an Eclipse plugin, and Go is no exception. There are some downsides to this monolithic piece of software; it is occasionally buggy on some languages, notoriously slow for some autocompletion functions, and a bit heavier than most of the other available options. However, the pluses are myriad. Eclipse is very mature and has a gigantic community from which you can seek support when issues arise. Also, it's free to use.

- Eclipse can be downloaded from http://eclipse.org/
- Get the Goclipse plugin at http://goclipse.github.io/

Sublime Text

Sublime Text is our particular favorite, but it comes with a large caveat: it is the only one listed here that is not free. This one feels more like a complete code/text editor than a heavy IDE, but it includes code completion options and the ability to integrate the Go compilers (or other languages' compilers) directly into the interface. Although Sublime Text's license costs $70, many developers find its elegance and speed to be well worth it. You can try out the software indefinitely to see if it's right for you; it operates as nagware unless and until you purchase a license. Sublime Text can be downloaded from http://www.sublimetext.com/2.

LiteIDE

LiteIDE is a much younger IDE than the others mentioned here, but it is noteworthy because it has a focus on the Go language. It's cross-platform and does a lot of Go's command-line magic in the background, making it truly integrated. LiteIDE also handles code autocompletion, go fmt, build, run, and test directly in the IDE, and offers a robust package browser. It's free and totally worth a shot if you want something lean and targeted directly at the Go language. LiteIDE can be downloaded from https://code.google.com/p/golangide/.

IntelliJ IDEA

Right up there with Eclipse is the JetBrains family of IDEs, which has spanned approximately the same number of languages as Eclipse. Ultimately, both are primarily built with Java in mind, which means that sometimes other language support can feel secondary. The Go integration here, however, seems fairly robust and complete, so it's worth a shot if you have a license. If you do not have a license, you can try the Community Edition, which is free.

- You can download IntelliJ IDEA at http://www.jetbrains.com/idea/download/
- The Go language support plugin is available at http://plugins.jetbrains.com/plugin/?idea&id=5047

Some client-side tools

Although the vast majority of what we'll be covering will focus on Go and API services, we will be doing some visualization of client-side interactions with our API. In doing so, we'll primarily focus on straight HTML and JavaScript, but for our more interactive points, we'll also rope in jQuery and AngularJS. Most of what we do for client-side demonstrations will be available at this book's GitHub repository at https://github.com/nkozyra/goweb under client. Both jQuery and AngularJS can be loaded dynamically from Google's CDN, which will prevent you from having to download and store them locally. The examples hosted on GitHub call these dynamically.
To load AngularJS dynamically, use the following code:

    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.18/angular.min.js"></script>

To load jQuery dynamically, use the following code:

    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

Looking at our application

Throughout the book, we'll be building myriad small applications to demonstrate points, functions, libraries, and other techniques. However, we'll also focus on a larger project that mimics a social network wherein we create and return users, statuses, and so on, via the API. Time and space won't allow us to cover all of it here in the article, so you'll want to grab a copy of the project code from the repository mentioned earlier.

Setting up our database

As mentioned earlier, we'll be designing a social network that operates almost entirely at the API level (at least at first) as our master project in the book. When we think of the major social networks (from the past and in the present), there are a few omnipresent concepts endemic among them, which are as follows:

- The ability to create a user and maintain a user profile
- The ability to share messages or statuses and have conversations based on them
- The ability to express pleasure or displeasure on the said statuses/messages to dictate the worthiness of any given message

There are a few other features that we'll be building here, but let's start with the basics. Let's create our database in MySQL as follows:

    create database social_network;

This will be the basis of our social network product in the book. For now, we'll just need a users table to store our individual users and their most basic information. We'll amend this to include more features as we go along:

    CREATE TABLE users (
        user_id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
        user_nickname VARCHAR(32) NOT NULL,
        user_first VARCHAR(32) NOT NULL,
        user_last VARCHAR(32) NOT NULL,
        user_email VARCHAR(128) NOT NULL,
        PRIMARY KEY (user_id),
        UNIQUE INDEX user_nickname (user_nickname)
    );

We won't need to do too much in this article, so this should suffice. We'll have a user's most basic information—name, nickname, and e-mail, and not much else.

Introducing the HTTP package

The vast majority of our API work will be handled through REST, so you should become pretty familiar with Go's http package.

In addition to serving via HTTP, the http package comprises a number of other very useful utilities that we'll look at in detail. These include cookie jars, setting up clients, reverse proxies, and more.

The primary entity we're interested in right now, though, is the http.Server struct, which provides the very basis of all of our server's actions and parameters. Within the server, we can set our TCP address, HTTP multiplexing for routing specific requests, timeouts, and header information.

Go also provides some shortcuts for invoking a server without directly initializing the struct. For example, instead of fully initializing the struct yourself, as in the following code:

    server := http.Server{
        Addr:           ":8080",
        Handler:        urlHandler,
        ReadTimeout:    1000 * time.Microsecond,
        WriteTimeout:   1000 * time.Microsecond,
        MaxHeaderBytes: 0,
        TLSConfig:      nil,
    }

you can simply execute the following:

    http.ListenAndServe(":8080", nil)

This shortcut builds a server value for you behind the scenes, setting only the Addr and Handler properties. There will be times, of course, when we'll want more granular control over our server, but for the time being, this will do just fine.
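For completeness, here is a minimal, self-contained sketch of the more granular route: initializing the struct ourselves and starting it explicitly. The Handler is left as nil, which falls back to http.DefaultServeMux, and the timeout values here are illustrative assumptions rather than recommendations:

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        server := http.Server{
            Addr:         ":8080",
            Handler:      nil, // nil falls back to http.DefaultServeMux
            ReadTimeout:  5 * time.Second,
            WriteTimeout: 5 * time.Second,
        }
        // ListenAndServe blocks until the server stops, returning any error.
        if err := server.ListenAndServe(); err != nil {
            log.Fatal(err)
        }
    }

Let's take this concept and output some JSON data via HTTP for the first time.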
Quick hitter – saying Hello, World via API

As mentioned earlier in this article, we'll go off course and do some work that we'll preface with quick hitter to denote that it's unrelated to our larger project.

In this case, we just want to rev up our http package and deliver some JSON to the browser. Unsurprisingly, we'll be merely outputting the uninspiring Hello, world message to, well, the world.

Let's set this up with our required package and imports:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

This is the bare minimum that we need to output a simple string in JSON via HTTP. Marshalling JSON data can be a bit more complex than what we'll look at here, so if the struct for our message doesn't immediately make sense, don't worry.

This is our response struct, which contains all of the data that we wish to send to the client after grabbing it from our API:

    type API struct {
        Message string `json:"message"`
    }

There is not a lot here yet, obviously. All we're setting is a single message string in the obviously-named Message field. Finally, we need to set up our main function (as follows) to respond to a route and deliver a marshaled JSON response:

    func main() {
        http.HandleFunc("/api", func(w http.ResponseWriter, r *http.Request) {
            message := API{"Hello, world!"}
            output, err := json.Marshal(message)
            if err != nil {
                fmt.Println("Something went wrong!")
            }
            fmt.Fprintf(w, string(output))
        })
        http.ListenAndServe(":8080", nil)
    }

Upon entering main(), we set a route handling function to respond to requests at /api that initializes an API struct with Hello, world! We then marshal this to a JSON byte slice, output, cast that to a string, and send it to our io.Writer (in this case, an http.ResponseWriter value). The last step is a kind of quick-and-dirty approach for sending our byte slice through a function that expects a string, but there's not much that could go wrong in doing so.

Go handles typecasting pretty simply by applying the type as a function that flanks the target variable. In other words, we can cast an int64 value to an integer by simply surrounding it with the int(OurInt64) function. There are some exceptions to this: some types cannot be directly cast to others, and some conversions require a package such as strconv to manage them, but that's the general idea.

If we head over to our browser and call localhost:8080/api (as shown in the following screenshot), we should get exactly what we expect, assuming everything went correctly:

Summary

We've touched on the very basics of developing a simple web service interface in Go. Admittedly, this particular version is extremely limited and vulnerable to attack, but it shows the basic mechanisms that we can employ to produce usable, formalized output that can be ingested by other services.

At this point, you should have the basic tools at your disposal that are necessary to start refining this process and our application as a whole.

Resources for Article:

Further resources on this subject:
Adding Authentication [article]
C10K – A Non-blocking Web Server in Go [article]
Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]
Managing Images

Packt
14 Apr 2015
11 min read
Cats, dogs, and all sorts of memes: the Internet as we know it today is dominated by images. You can open almost any web page and you'll surely find images on it. The more interactive our web browsing experience becomes, the more images we tend to use. So, it is tremendously important to ensure that the images we use are optimized and loaded as fast as possible. We should also make sure that we choose the correct image type.

In this article by Dewald Els, author of the book Responsive Design High Performance, we will talk about why image formats are important, conditional loading, visibility for DOM elements, specifying sizes, media queries, introducing sprite sheets, and caching. Let's talk basics.

Choosing the correct image format

Deciding what image format to use is usually the first step you take when you start your website. Take a look at this table for an overview and comparison of the available image formats:

    Format     Features
    GIF        256 colors; support for animation; transparency
    PNG        256 colors; true colors; transparency
    JPEG/JPG   256 colors; true colors

From the preceding formats, you can conclude that, if you had a complex image that was 1000 x 1000 pixels, the image in the JPEG format would be the smallest in file size. This also means that it would load the fastest. The smallest image is not always the best choice, though. If you need images with transparent parts, you'll have to use the PNG or GIF formats, and if you need an animation, you are stuck with using the GIF format or the lesser-known APNG format.

Optimizing images

Optimizing your images can have a huge impact on your overall website performance. There are some great applications to help you with image optimization and compression. TinyPNG is a great example of a site that helps you compress your PNG images online for free. They also have a Photoshop plugin that is available for download at https://tinypng.com/.

Another great application to help you with JPG compression is JPEGMini. Head over to http://www.jpegmini.com/ to get a copy for either Windows or Mac OS X. Another application worth considering is Radical Image Optimization Tool (RIOT), a free Windows application that can be found at http://luci.criosweb.ro/riot/.

Seeing as JPEG is not the only image format we use on the Web, you can also look at a Mac OS X application called ImageOptim (http://www.imageoptim.com). It is also a free application, and it compresses both JPEG and PNG images. If you are not on Mac OS X, you can head over to https://tinypng.com/. This handy little site allows you to upload your image, which is then compressed. The optimized images are then linked on the site as downloadable files.

As JPEG images make up the majority of most web pages, with some exceptions, let's take a look at how to make them load faster.

Progressive images

Most advanced image editors, such as Photoshop and GIMP, give you the option to encode your JPEG images as either baseline or progressive. If you use Save For Web in Photoshop, you will see this section at the top of the dialog box:

In most cases, for use on web pages, I would advise you to use the Progressive encoding type. When you save an image using baseline, the full image data of every pixel block is written to the file one after the other. Baseline images load gradually from the top-left corner.
If you save an image using the Progressive option, the file stores only a part of each of these blocks, then another part, and so on, until the entire image's information is captured in the file. When you render a progressive image, you will see a very grainy image at first, which gradually becomes sharper as it loads. Progressive images are also smaller than baseline images for various technical reasons, which means that they load faster. In addition, they appear to load faster because something is displayed on the screen sooner.

Here is a typical example of the visual difference between loading a progressive and a baseline JPEG image:

Here, you can clearly see how the two encodings load in a browser. On the left, the progressive image is already displayed, whereas the baseline image is still loading from the top.

Alright, that was some really basic stuff, but it was extremely important nonetheless. Let's move on to conditional loading.

Adaptive images

Adaptive images are an adaptation of Filament Group's context-aware image sizing experiment. What does it do? Well, this is what the guys say about themselves:

"Adaptive images detects your visitor's screen size and automatically creates, caches, and delivers device appropriate re-scaled versions of your web page's embedded HTML images. No mark-up changes needed. It is intended for use with Responsive Designs and to be combined with Fluid Images techniques."

It certainly trumps the experiment in the simplicity of implementation. So, how does it work? It's quite simple. There is no need to change any of your current code:

1. Head over to http://adaptive-images.com/download.htm and get the latest version of adaptive images.
2. Place the adaptive-images.php file in the root of your site. Make sure to add the content of the .htaccess file to your own as well.
3. Head over to the index file of your site and add this in the <head> tags:

    <script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script>

Note that it has to be in the <head> tag of your site.

4. Open the adaptive-images.php file and add your media query values into the $resolutions variable. Here is a snippet of code that is pretty self-explanatory:

    $resolutions   = array(1382, 992, 768, 480);
    $cache_path    = "ai-cache";
    $jpg_quality   = 80;
    $sharpen       = TRUE;
    $watch_cache   = TRUE;
    $browser_cache = 60*60*24*7;

The $resolutions variable accepts the breakpoints that you use for your website. You can simply add the value of the screen width in pixels. So, in the preceding example, it would read 1382 pixels as the first breakpoint, 992 pixels as the second one, and so on.

The $cache_path variable tells adaptive images where to store the generated resized images. It's a relative path from your document root, so in this case, your folder structure would read as document_root/ai-cache/{images stored here}.

The next variable, $jpg_quality, sets the quality of any generated JPGs on a scale of 0 to 100. Shrinking images can blur details, so set $sharpen to TRUE to perform a sharpening process on rescaled images. When you set $watch_cache to TRUE, you force adaptive images to check that the adapted image isn't stale; that is, it ensures that updated source images are recached. Lastly, $browser_cache sets how long the browser cache should last. The value is expressed in seconds, written as seconds*minutes*hours*days (7 days by default), so you can change the last digit to modify the number of days. For example, if you want images to be cached for two days, simply change the last value to 2.
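For instance, to line the script up with the media query breakpoints this article uses a little further on (480, 720, 960, 1200, and 1600 pixels), the first line of that snippet might become the following; this is just an illustration, not a value the demo site requires:

    $resolutions = array(1600, 1200, 960, 720, 480);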
Then,… oh wait, that's all? It is indeed! Adaptive images will work with your existing website and they don't require any markup changes. They are also device-agnostic and follow a mobile-first philosophy.

Conditional loading

Responsive designs combine three main techniques, which are as follows:

- Fluid grids
- Flexible images
- Media queries

The technique that I want to focus on in this section is media queries. In most cases, developers use media queries to change the layout, width, height, padding, font size, and so on, depending on conditions related to the viewport.

Let's see how we can achieve conditional image loading using CSS3's image-set function:

    .my-background-img {
        background-image: image-set(
            url(icon1x.jpg) 1x,
            url(icon2x.jpg) 2x);
    }

You can see in the preceding piece of CSS3 code that the image is loaded conditionally based on its display type. The second statement, url(icon2x.jpg) 2x, would load the high-resolution or retina image. This reduces the number of CSS rules we have to create; maintaining a site with a lot of background images can become quite a chore if a separate rule exists for each one.

Here is a simple media query example:

    @media screen and (max-width: 480px) {
        .container {
            width: 320px;
        }
    }

As I'm sure you already know, this snippet tells the browser that, for any device with a viewport of fewer than 480 pixels, any element with the class container has to be 320 pixels wide. When you use media queries, always make sure to include the viewport <meta> tag in the head of your HTML document, as follows:

    <meta name="viewport" content="width=device-width, initial-scale=1">

I've included this template here as I'd like to start with it. It really makes it very easy to get started with new responsive projects:

    /* MOBILE */
    @media screen and (max-width: 480px) {
        .container {
            width: 320px;
        }
    }

    /* TABLETS */
    @media screen and (min-width: 481px) and (max-width: 720px) {
        .container {
            width: 480px;
        }
    }

    /* SMALL DESKTOP OR LARGE TABLETS */
    @media screen and (min-width: 721px) and (max-width: 960px) {
        .container {
            width: 720px;
        }
    }

    /* STANDARD DESKTOP */
    @media screen and (min-width: 961px) and (max-width: 1200px) {
        .container {
            width: 960px;
        }
    }

    /* LARGE DESKTOP */
    @media screen and (min-width: 1201px) and (max-width: 1600px) {
        .container {
            width: 1200px;
        }
    }

    /* EXTRA LARGE DESKTOP */
    @media screen and (min-width: 1601px) {
        .container {
            width: 1600px;
        }
    }

When you view a website on a desktop, it's quite common to have a left and a right column. Generally, the left column contains information that requires more focus, and the right column contains content of slightly less importance. In some cases, you might even have three columns. Take the social website Facebook as an example. At the time of writing this article, Facebook used a three-column layout, which is as follows:

When you view a web page on a mobile device, you won't be able to fit all three columns into the smaller viewport. So, you'd probably want to hide some of the columns and avoid requesting the data that is usually displayed in the columns that are hidden.

Alright, we've done some talking. Well, you've done some reading. Now, let's get into our code! Our goal in this section is to learn about conditional development, with the focus on images. I've constructed a little website with a two-column layout. The left column houses the content and the right column is used to populate a little news feed.
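As a rough sketch of the idea (the class name here is an assumption for illustration, not taken from the demo site), the feed column can be hidden on small screens:

    /* Hide the news feed column on small screens */
    @media screen and (max-width: 480px) {
        .news-feed {
            display: none;
        }
    }

Hiding the column only solves half the problem, though; on the JavaScript side, we would check the same condition (for example, with window.matchMedia('(max-width: 480px)')) before firing the Ajax request that fills the feed, so that hidden content is never downloaded.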
I made a simple PHP script that returns a JSON object with the news items. Here is a preview of the different screens that we will work on:

These two views are the result of the queries that are shown in the following style sheet code:

    /* MOBILE */
    @media screen and (max-width: 480px) {
    }

    /* TABLETS */
    @media screen and (min-width: 481px) and (max-width: 720px) {
    }

Summary

Managing images is no small feat on a website. Almost all modern websites rely heavily on images to present content to their users. In this article, we looked at which image formats to use and when, and at how to optimize your images for websites. We discussed the difference between progressive and baseline images as well. Conditional loading can greatly help you load your site faster, and we briefly discussed how to use it to improve your site's performance.

Resources for Article:

Further resources on this subject:
A look into responsive design frameworks [article]
Building Responsive Image Sliders [article]
Creating a Responsive Project [article]
Creating a Responsive Project

Packt
08 Apr 2015
14 min read
In today's ultra-connected world, a good portion of your students probably own multiple devices. Of course, they may want to take your eLearning course on all of them. They might want to start the course on their desktop computer at work, continue it on their phone while commuting back home, and finish it at night on their tablet. In other situations, students might only have a mobile phone available to take the course, and sometimes the topic being taught only makes sense on a mobile device.

To address these needs, you want to deliver your course on multiple screens. As of Captivate 6, you can publish your courses in HTML5, which makes them available on mobile devices that do not support the Flash technology. Now, Captivate 8 takes it one huge step further by introducing Responsive Projects. A Responsive Project is a project that you can optimize for the desktop, the tablet, and the mobile phone. It is like providing three different versions of the course in a single project.

In this article, by Damien Bruyndonckx, author of the book Mastering Adobe Captivate 8, you will be introduced to the key concepts and techniques used to create a responsive project in Captivate 8. While reading, keep the following two things in mind. First, everything you have learned so far can be applied to a responsive project. Second, creating a responsive project requires more experience than what a book can offer. I hope that this article will give you a solid understanding of the core concepts in order to jump-start your own discovery of Captivate 8 Responsive Projects.

About Responsive Projects

A Responsive Project is meant to be used on multiple devices, including tablets and smartphones that do not support the Flash technology. Therefore, it can be published only in HTML5. This means that all the restrictions of a traditional HTML5 project also apply to a Responsive Project. For example, you will not be able to add Text Animations or Rollover Objects in a Responsive Project, because these features are not supported in HTML5.

Responsive design is not limited to eLearning projects made in Captivate. It is actually used by web designers and developers around the world to create websites that have the ability to automatically adapt themselves to the screen they are viewed on. To do so, they need to detect the screen width that is available to their content and adapt accordingly.

Responsive Design by Ethan Marcotte

If you want to know more about responsive design, I strongly recommend a book by Ethan Marcotte in the A Book Apart collection. This is the founding book of responsive design. If you have some knowledge of HTML and CSS, it is a must-have resource in order to fully understand what responsive design is all about. More information on this book can be found at http://www.abookapart.com/products/responsive-web-design.

Viewport size versus screen size

At the heart of the responsive design approach is the width of the screen used by the student to consume the content. To be more exact, it is the width of the viewport that is detected, not the width of the screen. The viewport is the area that is actually available to the content.

On a desktop or laptop computer, the difference between the screen width and the viewport width is very easy to understand. Let's do a simple experiment to grasp that concept hands-on:

1. Open your default web browser and make sure it is in fullscreen mode.
2. Browse to http://www.viewportsizes.com/mine.
3. The main information provided by this page is the size of your viewport. Because your web browser is currently in fullscreen mode, the viewport size should be close to (but not quite the same as) the resolution of your screen.
4. Use your mouse to resize your browser window and see how the viewport size evolves.

As shown in the following screenshot, the size of the viewport changes as you resize your browser window, but the actual screen you use is always the same:

This viewport concept is also valid on a mobile device, even though it may be a bit subtler to grasp. The following screenshot shows the http://www.viewportsizes.com/mine web page as viewed in the Safari mobile browser on an iPad mini held in landscape (left) and in portrait (right). As you can see, the viewport size changes but, once again, the actual screen used is always the same.

Don't hesitate to perform these experiments on your own mobile devices and compare your results to mine.

Another thing that might affect the viewport size on a mobile device is the browser used. The following screenshot shows the viewport size of the same iPad mini held in portrait mode in Safari mobile (left) and in Chrome mobile (right). Note that the viewport size is slightly different in Chrome than in Safari. This is due to the interface elements of the browser (such as the address bar and the tabs) that use a variable portion of the screen real estate in each browser.

Understanding breakpoints

Before setting up your own Responsive Project, there is one more concept to explore. To discover this second concept, you will perform another simple experiment on your desktop or laptop computer:

1. Open the web browser of your desktop or laptop computer and maximize it to fullscreen size.
2. Browse to http://courses.dbr-training.eu/8/goingmobile. This is the online version of the Responsive Project that you will build in this article. When viewed on a desktop or laptop computer in fullscreen mode, you should see a version of the course optimized for larger screens.
3. Use your mouse to slowly scale your browser window down. Note how the size and the position of the elements are automatically recalculated as you resize the browser window.
4. At some point, you should see that the height of the slide changes and that another layout is applied. The point at which the layout changes is situated at a width of exactly 768 px.

In other words, if the width of the browser (actually the viewport) is above 768 px, one layout is applied, but if the width of the viewport falls under 768 px, another layout is applied. You just discovered a breakpoint.

The layout that is applied after the breakpoint (in other words, when the viewport width is lower than 768 px) is optimized for a tablet device held in portrait mode. Note that even though you are using a desktop or laptop computer, it is the tablet-optimized layout that is applied when the viewport width is at or under 768 px.

5. Keep scaling the browser window down and see how the position and the size of the elements of the slide are recalculated in real time as you resize the browser window.

This simple experiment should better explain what a breakpoint is and how breakpoints work. Before moving on to the next section, let's take some time to summarize the important concepts uncovered in this section:

- The aim of responsive design is to provide an optimized viewing experience across a wide range of devices and form factors.
- To achieve this goal, responsive design uses fluid sizing and positioning techniques, responsive images, and breakpoints.
- Responsive design is not limited to eLearning courses made in Captivate, but is widely used in web and app design by thousands of designers around the world.
- A Captivate 8 Responsive Project can only be published in HTML5. The capabilities and restrictions of a standard HTML5 project also apply to a Responsive Project.
- A breakpoint defines the exact viewport width at which the layout breaks and another layout is applied.
- The breakpoints, and therefore the optimized layouts, are based on the width of the viewport and not on the detection of an actual device. This explains why the tablet-optimized layout is applied to the downsized browser window on a desktop computer.
- The viewport width and the screen width are two different things.

In the next section, you will start the creation of your very first Responsive Project. To learn more about these concepts, there is a video course on Responsive eLearning with Captivate 8 available on Adobe KnowHow. The course itself is for a fee, but there is a free 15-minute sample that walks you through these concepts using another approach. I suggest you take some time to watch it at https://www.adobeknowhow.com/courselanding/create-responsive-elearning-adobe-captivate-8.

Setting up a Responsive Project

It is now time to open Captivate and set up your first Responsive Project using the following steps:

1. Open Captivate or close every open file.
2. Switch to the New tab of the Welcome screen.
3. Double-click on the Responsive Project thumbnail. Alternatively, you can also use the File | New Project | Responsive Project menu item.

This action creates a new Responsive Project. Note that the choice between a Responsive Project and a regular Captivate project must be made up front when creating the project. As of Captivate 8, it is not yet possible to take an existing non-responsive project and make it responsive after the fact.

The workspace of Captivate should be very similar to what you are used to, with the exception of an extra ruler that spans the top of the screen. This ruler contains three predefined breakpoints. As shown in the following screenshot, the first breakpoint is called the Primary breakpoint and is situated at 1024 pixels. Also, note that the breakpoint ruler is green when the Primary breakpoint is selected.

You will now discover the other two breakpoints using the following steps:

1. In the breakpoint ruler, click on the icon of a tablet to select the second breakpoint. The stage and all the elements it contains are resized. In the breakpoint ruler at the top of the stage, the second breakpoint is now selected. It is called the Tablet breakpoint and is situated at 768 pixels. Note the blue color associated with the Tablet breakpoint.
2. In the breakpoint ruler, click on the icon of a smartphone to select the third and last breakpoint. Once again, the stage and the elements it contains are resized. The third breakpoint is called the Mobile breakpoint and is situated at 360 pixels. The orange color is associated with this third breakpoint.

Adjusting the breakpoints

In some situations, the default location of these three breakpoints works just fine. But, in other situations, some adjustments are needed.
In this project, you want to target the regular screen of a desktop or laptop computer in the Primary view, an iPad mini held in portrait in the Tablet view, and an iPhone 4 held in portrait in the Mobile view. You will now adjust the breakpoints to fit these particular specifications by using the following steps:

1. Click on the Primary breakpoint in the breakpoints ruler to select it.
2. Use your mouse to move the breakpoint all the way to the left. Captivate should stop at a width of 1280 pixels. It is not possible to have a stage wider than 1280 pixels in a Responsive Project. For this project, the default width of 1024 pixels is perfect, so you will now move this breakpoint back to its original location.
3. Move the Primary breakpoint to the right until it is placed at 1024 pixels.
4. Return to your web browser and browse to http://www.viewportsizes.com. Once on the website, type iPad in the Filter field at the top of the page. The portrait width of an iPad mini is 768 pixels. In Captivate, the Tablet breakpoint is placed at 768 pixels by default, which is perfectly fine for the needs of this project.
5. Still on the http://www.viewportsizes.com website, type iPhone in the Filter field at the top of the page. The portrait width of an iPhone 4 is 320 pixels. In Captivate, the Mobile breakpoint is placed at 360 pixels by default. You will now move it to 320 pixels so that it matches the portrait width of an iPhone 4.
6. Return to Captivate and select the Mobile breakpoint.
7. Move the Mobile breakpoint to the right until it is placed at exactly 320 pixels. Note that the minimum width of the stage in the Mobile breakpoint is 320 pixels. In other words, the stage cannot be narrower than 320 pixels in a Responsive Project.

The viewport size of your device

Before moving on to the next section, take some time to inspect the http://viewportsizes.com site a little further. For example, type the name of the devices you own and compare their characteristics to the breakpoints of the current project. Will the project fit on your devices? How would you need to change the breakpoints so that the project perfectly fits your devices?

The breakpoints are now in place, but they only take care of the width of the stage. In the next section, you will adjust the height of the stage in each breakpoint.

Adjusting the slide height

Captivate slides have a fixed height. This is the primary difference between a Captivate project and a regular responsive website, whose page height is infinite. In this section, you will adjust the height of the stage in all three breakpoints. The steps are as follows:

1. Still in Captivate, click on the desktop icon situated on the left side of the breakpoint switcher to return to the Primary view.
2. On the far right of the breakpoint ruler, select the View Device Height checkbox. As shown in the following screenshot, a yellow border now surrounds the stage in the Primary view, and the slide height is displayed in the top-left corner of the stage:

For the Primary view, a slide height of 627 pixels is perfect. It matches the viewport size of an iPad held in landscape and provides a big enough area on a desktop or laptop computer.

3. Click on the Tablet breakpoint to select it.
4. Return to http://www.viewportsizes.com/ and type iPad in the Filter field at the top of the page. According to the site, the height of an iPad is 1024 pixels.
5. Use your mouse to drag the yellow rectangle situated at the bottom of the stage down until the stage height is around 950 pixels.
You may need to reduce the zoom magnification to perform this action comfortably. After this operation, the stage should look like the following screenshot (the zoom magnification has been reduced to 50 percent in the screenshot):

With a height of 950 pixels, the Captivate slide can fit on an iPad screen and still account for the screen real estate consumed by the interface elements of the browser, such as the address bar and the tabs.

6. Still in the Tablet view, make sure the slide is the selected object and open the Properties panel. Note that, at the end of the Properties panel, the Slide Height property is currently unavailable.
7. Click on the chain icon (Unlink from Device height) next to the Slide Height property. By default, the slide height is linked to the device height. By clicking on the chain icon, you have broken the link between the slide height and the device (or viewport) height. This allows you to modify the height of the Captivate slide without modifying the height of the device.
8. Use the Properties panel to change the Slide Height to 1024 pixels. On the stage, note that the slide is now a little bit higher than the yellow rectangle. This means that this particular slide will generate a vertical scrollbar on a tablet device held in portrait. Scrolling is something you want to avoid as much as possible, so you will now re-enable the link between the device height and the Slide Height.
9. In the Properties panel, click on the chain icon next to the Slide Height property to enable the link. The slide height is automatically readjusted to the device height of 950 pixels.
10. Use the breakpoint ruler to select the Mobile breakpoint. By default, the device height in the Mobile breakpoint is set to 415 pixels. According to the http://www.viewportsizes.com/ website, the screen of an iPhone 4 has a height of 480 pixels. A slide height of 415 pixels is perfect to accommodate the slide itself plus the interface elements of the mobile browser.

Summary

In this article, you learned the key concepts and techniques used to create a responsive project in Captivate 8.

Resources for Article:

Further resources on this subject:
Publishing the project for mobile [article]
Getting Started with Adobe Premiere Pro CS6 Hotshot [article]
Creating Motion Through the Timeline [article]
Adding and Editing Content in Your Web Pages

Packt
06 Apr 2015
12 min read
This article by Miko Coffey, the author of the book Building Business Websites with Squarespace 7, delves into the processes of adjusting images, adding content to sidebars or footers, and adding links.

Adjusting images in Squarespace

We've learned how to adjust the size of images in relation to other elements on the page, but so far, the images themselves have remained intact, showing the full image. However, you can actually crop or zoom images so they only show a portion of the photo on the screen, without having to leave the Squarespace interface. You can also apply effects to images using the built-in Aviary Image Editor, such as rotating, enhancing color, boosting contrast, whitening teeth, removing blemishes, or hundreds of other adjustments, which means you don't need fancy image editing software to perform even fairly advanced image adjustments.

Cropping and zooming images with LayoutEngine

If you only want to crop your image, you don't need to use the Aviary Image Editor: you can crop images using LayoutEngine in the Squarespace Content Editor. To crop an image, you perform the same steps used to adjust the height of a Spacer Block: just click and drag the dot to change the part of the image that is shown. As you drag the dot up or down, you will notice the following:

- Dragging the dot up will chop off the top and bottom of your image.
- Dragging the dot down will zoom in on your image, cutting off the sides and making the image appear larger.
- When dragging the dot very near the original dimensions of your image, you will feel and see the cursor pull/snap to the original size.

Cropping an image in an Image Block in this manner does not remove parts from the original image; it merely adjusts the part of the image that will be shown in the Image Block on the page. You can always change your mind later.

Adjusting the Focal Point of images

You'll notice that all of the cropping and zooming of images is based on the center of the image. What if your image has elements near the edges that you want to show instead of weighting things towards the center? With Squarespace, you can influence which part of the image displays by adjusting the Focal Point of the image.

The Focal Point identifies the most important part of the image, instructing the system to use this point as the basis for cropping or zooming. However, if your Image Block is an extreme shape, such as a long, skinny rectangle, it may not be possible to fit all of your desired area into the cropped or zoomed image space.

Adjusting the Focal Point can also be useful for Gallery images, as certain templates display images in a square format or other formats that may not match the dimensions of your images. You can also adjust the Focal Point of any Thumbnail Images that you have added in Page Settings to select which part to show as the thumbnail or header banner.

To adjust an image's Focal Point, follow these steps:

1. Double-click on the image to open the Edit Image overlay window.
2. Hover your mouse over the image thumbnail, and you will see a translucent circle appear at the center of the thumbnail. This is the Focal Point.
3. Click and drag the circle until it sits on top of the part of the image you want to include, as shown in the following screenshot:

Using the Aviary Image Editor

You can also use the Aviary Image Editor to crop or zoom into your images, as well as to make many other adjustments that are too numerous to list here.
It's important to remember that all adjustments carried out in the Aviary Image Editor are permanent: there is no way to go back to a previous version of your image. Therefore, it's better to use LayoutEngine for cropping and zooming and reserve Aviary for other adjustments that you know you want to make permanently, such as rotating a portrait image that was taken sideways to display vertically.

Because edits performed in the Aviary Image Editor are permanent, use it with caution and always keep a backup original version of the image on your computer, just in case.

Here's how to edit an image with the Aviary Image Editor:

1. Double-click on the image to open the Edit Image overlay window.
2. Click on the Edit button below the image thumbnail. This will open the Aviary window, as shown in the following screenshot:
3. Select the type of adjustment you want to perform from the menu at the top.
4. Use the controls to perform the adjustment and click on Apply. The window will show you the effect of the adjustment on the image.
5. Perform any other adjustments in the same manner. You can go back to previous steps using the arrows in the bottom-left section of the editor window.
6. Once you have performed all desired adjustments, click on Save to commit the adjustments to the image permanently. The Aviary window will now close.
7. In the Edit Image window, click on Save to store the Aviary adjustments. The Edit Image window will now close.
8. In the Content Editor window, click on the Save button to refresh the page with the newly edited version of the image.

Adding content to sidebars or footers

Until this point, all of our content additions and edits have been performed on a single page. However, it's likely that you will want certain blocks of content to appear on multiple or all pages, such as a copyright notice in your page footer or an About the Author text snippet in the sidebar of all blog posts.

You add content to footers or sidebars using the Content Editor in much the same way as adding page content. However, there are a few restrictions. Certain templates only allow certain types of blocks in footers or sidebars, and some templates have restrictions on positioning elements as well—for example, it's unlikely that you will be able to wrap text around an image in a sidebar, due to space limitations. If the system refuses an addition or repositioning move that you are trying to perform on a block in a sidebar or footer, it usually means you are attempting something that is prohibited.

Adding or editing content in a footer

Follow these steps to add or edit content in a footer:

1. In the preview screen, scroll to the bottom of your page and hover over the footer area to activate the Annotations there, then click on the Edit button next to the Footer Content label, as shown in the following screenshot:
2. This will open the Content Editor window. You will notice that Insert Points appear just like before, and you can click on an Insert Point to add a block, or click within an existing Text Block to edit it. You can move blocks in a footer in the same way as those in a page body.

Most templates have a footer, but not all of them do. Some templates hide the footer on certain page types, so if you can't see your footer, try looking at a standard page, or double-check whether your template offers one.
Adding or editing content in a sidebar

Not all templates have a sidebar, but if yours does, here's how you can add or edit content in it:

1. While in the Pages menu, navigate to a page that you know has a sidebar in your template, such as a Blog page. You should see the template's demo content preloaded into your sidebar.
2. Hover your mouse over the sidebar area to activate the Annotations, and click on the Edit button that appears at the top of the sidebar area. Make sure you click on the correct Annotation. Other Annotations may be activated on the page, so don't get confused and click on anything other than the sidebar Annotations, as shown in the following screenshot:
3. Once the Content Editor window opens, you can click on an Insert Point to add a block or click within an existing Text Block to edit it. You can move blocks in a sidebar in the same way as those in a page body.

Enabling a sidebar

If you do not see the sidebar, but you know that your template allows one and you are on the correct page type (for example, a Blog post), then it's possible that your sidebar is not enabled. Depending on the template, you enable your sidebar in one of two ways. The first method is in the Style Editor, as follows:

1. First, ensure you are looking at a Blog page (or any page that can have a sidebar in your template).
2. From the Home menu in the side panel, navigate to Design | Style Editor.
3. In the Style Editor menu, scroll down until you see a set of controls related to Blog Styles.
4. Find the control for the sidebar, and select the position you want. The following screenshot shows you an example of this:
5. Click on Save to commit your changes.

If you don't see the sidebar control in the Style Editor, you may find it in the Blog or Page Settings instead, as described here:

1. First, ensure you are looking at the Blog page (or any page that can have a sidebar in your template), and then click on the Settings button in the Annotations or the cog icon in the Pages menu to open the Settings window.
2. Look for a menu item called Page Layout and select the sidebar position, as shown in the following screenshot:

On smaller screens, many templates use a fluid layout to stack the sidebar below the main content area instead of showing it on the left- or right-hand side. If you can't see your sidebar and are viewing the website on a tablet or another small/low-resolution screen, scroll down to the bottom of the page and you will most likely see your sidebar content there, just above the footer.

Adding links that point to web pages or files

The final type of basic content that we'll cover in this article is adding hyperlinks to your pages. You can use these links to point to external websites, other pages on your own website, or files that visitors can either view within the browser or download to their computers. You can assign a link to any word or phrase in a Text Block, or you can assign a link to an image. You can also use a special type of block called a Button to make links really stand out and encourage users to click.

When creating links in any of these scenarios, you will be presented with three main options:

- External: You can paste or type the full web address of the external website you want the link to point to. You can also choose to have this website open in a new window to allow users to keep your site open instead of navigating away entirely.
The following screenshot shows the External option:

- Files: You can either upload a file directly, or link to a file that you have already uploaded earlier. This screenshot shows the Files option:
- Content: You can link to any page, category, or tag that you have created on your site. Linking to a category or tag will display a list of all items that have been labeled with that tag or category. Here's an example of the Content option:

Assigning a link to word(s) in a Text Block

Follow these steps to assign a link to a word or phrase in a Text Block:

1. Highlight the word(s), and then click on the link icon (two interlocked ovals) in the text editor menu.
2. Select the type of link you want to add, input the necessary settings, and then click anywhere outside the Edit Link window. This will close the Edit Link window, and you will see that the word is now a different color, indicating that the link has been applied.
3. Click on the Save button at the top of the page.

You can change or remove the link at any time by clicking on the word. A floating window will appear to show you what the link currently points to, along with the options to edit or remove the link. This is shown in the following screenshot:

Assigning a link to an image

Here's how you can assign a link to an image:

1. Double-click on the image to open the Edit Image window.
2. Under Clickthrough URL, select the type of link that you want to add and input the necessary settings.
3. Click on Save.

Creating a Button on your page

You can create a Button on your page by following these steps:

1. In the Content Editor window, find the point where you want to insert the button on the page, and click on the Insert Point to open the Add Block menu.
2. Under the Filters & Lists category, choose Button.
3. Type the text that you want to show on the button, select the type of link, and select the button size and alignment you want. The following screenshot shows the Edit Button window:

This is how the button appears on the page:

Summary

In this article, you have acquired the skills you need to add and edit basic web content, and you have learned how to move things around to create finished web pages with your desired page layout. Visit www.square-help.com/inspiration if you'd like to see some examples of different page layouts to give you ideas for making your own pages. There, you'll see how you can use LayoutEngine to create sophisticated layouts for a range of page types.

Resources for Article:

Further resources on this subject:
Welcoming your Visitors: Creating Attractive Home Pages and Overview Pages [article]
Selecting Elements [article]
Creating Blog Content in WordPress [article]
WooCommerce Basics

Packt
01 Apr 2015
16 min read
In this article by Patrick Rauland, author of the book WooCommerce Cookbook, we will focus on the following topics:

- Installing WooCommerce
- Installing official WooThemes plugins
- Manually creating WooCommerce pages
- Creating a WooCommerce plugin

A few years ago, building an online store used to be an incredibly complex task. You had to install bulky software onto your own website and pay expensive developers a significant sum of money to customize even the simplest elements of your store. Luckily, nowadays, adding e-commerce functionality to your existing WordPress-powered website can be done by installing a single plugin.

In this article, we'll go over the settings that you'll need to configure before launching your online store with WooCommerce. Most of the recipes in this article are simple to execute. We do, however, add a relatively complex recipe near the end of the article to show you how to create a plugin specifically for WooCommerce. If you're going to be customizing WooCommerce with code, it's definitely worth looking at that recipe to know the best way to customize WooCommerce without affecting other parts of your site.

The recipes in this article form the very basics of setting up a store, installing plugins that enhance WooCommerce, and managing those plugins. There are recipes for official WooCommerce plugins written by WooThemes as well as a recipe for unofficial plugins. Feel free to select either one. In general, the official plugins are better supported, more up to date, and have more functionality than unofficial plugins. You could always try an unofficial plugin to see whether it meets your needs, and if it doesn't, switch to an official plugin that is much more likely to do so. At the end of this article, your store will be fully functional and ready to display products.

Installing WooCommerce

WooCommerce is a WordPress plugin, which means that you need to have WordPress running on your own server to add WooCommerce. The first step is to install WooCommerce. You could do this on an established website or a brand new website—it doesn't matter. Since e-commerce is more complex than your average plugin, there's more to the installation process than just installing the plugin.

Getting ready

Make sure you have the permissions necessary to install plugins on your WordPress site. The easiest way to have the correct permissions is to make sure your account on your WordPress site has the admin role.

How to do it…

There are two parts to this recipe. The first part is installing the plugin and the second is adding the required pages to the site. Let's have a look at the following steps for further clarity:

1. Log in to your WordPress site.
2. Click on the Plugins menu.
3. Click on the Add New menu item. These steps have been demonstrated visually in the following screenshot:
4. Search for WooCommerce.
5. Click on the Install Now button, as shown in the following screenshot:
6. Once the plugin has been installed, click on the Activate Plugin button.

You now have WooCommerce activated on your site, which means we're halfway there. E-commerce platforms need to have certain pages (such as a cart page, a checkout page, an account page, and so on) to function. We need to add those to your site:

7. Click on the Install WooCommerce Pages button, which appears after you've activated WooCommerce.
This is demonstrated in the following screenshot:

How it works…

WordPress has an infrastructure that allows any WordPress site to install a plugin hosted on WordPress.org. This is a secure process that is managed by WordPress.org.

Installing the WooCommerce pages allows all of the e-commerce functionality to run. Without installing the pages, WooCommerce won't know which page is the cart page or the checkout page. Once these pages are set up, we're ready to have a basic store up and running.

If WordPress prompts you for FTP credentials when installing the plugin, that's likely to be a permissions issue with your web host. It is a huge pain if you have to enter FTP credentials every time you want to install or update a plugin, and it's something you should take care of. You can send this link to your web host provider so they know how to change their permissions. You can refer to http://www.chrisabernethy.com/why-wordpress-asks-connection-info/ for more information to resolve this WordPress issue.

Installing official WooThemes plugins

WooThemes doesn't just create the WooCommerce plugin. They also create standalone plugins and hundreds of extensions that add extra functionality to WooCommerce. The beauty of this system is that WooCommerce is very easy to use because users only add extra complexity when they need it. If you only need simple shipping options, you don't ever have to see the complex shipping settings.

On the WooThemes website, you may browse for WooCommerce extensions, purchase them, and download and install them on your site. WooThemes has made the whole process very easy to maintain. They have built an updater similar to the one in WordPress, which, once configured, will allow a user to update a plugin with one click instead of having to go through the whole plugin upload process again.

Getting ready

Make sure you have the necessary permissions to install plugins on your WordPress site. You also need to have a WooThemes product. There are several free WooThemes products, including Pay with Amazon, which you can find at http://www.woothemes.com/products/pay-with-amazon/.

How to do it…

There are two parts to this recipe. The first part is installing the plugin and the second is adding your license for future updates. Follow these steps:

1. Log in to http://www.woothemes.com.
2. Click on the Downloads menu:
3. Find the product you wish to download and click on the Download link for the product. You will see that you get a ZIP file.
4. On your WordPress site, go to the Plugins menu and click on Add New.
5. Click on Upload Plugin.
6. Select the file you just downloaded and click on the Install Now button.
7. After the plugin has finished installing, click on the Activate Plugin link.

You now have WooCommerce as well as a WooCommerce extension activated on your site. They're both functioning and will continue to function. You will, however, want to perform a few more steps to make sure it's easy to update your extensions:

8. Once you have an extension activated on your site, you'll see a link in the WordPress admin: Install the WooThemes Updater plugin. Click on that link:
9. The updater will be installed automatically. Once it is installed, you need to activate it.
10. After activation, you'll see a new link in the WordPress admin: activate your product licenses. Click that link to go straight to the page where you can enter your licenses. You could also navigate to that page manually by going to Dashboard | WooThemes Helper from the menu.
11. Keep your WordPress site open in one tab and log back in to your WooThemes account in another browser tab.
12. On the WooThemes browser tab, go to My Licenses and you'll see a list of your products with a license key under the heading KEY:
13. Copy the key, go back to your WordPress site, and enter it in the Licenses field.
14. Click on the Activate Products button at the bottom of the page. The activation process can take a few seconds to complete. If you've successfully put in your key, you should see a message at the top of the screen saying so.

How it works…

A plugin that's not hosted on WordPress.org can't update without someone manually reuploading it. The WooThemes updater was built to make this process easier, so you can press the update button and have your website do all the heavy lifting.

Some websites sell official WooCommerce plugins without a license key. These sales aren't licensed, and you won't be getting updates, bug fixes, or access to the support desk. With a regular website, it's important to stay up to date. However, with e-commerce, it's even more important, since you'll be handling very sensitive payment information. That's why I wouldn't ever recommend using a plugin that can't update.

Manually creating WooCommerce pages

Every e-commerce platform needs some way of creating extra pages for e-commerce functionality, such as a cart page, a checkout page, an account page, and so on. WooCommerce prompts you to create these pages when you first install the plugin, so if you installed it correctly, you shouldn't have to do this. But if you were trying multiple e-commerce systems and for some reason deleted some pages, you may have to recreate those pages.

How to do it…

There's a very useful Tools menu in WooCommerce. It's a bit hard to find, since you won't be needing it every day, but it has some pretty useful tools if you ever need to do some troubleshooting. One of these tools allows you to recreate your WooCommerce pages. Let's have a look at how to use that tool:

1. Log in to the WordPress admin.
2. Click on WooCommerce | System Status:
Any content on the Checkout page will appear on both the Checkout page and on the Order Received page. You can't add content to the parent page without it affecting the subpage, but you can change the subpage URLs. The checkout endpoints can be configured by going to WooCommerce | Settings | Checkout | Checkout Endpoints. Creating a WooCommerce plugin Unlike a lot of hosted e-commerce solutions, WooCommerce is entirely customizable. That's one of the huge advantages for anyone who builds on open source software. If you don't like it, you can change it. At some point, you'll probably want to change something that's not on a settings page, and that's when you may want to dig into the code. Even if you don't know how to code, you may want to look this over so that when you work with a developer, you would know they're doing it the right way. Getting ready In addition to having admin access to a WordPress site, you'll also need FTP credentials so you can upload a plugin. You'll also need a text editor. Popular code editors include Sublime Text, Coda, Dreamweaver, and Atom. I personally use Atom. You could also use Notepad on a Windows machine or TextEdit on a Mac in a pinch. How to do it… We're going to be creating a plugin that interacts with WooCommerce. It will take the existing WooCommerce functionality and change it, using nothing more than the WooCommerce basics. If you build a plugin like this correctly, it won't do anything at all when WooCommerce isn't active and won't slow down your website. Let's create a plugin by performing the following steps:
1. Open your text editor and create a new file. Save the file as woocommerce-demo-plugin.php.
2. In that file, add the opening PHP tag, which looks like this: <?php.
3. On the next line, add a plugin header. This allows WordPress to recognize the file as a plugin so that it can be activated. It looks something like the following:
/**
 * Plugin Name: WooCommerce Demo Plugin
 * Plugin URI: https://gist.github.com/BFTrick/3ab411e7cec43eff9769
 * Description: A WooCommerce demo plugin
 * Author: Patrick Rauland
 * Author URI: http://speakinginbytes.com/
 * Version: 1.0
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 */
4. Now that WordPress knows that your file is a plugin, it's time to add some functionality to it. The first thing a good developer does is make sure their plugin won't conflict with another plugin. To do that, we make sure an existing class doesn't have the same name as our class. I'll be using the WC_Demo_Plugin class, but you can use any class name you want. Add the following code beneath the plugin header:
if ( class_exists( 'WC_Demo_Plugin' ) ) {
    return;
}

class WC_Demo_Plugin {

}
Our class doesn't do anything yet, but at least we've written it in such a way that it won't break another plugin. There's another good practice we should add to our plugin before we add the functionality, and that's some logic to make sure another plugin won't misuse our plugin.
In the vast majority of use cases, you want to make sure there can't be two instances of your code running. In computer science, this is called the Singleton pattern. This can be controlled by tracking the instances of the plugin with a variable. Right after the class WC_Demo_Plugin { line, add the following:
protected static $instance = null;

/**
 * Return an instance of this class.
 *
 * @return object A single instance of this class.
 * @since 1.0
 */
public static function get_instance() {
    // If the single instance hasn't been set, set it now.
    if ( null == self::$instance ) {
        self::$instance = new self;
    }

    return self::$instance;
}
And get the plugin started by adding this at the end of the file, after the closing brace of the class:
add_action( 'plugins_loaded', array( 'WC_Demo_Plugin', 'get_instance' ), 0 );
At this point, we've made sure our plugin doesn't break other plugins and we've also dummy-proofed our own plugin so that we or other developers don't misuse it. Let's add just a bit more logic so that we don't run our logic unless WooCommerce is already loaded. This will make sure that we don't accidentally break something if we turn WooCommerce off temporarily. Right after the protected static $instance = null; line, add the following:
/**
 * Initialize the plugin.
 *
 * @since 1.0
 */
private function __construct() {
    if ( class_exists( 'WooCommerce' ) ) {

    }
}
And now our plugin only runs when WooCommerce is loaded. I'm guessing that at this point, you finally want it to do something, right? After we make sure WooCommerce is running, let's add some functionality. Right after the if ( class_exists( 'WooCommerce' ) ) { line, add the following code so that we add an admin notice:
// print an admin notice to the screen.
add_action( 'admin_notices', array( $this, 'display_admin_notice' ) );
This code will call a method named display_admin_notice, but we haven't written that yet, so it's not doing anything. Let's write that method. Have a look at the __construct method, which should now look like this:
/**
 * Initialize the plugin.
 *
 * @since 1.0
 */
private function __construct() {
    if ( class_exists( 'WooCommerce' ) ) {

        // print an admin notice to the screen.
        add_action( 'admin_notices', array( $this, 'display_admin_notice' ) );

    }
}
Add the following after the preceding __construct method:
/**
 * Print an admin notice
 *
 * @since 1.0
 */
public function display_admin_notice() {
    ?>
    <div class="updated">
        <p><?php _e( 'The WooCommerce dummy plugin notice.', 'woocommerce-demo-plugin' ); ?></p>
    </div>
    <?php
}
This will print an admin notice on every single admin page, in the same place where you see all the usual messages in the WordPress admin. You could replace this admin notice method with just about any other hook in WooCommerce to provide additional customizations in other areas of WooCommerce, whether it be for shipping, the product page, the checkout process, or any other area. This plugin is the easiest way to get started with WooCommerce customizations. If you'd like to see the full code sample, you can see it at https://gist.github.com/BFTrick/3ab411e7cec43eff9769. Now that the plugin is complete, you need to upload it to your plugins folder. You can do this via the WordPress admin or, more commonly, via FTP. Once the plugin has been uploaded to your site, you'll need to activate it just like any other WordPress plugin. The end result is a notice in the WordPress admin letting us know we did everything successfully.
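Once you are comfortable with this skeleton, adapting it to the front end is mostly a matter of choosing a different hook. As a sketch (the WC_Footer_Note class name, the wp_footer hook choice, and the footer text are illustrative assumptions of mine, not part of the original recipe), a complete companion plugin might look like this:

<?php
/**
 * Plugin Name: WooCommerce Footer Note
 * Description: Illustrative companion sketch to the demo plugin above.
 */

if ( class_exists( 'WC_Footer_Note' ) ) {
    return;
}

class WC_Footer_Note {

    protected static $instance = null;

    /**
     * Return a single instance of this class.
     */
    public static function get_instance() {
        if ( null == self::$instance ) {
            self::$instance = new self;
        }
        return self::$instance;
    }

    /**
     * Only attach our hook when WooCommerce is active.
     */
    private function __construct() {
        if ( class_exists( 'WooCommerce' ) ) {
            // wp_footer is a core WordPress action that fires on every
            // front-end page, right before the closing body tag.
            add_action( 'wp_footer', array( $this, 'display_footer_note' ) );
        }
    }

    /**
     * Print a small note in the site footer.
     */
    public function display_footer_note() {
        echo '<p>' . esc_html__( 'Powered by our WooCommerce demo plugin.', 'wc-footer-note' ) . '</p>';
    }
}

add_action( 'plugins_loaded', array( 'WC_Footer_Note', 'get_instance' ), 0 );

The design point is the same as before: the class guard, the singleton accessor, and the class_exists( 'WooCommerce' ) check keep the plugin inert whenever WooCommerce is not active.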
Whenever possible, use object-oriented code. That means using objects (like the WC_Demo_Plugin class) to encapsulate your code. It will prevent a lot of naming conflicts down the road. If you see some procedural code online, you can usually convert it to object-oriented code pretty easily. Summary In this article, you have learned the basic steps in installing WooCommerce, installing WooThemes plugins, manually creating WooCommerce pages, and creating a WooCommerce plugin. Resources for Article: Further resources on this subject: Creating Blog Content in WordPress [article] Tips and Tricks [article] Setting Up WooCommerce [article]
Read more
  • 0
  • 0
  • 11289

Optimizing JavaScript for iOS Hybrid Apps
Packt
01 Apr 2015
17 min read
In this article by Chad R. Adams, author of the book, Mastering JavaScript High Performance, we are going to take a look at the process of optimizing JavaScript for iOS web apps (also known as hybrid apps). We will take a look at some common ways of debugging and optimizing JavaScript and page performance, both in a device's web browser and a standalone app's web view. Also, we'll take a look at the Apple Web Inspector and see how to use it for iOS development. Finally, we will also gain a bit of understanding about building a hybrid app and learn the tools that help to better build JavaScript-focused apps for iOS. Moreover, we'll learn about a class that might help us further in this. We are going to learn about the following topics in the article: Getting ready for iOS development iOS hybrid development (For more resources related to this topic, see here.) Getting ready for iOS development Before starting this article with Xcode examples and using iOS Simulator, I will be displaying some native code and will use tools that haven't been covered in this course. Mobile app development, regardless of platform, is a book-sized topic in itself. When covering the build of the iOS project, I will briefly go over the process of setting up a project and writing non-JavaScript code to get our JavaScript files into a hybrid iOS WebView for development. This is essential due to the way iOS secures its HTML5-based apps. Apps on iOS that use HTML5 can be debugged, either from a server or from an app directly, as long as the app's project is built and deployed in its debug setting on a host system (meaning the developer's machine). Readers of this article are not expected to know how to build a native app from beginning to end, and that's completely acceptable, as you can copy and paste, and follow along as I go. But I will show you the code to get us to the point of testing JavaScript code, and the code used will be the smallest and fastest possible to render your content. All of these code samples will be hosted as an Xcode project solution of some type on Packt Publishing's website, but they will also be shown here if you want to follow along without relying on the code samples. Now, with that said, let's get started… iOS hybrid development Xcode is the IDE provided by Apple to develop apps for both iOS devices and desktop devices for Macintosh systems. As a JavaScript editor, it has pretty basic functions, so Xcode should mainly be used as an addition to a JavaScript developer's project toolset. It provides basic code hinting for JavaScript, HTML, and CSS, but not more than that. To install Xcode, we will need to start the installation process from the Mac App Store. Apple, in recent years, has moved its IDE to the Mac App Store for faster updates to developers and, subsequently, faster app updates for iOS and Mac applications. Installation is easy; simply log in with your Apple ID in the Mac App Store and download Xcode; you can search for it at the top or, if you look in the right rail among popular free downloads, you can find a link to the Xcode Mac App Store page. Once you reach this, click Install as shown in the following screenshot: It's important to know that, for the sake of simplicity in this article, we will not deploy an app to a device; so if you are curious about it, you will need to be actively enrolled in Apple's Developer Program. The cost is 99 dollars a year, or 299 dollars for an enterprise license that allows deployment of an app outside the control of the iOS App Store.
If you're curious to learn more about deploying to a device, the code in this article will run on a device, assuming that your certificates are set up on your end. For more information on this, check out Apple's iOS Developer Center documentation online at https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40012582. Once it's installed, we can open up Xcode and look at the iOS Simulator; we can do this by clicking XCode, followed by Open Developer Tool, and then clicking on iOS Simulator. Upon first opening iOS Simulator, we will see what appears to be a simulation of an iOS device, shown in the next screenshot. Note that this is a simulation, not a real iOS device (even if it feels pretty close). A neat trick for JavaScript developers working with local HTML files outside an app is that they can quickly drag-and-drop an HTML file onto the simulator. When they do, the simulator will open the mobile version of Safari, the built-in browser for iPhones and iPads, and render the page as it would on an iOS device; this is pretty helpful when testing pages before deploying them to a web server. Setting up a simple iOS hybrid app JavaScript performance on a built-in hybrid application can be much slower than the same page run on the mobile version of Safari. To test this, we are going to build a very simple web browser using Apple's new programming language, Swift. Swift is an iOS-ready language that JavaScript developers should feel at home with. Swift itself follows a syntax similar to JavaScript but, unlike JavaScript, variables and objects can be given types, allowing for stronger, more accurate coding. In that regard, Swift follows syntax similar to what can be seen in the ECMAScript 6 and TypeScript styles of coding practice. If you are checking these newer languages out, I encourage you to check out Swift as well. Now let's create a simple web view, also known as a UIWebView, which is the class used to create a web view in an iOS app. First, let's create a new iPhone project; we are using an iPhone to keep our app simple. Open Xcode and select the Create a new Xcode project option; then, as shown in the following screenshot, select the Single View Application option and click the Next button. On the next view of the wizard, set the product name as JS_Performance, the language to Swift, and the device to iPhone; the organization name should autofill with your name based on your account name in the OS. The organization identifier is a reverse domain name unique identifier for our app; this can be whatever you deem appropriate. For instructional purposes, here's my setup: Once your project names are set, click the Next button and save to a folder of your choice with Git repository left unchecked. When that's done, select Main.storyboard under your Project Navigator, which is found in the left panel. We should be in the storyboard view now. Let's open the Object Library, which can be found in the lower-right panel in the subtab with an icon of a square inside a circle. Search for Web View in the Object Library in the bottom-right search bar, and then drag that to the square view that represents our iOS view. We need to consider two more things before we link up an HTML page using Swift; we need to set constraints, as native iOS objects will be stretched to fit various iOS device windows.
To fill the space, you can add the constraints by selecting the UIWebView object and pressing Command + Option + Shift + = on your Mac keyboard. Now you should see a blue border appear briefly around your UIWebView. Lastly, we need to connect our UIWebView to our Swift code; for this, we need to open the Assistant Editor by pressing Command + Option + Return on the keyboard. We should see ViewController.swift open up in a side panel next to our Storyboard. To link this as a code variable, right-click (or option-click the UIWebView object) and, with the button held down, drag the UIWebView to line number 12 in the ViewController.swift code in our Assistant Editor. This is shown in the following diagram: Once that's done, a popup will appear. Leave everything as it comes up, but set the name to webview; this will be the variable referencing our UIWebView. With that done, save your Main.storyboard file and navigate to your ViewController.swift file. Now take a look at the Swift code shown in the following screenshot, and copy it into the project; the important part is on line 19, which contains the filename and type loaded into the web view; in this case, index.html. Now obviously, we don't have an index.html file, so let's create one. Go to File and then select New followed by the New File option. Next, under iOS select Empty Application and click Next to complete the wizard. Save the file as index.html and click Create. Now open the index.html file, and type the following code into the HTML page: <br />Hello <strong>iOS</strong> Now click Run (the play button in the main Xcode toolbar), and we should see our HTML page running inside our own app, as shown here: That's nice work! We built an iOS app with Swift (even if it's a simple app). Let's create a structured HTML page; we will override our Hello iOS text with the HTML shown in the following screenshot: Here, we use the standard console.time function and print a message to our UIWebView page when finished; if we hit Run in Xcode, we will see the Loop Completed message on load. But how do we get our performance information? How can we get the output of the console.timeEnd function call on line 14 of our HTML page? Using Safari web inspector for JavaScript performance Apple does provide a Web Inspector for UIWebViews, and it's the same inspector as the one for desktop Safari. It's easy to use, but has an issue: the inspector only works on iOS Simulators and devices that have been started from an Xcode project. This limitation is due to security concerns for hybrid apps that may contain sensitive JavaScript code that could be exploited if visible. Let's check our project's embedded HTML page console. First, open desktop Safari on your Mac and enable developer mode. Launch the Preferences option. Under the Advanced tab, ensure that the Show develop menu in menu bar option is checked, as shown in the following screenshot: Next, let's rerun our Xcode project, start up iOS Simulator and then rerun our page. Once our app is running with the Loop Completed result showing, open desktop Safari and click Develop, then iOS Simulator, followed by index.html. If you look closely, you will see iOS Simulator's UIWebView highlighted in blue when you place the mouse over index.html; a visible page is seen as shown in the following screenshot: Once we release the mouse on index.html, Safari's Web Inspector window appears, featuring our hybrid iOS app's DOM and console information.
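The timed page itself only appears as a screenshot in this article; a minimal harness along the same lines might look like the following (the loop body, the iteration count, and the exact markup are assumptions based on the surrounding text, so treat this as a stand-in rather than the book's exact listing):

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>JS_Performance</title>
</head>
<body>
<div id="result"></div>
<script>
    // Start a named timer, run some throwaway work, and stop the timer.
    // console.timeEnd() reports the elapsed time in the attached Web Inspector.
    console.time('Timer');
    var sum = 0;
    for (var i = 0; i < 10000; i++) {
        sum += i;
    }
    console.timeEnd('Timer');
    // Write a completion message into the page, as described above.
    document.getElementById('result').innerHTML = 'Loop Completed';
</script>
</body>
</html>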
Safari's Web Inspector is pretty similar to Chrome's Developer tools in terms of feature sets; the panels used in the Developer tools also exist as icons in Web Inspector. Now let's select the Console panel in Web Inspector. Here, we can see our full console window, including the console.time function test included in the for loop. As we can see in the following screenshot, the loop took 0.081 milliseconds to process inside iOS. Comparing UIWebView with Mobile Safari What if we wanted to take our code and move it to Mobile Safari to test? This is easy enough; as mentioned earlier in the article, we can drag-and-drop the index.html file into our iOS Simulator, and then the OS will open the mobile version of Safari and load the page for us. With that ready, we will need to reconnect Safari Web Inspector to the iOS Simulator and reload the page. Once that's done, we can see that our console.time function is a bit faster; this time it's roughly 0.07 milliseconds, a full 0.01 milliseconds faster than UIWebView, as shown here: For a small app, this is a minimal difference in performance. But, as an application gets larger, the delay in these JavaScript processes gets longer and longer. We can also debug the app using the debugging inspector in Safari's Web Inspector tool. Click Debugger in the top menu panel in Safari's Web Inspector. We can add a breakpoint to our embedded script by clicking a line number and then refreshing the page with Command + R. In the following screenshot, we can see the break occurring on page load, and we can see our scope variable displayed for reference in the right panel: We can also check page load times using the timeline inspector. Click Timelines at the top of the Web Inspector and now we will see a timeline similar to the Resources tab found in Chrome's Developer tools. Let's refresh our page with Command + R on our keyboard; the timeline then processes the page. Notice that after a few seconds, the timeline in the Web Inspector stops when the page fully loads, and all JavaScript processes stop. This is a nice feature when you're working with the Safari Web Inspector as opposed to Chrome's Developer tools. Common ways to improve hybrid performance With hybrid apps, we can use all the usual techniques for improving performance: using a build system such as Grunt.js or Gulp.js with npm, using JSLint to better optimize our code, and writing code in an IDE to create better structure for our apps and to help check for any excess code or unused variables. We can follow best performance practices, such as applying HTML to a page as strings (via the innerHTML property) rather than creating objects for the content and applying them to the page that way, and so on. Sadly, the fact that hybrid apps do not perform as well as native apps still holds true. Now, don't let that dismay you, as hybrid apps do have a lot of good features!
Some of these are as follows: They are (typically) faster to build than using native code They are easier to customize They allow for rapid prototyping of app concepts They are easier to hand off to other JavaScript developers than having to find a native developer They are portable; they can be reused (with some modification) for other platforms, including Android devices, Windows Modern apps, Windows Phone apps, Chrome OS, and even Firefox OS They can interact with native code using helper libraries such as Cordova At some point, however, application performance will be limited by the hardware of the device, and it's recommended you move to native code. But how do we know when to move? Well, this can be done using Color Blended Layers. The Color Blended Layers option applies an overlay that highlights slow-performing areas on the device display, for example, green for good performance and red for slow performance; the darker the color, the greater the performance impact. Rerun your app using Xcode and, in the Mac OS toolbar for iOS Simulator, select Debug and then Color Blended Layers. Once we do that, we can see that our iOS Simulator shows a green overlay; this shows us how much memory iOS is using to process our rendered view, both native and non-native code, as shown here: Currently, we can see a mostly green overlay, with the exception of the status bar elements, which take up more render memory as they overlay the web view and have to be redrawn over that object repeatedly. Let's make a copy of our project and call it JS_Performance_CBL, and let's update our index.html code with this code sample, as shown in the following screenshot: Here, we have a simple page with an empty div; we also have a button with an onclick function called start. Our start function will update the height continuously using the setInterval function, increasing the height every millisecond. Our empty div also has a background gradient assigned to it with an inline style tag. CSS background gradients are typically a huge performance drain on mobile devices, as they can potentially re-render themselves over and over as the DOM updates itself. Some other issues include listener events; some earlier or lower-end devices do not have enough RAM to apply an event listener to a page. Typically, it's a good practice to apply onclick attributes to HTML either inline or through JavaScript. Going back to the gradient example, let's run this in iOS Simulator and enable Color Blended Layers after clicking our HTML button to trigger the JavaScript animation. As expected, the div element that we've expanded now has a red overlay, indicating that this is a confirmed performance issue, which is unavoidable. To correct this, we would need to remove the CSS gradient background, and it would show as green again. However, if we had to include a gradient in accordance with a design spec, a native version would be required. When faced with UI issues such as these, it's important to understand tools beyond the normal developer tools and Web Inspectors, and take advantage of the mobile platform tools that provide better analysis of our code. Now, before we wrap this article, let's take note of something specific to iOS web views.
The WKWebView framework At the time of writing, Apple has announced the WebKit framework, a first-party iOS library intended to replace UIWebView with a more advanced and better performing web view, with the intent of making apps that rely on HTML5 and JavaScript perform better as a whole. The WebKit framework, also known in developer circles as WKWebView, is a newer web view that can be added to a project. WKWebView is also the base class name for this framework. This framework includes many features that native iOS developers can take advantage of. These include listening for function calls that can trigger native Objective-C or Swift code. For JavaScript developers like us, it includes a faster JavaScript runtime called Nitro, which has been included with Mobile Safari since iOS 6. Hybrid apps have always run worse than native code. But with the Nitro JavaScript runtime, HTML5 is on equal footing with native apps in terms of performance, assuming that our view doesn't consume too much render memory, as shown in our Color Blended Layers example. WKWebView does have limitations, though; it can only be used with iOS 8 or higher, and it doesn't have built-in Storyboard or XIB support like UIWebView. So, using this framework may be an issue if you're new to iOS development. Storyboards are simply XML files coded in a specific way for iOS user interfaces to be rendered, while XIB files are the precursors to Storyboards. XIB files allow for only one view, whereas Storyboards allow multiple views and can link between them too. If you are working on an iOS app, I encourage you to reach out to your iOS developer lead and encourage the use of WKWebView in your projects. For more information, check out Apple's documentation of WKWebView at their developer site at https://developer.apple.com/library/IOs/documentation/WebKit/Reference/WKWebView_Ref/index.html. Summary In this article, we learned the basics of creating a hybrid application for iOS using HTML5 and JavaScript; we learned about connecting the Safari Web Inspector to our HTML page while running an application in iOS Simulator. We also looked at Color Blended Layers for iOS Simulator, and saw how to test for performance from our JavaScript code when it's applied to device-rendering performance issues. Now we are down to the wire. As for all JavaScript web apps before they go live to a production site, we need to smoke-test our JavaScript and web app code and see if we need to perform any final improvements before final deployment. Resources for Article: Further resources on this subject: GUI Components in Qt 5 [article] The architecture of JavaScriptMVC [article] JavaScript Promises – Why Should I Care? [article]

Testing our application on an iOS device
Packt
01 Apr 2015
10 min read
In this article by Michelle M. Fernandez, author of the book Corona SDK Mobile Game Development Beginner's Guide, we will upload our first Hello World application to an iOS device. Before we can do that, we need to log in to our Apple developer account so that we can create and install our signing certificates on our development machine. If you haven't created a developer account yet, do so by going to http://developer.apple.com/programs/ios/. Remember that there is a fee of $99 a year to become an Apple developer. (For more resources related to this topic, see here.) The Apple developer account applies only to users developing on Mac OS X. Make sure that your version of Xcode is the same as or newer than the version of the OS on your phone. For example, if you have version 5.0 of the iPhone OS installed, you will need the Xcode that is bundled with the iOS SDK version 5.0 or later. Time for action – obtaining the iOS developer certificate Make sure that you're signed up for the developer program; you will need to use the Keychain Access tool located in /Applications/Utilities so that you can create a certificate request. A valid certificate must sign all iOS applications before they can be run on an Apple device in order to do any kind of testing. The following steps will show you how to create an iOS developer certificate:
1. Go to Keychain Access | Certificate Assistant | Request a Certificate From a Certificate Authority:
2. In the User Email Address field, type in the e-mail address you used when you registered as an iOS developer. For Common Name, enter your name or team name. Make sure that the name entered matches the information that was submitted when you registered as an iOS developer. The CA Email Address field does not need to be filled in, so you can leave it blank. We are not e-mailing the certificate to a Certificate Authority (CA). Check Saved to disk and Let me specify key pair information. When you click on Continue, you will be asked to choose a save location. Save your file at a destination where you can locate it easily, such as your desktop.
3. In the following window, make sure that 2048 bits is selected for the Key Size and RSA for the Algorithm, and then click on Continue. This will generate the key and save it to the location you specified. Click on Done in the next window.
4. Next, go to the Apple developer website at http://developer.apple.com/, click on iOS Dev Center, and log in to your developer account. Select Certificates, Identifiers & Profiles under iOS Developer Program on the right-hand side of the screen and navigate to Certificates under iOS Apps.
5. Select the + icon on the right-hand side of the page. Under Development, click on the iOS App Development radio button. Click on the Continue button till you reach the screen to generate your certificate:
6. Click on the Choose File button and locate the certificate file that you saved to your desktop, and then click on the Generate button.
7. Upon hitting Generate, you will get the e-mail notification you specified in the CA request form from Keychain Access, or you can download it directly from the developer portal. The person who created the certificate will get this e-mail and can approve the request by hitting the Approve button.
8. Click on the Download button and save the certificate to a location that is easy to find. Once this is completed, double-click on the file, and the certificate will be added automatically to Keychain Access.
What just happened? We now have a valid certificate for iOS devices.
The iOS Development Certificate is used for development purposes only and is valid for about a year. The key pair is made up of your public and private keys. The private key is what allows Xcode to sign iOS applications. Private keys are available only to the key pair creator and are stored in the system keychain of the creator's machine. Adding iOS devices You are allowed to assign up to 100 devices for development and testing purposes in the iPhone Developer Program. To register a device, you will need its Unique Device Identifier (UDID) number. You can find this in iTunes and Xcode. Xcode To find out your device's UDID, connect your device to your Mac and open Xcode. In Xcode, navigate to the menu bar, select Window, and then click on Organizer. The 40-character hex string in the Identifier field is your device's UDID. Once the Organizer window is open, you should see the name of your device in the Devices list on the left-hand side. Click on it and select the identifier with your mouse, copying it to the clipboard. Usually, when you connect a device to Organizer for the first time, you'll receive a button notification that says Use for Development. Select it, and Xcode will do most of the provisioning work for your device in the iOS Provisioning Portal. iTunes With your device connected, open iTunes and click on your device in the device list. Select the Summary tab. Click on the Serial Number label to show the Identifier field and the 40-character UDID. Press Command + C to copy the UDID to your clipboard. Time for action – adding/registering your iOS device To add a device to use for development/testing, perform the following steps:
1. Select Devices in the Developer Portal and click on the + icon to register a new device. Select the Register Device radio button to register one device.
2. Create a name for your device in the Name field and put your device's UDID in the UDID field by pressing Command + V to paste the number you have saved on the clipboard.
3. Click on Continue when you are done and click on Register once you have verified the device information.
Time for action – creating an App ID Now that you have added a device to the portal, you will need to create an App ID. An App ID has a unique 10-character Apple ID Prefix generated by Apple and an Apple ID Suffix that is created by the Team Admin in the Provisioning Portal. An App ID could look like this: 7R456G1254.com.companyname.YourApplication. To create a new App ID, use these steps:
1. Click on App IDs in the Identifiers section of the portal and select the + icon.
2. Fill out the App ID Description field with the name of your application.
3. You are already assigned an Apple ID Prefix (also known as a Team ID).
4. In the App ID Suffix field, specify a unique identifier for your app. It is up to you how you want to identify your app, but it is recommended that you use the reverse-domain style string, that is, com.domainname.appname.
5. Click on Continue and then on Submit to create your App ID.
You can create a wildcard App ID whose bundle identifier you can share among a suite of applications using the same Keychain access. To do this, simply create a single App ID with an asterisk (*) at the end. You would place this in the field for the bundle identifier, either by itself or at the end of your string, for example, com.domainname.*. More information on this topic can be found in the App IDs section of the iOS Provisioning Portal at https://developer.apple.com/ios/manage/bundles/howto.action. What just happened?
Every device has a unique UDID, which we can locate in Xcode and iTunes. When we added a device in the iOS Provisioning Portal, we took the UDID, which consists of 40 hex characters, and made sure we created a device name so that we could identify what we're using for development. We now have an App ID for the applications we want to install on a device. An App ID is a unique identifier that iOS uses to allow your application to connect to the Apple Push Notification service, share keychain data between applications, and communicate with external hardware accessories you wish to pair your iOS application with. Provisioning profiles A provisioning profile is a collection of digital entities that uniquely ties apps and devices to an authorized iOS Development Team and enables a device to be used to test a particular app. Provisioning profiles define the relationship between apps, devices, and development teams. They need to be defined for both the development and distribution aspects of an app. Time for action – creating a provisioning profile To create a provisioning profile, go to the Provisioning Profiles section of the Developer Portal and click on the + icon. Perform the following steps:
1. Select the iOS App Development radio button under the Development section and then select Continue.
2. Select the App ID you created for your application in the pull-down menu and click on Continue.
3. Select the certificate you wish to include in the provisioning profile and then click on Continue.
4. Select the devices you wish to authorize for this profile and click on Continue.
5. Create a Profile Name and click on the Generate button when you are done:
6. Click on the Download button. While the file is downloading, launch Xcode if it's not already open and press Shift + Command + 2 on the keyboard to open Organizer.
7. Under Library, select the Provisioning Profiles section. Drag your downloaded .mobileprovision file to the Organizer window. This will automatically copy your .mobileprovision file to the proper directory.
What just happened? Devices that have permission within the provisioning profile can be used for testing as long as the certificates are included in the profile. One device can have multiple provisioning profiles installed. Application icon Currently, our app has no icon image to display on the device. By default, if there is no icon image set for the application, you will see a light gray box displayed along with your application name below it once the build has been loaded to your device. So, launch your preferred creative development tool and let's create a simple image. The application icon for a standard-resolution iPad 2 or iPad mini is a 76 x 76 px PNG file. The image should always be saved as Icon.png and must be located in your current project folder. iPhone/iPod touch devices that support Retina display need an additional high-resolution icon of 120 x 120 px, and a Retina iPad or iPad mini needs one of 152 x 152 px, named Icon@2x.png. The contents of your current project folder should look like this:
Hello World/    name of your project folder
Icon.png        required for iPhone/iPod/iPad
Icon@2x.png     required for iPhone/iPod with Retina display
main.lua
In order to distribute your app, the App Store requires a 1024 x 1024 pixel version of the icon. It is best to create your icon at a higher resolution first.
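Corona picks up these icon files at build time through the project's optional build.settings file. The following is only a sketch of what that configuration might look like; the key names follow common Corona SDK conventions of this era, but treat them as assumptions and verify them against the Corona documentation for your SDK version:

-- build.settings (illustrative sketch; verify key names against the Corona docs)
settings =
{
    iphone =
    {
        plist =
        {
            -- List every icon file bundled with the project so iOS can pick
            -- the right one for the device's screen resolution.
            CFBundleIconFiles =
            {
                "Icon.png",     -- standard-resolution devices
                "Icon@2x.png",  -- Retina devices
            },
        },
    },
}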
Refer to the Apple iOS Human Interface Guidelines for the latest official App Store requirements at http://developer.apple.com/library/ios/#documentation/userexperience/conceptual/mobilehig/Introduction/Introduction.html. The application icon is a visual representation of your application's name. You will be able to view the icon on your device once you compile a build. The icon is also the image that launches your application. Summary In this article, we covered how to test your app on an iOS device and how to register your iOS device. Resources for Article: Further resources on this subject: Linking OpenCV to an iOS project [article] Creating a New iOS Social Project [article] Sparrow iOS Game Framework - The Basics of Our Game [article]

REST – What You Didn't Know
Packt
24 Mar 2015
15 min read
Nowadays, topics such as cloud computing and mobile device service feeds, and other data sources powered by cutting-edge, scalable, stateless, and modern technologies such as RESTful web services, leave the impression that REST was invented recently. Well, to be honest, it definitely was not! In fact, REST was defined at the end of the 20th century. This article by Valentin Bojinov, author of the book RESTful Web API Design with Node.js, will walk you through REST's history and will teach you how REST couples with the HTTP protocol. You will look at the five key principles that need to be considered while turning an HTTP application into a RESTful-service-enabled application. You will also look at the differences between RESTful and SOAP-based services. Finally, you will learn how to utilize already existing infrastructure for your benefit. In this article, we will cover the following topics: A brief history of REST REST with HTTP RESTful versus SOAP-based services Taking advantage of existing infrastructure (For more resources related to this topic, see here.) A brief history of REST Let's look at a time when the madness around REST made everybody talk restlessly about it! This happened back in 1999, when a request for comments was submitted to the Internet Engineering Task Force (IETF: http://www.ietf.org/) via RFC 2616: "Hypertext Transfer Protocol - HTTP/1.1." One of its authors, Roy Fielding, later defined a set of principles built around the HTTP and URI standards. This gave birth to REST as we know it today. Let's look at the key principles around the HTTP and URI standards; sticking to them will make your HTTP application a RESTful-service-enabled application:
Everything is a resource
Each resource is identifiable by a unique identifier (URI)
Use the standard HTTP methods
Resources can have multiple representations
Communicate statelessly
Principle 1 – everything is a resource To understand this principle, one must conceive the idea of representing data by a specific format and not by a physical file. Each piece of data available on the Internet has a format that can be described by a content type. For example, JPEG images; MPEG videos; HTML, XML, and text documents; and binary data are all resources with the following content types: image/jpeg, video/mpeg, text/html, text/xml, and application/octet-stream. Principle 2 – each resource is identifiable by a unique identifier Since the Internet contains so many different resources, they all should be accessible via URIs and should be identified uniquely. Furthermore, the URIs can be in a human-readable format (frankly, I do believe they should be), despite the fact that their consumers are more likely to be software programs rather than ordinary humans. The URI keeps the data self-descriptive and eases further development on it. In addition, using human-readable URIs helps you to reduce the risk of logical errors in your programs to a minimum. Here are a few examples of such URIs:
http://www.mydatastore.com/images/vacation/2014/summer
http://www.mydatastore.com/videos/vacation/2014/winter
http://www.mydatastore.com/data/documents/balance?format=xml
http://www.mydatastore.com/data/archives/2014
These human-readable URIs expose different types of resources in a straightforward manner.
In the example, it is quite clear that the resource types are as follows:
Images
Videos
XML documents
Some kinds of binary archive documents
Principle 3 – use the standard HTTP methods The native HTTP protocol (RFC 2616) defines eight actions, also known as verbs:
GET
POST
PUT
DELETE
HEAD
OPTIONS
TRACE
CONNECT
The first four of them feel natural in the context of resources, especially when defining actions for resource data manipulation. Let's make a parallel with relational SQL databases, where the native language for data manipulation is CRUD (short for Create, Read, Update, and Delete), originating from the different types of SQL statements: INSERT, SELECT, UPDATE, and DELETE respectively. In the same manner, if you apply the REST principles correctly, the HTTP verbs should be used as shown here:
GET: Request an existing resource. Responds with "200 OK" if the resource exists, "404 Not Found" if it does not exist, and "500 Internal Server Error" for other errors.
PUT: Create or update a resource. Responds with "201 Created" if a new resource is created, "200 OK" if updated, and "500 Internal Server Error" for other errors.
POST: Update an existing resource. Responds with "200 OK" if the resource has been updated successfully, "404 Not Found" if the resource to be updated does not exist, and "500 Internal Server Error" for other errors.
DELETE: Delete a resource. Responds with "200 OK" if the resource has been deleted successfully, "404 Not Found" if the resource to be deleted does not exist, and "500 Internal Server Error" for other errors.
There is an exception in the usage of the verbs, however. I just mentioned that POST is used to create a resource. For instance, when a resource has to be created under a specific URI, then PUT is the appropriate request:
PUT /data/documents/balance/22082014 HTTP/1.1
Content-Type: text/xml
Host: www.mydatastore.com

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
<Item>Sample item</Item>
<price currency="EUR">100</price>
</balance>

HTTP/1.1 201 Created
Content-Type: text/xml
Location: /data/documents/balance/22082014
However, in your application you may want to leave it up to the server REST application to decide where to place the newly created resource, and thus create it under an appropriate but still unknown or non-existing location. For instance, in our example, we might want the server to create the date part of the URI based on the current date. In such cases, it is perfectly fine to use the POST verb on the main resource URI and let the server respond with the location of the newly created resource:
POST /data/documents/balance HTTP/1.1
Content-Type: text/xml
Host: www.mydatastore.com

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
<Item>Sample item</Item>
<price currency="EUR">100</price>
</balance>

HTTP/1.1 201 Created
Content-Type: text/xml
Location: /data/documents/balance
Principle 4 – resources can have multiple representations A key feature of resources is that they may be represented in a different form than the one in which they are stored. Thus, a resource can be requested or posted in different representations. As long as the specified format is supported, the REST-enabled endpoint should use it.
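On the server side, honoring the representation a client asks for boils down to inspecting the request headers. The following Node.js sketch illustrates the idea using only the core http module (the hardcoded balance object and the hand-rolled XML serialization are illustrative assumptions, not code from the service described here):

var http = require('http');

// A placeholder resource; a real service would load this from storage.
var balance = {
    date: '22082014',
    item: 'Sample item',
    price: { currency: 'EUR', amount: 100 }
};

http.createServer(function (request, response) {
    // Inspect the Accept header to pick a representation.
    var accept = request.headers.accept || 'application/json';

    if (accept.indexOf('text/xml') !== -1) {
        // Hand-rolled XML for brevity; a real service would use a serializer.
        response.writeHead(200, { 'Content-Type': 'text/xml' });
        response.end(
            '<balance date="' + balance.date + '">' +
            '<Item>' + balance.item + '</Item>' +
            '<price currency="' + balance.price.currency + '">' +
            balance.price.amount + '</price>' +
            '</balance>');
    } else {
        response.writeHead(200, { 'Content-Type': 'application/json' });
        response.end(JSON.stringify({ balance: balance }));
    }
}).listen(8080);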
In the preceding XML examples, we posted an XML representation of a balance, but if the server supported the JSON format, the following request would have been valid as well:
POST /data/documents/balance HTTP/1.1
Content-Type: application/json
Host: www.mydatastore.com

{
  "balance": {
    "date": "22082014",
    "Item": "Sample item",
    "price": {
      "-currency": "EUR",
      "#text": "100"
    }
  }
}

HTTP/1.1 201 Created
Content-Type: application/json
Location: /data/documents/balance
Principle 5 – communicate statelessly Resource manipulation operations through HTTP requests should always be considered atomic. All modifications of a resource should be carried out within an HTTP request in isolation. After the request execution, the resource is left in a final state, which implicitly means that partial resource updates are not supported. You should always send the complete state of the resource. Back to the balance example, updating the price field of a given balance would mean posting a complete JSON document that contains all of the balance data, including the updated price field. Posting only the updated price is not stateless, as it implies that the application is aware that the resource has a price field, that is, it knows its state. Another reason for your RESTful application to be stateless is that, once it is deployed in a production environment, incoming requests are likely to be served by a load balancer, ensuring scalability and high availability. Once exposed via a load balancer, the idea of keeping your application state on the server side gets compromised. This doesn't mean that you are not allowed to keep the state of your application. It just means that you should keep it in a RESTful way. For example, keep a part of the state within the URI. The statelessness of your RESTful API isolates the caller against changes on the server side. Thus, the caller is not expected to communicate with the same server in consecutive requests. This allows easy application of changes within the server infrastructure, such as adding or removing nodes. Remember that it is your responsibility to keep your RESTful APIs stateless, as the consumers of the API would expect them to be. Now that you know that REST is around 15 years old, a sensible question would be, "why has it become so popular only quite recently?" My answer to the question is that we as humans usually reject simple, straightforward approaches, and most of the time, we prefer spending more time on turning complex solutions into even more complex and sophisticated ones. Take classical SOAP web services, for example. Their various WS-* specifications are so many, and sometimes so loosely defined, that in order to make solutions from different vendors interoperable, the WS-* specifications had to be unified by another specification, WS-I Basic Profile, which defines extra interoperability rules. In addition, SOAP-based web services transported over HTTP need their own means of transporting binary data, which is again described in other sets of specifications, such as SOAP with Attachment References (SwaRef) and Message Transmission Optimisation Mechanism (MTOM), mainly because the initial idea of the web service was to execute business logic and return its response remotely, not to transport large amounts of data. Well, I personally think that when it comes to data transfer, things should not be that complex.
This is where REST comes into place, by introducing the concept of resources and standard means to manipulate them. The REST goals Now that we've covered the main REST principles, let's dive deeper into what can be achieved when they are followed:
Separation of the representation and the resource
Visibility
Reliability
Scalability
Performance
Separation of the representation and the resource A resource is just a set of information and, as defined by principle 4, it can have multiple representations. However, the state of the resource is atomic. It is up to the caller to specify the Content-Type header of the HTTP request, and then it is up to the server application to handle the representation accordingly and return the appropriate HTTP status code:
HTTP 200 OK in the case of success
HTTP 400 Bad Request if an unsupported content type is requested, or for any other invalid request
HTTP 500 Internal Server Error when something unexpected happens during the request processing
For instance, let's assume that, on the server side, we have balance resources stored in an XML file. We can have an API that allows a consumer to request the resource in various formats, such as application/json, application/zip, application/octet-stream, and so on. It would be up to the API itself to load the requested resource, transform it into the requested type (for example, JSON or XML), and either use zip to compress it or directly flush it to the HTTP response output. It is the Accept HTTP header that specifies the expected representation of the response data. So, if we want to request the balance data inserted in the previous section in XML format, the following request should be executed:
GET /data/balance/22082014 HTTP/1.1
Host: my-computer-hostname
Accept: text/xml

HTTP/1.1 200 OK
Content-Type: text/xml
Content-Length: 140

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
<Item>Sample item</Item>
<price currency="EUR">100</price>
</balance>
To request the same balance in JSON format, the Accept header needs to be set to application/json:
GET /data/balance/22082014 HTTP/1.1
Host: my-computer-hostname
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 120

{
  "balance": {
    "date": "22082014",
    "Item": "Sample item",
    "price": {
      "-currency": "EUR",
      "#text": "100"
    }
  }
}
Visibility REST is designed to be visible and simple. Visibility of the service means that every aspect of it should be self-descriptive and should follow the natural HTTP language according to principles 3, 4, and 5. Visibility in the context of the outer world means that monitoring applications would be interested only in the HTTP communication between the REST service and the caller. Since the requests and responses are stateless and atomic, nothing more is needed to follow the behavior of the application and to understand whether anything has gone wrong. Remember that caching reduces the visibility of your RESTful applications and should be avoided. Reliability Before talking about reliability, we need to define which HTTP methods are safe and which are idempotent in the REST context.
So let's first define what safe and idempotent methods are:
An HTTP method is considered to be safe provided that, when requested, it does not modify or cause any side effects on the state of the resource
An HTTP method is considered to be idempotent if its response is always the same, no matter how many times it is requested
The following table shows you which HTTP methods are safe and which are idempotent:
HTTP Method    Safe    Idempotent
GET            Yes     Yes
POST           No      No
PUT            No      Yes
DELETE         No      Yes
Scalability and performance So far, I have often stressed the importance of having a stateless implementation and stateless behavior for a RESTful web application. The World Wide Web is an enormous universe, containing a huge amount of data and a few times more users eager to get that data. Its evolution has brought about the requirement that applications should scale easily as their load increases. Scaling applications that have a state is hardly possible, especially when zero or close-to-zero downtime is needed. That's why being stateless is crucial for any application that needs to scale. In the best-case scenario, scaling your application would require nothing more than adding another piece of hardware behind a load balancer. There would be no need for the different nodes to sync with each other, as they should not care about the state at all. Scalability is all about serving all your clients in an acceptable amount of time. Its main idea is to keep your application running and to prevent Denial of Service (DoS) caused by a huge amount of incoming requests. Scalability should not be confused with the performance of an application. Performance is measured by the time needed for a single request to be processed, not by the total number of requests that the application can handle. The asynchronous non-blocking architecture and event-driven design of Node.js make it a logical choice for implementing a well-scalable application that performs well. Working with WADL If you are familiar with SOAP web services, you may have heard of the Web Services Description Language (WSDL). It is an XML description of the interface of the service. It is mandatory for a SOAP web service to be described by such a WSDL definition. Similar to SOAP web services, RESTful services also offer a description language, named WADL. WADL stands for Web Application Description Language. Unlike WSDL for SOAP web services, a WADL description of a RESTful service is optional, that is, consuming the service has nothing to do with its description. Here is a sample part of a WADL file that describes the GET operation of our balance service:
<application >
<grammars>
<include href="balance.xsd"/>
<include href="error.xsd"/>
</grammars>
<resources base="http://localhost:8080/data/balance/">
<resource path="{date}">
<method name="GET">
<request>
<param name="date" type="xsd:string" style="template"/>
</request>
<response status="200">
<representation mediaType="application/xml" element="service:balance"/>
<representation mediaType="application/json" />
</response>
<response status="404">
<representation mediaType="application/xml" element="service:balance"/>
</response>
</method>
</resource>
</resources>
</application>
This extract of a WADL file shows how an application exposing resources is described. Basically, each resource must be a part of an application. The resource provides the URI where it is located with the base attribute, and describes each of its supported HTTP methods in a method element.
Additionally, an optional doc element can be used at the resource and application levels to provide additional documentation about the service and its operations. Though WADL is optional, it significantly reduces the effort of discovering RESTful services. Taking advantage of the existing infrastructure The best part of developing and distributing RESTful applications is that the infrastructure needed is already out there, waiting restlessly for you. As RESTful applications use the existing web space heavily, you need to do nothing more than follow the REST principles when developing. In addition, there are plenty of libraries available out there for any platform, and I do mean any given platform. This eases the development of RESTful applications, so you just need to choose your preferred platform and start developing. Summary In this article, you learned about the history of REST, and we made a slight comparison between RESTful services and classical SOAP web services. We looked at the five key principles that would turn our web application into a REST-enabled application, and finally took a look at how RESTful services are described and how we can simplify the discovery of the services we develop. Now that you know the REST basics, we are ready to dive into the Node.js way of implementing RESTful services. Resources for Article: Further resources on this subject: Creating a RESTful API [Article] So, what is Node.js? [Article] CreateJS – Performing Animation and Transforming Function [Article]

Drupal 8 and Configuration Management
Packt
18 Mar 2015
15 min read
In this article, by the authors Stefan Borchert and Anja Schirwinski of the book Drupal 8 Configuration Management, we will learn the inner workings of the Configuration Management system in Drupal 8. You will learn about config and schema files and read about the difference between simple configuration and configuration entities. (For more resources related to this topic, see here.)

The config directory

During installation, Drupal adds a directory within sites/default/files called config_HASH, where HASH is a long random string of letters and numbers, as shown in the following screenshot:

This sequence is a random hash generated during the installation of your Drupal site. It is used to add some protection to your configuration files, in addition to the default restriction enforced by the .htaccess file within the subdirectories of the config directory, which prevents unauthorized users from seeing the content of the directories. As a result, it would be really hard for someone to guess the folder's name.

Within the config directory, you will see two additional directories that are empty by default (leaving the .htaccess and README.txt files aside). One of the directories is called active. If you change the configuration system to use file storage instead of the database for the active Drupal site configuration, this directory will contain the active configuration. If you did not customize the storage mechanism of the active configuration (we will learn later how to do this), Drupal 8 uses the database to store the active configuration. The other directory is called staging. This directory is empty by default, but can host the configuration you want to import into your Drupal site from another installation. You will learn how to use this later on in this article.

A simple configuration example

First, we want to become familiar with configuration itself. If you look into the database of your Drupal installation and open up the config table, you will find the entire active configuration of your site, as shown in the following screenshot:

Depending on your site's configuration, table names may be prefixed with a custom string, so you'll have to look for a table name that ends with config.

Don't worry about the strange-looking text in the data column; this is the serialized content of the corresponding configuration. It expands to single configuration values—for instance, system.site.name, which holds the name of your site. Changing the site's name in the user interface on admin/config/system/site-information will immediately update the record in the database; thus, put simply, the records in the table are the current state of your site's configuration, as shown in the following screenshot:

But where does the initial configuration of your site come from? Drupal itself and the modules you install must provide some kind of default configuration that gets added to the active storage during installation.

Config and schema files – what are they and what are they used for?

In order to provide a default configuration during the installation process, Drupal (modules and profiles) comes with a set of files that hold the configuration needed to run your site. To make parsing of these files simple and to enhance their readability, the configuration is stored in the YAML format. YAML (http://yaml.org/) is a data-oriented serialization standard that aims for simplicity. With YAML, it is easy to map common data types such as lists, arrays, or scalar values.
Config files

Directly beneath the root directory of each module and profile defining or overriding configuration (either core or contrib), you will find a directory named config. Within this directory, there may be two more directories (although both are optional): install and schema. Check the image module inside core/modules and take a look at its config directory, as shown in the following screenshot:

The install directory shown in the following screenshot contains all configuration values that the specific module defines or overrides, stored in files with the extension .yml (one of the default extensions for files in the YAML format):

During installation, the values stored in these files are copied to the active configuration of your site. In the case of default configuration storage, the values are added to the config table; with file-based configuration storage mechanisms, on the other hand, the files are copied to the appropriate directories. Looking at the filenames, you will see that they follow a simple convention: <module name>.<type of configuration>[.<machine name of configuration object>].yml (setting aside <module name>.settings.yml for now). The explanation is as follows:

- <module name>: This is the name of the module that defines the settings included in the file. For instance, the image.style.large.yml file contains settings defined by the image module.
- <type of configuration>: This can be seen as a type of group for configuration objects. The image module, for example, defines several image styles. These styles are a set of different configuration objects, so the group is defined as style. Hence, all configuration files that contain image styles defined by the image module itself are named image.style.<something>.yml. The same structure applies to blocks (block.block.*.yml), filter formats (filter.format.*.yml), menus (system.menu.*.yml), content types (node.type.*.yml), and so on.
- <machine name of configuration object>: The last part of the filename is the unique machine-readable name of the configuration object itself. In our examples from the image module, you see three different items: large, medium, and thumbnail. These are exactly the three image styles you will find on admin/config/media/image-styles after installing a fresh copy of Drupal 8. The image styles are shown in the following screenshot:

Schema files

The primary reason schema files were introduced into Drupal 8 is multilingual support. A tool was needed to identify all translatable strings within the shipped configuration. The secondary reason is to provide actual translation forms for configuration based on your data and to expose translatable configuration pieces to external tools. Each module can have as many configuration .yml files as needed. All of these are explained in one or more schema files that are shipped with the module.

As a simple example of how schema files work, let's look at the system's maintenance settings in the system.maintenance.yml file at core/modules/system/config/install. The file's contents are as follows:

message: '@site is currently under maintenance. We should be back shortly. Thank you for your patience.'
langcode: en

The system module's schema files live in core/modules/system/config/schema. These define the basic types but, for our example, the most important aspect is that they define the schema for the maintenance settings.
The corresponding schema section from the system.schema.yml file is as follows:

system.maintenance:
  type: mapping
  label: 'Maintenance mode'
  mapping:
    message:
      type: text
      label: 'Message to display when in maintenance mode'
    langcode:
      type: string
      label: 'Default language'

The first line corresponds to the filename of the .yml file, and the nested lines underneath describe the file's contents. Mapping is a basic type for key-value pairs (and is always the top-level type in a .yml file). The system.maintenance.yml file is labeled as label: 'Maintenance mode'. Then, the actual elements in the mapping are listed under the mapping key. As shown in the code, the file has two items, so the message and langcode keys are described. These are a text and a string value, respectively. Both values are also given a label in order to identify them in configuration forms.

Learning the difference between active and staging

By now, you know that Drupal works with the two directories active and staging. But what is the intention behind these directories? And how do we use them?

The configuration used by your site is called the active configuration, since it is the configuration that is affecting the site's behavior right now. The current (active) configuration is stored in the database, and direct changes to your site's configuration go into the corresponding tables. The reason Drupal 8 stores the active configuration in the database is that this enhances performance and security. Source: https://www.drupal.org/node/2241059.

However, sometimes you might not want to store the active configuration in the database and might need a different storage mechanism. For example, using the filesystem as configuration storage will enable you to track changes to the site's configuration using a version control system such as Git or SVN.

Changing the active configuration storage

If you do want to switch your active configuration storage to files, here's how:

Note that changing the configuration storage is only possible before installing Drupal. After installation, there is no way to switch to another configuration storage!

To use a different configuration storage mechanism, you have to make some modifications to your settings.php file. First, you'll need to find the section named Active configuration settings. Now you will have to uncomment the line that starts with $settings['bootstrap_config_storage'] to enable file-based configuration storage. Additionally, you need to copy the existing default.services.yml (next to your settings.php file) to a file named services.yml and enable the new configuration storage:

services:
  # Override configuration storage.
  config.storage:
    class: Drupal\Core\Config\CachedStorage
    arguments: ['@config.storage.active', '@cache.config']
  config.storage.active:
    # Use file storage for active configuration.
    alias: config.storage.file

This tells Drupal to override the default service used for configuration storage and use config.storage.file as the active configuration storage mechanism instead of the default database storage. After installing the site with these settings, we will take another look at the config directory in sites/default/files (assuming you didn't change the location of the active and staging directories):

As you can see, the active directory now contains the entire site's configuration. The files in this directory get copied here during the website's installation process.
Whenever you make a change to your website, the change is reflected in these files. Exporting the configuration always exports a snapshot of the active configuration, regardless of the storage method.

The staging directory contains the changes you want to add to your site. Drupal compares the staging directory to the active directory and checks for changes between them. When you upload your compressed export file, it actually gets placed inside the staging directory. This means you can save yourself the trouble of using the interface to export and import the compressed file if you're comfortable enough with copying and pasting files to another directory. Just make sure you copy all of the files to the staging directory, even if only one of the files was changed. Any missing files are interpreted as deleted configuration, and will mess up your site.

In order to get the contents of staging into active, we simply have to use the synchronize option at admin/config/development/configuration again. This page will show us what was changed and allows us to import the changes. On importing, your active configuration will get overridden with the configuration in your staging directory. Note that the files inside the staging directory will not be removed after the synchronization is finished. The next time you want to copy and paste from your active directory, make sure you empty staging first. Note that you cannot override files directly in the active directory. The changes have to be made inside staging and then synchronized.

Changing the storage location of the active and staging directories

In case you do not want Drupal to store your configuration in sites/default/files, you can set the path according to your wishes. Actually, this is recommended for security reasons, as these directories should never be accessible over the Web or by unauthorized users on your server. Additionally, it makes your life easier if you work with version control. By default, the whole files directory is usually ignored in version-controlled environments because Drupal writes to it, and having the active and staging directories located within sites/default/files would result in them being ignored too.

So how do we change the location of the configuration directories? Before installing Drupal, you will need to create and modify the settings.php file that Drupal uses to load its basic configuration data from (that is, the database connection settings). If you haven't done so yet, copy the default.settings.php file and rename the copy to settings.php. Afterwards, open the new file with the editor of your choice and search for the following line:

$config_directories = array();

Change the preceding line to the following (or simply insert your addition at the bottom of the file):

$config_directories = array(
  CONFIG_ACTIVE_DIRECTORY => './../config/active',   // folder outside the webroot
  CONFIG_STAGING_DIRECTORY => './../config/staging', // folder outside the webroot
);

The directory names can be chosen freely, but it is recommended that you use names at least similar to the default ones so that you or other developers don't get confused when looking at them later. Remember to put these directories outside your webroot, or at least protect them using an .htaccess file (if using Apache as the server). Directly after adding the paths to your settings.php file, make sure you remove write permissions from the file, as it would be a security risk if someone could change it.
Drupal will now use your custom location for its configuration files on installation. You can also change the location of the configuration directories after installing Drupal. Open up your settings.php file and find the two lines near the end of the file that start with $config_directories. Change their paths to something like this:

$config_directories['active'] = './../config/active';
$config_directories['staging'] = './../config/staging';

This path places the directories above your Drupal root. Now that you know about active and staging, let's learn more about the different types of configuration you can create on your own.

Simple configuration versus configuration entities

As soon as you want to start storing your own configuration, you need to understand the differences between simple configuration and configuration entities. Here's a short definition of the two types of configuration used in Drupal.

Simple configuration

This configuration type is easier to implement and therefore ideal for basic configuration settings that result in Boolean values, integers, or simple strings of text being stored, as well as global variables that are used throughout your site. A good example would be the value of an on/off toggle for a specific feature in your module, or our previously used example of the site name configured by the system module:

name: 'Configuration Management in Drupal 8'

Simple configuration also includes any settings that your module requires in order to operate correctly. For example, JavaScript aggregation has to be either on or off. If the setting doesn't exist, the system module won't be able to determine the appropriate course of action.

Configuration entities

Configuration entities are much more complicated to implement but far more flexible. They are used to store information about objects that users can create and destroy without breaking the code. A good example of configuration entities is an image style provided by the image module. Take a look at the image.style.thumbnail.yml file:

uuid: fe1fba86-862c-49c2-bf00-c5e1f78a0f6c
langcode: en
status: true
dependencies: {  }
name: thumbnail
label: 'Thumbnail (100×100)'
effects:
  1cfec298-8620-4749-b100-ccb6c4500779:
    uuid: 1cfec298-8620-4749-b100-ccb6c4500779
    id: image_scale
    weight: 0
    data:
      width: 100
      height: 100
      upscale: false
third_party_settings: {  }

This defines a specific style for images, so the system is able to create derivatives of images that a user uploads to the site. Configuration entities also come with a complete set of create, read, update, and delete (CRUD) hooks that are fired just like those of any other entity in Drupal, making them an ideal candidate for configuration that might need to be manipulated or responded to by other modules. As an example, the Views module uses configuration entities to allow for a scenario where, at runtime, hooks are fired that allow any other module to provide configuration (in this case, custom views) to the Views module.

Summary

In this article, you learned how to store configuration and briefly got to know the two different types of configuration.

Resources for Article:
Further resources on this subject:
Tabula Rasa: Nurturing your Site for Tablets [article]
Components - Reusing Rules, Conditions, and Actions [article]
Introduction to Drupal Web Services [article]

AngularJS Performance

Packt
04 Mar 2015
20 min read

In this article by Chandermani, the author of AngularJS by Example, we focus our discussion on the performance aspect of AngularJS. For most scenarios, we can all agree that AngularJS is insanely fast. For standard-size views, we rarely see any performance bottlenecks. But many views start small and then grow over time. And sometimes the requirement dictates that we build large pages/views with a sizable amount of HTML and data. In such a case, there are things that we need to keep in mind to provide an optimal user experience.

Take any framework, and a performance discussion on the framework always requires one to understand its internal workings. When it comes to Angular, we need to understand how Angular detects model changes. What are watches? What is a digest cycle? What role do scope objects play? Without a conceptual understanding of these subjects, any performance guidance is merely a checklist that we follow without understanding the why part. Let's look at some pointers before we begin our discussion on the performance of AngularJS:

- The live binding between the view elements and model data is set up using watches. When a model changes, one or many watches linked to the model are triggered. Angular's view binding infrastructure uses these watches to synchronize the view with the updated model value.
- Model change detection only happens when a digest cycle is triggered. Angular does not track model changes in real time; instead, on every digest cycle, it runs through every watch to compare the previous and new values of the model to detect changes.
- A digest cycle is triggered when $scope.$apply is invoked. A number of directives and services internally invoke $scope.$apply: directives such as ng-click and ng-mouse* do it on user action; services such as $http and $resource do it when a response is received from the server; and $timeout or $interval call $scope.$apply when they lapse.
- A digest cycle tracks the old value of the watched expression and compares it with the new value to detect whether the model has changed. Simply put, the digest cycle is a workflow used to detect model changes.
- A digest cycle runs multiple times till the model data is stable and no watch is triggered.

Once you have a clear understanding of the digest cycle, watches, and scopes, we can look at some performance guidelines that can help us manage views as they start to grow. (For more resources related to this topic, see here.)

Performance guidelines

When building any Angular app, any performance optimization boils down to:

- Minimizing the number of binding expressions and hence watches
- Making sure that binding expression evaluation is quick
- Optimizing the number of digest cycles that take place

The next few sections provide some useful pointers in this direction. Remember, a lot of these optimizations may only be necessary if the view is large.

Keeping the page/view small

The sanest advice is to keep the amount of content available on a page small. The user cannot interact with or process too much data on a page, so remember that screen real estate is at a premium and only keep the necessary details on a page. The less the content, the fewer the binding expressions; hence, fewer watches and less processing are required during the digest cycle. Remember, each watch adds to the overall execution time of the digest cycle. The time required for a single watch can be insignificant but, after combining hundreds and maybe thousands of them, they start to matter.
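To get a feel for how many watches a view actually creates, a rough, debug-only count can be taken from the browser console. This sketch relies on Angular's internal $$watchers array and on debug info being enabled (the default), so treat the number as an estimate rather than an exact figure:

var watchCount = 0;
var counted = {};
angular.forEach(document.querySelectorAll('.ng-scope, .ng-isolate-scope'), function (el) {
  // scope()/isolateScope() are debug helpers; $$watchers is internal API
  var scope = angular.element(el).scope() || angular.element(el).isolateScope();
  if (scope && scope.$$watchers && !counted[scope.$id]) {
    counted[scope.$id] = true; // avoid counting a scope shared by several elements twice
    watchCount += scope.$$watchers.length;
  }
});
console.log('Approximately ' + watchCount + ' watches on this page');

Running this before and after a view change makes it easy to see which parts of a page contribute most of the watches.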
Angular's data binding infrastructure is insanely fast and relies on a rudimentary dirty check that compares the old and the new values. Check out the Stack Overflow (SO) post (http://stackoverflow.com/questions/9682092/databinding-in-angularjs), where Misko Hevery (creator of Angular) talks about how data binding works in Angular.

Data binding also adds to the memory footprint of the application. Each watch has to track the current and previous value of a data-binding expression to compare and verify whether the data has changed. Keeping a page/view small may not always be possible, and the view may grow. In such a case, we need to make sure that the number of bindings does not grow exponentially (linear growth is OK) with the page size. The next two tips can help minimize the number of bindings on the page and should be seriously considered for large views.

Optimizing watches for read-once data

In any Angular view, there is always content that, once bound, does not change. Any read-only data on the view can fall into this category. This implies that once the data is bound to the view, we no longer need watches to track model changes, as we don't expect the model to update. Is it possible to remove the watch after one-time binding? Angular itself does not have something built in, but a community project, bindonce (https://github.com/Pasvaz/bindonce), is there to fill this gap.

Angular 1.3 has added support for bind and forget in the native framework. Using the syntax {{::title}}, we can achieve one-time binding. If you are on Angular 1.3, use it!

Hiding (ng-show) versus conditional rendering (ng-if/ng-switch) content

You have learned two ways to conditionally render content in Angular. The ng-show/ng-hide directive shows/hides the DOM element based on the expression provided, and ng-if/ng-switch creates and destroys the DOM based on an expression. For some scenarios, ng-if can be really beneficial as it can reduce the number of binding expressions/watches for DOM content that is not rendered. Consider the following example:

<div ng-if="user.isAdmin">
  <div ng-include="'admin-panel.html'"></div>
</div>

The snippet renders an admin panel if the user is an admin. With ng-if, if the user is not an admin, the ng-include directive template is neither requested nor rendered, saving us all the bindings and watches that are part of the admin-panel.html view.

From the preceding discussion, it may seem that we should get rid of all ng-show/ng-hide directives and use ng-if. Well, not really! It again depends; for small pages, ng-show/ng-hide works just fine. Also, remember that there is a cost to creating and destroying the DOM. If the expression to show/hide flips too often, this will mean too many DOM create-and-destroy cycles, which are detrimental to the overall performance of the app.

Expressions being watched should not be slow

Since watches are evaluated too often, the expression being watched should return results fast. The first way we can make sure of this is by using properties instead of functions to bind expressions. These expressions are as follows:

{{user.name}}
ng-show='user.Authorized'

The preceding code is always better than this:

{{getUserName()}}
ng-show='isUserAuthorized(user)'

Try to minimize function expressions in bindings. If a function expression is required, make sure that the function returns a result quickly.
Make sure a function being watched does not:

- Make any remote calls
- Use $timeout/$interval
- Perform sorting/filtering
- Perform DOM manipulation (this can happen inside a directive implementation)
- Or perform any other time-consuming operation

Be sure to avoid such operations inside a bound function. To reiterate, Angular will evaluate a watched expression multiple times during every digest cycle just to know if the return value (a model) has changed and the view needs to be synchronized.

Minimizing the deep model watch

When using $scope.$watch to watch for model changes in controllers, be careful while setting the third $watch function parameter to true. The general syntax of watch looks like this:

$watch(watchExpression, listener, [objectEquality]);

In the standard scenario, Angular does an object comparison based on the reference only. But if objectEquality is true, Angular does a deep comparison between the last value and the new value of the watched expression. This can have an adverse memory and performance impact if the object is large.

Handling large datasets with ng-repeat

The ng-repeat directive undoubtedly is the most useful directive Angular has. But it can cause the most performance-related headaches. The reason is not the directive's design, but the fact that it is the only directive that allows us to generate HTML on the fly. There is always the possibility of generating enormous HTML just by binding ng-repeat to a big model list. Some tips that can help us when working with ng-repeat are:

- Page data and use limitTo: Implement a server-side paging mechanism when the number of items returned is large. Also use the limitTo filter to limit the number of items rendered. Its syntax is as follows:

<tr ng-repeat="user in users | limitTo:pageSize">…</tr>

Look at modules such as ngInfiniteScroll (http://binarymuse.github.io/ngInfiniteScroll/) that provide an alternate mechanism to render large lists.

- Use the track by expression: The ng-repeat directive, for performance, tries to make sure it does not unnecessarily create or delete HTML nodes when items are added, updated, deleted, or moved in the list. To achieve this, it adds a $$hashKey property to every model item, allowing it to associate the DOM node with the model item. We can override this behavior and provide our own item key using the track by expression, such as:

<tr ng-repeat="user in users track by user.id">…</tr>

This allows us to use our own mechanism to identify an item. Using your own track by expression has a distinct advantage over the default hash key approach. Consider an example where you make an initial AJAX call to get users:

$scope.getUsers().then(function(users){
  $scope.users = users;
});

Later again, refresh the data from the server and call something similar again:

$scope.users = users;

With user.id as a key, Angular is able to determine what elements were added/deleted and moved; it can also determine created/deleted DOM nodes for such elements. Remaining elements are not touched by ng-repeat (internal bindings are still evaluated). This saves a lot of CPU cycles for the browser as fewer DOM elements are created and destroyed.

- Do not bind ng-repeat to a function expression: Using a function's return value for ng-repeat can also be problematic, depending upon how the function is implemented.
Consider a repeat with this:

<tr ng-repeat="user in getUsers()">…</tr>

And consider the controller getUsers function with this:

$scope.getUsers = function() {
  var orderBy = $filter('orderBy');
  return orderBy($scope.users, predicate);
};

Angular is going to evaluate this expression and hence call this function every time the digest cycle takes place. A lot of CPU cycles are wasted sorting the user data again and again. It is better to use scope properties and presort the data before binding.

- Minimize filters in views; filter the data in the controller: Filters defined on ng-repeat are also evaluated every time the digest cycle takes place. For large lists, if the same filtering can be implemented in the controller, we can avoid constant filter evaluation. This holds true for any filter function that is used with arrays, including filter and orderBy.

Avoiding mouse-movement tracking events

The ng-mousemove, ng-mouseenter, ng-mouseleave, and ng-mouseover directives can just kill performance. If an expression is attached to any of these event directives, Angular triggers a digest cycle every time the corresponding event occurs, and for events like mouse move, this can be a lot. We have already seen this behavior when working with 7 Minute Workout, when we tried to show a pause overlay on the exercise image when the mouse hovers over it. Avoid them at all costs. If we just want to trigger some style changes on mouse events, CSS is a better tool.

Avoiding calling $scope.$apply

Angular is smart enough to call $scope.$apply at appropriate times without us explicitly calling it. This can be confirmed from the fact that the only place we have seen and used $scope.$apply is within directives. The ng-click and updateOnBlur directives use $scope.$apply to transition from a DOM event handler execution to an Angular execution context. Even when wrapping a jQuery plugin, we may need to do a similar transition for an event raised by the plugin. Other than this, there is no reason to use $scope.$apply. Remember, every invocation of $apply results in the execution of a complete digest cycle.

The $timeout and $interval services take a Boolean argument, invokeApply. If set to false, the lapsed $timeout/$interval service does not call $scope.$apply or trigger a digest cycle. Therefore, if you are going to perform background operations that do not require $scope and the view to be updated, set the last argument to false.

Always use Angular wrappers over standard JavaScript objects/functions such as $timeout and $interval to avoid manually calling $scope.$apply. These wrapper functions internally call $scope.$apply.

Also, understand the difference between $scope.$apply and $scope.$digest. $scope.$apply triggers $rootScope.$digest, which evaluates all application watches, whereas $scope.$digest only performs dirty checks on the current scope and its children. If we are sure that the model changes are not going to affect anything other than the child scopes, we can use $scope.$digest instead of $scope.$apply.

Lazy-loading, minification, and creating multiple SPAs

I hope you are not assuming that the apps that we have built will continue to use the numerous small script files that we have created to separate modules and module artefacts (controllers, directives, filters, and services). Any modern build system has the capability to concatenate and minify these files and replace the original file reference with a unified and minified version.
Therefore, like any JavaScript library, use minified script files for production. The problem with the Angular bootstrapping process is that it expects all Angular application scripts to be loaded before the application can bootstrap. We cannot load modules, controllers, or, in fact, any of the other Angular constructs on demand. This means we need to provide every artefact required by our app upfront. For small applications, this is not a problem, as the content is concatenated and minified; also, the Angular application code itself is far more compact as compared to the traditional JavaScript of jQuery-based apps. But, as the size of the application starts to grow, it may start to hurt when we need to load everything upfront. There are at least two possible solutions to this problem; the first one is about breaking our application into multiple SPAs.

Breaking applications into multiple SPAs

This advice may seem counterintuitive, as the whole point of SPAs is to get rid of full page loads. By creating multiple SPAs, we break the app into several small SPAs, each supporting part of the overall app functionality. When we say app, it implies a combination of the main (such as index.html) page with ng-app and all the scripts/libraries and partial views that the app loads over time. For example, we can break the Personal Trainer application into a Workout Builder app and a Workout Runner app. Both have their own start-up page and scripts. Common scripts, such as the Angular framework scripts and any third-party libraries, can be referenced in both applications. On similar lines, common controllers, directives, services, and filters can also be referenced in both apps. The way we have designed Personal Trainer makes it easy to achieve our objective. The segregation of what belongs where has already been done.

The advantage of breaking an app into multiple SPAs is that only scripts relevant to each app are loaded. For a small app, this may be overkill, but for large apps, it can improve the app's performance. The challenge with this approach is to identify what parts of an application can be created as independent SPAs; it totally depends upon the usage pattern of the application. For example, assume an application has an admin module and an end consumer/user module. Creating two SPAs, one for the admin and the other for the end customer, is a great way to keep user-specific features and admin-specific features separate. A standard user may never transition to the admin section/area, whereas an admin user can still work in both areas; but transitioning from the admin area to a user-specific area will require a full page refresh.

If breaking the application into multiple SPAs is not possible, the other option is to perform lazy loading of modules.

Lazy-loading modules

Lazy-loading modules, or loading modules on demand, is a viable option for large Angular apps. But unfortunately, Angular itself does not have any in-built support for lazy-loading modules. Furthermore, the additional complexity of lazy loading may be unwarranted, as Angular produces far less code as compared to other JavaScript framework implementations. Also, once we gzip and minify the code, the amount of code that is transferred over the wire is minimal.
If we still want to try our hand at lazy loading, there are two libraries that can help:

- ocLazyLoad (https://github.com/ocombe/ocLazyLoad): This is a library that uses script.js to load modules on the fly
- angularAMD (http://marcoslin.github.io/angularAMD): This is a library that uses require.js to lazy load modules

With lazy loading in place, we can delay the loading of a controller, directive, filter, or service script until the page that requires them is loaded. The overall concept of lazy loading seems great, but I'm still not sold on this idea. Before we adopt a lazy-load solution, there are things that we need to evaluate:

- Loading multiple script files lazily: When scripts are concatenated and minified, we load the complete app at once. Contrast this to lazy loading, where we do not concatenate but load scripts on demand. What we gain in terms of lazy-load module flexibility, we lose in terms of performance. We now have to make a number of network requests to load individual files. Given these facts, the ideal approach is to combine lazy loading with concatenation and minification. In this approach, we identify those feature modules that can be concatenated and minified together and served on demand using lazy loading. For example, Personal Trainer scripts can be divided into three categories:
  - The common app modules: This consists of any script that has common code used across the app and can be combined together and loaded upfront
  - The Workout Runner module(s): Scripts that support workout execution can be concatenated and minified together but are loaded only when the Workout Runner pages are loaded
  - The Workout Builder module(s): On similar lines to the preceding category, scripts that support workout building can be combined together and served only when the Workout Builder pages are loaded
  As we can see, a decent amount of effort is required to refactor the app in a manner that makes module segregation, concatenation, and lazy loading possible.
- The effect on unit and integration testing: We also need to evaluate the effect of lazy-loading modules on unit and integration testing. The way we test is also affected with lazy loading in place. This implies that, if lazy loading is added as an afterthought, the test setup may require tweaking to make sure existing tests still run.

Given these facts, we should evaluate our options and check whether we really need lazy loading or whether we can manage by breaking a monolithic SPA into multiple smaller SPAs.

Caching remote data wherever appropriate

Caching data is one of the oldest tricks to improve any webpage/application performance. Analyze your GET requests and determine what data can be cached. Once such data is identified, it can be cached in a number of locations. Data cached outside the app can be cached in:

- Servers: The server can cache repeated GET requests to resources that do not change very often. This whole process is transparent to the client, and the implementation depends on the server stack used.
- Browsers: In this case, the browser caches the response. Browser caching depends upon the server sending HTTP cache headers such as ETag and cache-control to guide the browser about how long a particular resource can be cached. Browsers can honor these cache headers and cache data appropriately for future use.
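As a hedged illustration of the browser-caching side, this is roughly what emitting such headers could look like on a Node.js/Express backend. The article does not prescribe a server stack, and getWorkoutList is a placeholder, so treat this as a sketch only:

var express = require('express');
var app = express();

app.get('/workouts', function (req, res) {
  // Let browsers (and intermediate proxies) reuse this response for 5 minutes.
  res.set('Cache-Control', 'public, max-age=300');
  res.json(getWorkoutList()); // placeholder for the actual data access call
});

With such headers in place, repeated GETs for the same resource can be answered from the browser cache without ever reaching the Angular app's $http layer.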
If server and browser caching are not available, or if we also want to incorporate some amount of caching in the client app, we do have some choices:

- Cache data in memory: A simple Angular service can cache the HTTP response in memory. Since Angular is an SPA, the data is not lost unless the page refreshes. This is how a service function looks when it caches data:

var workouts;
service.getWorkouts = function () {
  if (workouts) return $q.resolve(workouts);
  return $http.get("/workouts").then(function (response) {
    workouts = response.data;
    return workouts;
  });
};

The implementation caches the list of workouts in the workouts variable for future use. The first request makes an HTTP call to retrieve the data, but subsequent requests just return the cached data as a promise. The usage of $q.resolve makes sure that the function always returns a promise.

- Angular $http cache: Angular's $http service comes with a configuration option, cache. When set to true, $http caches the response of the particular GET request in a local cache (again, an in-memory cache). Here is how we cache a GET request:

$http.get(url, { cache: true });

Angular keeps this cache for the lifetime of the app, and clearing it is not easy. We need to get hold of the cache dedicated to caching HTTP responses and clear the cache key manually.

The caching strategy of an application is never complete without a cache invalidation strategy. With caching, there is always a possibility that caches are out of sync with respect to the actual data store. We cannot affect the server-side caching behavior from the client; consequently, let's focus on how to perform cache invalidation (clearing) for the two client-side caching mechanisms described earlier. If we use the first approach to cache data, we are responsible for clearing the cache ourselves. In the case of the second approach, the default $http service does not support clearing the cache. We either need to get hold of the underlying $http cache store and clear the cache key manually (as shown here) or implement our own cache that manages cache data and invalidates entries based on some criteria:

var cache = $cacheFactory.get('$http');
cache.remove("http://myserver/workouts"); // full url

Using Batarang to measure performance

Batarang (a Chrome extension), as we have already seen, is an extremely handy tool for Angular applications. Using Batarang to visualize app usage is like looking at an X-ray of the app. It allows us to:

- View the scope data, the scope hierarchy, and how the scopes are linked to HTML elements
- Evaluate the performance of the application
- Check the application dependency graph, helping us understand how components are linked to each other and to other framework components

If we enable Batarang and then play around with our application, Batarang captures performance metrics for all watched expressions in the app. This data is nicely presented as a graph available on the Performance tab inside Batarang:

That is pretty sweet! When building an app, use Batarang to gauge the most expensive watches and take corrective measures, if required. Play around with Batarang and see what other features it has. This is a very handy tool for Angular applications.

This brings us to the end of the performance guidelines that we wanted to share in this article. Some of these guidelines are preventive measures that we should take to make sure we get optimal app performance, whereas others are there to help when the performance is not up to the mark.
Summary
In this article, we looked at the ever-so-important topic of performance, where you learned ways to optimize an Angular app's performance.
Resources for Article: Further resources on this subject: Role of AngularJS [article] The First Step [article] Recursive directives [article]
KnockoutJS Templates

Packt
04 Mar 2015
38 min read
In this article by Jorge Ferrando, author of the book KnockoutJS Essentials, we are going to talk about how to design our templates with the native engine, and then we will look at mechanisms and external libraries we can use to improve the Knockout template engine. When our code begins to grow, it's necessary to split it into several parts to keep it maintainable. When we split JavaScript code, we are talking about modules, classes, functions, libraries, and so on. When we talk about HTML, we call these parts templates. KnockoutJS has a native template engine that we can use to manage our HTML. It is very simple, but it also has one big inconvenience: templates must be loaded in the current HTML page. This is not a problem if our app is small, but it could become a problem if our application begins to need more and more templates. (For more resources related to this topic, see here.)

Preparing the project
First of all, we are going to add some style to the page. Add a file called style.css to the css folder, and add a reference to it in the index.html file, just below the Bootstrap reference. The following is the content of the file:

.container-fluid { margin-top: 20px; }
.row { margin-bottom: 20px; }
.cart-unit { width: 80px; }
.btn-xs { font-size: 8px; }
.list-group-item { overflow: hidden; }
.list-group-item h4 { float: left; width: 100px; }
.list-group-item .input-group-addon { padding: 0; }
.btn-group-vertical > .btn-default { border-color: transparent; }
.form-control[disabled], .form-control[readonly] { background-color: transparent !important; }

Now remove all the content from the body tag except for the script tags and paste in these lines:

<div class="container-fluid">
  <div class="row" id="catalogContainer">
    <div class="col-xs-12" data-bind="template: {name: 'header'}"></div>
    <div class="col-xs-6" data-bind="template: {name: 'catalog'}"></div>
    <div id="cartContainer" class="col-xs-6 well hidden" data-bind="template: {name: 'cart'}"></div>
  </div>
  <div class="row hidden" id="orderContainer" data-bind="template: {name: 'order'}"></div>
  <div data-bind="template: {name: 'add-to-catalog-modal'}"></div>
  <div data-bind="template: {name: 'finish-order-modal'}"></div>
</div>

Let's review this code. We have two row classes. They will be our containers. The first container is named with the id value catalogContainer and it will contain the catalog view and the cart. The second one is referenced by the id value orderContainer and we will set our final order there. We also have two more <div> tags at the bottom that will contain the modal dialogs: one shows the form to add products to our catalog, and the other contains a modal message to tell the user that our order is finished. Along with this code, you can see a template binding inside the data-bind attribute. This is the binding that Knockout uses to bind templates to the element. It contains a name parameter that represents the ID of a template:

<div class="col-xs-12" data-bind="template: {name: 'header'}"></div>

In this example, this <div> element will contain the HTML that is inside the <script> tag with the ID header.

Creating templates
Template elements are commonly declared at the bottom of the body, just above the <script> tags that have references to our external libraries.
We are going to define some templates and then we will talk about each one of them:

<!-- templates -->
<script type="text/html" id="header"></script>
<script type="text/html" id="catalog"></script>
<script type="text/html" id="add-to-catalog-modal"></script>
<script type="text/html" id="cart-widget"></script>
<script type="text/html" id="cart-item"></script>
<script type="text/html" id="cart"></script>
<script type="text/html" id="order"></script>
<script type="text/html" id="finish-order-modal"></script>

Each template name is descriptive enough by itself, so it's easy to know what we are going to set inside each of them. Let's see a diagram showing where we place each template on the screen:

Notice that the cart-item template will be repeated for each item in the cart collection. Modal templates will appear only when a modal dialog is displayed. Finally, the order template is hidden until we click to confirm the order. In the header template, we will have the title and the menu of the page. The add-to-catalog-modal template will contain the modal that shows the form to add a product to our catalog. The cart-widget template will show a summary of our cart. The cart-item template will contain the template of each item in the cart. The cart template will have the layout of the cart. The order template will show the final list of products we want to buy and a button to confirm our order.

The header template
Let's begin with the HTML markup that should contain the header template:

<script type="text/html" id="header">
  <h1>Catalog</h1>
  <button class="btn btn-primary btn-sm" data-toggle="modal" data-target="#addToCatalogModal">
    Add New Product
  </button>
  <button class="btn btn-primary btn-sm" data-bind="click: showCartDetails, css: {disabled: cart().length < 1}">
    Show Cart Details
  </button>
  <hr/>
</script>

We define an <h1> tag and two <button> tags. The first button is attached to the modal that has the ID #addToCatalogModal. Since we are using Bootstrap as the CSS framework, we can attach modals by ID using the data-target attribute, and activate the modal using the data-toggle attribute. The second button will show the full cart view, and it will be available only if the cart has items. There are a number of different ways to achieve this. The first one, used in the example, is the disabled CSS class that comes with Twitter Bootstrap. The css binding allows us to activate or deactivate a class on the element depending on the result of the expression that is attached to the class. The other method is to use the enable binding. This binding enables an element if the expression evaluates to true. We can also use the opposite binding, which is named disable. There is complete documentation on the Knockout website at http://knockoutjs.com/documentation/enable-binding.html:

<button class="btn btn-primary btn-sm" data-bind="click: showCartDetails, enable: cart().length > 0">
  Show Cart Details
</button>

<button class="btn btn-primary btn-sm" data-bind="click: showCartDetails, disable: cart().length < 1">
  Show Cart Details
</button>

The first method uses CSS classes to enable and disable the button. The second method uses the HTML attribute, disabled. There is a third option, which is to use a computed observable. We can create a computed observable variable in our view-model that returns true or false depending on the length of the cart:
// in the view-model; remember to expose it
var cartHasProducts = ko.computed(function () {
    return (cart().length > 0);
});

// HTML
<button class="btn btn-primary btn-sm" data-bind="click: showCartDetails, enable: cartHasProducts">
  Show Cart Details
</button>

To show the cart, we will use the click binding. Now we should go to our viewmodel.js file and add all the information we need to make this template work:

var cart = ko.observableArray([]);
var showCartDetails = function () {
    if (cart().length > 0) {
        $("#cartContainer").removeClass("hidden");
    }
};

And you should expose these two objects in the view-model:

return {
    searchTerm: searchTerm,
    catalog: filteredCatalog,
    newProduct: newProduct,
    totalItems: totalItems,
    addProduct: addProduct,
    cart: cart,
    showCartDetails: showCartDetails
};

The catalog template
The next step is to define the catalog template just below the header template:

<script type="text/html" id="catalog">
  <div class="input-group">
    <span class="input-group-addon">
      <i class="glyphicon glyphicon-search"></i> Search
    </span>
    <input type="text" class="form-control" data-bind="textInput: searchTerm">
  </div>
  <table class="table">
    <thead>
      <tr>
        <th>Name</th>
        <th>Price</th>
        <th>Stock</th>
        <th></th>
      </tr>
    </thead>
    <tbody data-bind="foreach: catalog">
      <tr data-bind="style: {color: stock() < 5 ? 'red' : 'black'}">
        <td data-bind="text: name"></td>
        <td data-bind="text: price"></td>
        <td data-bind="text: stock"></td>
        <td>
          <button class="btn btn-primary" data-bind="click: $parent.addToCart">
            <i class="glyphicon glyphicon-plus-sign"></i> Add
          </button>
        </td>
      </tr>
    </tbody>
    <tfoot>
      <tr>
        <td colspan="3">
          <strong>Items:</strong><span data-bind="text: catalog().length"></span>
        </td>
        <td colspan="1">
          <span data-bind="template: {name: 'cart-widget'}"></span>
        </td>
      </tr>
    </tfoot>
  </table>
</script>

Now, each line uses the style binding to alert the user, while they are shopping, that the stock is running low. The style binding works the same way that the css binding does with classes. It allows us to add style attributes depending on the value of the expression. In this case, the color of the text in the line must be black if the stock is five or higher, and red if it is four or less. We can use other CSS attributes, so feel free to try other behaviors. For example, set the line of the catalog to green if the element is inside the cart. We should remember that if an attribute has dashes, you should wrap it in single quotes. For example, background-color will throw an error, so you should write 'background-color'. When we work with bindings that are activated depending on the values of the view-model, it is good practice to use computed observables. Therefore, we can create a computed value in our product model that returns the value of the color that should be displayed:

// In the Product.js
var _lineColor = ko.computed(function () {
    return (_stock() < 5) ? 'red' : 'black';
});
return {
    lineColor: _lineColor
};

// In the template
<tr data-bind="style: {color: lineColor}">
...
</tr>

It would be even better if we create a class in our style.css file that is called stock-alert and we use the css binding:

// In the style file
.stock-alert { color: #f00; }

// In the Product.js
var _hasStock = ko.computed(function () {
    return (_stock() < 5);
});
return {
    hasStock: _hasStock
};

// In the template
<tr data-bind="css: {'stock-alert': hasStock}">
...
</tr>

Now, look inside the <tfoot> tag:

<td colspan="1">
  <span data-bind="template: {name: 'cart-widget'}"></span>
</td>

As you can see, we can have nested templates. In this case, we have the cart-widget template inside our catalog template. This gives us the possibility of having very complex templates, splitting them into very small pieces, and combining them, to keep our code clean and maintainable. Finally, look at the last cell of each row:

<td>
  <button class="btn btn-primary" data-bind="click: $parent.addToCart">
    <i class="glyphicon glyphicon-plus-sign"></i> Add
  </button>
</td>

Look at how we call the addToCart method using the magic variable $parent. Knockout gives us some magic words to navigate through the different contexts we have in our app. In this case, we are in the catalog context and we want to call a method that lies one level up, so we can use the magic variable called $parent. There are other variables we can use when we are inside a Knockout context. There is complete documentation on the Knockout website at http://knockoutjs.com/documentation/binding-context.html. In this project, we are not going to use all of them, but we are going to quickly explain these binding context variables, just to understand them better. If we don't know how many levels deep we are, we can navigate to the top of the view-model using the magic word $root. When we have many parents, we can get the magic array $parents and access each parent using indexes, for example, $parents[0], $parents[1]. Imagine that you have a list of categories where each category contains a list of products. These products are a list of IDs, and the category has a method to get the names of its products. We can use the $parents array to obtain the reference to the category:

<ul data-bind="foreach: {data: categories}">
  <li data-bind="text: $data.name"></li>
  <ul data-bind="foreach: {data: $data.products, as: 'prod'}">
    <li data-bind="text: $parents[0].getProductName(prod.ID)"></li>
  </ul>
</ul>

Look how helpful the as attribute is inside the foreach binding; it makes the code more readable. When you are inside a foreach loop, you can also access each item using the $data magic variable, and you can access the position index that each element has in the collection using the $index magic variable. For example, if we have a list of products, we can do this:

<ul data-bind="foreach: cart">
  <li><span data-bind="text: $index"></span> - <span data-bind="text: $data.name"></span></li>
</ul>

This should display:

0 – Product 1
1 – Product 2
2 – Product 3
...

KnockoutJS magic variables to navigate through contexts

Now that we know more about what binding variables are, let's go back to our code. We are now going to write the addToCart method. We are going to define the cart items in our js/models folder.
Create a file called CartProduct.js and insert the following code in it:

// js/models/CartProduct.js
var CartProduct = function (product, units) {
    "use strict";

    var _product = product,
        _units = ko.observable(units);

    var subtotal = ko.computed(function () {
        return _product.price() * _units();
    });

    var addUnit = function () {
        var u = _units();
        var _stock = _product.stock();
        if (_stock === 0) {
            return;
        }
        _units(u + 1);
        _product.stock(--_stock);
    };

    var removeUnit = function () {
        var u = _units();
        var _stock = _product.stock();
        if (u === 0) {
            return;
        }
        _units(u - 1);
        _product.stock(++_stock);
    };

    return {
        product: _product,
        units: _units,
        subtotal: subtotal,
        addUnit: addUnit,
        removeUnit: removeUnit
    };
};

Each cart product is composed of the product itself and the number of units of the product we want to buy. We will also have a computed field that contains the subtotal of the line. We should give the object the responsibility for managing its units and the stock of the product. For this reason, we have added the addUnit and removeUnit methods. These methods add or remove one unit of the product when they are called. We should reference this JavaScript file in our index.html file with the other <script> tags. In the viewmodel, we should create a cart array and expose it in the return statement, as we have done earlier:

var cart = ko.observableArray([]);

It's time to write the addToCart method:

var addToCart = function (data) {
    var item = null;
    var tmpCart = cart();
    var n = tmpCart.length;
    while (n--) {
        if (tmpCart[n].product.id() === data.id()) {
            item = tmpCart[n];
        }
    }
    if (item) {
        item.addUnit();
    } else {
        item = new CartProduct(data, 0);
        item.addUnit();
        tmpCart.push(item);
    }
    cart(tmpCart);
};

This method searches for the product in the cart. If it exists, it updates its units, and if not, it creates a new cart item. Since the cart is an observable array, we need to get it, manipulate it, and overwrite it, because we need to access the product object to know whether the product is in the cart. Remember that observable arrays do not observe the objects they contain, just the array properties.

The add-to-cart-modal template
This is a very simple template.
We just wrap the code to add a product in a Bootstrap modal:

<script type="text/html" id="add-to-catalog-modal">
  <div class="modal fade" id="addToCatalogModal">
    <div class="modal-dialog">
      <div class="modal-content">
        <form class="form-horizontal" role="form" data-bind="with: newProduct">
          <div class="modal-header">
            <button type="button" class="close" data-dismiss="modal">
              <span aria-hidden="true">&times;</span>
              <span class="sr-only">Close</span>
            </button>
            <h3>Add New Product to the Catalog</h3>
          </div>
          <div class="modal-body">
            <div class="form-group">
              <div class="col-sm-12">
                <input type="text" class="form-control" placeholder="Name" data-bind="textInput: name">
              </div>
            </div>
            <div class="form-group">
              <div class="col-sm-12">
                <input type="text" class="form-control" placeholder="Price" data-bind="textInput: price">
              </div>
            </div>
            <div class="form-group">
              <div class="col-sm-12">
                <input type="text" class="form-control" placeholder="Stock" data-bind="textInput: stock">
              </div>
            </div>
          </div>
          <div class="modal-footer">
            <div class="form-group">
              <div class="col-sm-12">
                <button type="submit" class="btn btn-default" data-bind="click: $parent.addProduct">
                  <i class="glyphicon glyphicon-plus-sign"></i> Add Product
                </button>
              </div>
            </div>
          </div>
        </form>
      </div><!-- /.modal-content -->
    </div><!-- /.modal-dialog -->
  </div><!-- /.modal -->
</script>

The cart-widget template
This template quickly gives the user information about how many items are in the cart and how much all of them cost:

<script type="text/html" id="cart-widget">
  Total Items: <span data-bind="text: totalItems"></span>
  Price: <span data-bind="text: grandTotal"></span>
</script>

We should define totalItems and grandTotal in our viewmodel:

var totalItems = ko.computed(function () {
    var tmpCart = cart();
    var total = 0;
    tmpCart.forEach(function (item) {
        total += parseInt(item.units(), 10);
    });
    return total;
});

var grandTotal = ko.computed(function () {
    var tmpCart = cart();
    var total = 0;
    tmpCart.forEach(function (item) {
        total += (item.units() * item.product.price());
    });
    return total;
});

Now you should expose them in the return statement, as we always do. Don't worry about the format now; you will learn how to format currency or any kind of data in the future. For now, you must focus on learning how to manage information and how to show it to the user.
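Although proper formatting comes later, it can help to see how naturally a formatted value falls out of another computed observable. The following is just a minimal sketch (the grandTotalFormatted name is our own, not part of the book's code), layered on top of the grandTotal computed defined previously:

// A hypothetical derived computed that renders grandTotal as a currency string.
// It re-evaluates automatically whenever the cart (and therefore grandTotal) changes.
var grandTotalFormatted = ko.computed(function () {
    return "$" + grandTotal().toFixed(2); // e.g. 25 -> "$25.00"
});

In a template it would be consumed like any other observable, for example <span data-bind="text: grandTotalFormatted"></span>, as long as it is exposed in the view-model's return statement.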
The cart-item template
The cart-item template displays each line in the cart:

<script type="text/html" id="cart-item">
  <div class="list-group-item" style="overflow: hidden">
    <button type="button" class="close pull-right" data-bind="click: $root.removeFromCart"><span>&times;</span></button>
    <h4 data-bind="text: product.name"></h4>
    <div class="input-group cart-unit">
      <input type="text" class="form-control" data-bind="textInput: units" readonly/>
      <span class="input-group-addon">
        <div class="btn-group-vertical">
          <button class="btn btn-default btn-xs" data-bind="click: addUnit">
            <i class="glyphicon glyphicon-chevron-up"></i>
          </button>
          <button class="btn btn-default btn-xs" data-bind="click: removeUnit">
            <i class="glyphicon glyphicon-chevron-down"></i>
          </button>
        </div>
      </span>
    </div>
  </div>
</script>

We set an x button at the top-right of each line to easily remove a line from the cart. As you can see, we have used the $root magic variable to navigate to the top context because we are going to use this template inside a foreach loop, which means this template will be in the loop context. If we consider this template as an isolated element, we can't be sure how deep we are in the context navigation. To be sure that we reach the right context to call the removeFromCart method, it's better to use $root instead of $parent in this case. The code for removeFromCart should lie in the view-model context and should look like this:

var removeFromCart = function (data) {
    var units = data.units();
    var stock = data.product.stock();
    data.product.stock(units + stock);
    cart.remove(data);
};

Notice that in the addToCart method, we get the array that is inside the observable. We did that because we needed to navigate inside the elements of the array. In this case, Knockout observable arrays have a method called remove that allows us to remove the object that we pass as a parameter. If the object is in the array, it will be removed. Remember that the data context is always passed as the first parameter in the functions we use in click events.

The cart template
The cart template should display the layout of the cart:

<script type="text/html" id="cart">
  <button type="button" class="close pull-right" data-bind="click: hideCartDetails">
    <span>&times;</span>
  </button>
  <h1>Cart</h1>
  <div data-bind="template: {name: 'cart-item', foreach: cart}" class="list-group"></div>
  <div data-bind="template: {name: 'cart-widget'}"></div>
  <button class="btn btn-primary btn-sm" data-bind="click: showOrder">
    Confirm Order
  </button>
</script>

It's important that you notice the template binding that we have just below <h1>Cart</h1>. We are binding a template to an array using the foreach argument. With this binding, Knockout renders the cart-item template for each element inside the cart collection. This considerably reduces the code we write in each template and, in addition, makes it more readable. We have once again used the cart-widget template to show the total items and the total amount. This is one of the good features of templates: we can reuse content over and over. Observe that we have put a button at the top-right of the cart to close it when we don't need to see the details of our cart, and another one to confirm the order when we are done.
The code in our viewmodel should be as follows:

var hideCartDetails = function () {
    $("#cartContainer").addClass("hidden");
};

var showOrder = function () {
    $("#catalogContainer").addClass("hidden");
    $("#orderContainer").removeClass("hidden");
};

As you can see, to show and hide elements we use jQuery and CSS classes from the Bootstrap framework. The hidden class just adds the display: none style to the elements. We only need to toggle this class to show or hide elements in our view. Expose these two methods in the return statement of your view-model. We will come back to this when we need to display the order template. This is the result once we have our catalog and our cart:

The order template
Once we have clicked on the Confirm Order button, the order should be shown to us, to review and confirm if we agree:

<script type="text/html" id="order">
  <div class="col-xs-12">
    <button class="btn btn-sm btn-primary" data-bind="click: showCatalog">
      Back to catalog
    </button>
    <button class="btn btn-sm btn-primary" data-bind="click: finishOrder">
      Buy & finish
    </button>
  </div>
  <div class="col-xs-6">
    <table class="table">
      <thead>
        <tr>
          <th>Name</th>
          <th>Price</th>
          <th>Units</th>
          <th>Subtotal</th>
        </tr>
      </thead>
      <tbody data-bind="foreach: cart">
        <tr>
          <td data-bind="text: product.name"></td>
          <td data-bind="text: product.price"></td>
          <td data-bind="text: units"></td>
          <td data-bind="text: subtotal"></td>
        </tr>
      </tbody>
      <tfoot>
        <tr>
          <td colspan="3"></td>
          <td>Total:<span data-bind="text: grandTotal"></span></td>
        </tr>
      </tfoot>
    </table>
  </div>
</script>

Here we have a read-only table with all the cart lines and two buttons. One is to confirm, which will show the modal dialog saying the order is complete, and the other gives us the option to go back to the catalog and keep on shopping. There is some code we need to add to our viewmodel and expose to the user:

var showCatalog = function () {
    $("#catalogContainer").removeClass("hidden");
    $("#orderContainer").addClass("hidden");
};

var finishOrder = function () {
    cart([]);
    hideCartDetails();
    showCatalog();
    $("#finishOrderModal").modal('show');
};

As we have done in previous methods, we add and remove the hidden class from the elements we want to show and hide. The finishOrder method removes all the items from the cart because our order is complete; it hides the cart and shows the catalog. It also displays a modal that gives confirmation to the user that the order is done.
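As a side note on emptying the cart, writing cart([]) replaces the underlying array with a fresh empty one. A brief sketch of the two interchangeable options Knockout offers for this:

// Both approaches empty the observable array and notify any subscribers:
cart([]);         // replace the underlying array with a new empty array
cart.removeAll(); // Knockout's built-in helper on observable arrays

Either one works here; removeAll() simply reads more explicitly.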
Order details template

The finish-order-modal template
The last template is the modal that tells the user that the order is complete:

<script type="text/html" id="finish-order-modal">
  <div class="modal fade" id="finishOrderModal">
    <div class="modal-dialog">
      <div class="modal-content">
        <div class="modal-body">
          <h2>Your order has been completed!</h2>
        </div>
        <div class="modal-footer">
          <div class="form-group">
            <div class="col-sm-12">
              <button type="submit" class="btn btn-success" data-dismiss="modal">Continue Shopping</button>
            </div>
          </div>
        </div>
      </div><!-- /.modal-content -->
    </div><!-- /.modal-dialog -->
  </div><!-- /.modal -->
</script>

The following screenshot displays the output:

Handling templates with if and ifnot bindings
You have learned how to show and hide templates with the power of jQuery and Bootstrap. This is quite good because you can use this technique with any framework you want. The problem with this type of code is that, since jQuery is a DOM manipulation library, you need to reference elements in order to manipulate them. This means you need to know to which element you want to apply the action. Knockout gives us some bindings to hide and show elements depending on the values of our view-model. Let's update the show and hide methods and the templates. Add both control variables to your viewmodel and expose them in the return statement:

var visibleCatalog = ko.observable(true);
var visibleCart = ko.observable(false);

Now update the show and hide methods:

var showCartDetails = function () {
    if (cart().length > 0) {
        visibleCart(true);
    }
};

var hideCartDetails = function () {
    visibleCart(false);
};

var showOrder = function () {
    visibleCatalog(false);
};

var showCatalog = function () {
    visibleCatalog(true);
};

We can appreciate how the code becomes more readable and meaningful. Now, update the cart template, the catalog template, and the order template. In index.html, consider this line:

<div class="row" id="catalogContainer">

Replace it with the following line:

<div class="row" data-bind="if: visibleCatalog">

Then consider the following line:

<div id="cartContainer" class="col-xs-6 well hidden" data-bind="template: {name: 'cart'}"></div>

Replace it with this one:

<div class="col-xs-6" data-bind="if: visibleCart">
  <div class="well" data-bind="template: {name: 'cart'}"></div>
</div>

It is important to know that the if binding and the template binding can't share the same data-bind attribute. This is why we go from one element to two nested elements in this template. In other words, this example is not allowed:

<div class="col-xs-6" data-bind="if: visibleCart, template: {name: 'cart'}"></div>

Finally, consider this line:

<div class="row hidden" id="orderContainer" data-bind="template: {name: 'order'}">

Replace it with this one:

<div class="row" data-bind="ifnot: visibleCatalog">
  <div data-bind="template: {name: 'order'}"></div>
</div>

With the changes we have made, showing or hiding elements now depends on our data and not on our CSS. This is much better because now we can show and hide any element we want using the if and ifnot bindings.
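Knockout also ships with a visible binding that is worth knowing about before choosing between it and if or ifnot. A brief comparison sketch (not taken from the book's code):

<!-- if: removes the element's contents from the DOM entirely when false,
     and re-renders (re-binding its contents) when it becomes true -->
<div data-bind="if: visibleCart">...</div>

<!-- visible: keeps the element in the DOM and only toggles CSS display,
     so its bindings stay alive while it is hidden -->
<div data-bind="visible: visibleCart">...</div>

For a piece of UI that is cheap to keep around, visible can be the lighter option; if is preferable when you want hidden content out of the DOM altogether.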
Let's review, roughly speaking, how our files look now. We have our index.html file that has the main container, templates, and libraries:

<!DOCTYPE html>
<html>
<head>
  <title>KO Shopping Cart</title>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" type="text/css" href="css/bootstrap.min.css">
  <link rel="stylesheet" type="text/css" href="css/style.css">
</head>
<body>

<div class="container-fluid">
  <div class="row" data-bind="if: visibleCatalog">
    <div class="col-xs-12" data-bind="template: {name: 'header'}"></div>
    <div class="col-xs-6" data-bind="template: {name: 'catalog'}"></div>
    <div class="col-xs-6" data-bind="if: visibleCart">
      <div class="well" data-bind="template: {name: 'cart'}"></div>
    </div>
  </div>
  <div class="row" data-bind="ifnot: visibleCatalog">
    <div data-bind="template: {name: 'order'}"></div>
  </div>
  <div data-bind="template: {name: 'add-to-catalog-modal'}"></div>
  <div data-bind="template: {name: 'finish-order-modal'}"></div>
</div>

<!-- templates -->
<script type="text/html" id="header"> ... </script>
<script type="text/html" id="catalog"> ... </script>
<script type="text/html" id="add-to-catalog-modal"> ... </script>
<script type="text/html" id="cart-widget"> ... </script>
<script type="text/html" id="cart-item"> ... </script>
<script type="text/html" id="cart"> ... </script>
<script type="text/html" id="order"> ... </script>
<script type="text/html" id="finish-order-modal"> ... </script>

<!-- libraries -->
<script type="text/javascript" src="js/vendors/jquery.min.js"></script>
<script type="text/javascript" src="js/vendors/bootstrap.min.js"></script>
<script type="text/javascript" src="js/vendors/knockout.debug.js"></script>
<script type="text/javascript" src="js/models/product.js"></script>
<script type="text/javascript" src="js/models/cartProduct.js"></script>
<script type="text/javascript" src="js/viewmodel.js"></script>
</body>
</html>

We also have our viewmodel.js file:

var vm = (function () {
  "use strict";
  var visibleCatalog = ko.observable(true);
  var visibleCart = ko.observable(false);
  var catalog = ko.observableArray([...]);
  var cart = ko.observableArray([]);
  var newProduct = {...};
  var totalItems = ko.computed(function(){...});
  var grandTotal = ko.computed(function(){...});
  var searchTerm = ko.observable("");
  var filteredCatalog = ko.computed(function () {...});
  var addProduct = function (data) {...};
  var addToCart = function (data) {...};
  var removeFromCart = function (data) {...};
  var showCartDetails = function () {...};
  var hideCartDetails = function () {...};
  var showOrder = function () {...};
  var showCatalog = function () {...};
  var finishOrder = function () {...};
  return {
    searchTerm: searchTerm,
    catalog: filteredCatalog,
    cart: cart,
    newProduct: newProduct,
    totalItems: totalItems,
    grandTotal: grandTotal,
    addProduct: addProduct,
    addToCart: addToCart,
    removeFromCart: removeFromCart,
    visibleCatalog: visibleCatalog,
    visibleCart: visibleCart,
    showCartDetails: showCartDetails,
    hideCartDetails: hideCartDetails,
    showOrder: showOrder,
    showCatalog: showCatalog,
    finishOrder: finishOrder
  };
})();
ko.applyBindings(vm);

While debugging, it is useful to make the view-model global. It is not good practice in production environments, but it is helpful when you are debugging your application:

window.vm = vm;

Now you have easy access to your view-model from the browser debugger or from your IDE debugger.
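One way to keep that convenience without leaking vm into production builds is to gate it behind a flag. The DEBUG variable below is our own invention, not something the book defines:

// Expose the view-model globally only while debugging
var DEBUG = true; // flip to false (or strip this block) for production builds
if (DEBUG) {
    window.vm = vm;
}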
In addition to the product model, we have created a new model called CartProduct:

var CartProduct = function (product, units) {
  "use strict";
  var _product = product,
      _units = ko.observable(units);
  var subtotal = ko.computed(function(){...});
  var addUnit = function () {...};
  var removeUnit = function () {...};
  return {
    product: _product,
    units: _units,
    subtotal: subtotal,
    addUnit: addUnit,
    removeUnit: removeUnit
  };
};

You have learned how to manage templates with Knockout, but maybe you have noticed that having all the templates in the index.html file is not the best approach. We are going to talk about two mechanisms. The first one is more home-made, and the second one is an external library used by lots of Knockout developers, created by Jim Cowart, called Knockout.js-External-Template-Engine (https://github.com/ifandelse/Knockout.js-External-Template-Engine).

Managing templates with jQuery
Since we want to load templates from different files, let's move all our templates to a folder called views and make one file per template. Each file will have the same name the template has as an ID. So, if the template has the ID cart-item, the file should be called cart-item.html and will contain the full cart-item template:

<script type="text/html" id="cart-item"></script>

The views folder with all templates

Now in the viewmodel.js file, remove the last line (ko.applyBindings(vm)) and add this code:

var templates = [
  'header',
  'catalog',
  'cart',
  'cart-item',
  'cart-widget',
  'order',
  'add-to-catalog-modal',
  'finish-order-modal'
];

var busy = templates.length;
templates.forEach(function (tpl) {
  "use strict";
  $.get('views/' + tpl + '.html').then(function (data) {
    $('body').append(data);
    busy--;
    if (!busy) {
      ko.applyBindings(vm);
    }
  });
});

This code gets all the templates we need and appends them to the body. Once all the templates are loaded, we call the applyBindings method. We should do it this way because we are loading the templates asynchronously, and we need to make sure that we bind our view-model only when all the templates are loaded. This is good enough to make our code more maintainable and readable, but it is still problematic if we need to handle lots of templates. Furthermore, if we have nested folders, it becomes a headache to list all our templates in one array. There should be a better approach.

Managing templates with koExternalTemplateEngine
We have seen two ways of loading templates; both of them are good enough to manage a low number of templates, but when the lines of code begin to grow, we need something that allows us to forget about template management. We just want to call a template and get its content. For this purpose, Jim Cowart's library, koExternalTemplateEngine, is perfect. The project was abandoned by the author in 2014, but it is still a good library that we can use when we develop simple projects. We just need to download the library into the js/vendors folder and then link it in our index.html file just below the Knockout library:

<script type="text/javascript" src="js/vendors/knockout.debug.js"></script>
<script type="text/javascript" src="js/vendors/koExternalTemplateEngine_all.min.js"></script>

Now you should configure it in the viewmodel.js file. Remove the templates array and the forEach statement, and add these three lines of code:

infuser.defaults.templateSuffix = ".html";
infuser.defaults.templateUrl = "views";
ko.applyBindings(vm);

Here, infuser is a global variable that we use to configure the template engine.
We should indicate which suffix our templates will have and which folder they will be in. We no longer need the <script type="text/html" id="template-id"></script> tags, so we should remove them from each file. Now everything should be working, and the code we needed to make it happen was minimal. KnockoutJS has its own template engine, but you can see that adding new ones is not difficult. If you have experience with other template engines such as jQuery Templates, Underscore, or Handlebars, just load them in your index.html file and use them; there is no problem with that. This is why Knockout is beautiful: you can use any tool you like with it. You have learned a lot of things in this article, haven't you?

Knockout gives us the css binding to activate and deactivate CSS classes according to an expression.
We can use the style binding to add CSS rules to elements.
The template binding helps us to manage templates that are already loaded in the DOM.
We can iterate over collections with the foreach binding.
Inside a foreach, Knockout gives us some magic variables such as $parent, $parents, $index, $data, and $root.
We can use the as option along with the foreach binding to get an alias for each element.
We can show and hide content using just jQuery and CSS.
We can show and hide content using the bindings: if, ifnot, and visible.
jQuery helps us to load Knockout templates asynchronously.
You can use the koExternalTemplateEngine plugin to manage templates in a more efficient way. The project is abandoned, but it is still a good solution.

Summary
In this article, you have learned how to split an application using templates that share the same view-model. Now that we know the basics, it would be interesting to extend the application. Maybe we can try to create a detailed view of the product, or give the user the option to register where to send the order. Resources for Article: Further resources on this subject: Components [article] Web Application Testing [article] Top features of KnockoutJS [article]

Our App and Tool Stack

Packt
04 Mar 2015
33 min read
In this article by Zachariah Moreno, author of the book AngularJS Deployment Essentials, you will learn how to do the following:

Minimize effort and maximize results using a tool stack optimized for AngularJS development
Access the krakn app via GitHub
Scaffold an Angular app with Yeoman, Grunt, and Bower
Set up a local Node.js development server
Read through krakn's source code

Before NASA or SpaceX launches a vessel into the cosmos, there is a tremendous amount of planning and preparation involved. The guiding principle when planning any successful mission is to minimize effort and resources while retaining the maximum return on the mission. Our principles for development and deployment are no exception to this axiom, and you will gain a firmer working knowledge of how to apply them in this article. (For more resources related to this topic, see here.)

The right tools for the job
Web applications can be compared to buildings; without tools, neither would be a pleasure to build. This makes tools an indispensable factor in both development and construction. When tools are combined, they form a workflow that can be repeated across any project built with the same stack, facilitating the practices of design, development, and deployment. The argument can be made that it is just as important to document a workflow as it is to document an application's source code or API. Along with grouping tools into categories based on the phases of building applications, it is also useful to group tools based on the opinions of a respective project, in our case, Angular, Ionic, and Firebase. I call tools grouped into opinionated workflows tool stacks. For example, the remainder of this article discusses the tool stack used to build the application that we will deploy across environments in this book. In contrast, if you were to build a Ruby on Rails application, the tool stack would be completely different because the project's opinions are different. Our app is called krakn, and it functions as a real-time chat application built on top of the opinions of Angular, the Ionic Framework, and Firebase. You can find all of krakn's source code at https://github.com/zachmoreno/krakn.

Version control with Git and GitHub
Git is a distributed version control system with a command-line interface (CLI), developed by Linus Torvalds for use on the famed Linux kernel. Git is popular largely due to its distributed architecture, which makes it nearly impossible for corruption to occur. Git's distributed architecture means that any remote repository has all of the same information as your local repository. It is useful to think of Git as a free insurance policy for your code. You will need to install Git using the instructions provided at www.git-scm.com/ for your development workstation's operating system. GitHub.com has played a notable role in Git's popularization, turning its functionality into a social network focused on open source code contributions. With a pricing model that incentivizes open source contributions and offers licensing for private repositories, GitHub elevated the use of Git to heights never seen before. If you don't already have an account on GitHub, now is the perfect time to visit github.com to provision a free account. I mentioned earlier that krakn's code is available for forking at github.com/ZachMoreno/krakn. This means that any person with a GitHub account has the ability to view my version of krakn, and clone a copy of their own for further modifications or contributions.
In GitHub's web application, forking manifests itself as a button located to the right of the repository's title, which in this case is XachMoreno/krakn. When you click on the button, you will see an animation that simulates the hardcore forking action. This results in a cloned repository under your account with a title to the tune of YourName/krakn.

Node.js
Node.js, commonly known as Node, is a community-driven server environment built on Google Chrome's V8 JavaScript runtime that is entirely event driven and facilitates a nonblocking I/O model. According to www.nodejs.org, it is best suited for:

"Data-intensive real-time applications that run across distributed devices."

So what does all this boil down to? Node empowers web developers to write JavaScript on both the client and the server, with bidirectional real-time I/O. The advent of Node has empowered developers to take their skills from the client to the server, evolving from frontend to full stack (like a caterpillar evolving into a butterfly). Not only do these skills facilitate a pay increase, they also advance the Web towards the same functionality as traditional desktop or native applications. For our purposes, we use Node as a tool; a tool to build real-time applications in the fewest number of keystrokes, videos watched, and words read as possible. Node is, in fact, a modular tool through its extensible package interface, called Node Package Manager (NPM). You will use NPM as a means to install the remainder of our tool stack.

NPM
NPM is a means to install Node packages on your local or remote server. NPM is how we will install the majority of the tools and software used in this book. This is achieved by running the $ npm install -g [PackageName] command in your command line or terminal. To search the full list of Node packages, visit www.npmjs.org or run $ npm search [Search Term] in your command line or terminal, as shown in the following screenshot:

Yeoman's workflow
Yeoman is a CLI that is the glue that holds your tools into your opinionated workflow. Although the term opinionated might sound off-putting, you must first consider the wisdom and experience of the developers and community before you who maintain Yeoman. In this context, opinionated means little more than a collection of best practices that are all aimed at improving your experience as a developer building static websites, single-page applications, and everything in between. Opinionated does not mean that you are locked into what someone else feels is best for you, nor does it mean that you must strictly adhere to the opinions or best practices included. Yeoman is general enough to help you build nearly anything for the Web, as well as improving your workflow while developing it. The tools that make up Yeoman's workflow are Yo, Grunt.js, Bower, and a few others that are more or less optional, but are probably worth your time.

Yo
Apart from having one of the hippest namespaces, Yo is a powerful code generator that is intelligent enough to scaffold most sites and applications. By default, instantiating a yo command assumes that you mean to scaffold something at a project level, but yo can also be scoped more granularly by means of sub-generators. For example, the command for instantiating a new vanilla Angular project is as follows:

$ yo angular radicalApp

Yo will not finish your request until you provide some further information about your desired Angular project.
This is achieved by asking you a series of relevant questions, and based on your answers, yo will scaffold a familiar application folder/file structure, along with all the boilerplate code. Note that if you have worked with the angular-seed project, the Angular application that yo generates will look very familiar to you. Once you have an Angular app scaffolded, you can begin using sub-generator commands. The following command scaffolds a new route, radicalRoute, within radicalApp:

$ yo angular:route radicalRoute

The :route sub-generator is a very powerful command, as it automates all of the following key tasks:

It creates a new file, radicalApp/scripts/controllers/radicalRoute.js, that contains the controller logic for the radicalRoute view
It creates another new file, radicalApp/views/radicalRoute.html, that contains the associated view markup and directives
Lastly, it adds an additional route within radicalApp/scripts/app.js that connects the view to the controller

Additionally, the sub-generators for yo angular include the following:

:controller
:directive
:filter
:service
:provider
:factory
:value
:constant
:decorator
:view

All the sub-generators allow you to execute finer-grained commands for scaffolding smaller components, compared to :route, which executes a combination of sub-generators.

Installing Yo
Within your workstation's terminal or command-line application, type the following command, followed by a return:

$ npm install -g yo

If you are a Linux or Mac user, you might want to prefix the command with sudo, as follows:

$ sudo npm install -g yo

Grunt
Grunt.js is a task runner that enhances your existing and/or Yeoman's workflow by automating repetitive tasks. Each time you generate a new project with yo, it creates a /Gruntfile.js file that wires up all of the curated tasks. You might have noticed that installing Yo also installs all of Yo's dependencies. Reading through /Gruntfile.js should incite a fair amount of awe, as it gives you a snapshot of what is going on under the hood of Yeoman's curated Grunt tasks and its dependencies. Generating a vanilla Angular app produces a /Gruntfile.js file that is responsible for performing the following tasks:

It defines where Yo places Bower packages, which is covered in the next section
It defines the path where the grunt build command places the production-ready code
It initializes the watch task to run JSHint when JavaScript files are saved, Karma's test runner when JavaScript files are saved, Compass when SCSS or SASS files are saved, and again when the /Gruntfile.js file itself is saved
It initializes LiveReload when any HTML or CSS files are saved
It configures the grunt server command to run a Node.js server on localhost:9000, or to show test results on localhost:9001
It autoprefixes CSS rules on LiveReload and grunt build
It renames files for optimizing browser caching
It configures the grunt build command to minify images, SVG, HTML, and CSS files, and to safely minify Angular files

Let us pause for a moment to reflect on the amount of time it would take to find, learn, and implement each dependency into our existing workflow for each project we undertake. OK, we should now have a greater appreciation for Yeoman and its community. For the vast majority of the time, you will likely only use a few Grunt commands, which include the following:

$ grunt server
$ grunt test
$ grunt build

Bower
If Yo scaffolds our application's structure and files, and Grunt automates repetitive tasks for us, then what does Bower bring to the party?
Bower is web development's missing package manager. Its functionality parallels that of Ruby Gems for the Ruby on Rails MVC framework, but it is not limited to any single framework or technology stack. The explicit use of Bower is not required by the Yeoman workflow, but as I mentioned previously, the use of Bower is configured automatically for you in your project's /Gruntfile.js file. How does managing packages improve our development workflow? With all of the time we've been spending in our command lines and terminals, it is handy to have the ability to automate the management of third-party dependencies within our application. This ability manifests itself in a few simple commands, the most ubiquitous being the following command:

$ bower install [PackageName] --save

With this command, Bower will automate the following steps:

First, search its packages for the specified package name
Download the latest stable version of the package if found
Move the package to the location defined in your project's /Gruntfile.js file, typically a folder named /bower_components
Insert dependencies in the form of <link> elements for CSS files in the document's <head> element, and <script> elements for JavaScript files right above the document's closing </body> tag, referencing the package's files within your project's /index.html file

This process is one that web developers are more than familiar with, because adding a JavaScript library or a new dependency happens multiple times within every project. Bower speeds up our existing manual process through automation and improves it by providing the latest stable version of a package and then notifying us of an update if one is available. This last part, "notifying us of an update if … available", is important because as a web developer advances from one project to the next, it is easy to overlook keeping dependencies as up to date as possible. This is achieved by running the following command:

$ bower update

This command returns all the available updates, if any, and will go through the same process of inserting new references where applicable. Bower.io includes all of the documentation on how to use Bower to its fullest potential, along with the ability to search through all of the available Bower packages. Searching for available Bower packages can also be achieved by running the following command:

$ bower search [SearchTerm]

If you cannot find the specific dependency you are searching for, and the project is on GitHub, consider contributing a bower.json file to the project's root and inviting the owner to register it by running the following command:

$ bower register [ThePackageName] [GitEndpoint]

Registration allows you to install your dependency by running the next command:

$ bower install [ThePackageName]

The Ionic framework
The Ionic framework is a truly remarkable advancement in bridging the gap between web applications and native mobile applications. In some ways, Ionic parallels Yeoman in that it assembles tools that were already available to developers into a neat package and structures a workflow around them, inherently improving our experience as developers. If Ionic is analogous to Yeoman, then what are the tools that make up Ionic's workflow? The tools that, when combined, make Ionic noteworthy are Apache Cordova, Angular, Ionic's suite of Angular directives, and Ionic's mobile UI framework.

Batarang
An invaluable piece of our Angular tool stack is the Google Chrome Developer Tools extension, Batarang, by Brian Ford.
Batarang adds a third-party panel (on the right-hand side of Console) to DevTools that facilitates Angular-specific inspection in the event of debugging. We can view the data in the scopes of each model, analyze each expression's performance, and view a beautiful visualization of service dependencies, all from within Batarang. Because Angular augments the DOM with ng- attributes, Batarang also provides a Properties pane within the Elements panel to inspect the models attached to a given element's scope. The extension is easy to install from either the Chrome Web Store or the project's GitHub repository, and inspection can be enabled by performing the following steps:

Firstly, open the Chrome Developer Tools.
You should then navigate to the AngularJS panel.
Finally, select the Enable checkbox on the far right tab.

Your active Chrome tab will then be reloaded automatically, and the AngularJS panel will begin populating the inspection data. In addition, you can leverage the Angular pane within the Elements panel to view Angular-specific properties at an elemental level, and observe the $scope variable from within the Console panel.

Sublime Text and editor integration
While developing any Angular app, it is helpful to augment our workflow further with Angular-specific syntax completion, snippets, go to definition, and quick panel search in the form of a Sublime Text package. Perform the following steps:

If you haven't already installed Sublime Text and its Package Control plugin, install them first. Otherwise, continue with the next step.
Once installed, press command + Shift + P in Sublime.
Then, you need to select the Package Control: Install Package option.
Finally, type angularjs and press Enter on your keyboard.

In addition to the support within Sublime, Angular enhancements exist for lots of popular editors, including WebStorm, Coda, and TextMate.

Krakn
As a quick refresher, krakn was constructed using all of the tools that are covered in this article. These include Git, GitHub, Node.js, NPM, Yeoman's workflow, Yo, Grunt, Bower, Batarang, and Sublime Text. The application builds on Angular, Firebase, the Ionic Framework, and a few other minor dependencies. The workflow I used to develop krakn went something like the following. Follow these steps to achieve the same thing. Note that you can skip the remainder of this section if you'd like to get straight to the deployment action, and feel free to rename things where necessary.

Setting up Git and GitHub
The workflow I followed while developing krakn begins with initializing our local Git repository and connecting it to our remote master repository on GitHub. In order to install and set up both, perform the following steps:

Firstly, install all the tool stack dependencies, and create a folder called krakn.
Following this, run $ git init, and create a README.md file.
You should then run $ git add README.md and commit README.md to the local master branch.
You then need to create a new remote repository on GitHub called XachMoreno/krakn.
Following this, run the following command: $ git remote add origin git@github.com:[YourGitHubUserName]/krakn.git
Conclude the setup by running $ git push -u origin master.

Scaffolding the app with Yo
Scaffolding our app couldn't be easier with the yo ionic generator. To do this, perform the following steps:

Firstly, install Yo by running $ npm install -g yo.
After this, install generator-ionicjs by running $ npm install -g generator-ionicjs.
To conclude the scaffolding of your application, run the yo ionic command.
Development
After scaffolding the folder structure and boilerplate code, our workflow advances to the development phase, which is encompassed in the following steps:

To begin, run grunt server.
You are now in a position to make changes, for example, additions or deletions.
Once these are saved, LiveReload will automatically reload your browser.
You can then review the changes in the browser.
Repeat steps 2-4 until you are ready to advance to the predeployment phase.

Views, controllers, and routes
Being a simple chat application, krakn has only a handful of views/routes: login, chat, account, menu, and about. The menu view is present in all the other views in the form of an off-canvas menu.

The login view
The default view/route/controller is named login. The login view utilizes Firebase's Simple Login feature to authenticate users before proceeding to the rest of the application. Apart from logging into krakn, users can register a new account by entering their desired credentials. An interesting part of the login view is the use of the ng-show directive to toggle the second password field if the user selects the register button. However, the ng-model directive is the first step here, as it is used to pass the input text from the view to the controller and, ultimately, to the Firebase Simple Login. Other than the Angular magic, this view uses the ion-view directive, grid, and buttons, which are all core to Ionic. Each view within an Ionic app is wrapped within an ion-view directive that contains a title attribute, as follows:

<ion-view title="Login">

The login view uses standard input elements that contain an ng-model attribute to bind the input's value back to the controller's $scope, as follows:

<input type="text" placeholder="you@email.com" ng-model="data.email" />
<input type="password" placeholder="embody strength" ng-model="data.pass" />
<input type="password" placeholder="embody strength" ng-model="data.confirm" />

The Log In and Register buttons call their respective functions using the ng-click attribute, with the value set to the function's name, as follows:

<button class="button button-block button-positive" ng-click="login()" ng-hide="createMode">Log In</button>

The Register and Cancel buttons set the value of $scope.createMode to true or false to show or hide the correct buttons for either action:

<button class="button button-block button-calm" ng-click="createMode = true" ng-hide="createMode">Register</button>
<button class="button button-block button-calm" ng-show="createMode" ng-click="createAccount()">Create Account</button>
<button class="button button-block button-assertive" ng-show="createMode" ng-click="createMode = false">Cancel</button>

$scope.err is displayed only when we want to show feedback to the user:

<p ng-show="err" class="assertive text-center">{{err}}</p>
</ion-view>

The login controller is dependent on Firebase's loginService module and Angular's core $location module:

controller('LoginCtrl', ['$scope', 'loginService', '$location',
  function($scope, loginService, $location) {

Ionic's directives tend to create isolated scopes, so it is useful here to wrap our controller's variables within a $scope.data object to avoid issues within the isolated scope, as follows:

    $scope.data = {
      "email"      : null,
      "pass"       : null,
      "confirm"    : null,
      "createMode" : false
    }

The login() function checks the credentials before authentication and sends feedback to the user if
The login view

The default view/route/controller is named login. The login view utilizes Firebase's Simple Login feature to authenticate users before proceeding to the rest of the application. Apart from logging into krakn, users can register a new account by entering their desired credentials.

An interesting part of the login view is the use of the ng-show directive to toggle the second password field when the user selects the Register button. However, the ng-model directive is the first step here, as it is used to pass the input text from the view to the controller and, ultimately, to Firebase Simple Login. Other than the Angular magic, this view uses the ion-view directive, the grid, and buttons that are all core to Ionic.

Each view within an Ionic app is wrapped within an ion-view directive that contains a title attribute as follows:

<ion-view title="Login">

The login view uses standard input elements that contain an ng-model attribute to bind each input's value back to the controller's $scope as follows:

  <input type="text" placeholder="you@email.com" ng-model="data.email" />
  <input type="password" placeholder="embody strength" ng-model="data.pass" />
  <input type="password" placeholder="embody strength" ng-model="data.confirm" />

The Log In and Register buttons call their respective functions using the ng-click attribute, with the value set to the function's name as follows:

  <button class="button button-block button-positive" ng-click="login()" ng-hide="createMode">Log In</button>

The Register and Cancel buttons set the value of $scope.createMode to true or false to show or hide the correct buttons for either action:

  <button class="button button-block button-calm" ng-click="createMode = true" ng-hide="createMode">Register</button>
  <button class="button button-block button-calm" ng-show="createMode" ng-click="createAccount()">Create Account</button>
  <button class="button button-block button-assertive" ng-show="createMode" ng-click="createMode = false">Cancel</button>

$scope.err is displayed only when we want to show feedback to the user:

  <p ng-show="err" class="assertive text-center">{{err}}</p>
</ion-view>

The login controller depends on the app's loginService module (which wraps Firebase Simple Login) and Angular's core $location service:

controller('LoginCtrl', ['$scope', 'loginService', '$location',
  function($scope, loginService, $location) {

Ionic's directives tend to create isolated scopes, so it is useful here to wrap our controller's variables within a $scope.data object to avoid issues within the isolated scope, as follows:

    $scope.data = {
      "email"      : null,
      "pass"       : null,
      "confirm"    : null,
      "createMode" : false
    }

The login() function checks the credentials before authentication and sends feedback to the user if needed:

    $scope.login = function(cb) {
      $scope.err = null;
      if( !$scope.data.email ) {
        $scope.err = 'Please enter an email address';
      }
      else if( !$scope.data.pass ) {
        $scope.err = 'Please enter a password';
      }

If the credentials are sound, we send them to Firebase for authentication, and when we receive a success callback, we route the user to the chat view using $location.path() as follows:

      else {
        loginService.login($scope.data.email, $scope.data.pass, function(err, user) {
          $scope.err = err ? err + '' : null;
          if( !err ) {
            cb && cb(user);
            $location.path('krakn/chat');
          }
        });
      }
    };

The createAccount() function works in much the same way as login(), except that it ensures that the user doesn't already exist before adding them to your Firebase and logging them in:

    $scope.createAccount = function() {
      $scope.err = null;
      if( assertValidLoginAttempt() ) {
        loginService.createAccount($scope.data.email, $scope.data.pass,
          function(err, user) {
            if( err ) {
              $scope.err = err ? err + '' : null;
            }
            else {
              // must be logged in before I can write to my profile
              $scope.login(function() {
                loginService.createProfile(user.uid, user.email);
                $location.path('krakn/account');
              });
            }
          });
      }
    };

The assertValidLoginAttempt() function is used to ensure that no errors slip through the account creation and authentication flows:

    function assertValidLoginAttempt() {
      if( !$scope.data.email ) {
        $scope.err = 'Please enter an email address';
      }
      else if( !$scope.data.pass ) {
        $scope.err = 'Please enter a password';
      }
      else if( $scope.data.pass !== $scope.data.confirm ) {
        $scope.err = 'Passwords do not match';
      }
      return !$scope.err;
    }
  }])

The chat view

Vegan practices aside, the meat and potatoes of krakn's functionality lives within the chat view/controller/route. The design is similar to most SMS clients, with the input in the footer of the view and messages listed chronologically in the main content area.

The ng-repeat directive is used to display a message every time one is added to the messages collection in Firebase. If you submit a message successfully, unsuccessfully, or without any text, feedback is provided via the placeholder attribute of the message input.

There are two filters being utilized within the chat view: orderByPriority and timeago. The orderByPriority filter is defined within the firebase module and uses the Firebase object IDs to ensure that objects are always chronological. The timeago filter is an open source Angular module that I found; you can access it on JS Fiddle.
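Since the timeago module itself is not reproduced in this article, here is a minimal sketch of what such a filter can look like, assuming the receivedTime values are millisecond timestamps (they are written with Number(new Date()) in ChatCtrl). Treat this as an illustrative reimplementation, not the actual module:

angular.module('krakn')
  .filter('timeago', function () {
    // converts a millisecond timestamp into a rough "n min ago" string
    return function (timestamp) {
      var seconds = Math.floor((Date.now() - timestamp) / 1000);
      if (seconds < 60)    { return 'just now'; }
      if (seconds < 3600)  { return Math.floor(seconds / 60) + ' min ago'; }
      if (seconds < 86400) { return Math.floor(seconds / 3600) + ' hours ago'; }
      return Math.floor(seconds / 86400) + ' days ago';
    };
  });

In the view, it is applied like any other filter, for example {{ message.receivedTime | timeago }}.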
The ion-view directive is used once again to contain our chat view:

<ion-view title="Chat">

Our list of messages is composed using the ion-list and ion-item directives, in addition to a couple of key attributes. The ion-list directive gives us some nice interactive controls using the option-buttons and can-swipe attributes, which allow each list item to be swiped to the left, revealing our option buttons:

   <ion-list option-buttons="itemButtons" can-swipe="true" ng-show="messages">

Our workhorse in the chat view is the trusty ng-repeat directive, responsible for rendering each message that travels from Firebase through our service and controller into the view, and back again:

    <ion-item ng-repeat="message in messages | orderByPriority" item="item" can-swipe="true">

Then, we bind our data into vanilla HTML elements that have some custom styles applied to them:

      <h2 class="user">{{ message.user }}</h2>

The third-party timeago filter converts the timestamp into something such as "5 min ago", similar to Instagram or Facebook:

      <small class="time">{{ message.receivedTime | timeago }}</small>
      <p class="message">{{ message.text }}</p>
    </ion-item>
   </ion-list>

A vanilla input element is used to accept chat messages from our users. The input data is bound to $scope.data.newMessage for sending data to Firebase, and $scope.feedback is used to keep our users informed:

   <input type="text" class="{{ feeling }}" placeholder="{{ feedback }}" ng-model="data.newMessage" />

When you click on the send/submit button, the addMessage() function sends the message to your Firebase, adding it to the list of chat messages in real time:

   <button type="submit" id="chat-send" class="button button-small button-clear" ng-click="addMessage()"><span class="ion-android-send"></span></button>
</ion-view>

The ChatCtrl controller is dependent on a few more services than our LoginCtrl, namely syncData, $ionicScrollDelegate, $ionicLoading, and $rootScope:

controller('ChatCtrl', ['$scope', 'syncData', '$ionicScrollDelegate', '$ionicLoading', '$rootScope',
  function($scope, syncData, $ionicScrollDelegate, $ionicLoading, $rootScope) {

The userName variable is derived from the authenticated user's e-mail address (saved within the application's $rootScope) by splitting the e-mail and using everything before the @ symbol:

    var userEmail = $rootScope.auth.user.email,
        userName  = userEmail.split('@');

We avoid the isolated scope issue in the same fashion as we did in LoginCtrl:

    $scope.data = {
      newMessage : null,
      user       : userName[0]
    }

Our view will only contain the latest 20 messages that have been synced from Firebase:

    $scope.messages = syncData('messages', 20);

When a new message is saved/synced, it is added to the bottom of the ng-repeated list, so we use the $ionicScrollDelegate service to automatically scroll the new message into view on the display, as follows:

    $ionicScrollDelegate.scrollBottom(true);

Our default chat input placeholder text is something on your mind?:

    $scope.feedback = 'something on your mind?';
    // displayed as a class on the chat input placeholder
    $scope.feeling = 'stable';

If we have a new message and a valid (shortened) username, we call the $add() function, which syncs the new message to Firebase and into our view, as follows:

    $scope.addMessage = function() {
      if( $scope.data.newMessage
        && $scope.data.user ) {
        // new data elements cannot be synced without adding them to the Firebase Security Rules
        $scope.messages.$add({
          text         : $scope.data.newMessage,
          user         : $scope.data.user,
          receivedTime : Number(new Date())
        });
        // clean up
        $scope.data.newMessage = null;
On a successful sync, the feedback updates to say Done! What's next?, as shown in the following code snippet (note that the apostrophe must be escaped inside the single-quoted string):

        $scope.feedback = 'Done! What\'s next?';
        $scope.feeling = 'stable';
      }
      else {
        $scope.feedback = 'Please write a message before sending';
        $scope.feeling = 'assertive';
      }
    };

    $ionicScrollDelegate.scrollBottom(true);
  }])

The account view

The account view allows logged-in users to view their current name and e-mail address, and provides them with the ability to update their password and e-mail address. The input fields interact with Firebase in the same way as the chat view does, using the syncData method defined in the firebase module:

<ion-view title="'Account'" left-buttons="leftButtons">

The $scope.user object contains our logged-in user's account credentials, and we bind them into our view as follows:

  <p>{{ user.name }}</p>
  …
  <p>{{ user.email }}</p>

Basic account management functionality is provided within this view, so users can update their e-mail address and/or password if they choose to, using the following code snippet:

  <input type="password" ng-keypress="reset()" ng-model="oldpass"/>
  …
  <input type="password" ng-keypress="reset()" ng-model="newpass"/>
  …
  <input type="password" ng-keypress="reset()" ng-model="confirm"/>

Both the updatePassword() and updateEmail() functions work in much the same fashion as our createAccount() function within the LoginCtrl controller. They check whether the new e-mail address or password differs from the old one, and if all is well, they sync the change to Firebase and back again:

  <button class="button button-block button-calm" ng-click="updatePassword()">update password</button>
  …
  <p class="error" ng-show="err">{{err}}</p>
  <p class="good" ng-show="msg">{{msg}}</p>
  …
  <input type="text" ng-keypress="reset()" ng-model="newemail"/>
  …
  <input type="password" ng-keypress="reset()" ng-model="pass"/>
  …
  <button class="button button-block button-calm" ng-click="updateEmail()">update email</button>
  …
  <p class="error" ng-show="emailerr">{{emailerr}}</p>
  <p class="good" ng-show="emailmsg">{{emailmsg}}</p>
  …
</ion-view>

The menu view

Within krakn/app/scripts/app.js, the menu route is defined as the only abstract state. Because it is abstract, it can be presented in the app alongside all the other views by the ion-side-menus directive provided by Ionic. You might have noticed that only two menu options are available before signing into the application and that the rest appear only after authenticating. This is achieved using the ng-show-auth directive on the chat, account, and log out menu items; a sketch of how such a directive might work follows the next paragraph.

The majority of the options for Ionic's directives are available through attributes, making them simple to use. For example, take a look at the animation="slide-left-right" attribute. You will find Ionic's use of custom attributes within its directives to be one of the ways the Ionic Framework sets itself apart from other options in this space.
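Note that ng-show-auth is a custom directive rather than part of Angular's core. As a rough, hedged sketch of how such a directive might be implemented (the event names assume AngularFire's $firebaseSimpleLogin broadcasts; the whole block is illustrative, not krakn's actual code):

angular.module('krakn')
  .directive('ngShowAuth', ['$rootScope', function ($rootScope) {
    // shows the element only when the current auth state matches one of the
    // states listed in the attribute, e.g. ng-show-auth="'login'" or
    // ng-show-auth="['logout','error']"
    return {
      restrict: 'A',
      link: function (scope, el, attrs) {
        var expected = [].concat(scope.$eval(attrs.ngShowAuth));
        function update(state) {
          el.toggleClass('ng-hide', expected.indexOf(state) === -1);
        }
        update('logout'); // assume logged out until told otherwise
        $rootScope.$on('$firebaseSimpleLogin:login',  function () { update('login');  });
        $rootScope.$on('$firebaseSimpleLogin:logout', function () { update('logout'); });
        $rootScope.$on('$firebaseSimpleLogin:error',  function () { update('error');  });
      }
    };
  }]);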
The ion-side-menus directive contains our menu and content panes, in a similar fashion to the ion-view directive we covered previously, as follows:

<ion-side-menus>
  <ion-pane ion-side-menu-content>
    <ion-nav-bar class="bar-positive">

Our back button is displayed by including the ion-nav-back-button directive within the ion-nav-bar directive:

      <ion-nav-back-button class="button-clear"><i class="icon ion-chevron-left"></i> Back</ion-nav-back-button>
    </ion-nav-bar>

Animations within Ionic are exposed through the animation attribute, which is built atop the ngAnimate module. In this case, we use a simple animation that replicates the experience of a native mobile app:

    <ion-nav-view name="menuContent" animation="slide-left-right"></ion-nav-view>
  </ion-pane>

  <ion-side-menu side="left">
    <header class="bar bar-header bar-positive">
      <h1 class="title">Menu</h1>
    </header>
    <ion-content class="has-header">

A simple ion-list directive/element is used to display our navigation items in a vertical list. The ng-show-auth attribute handles the display of menu items before and after a user has authenticated. Before a user logs in, they can access the navigation, but only the About and Log In views are available until after successful authentication:

      <ion-list>
        <ion-item nav-clear menu-close href="#/app/chat" ng-show-auth="'login'">
          Chat
        </ion-item>

        <ion-item nav-clear menu-close href="#/app/about">
          About
        </ion-item>

        <ion-item nav-clear menu-close href="#/app/login" ng-show-auth="['logout','error']">
          Log In
        </ion-item>

The Log Out navigation item is only displayed once the user is logged in; when clicked, it calls the logout() function in addition to navigating to the login view:

        <ion-item nav-clear menu-close href="#/app/login" ng-click="logout()" ng-show-auth="'login'">
          Log Out
        </ion-item>
      </ion-list>
    </ion-content>
  </ion-side-menu>
</ion-side-menus>

The MenuCtrl controller is the simplest controller in this application, as all it contains is the toggleMenu() and logout() functions:

controller("MenuCtrl", ['$scope', 'loginService', '$location', '$ionicScrollDelegate',
  function($scope, loginService, $location, $ionicScrollDelegate) {
    $scope.toggleMenu = function() {
      $scope.sideMenuController.toggleLeft();
    };

    $scope.logout = function() {
      loginService.logout();
      $scope.toggleMenu();
    };
  }])

The about view

The about view is 100 percent static; its only real purpose is to present the credits for all the open source projects used in the application.

Global controller constants

All of krakn's controllers share only two dependencies: ionic and ngAnimate. Because Firebase's modules are defined within /app/scripts/app.js, they are available for consumption by all the controllers without the need to define them as dependencies. Therefore, the firebase service's syncData and loginService are available to ChatCtrl and LoginCtrl for use.

The syncData service is how krakn utilizes the three-way data binding provided by Firebase and AngularFire. For example, within the ChatCtrl controller, we use syncData('messages', 20) to bind the latest twenty messages within the messages collection to $scope for consumption by the chat view. Conversely, when a user clicks the submit button (wired up with ng-click), we write the data to the messages collection by use of the syncData.$add() method inside the $scope.addMessage() function:

$scope.addMessage = function() {
  if(...) {
    $scope.messages.$add({ ... });
  }
};
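The syncData factory itself is defined within the firebase service (covered in the next section) rather than reproduced in this article. As a rough sketch, assuming the AngularFire 0.x-era $firebase API that matches the $add() usage above, it might look something like the following; the module name and FBURL constant are illustrative assumptions:

angular.module('firebase.utils', ['firebase'])
  .factory('syncData', ['$firebase', function ($firebase) {
    var FBURL = 'https://krakn.firebaseio.com'; // illustrative constant

    // returns an AngularFire collection bound to the given path,
    // optionally limited to the most recent `limit` records
    return function (path, limit) {
      var ref = new Firebase(FBURL).child(path);
      if (limit) {
        ref = ref.limit(limit);
      }
      return $firebase(ref);
    };
  }]);

This is what makes the ChatCtrl one-liner, $scope.messages = syncData('messages', 20), possible: any change on the client is written back through the same bound reference.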
Models and services

The model for krakn is the Firebase instance at krakn.firebaseio.com. The services that consume krakn's Firebase API are as follows:

- The firebase service in krakn/app/scripts/service.firebase.js
- The login service in krakn/app/scripts/service.login.js
- The changeEmail service in krakn/app/scripts/changeEmail.firebase.js

The firebase service defines the syncData service, which is responsible for routing data bidirectionally between krakn/app/bower_components/angularfire.js and our controllers. Please note that the reason angularfire.js has not been mentioned until this point is that it is essentially an abstract data-translation layer between firebaseio.com and Angular applications that intend to consume data as a service.

Predeployment

Once the majority of an application's development phase has been completed, at least for the initial launch, it is important to run all of the code through a build process that optimizes the file size through compression of images and minification of text files. This piece of the workflow was not overlooked by Yeoman and is available through the $ grunt build command.

As mentioned in the section on Grunt, the /Gruntfile.js file defines where built code is placed once it is optimized for deployment. Yeoman's default location for built code is the /dist folder, which might or might not exist, depending on whether you have run the grunt build command before.

Summary

In this article, we discussed the tool stack and workflow used to build the app. Together, Git and Yeoman formed a solid foundation for building krakn. Git and GitHub provided us with distributed version control and a platform for sharing the application's source code with you and the world. Yeoman facilitated the remainder of the workflow: scaffolding with Yo, automation with Grunt, and package management with Bower. With our app fully scaffolded, we were able to build our interface with the directives provided by the Ionic Framework and wire up the real-time data synchronization forged by our Firebase instance. With a few key tools, we were able to minimize our development time while maximizing our return.

Resources for Article:

Further resources on this subject:
- Role of AngularJS? [article]
- AngularJS Project [article]
- Creating Our First Animation AngularJS [article]