
How-To Tutorials - Front-End Web Development

341 Articles

Making an App with React and Material Design

Soham Kamani
21 Mar 2016
7 min read
There has been much progress in the hybrid app development space, and in React.js. Currently, almost all hybrid apps use Cordova to build and run web applications on their platform of choice. Although learning React can be a steep curve, the benefit is that you are forced to make your code more modular, which leads to huge long-term gains. This is great for developing applications for the browser, but when it comes to mobile apps, most web apps fall short because they fail to create the "native" experience that so many users know and love. Implementing these features on your own (by playing around with CSS and JavaScript) may work, but it's a huge pain for even something as simple as a material-design-oriented button. Fortunately, there is a library of React components to help us get the look and feel of material design in our web application, which can then be ported to a mobile device for a native look and feel. This post will take you through all the steps required to build a mobile app with React and then port it to your phone using Cordova.

Prerequisites and dependencies

Globally, you will require Cordova, which can be installed by executing this line:

```shell
npm install -g cordova
```

Now that this is done, make a new directory for your project and set up a build environment to use ES6 and JSX. Currently, webpack is the most popular build system for React, but if that's not to your taste, there are many other build systems out there. Once you have your project folder set up, install React as well as all the other libraries you will be needing:

```shell
npm init
npm install --save react react-dom material-ui react-tap-event-plugin
```

Making your app

Once we're done, the app should look something like this. If you just want to get your hands dirty, you can find the source files here.
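The build setup mentioned above is left open-ended, so here is a minimal webpack configuration sketch for an ES6/JSX build. The loader name and preset options are assumptions based on the webpack 1.x/Babel tooling of this post's era, not the exact config used by the author; adjust them for your own setup.

```javascript
// webpack.config.js — minimal sketch, assuming babel-loader with the
// es2015 and react presets installed alongside webpack 1.x.
module.exports = {
  entry: './index.js',          // the app's entry point from this post
  output: {
    path: __dirname,
    filename: 'bundle.js'       // the file referenced by index.html
  },
  module: {
    loaders: [
      {
        test: /\.jsx?$/,        // transpile both .js and .jsx files
        exclude: /node_modules/,
        loader: 'babel',
        query: { presets: ['es2015', 'react'] }
      }
    ]
  }
};
```

With this in place, running `webpack` from the project root would produce the single `bundle.js` that the HTML file loads.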
Like all web applications, your app will start with an index.html file:

```html
<html>
  <head>
    <title>My Mobile App</title>
  </head>
  <body>
    <div id="app-node"></div>
    <script src="bundle.js"></script>
  </body>
</html>
```

Yup, that's it. If you are using webpack, your CSS will be included in the bundle.js file itself, so there's no need for "style" tags either. This is the only HTML you will need for your application. Next, let's take a look at index.js, the entry point to the application code:

```js
//index.js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './app.jsx';

const node = document.getElementById('app-node');

ReactDOM.render(<App/>, node);
```

What this does is grab the main App component and attach it to the app-node DOM node. Drilling down further, let's look at the app.jsx file:

```js
//app.jsx
'use strict';
import React from 'react';
import AppBar from 'material-ui/lib/app-bar';
import MyTabs from './my-tabs.jsx';

let App = React.createClass({
  render: function () {
    return (
      <div>
        <AppBar title="My App" />
        <MyTabs />
      </div>
    );
  }
});

module.exports = App;
```

Following React's philosophy of structuring our code, we can roughly break our app down into two parts: the title bar and the tabs below it. The title bar is more straightforward and fetched directly from the material-ui library. All we have to do is supply a "title" property to the AppBar component.
MyTabs is another component that we have made, put in a different file because of its complexity:

```js
//my-tabs.jsx
'use strict';
import React from 'react';
import Tabs from 'material-ui/lib/tabs/tabs';
import Tab from 'material-ui/lib/tabs/tab';
import Slider from 'material-ui/lib/slider';
import Checkbox from 'material-ui/lib/checkbox';
import DatePicker from 'material-ui/lib/date-picker/date-picker';
import injectTapEventPlugin from 'react-tap-event-plugin';

injectTapEventPlugin();

const styles = {
  headline: {
    fontSize: 24,
    paddingTop: 16,
    marginBottom: 12,
    fontWeight: 400
  }
};

const TabsSimple = React.createClass({
  render: () => (
    <Tabs>
      <Tab label="Item One">
        <div>
          <h2 style={styles.headline}>Tab One Template Example</h2>
          <p>This is the first tab.</p>
          <p>This is to demonstrate how easy it is to build mobile apps with react</p>
          <Slider name="slider0" defaultValue={0.5}/>
        </div>
      </Tab>
      <Tab label="Item 2">
        <div>
          <h2 style={styles.headline}>Tab Two Template Example</h2>
          <p>This is the second tab</p>
          <Checkbox name="checkboxName1" value="checkboxValue1" label="Installed Cordova"/>
          <Checkbox name="checkboxName2" value="checkboxValue2" label="Installed React"/>
          <Checkbox name="checkboxName3" value="checkboxValue3" label="Built the app"/>
        </div>
      </Tab>
      <Tab label="Item 3">
        <div>
          <h2 style={styles.headline}>Tab Three Template Example</h2>
          <p>Choose a Date:</p>
          <DatePicker hintText="Select date"/>
        </div>
      </Tab>
    </Tabs>
  )
});

module.exports = TabsSimple;
```

This file has quite a lot going on, so let's break it down step by step:

- We import all the components that we're going to use in our app. This includes tabs, sliders, checkboxes, and datepickers.
- injectTapEventPlugin is a plugin that we need in order to get tab switching to work.
- We decide the style used for our tabs.
- Next, we make our Tabs React component, which consists of three tabs:
  - The first tab has some text along with a slider.
  - The second tab has a group of checkboxes.
  - The third tab has a pop-up datepicker.
Each component has a few keys that are specific to it (such as the initial value of the slider, the value reference of the checkbox, or the placeholder for the datepicker). There are many more properties you can assign, specific to each component.

Building your app

For building on Android, you will first need to install the Android SDK. Now that we have all the code in place, all that is left is building the app. For this, make a new directory, start a new Cordova project, and add the Android platform by running the following in your terminal:

```shell
mkdir my-cordova-project
cd my-cordova-project
cordova create .
cordova platform add android
```

Once the installation is complete, build the code we wrote previously. If you are using the same build system as the source code, you will have only two files, index.html and bundle.min.js. Delete all the files currently present in the www folder of your Cordova project and copy those two files there instead. You can check whether your app is working on your computer by running `cordova serve` and going to the appropriate address in your browser. If all is well, you can build and deploy your app:

```shell
cordova build android
cordova run android
```

This will build and install the app on your Android device (provided it is in debug mode and connected to your computer). Similarly, you can build and install the same app for iOS or Windows (you may need additional tools such as Xcode or .NET for iOS or Windows). You can also use any other framework to build your mobile app. The Angular framework also comes with its own set of material design components.

About the Author

Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT.


Classes and Instances of Ember Object Model

Packt
05 Feb 2016
12 min read
In this article by Erik Hanchett, author of the book Ember.js Cookbook, we will look at classes and instances in the Ember object model. Ember.js is an open-source JavaScript framework that will make you more productive. It uses common idioms and practices, making it simple to create amazing single-page applications. It also lets you create code in a modular way using the latest JavaScript features. Not only that, it has a great set of APIs for getting any task done. The Ember.js community welcomes newcomers and is ready to help you when required. (For more resources related to this topic, see here.)

Working with classes and instances

Creating and extending classes is a major feature of the Ember object model. In this recipe, we'll take a look at how creating and extending objects works.

How to do it

Let's begin by creating a very simple Ember class using extend(), as follows:

```js
const Light = Ember.Object.extend({
  isOn: false
});
```

This defines a new Light class with an isOn property. Light inherits properties and behavior from Ember.Object, such as initializers, mixins, and computed properties.

Ember Twiddle tip: At some point, you might need to test out small snippets of Ember code. An easy way to do this is to use a website called Ember Twiddle. From that website, you can create an Ember application and run it in the browser as if you were using Ember CLI. You can even save and share it. It has tools similar to JSFiddle, but only for Ember. Check it out at http://ember-twiddle.com.

Once you have defined a class, you'll need to be able to create an instance of it. You can do this by using the create() method. We'll go ahead and create an instance of Light:

```js
const bulb = Light.create();
```

Accessing properties within the bulb instance

We can access the properties of the bulb object using the set and get accessor methods. Let's go ahead and get the isOn property, as follows:

```js
console.log(bulb.get('isOn'));
```

The preceding code will get the isOn property from the bulb instance.
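To make the extend/create/get/set pattern concrete outside the framework, here is a tiny plain-JavaScript stand-in for it. This is an illustration of the pattern only — it is not Ember's actual implementation, and the `extend` helper is hypothetical:

```javascript
// Illustrative sketch of extend()/create()/get()/set() — NOT Ember's code.
function extend(props) {
  function Klass(overrides) {
    // Copy class defaults, then any per-instance overrides.
    Object.assign(this, props, overrides);
  }
  Klass.create = function (overrides) { return new Klass(overrides); };
  Klass.prototype.get = function (key) { return this[key]; };
  Klass.prototype.set = function (key, value) { this[key] = value; return value; };
  return Klass;
}

const Light = extend({ isOn: false });
const bulb = Light.create();
console.log(bulb.get('isOn')); // false
bulb.set('isOn', true);
console.log(bulb.get('isOn')); // true
```

The real Ember.Object adds much more (initializers, mixins, computed properties, observers), but the surface shape shown above is the same one used throughout this recipe.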
To change the isOn property, we can use the set accessor method:

```js
bulb.set('isOn', true);
```

The isOn property will now be set to true instead of false.

Initializing the Ember object

The init method is invoked whenever a new instance is created. This is a great place to put any code that you may need for the new instance. In our example, we'll add an alert message that displays the default setting of the isOn property:

```js
const Light = Ember.Object.extend({
  init() {
    alert('The isOn property is defaulted to ' + this.get('isOn'));
  },
  isOn: false
});
```

As soon as the Light.create line of code is executed, the instance is created and this message pops up on the screen: "The isOn property is defaulted to false".

Subclasses

Be aware that you can create subclasses of your objects in Ember. You can override methods and access the parent class by using the _super() method. This is done by creating a new object that uses the Ember extend method on the parent class. Another important thing to realize is that if you're subclassing a framework class such as Ember.Component and you override the init method, you'll need to make sure that you call this._super(). If not, the component may not work properly.

Reopening classes

At any time, you can reopen a class and define new properties or methods in it. For this, use the reopen method. In our previous example, we had an isOn property. Let's reopen the same class and add a color property, as follows:

```js
Light.reopen({
  color: 'yellow'
});
```

If required, you can add static methods or properties using reopenClass, as follows:

```js
Light.reopenClass({
  wattage: 40
});
```

You can now access the static property:

```js
Light.wattage
```

How it works

In the preceding examples, we created an Ember object using extend. This tells Ember to create a new Ember class. The extend method uses inheritance in the Ember.js framework.
The Light object inherits all the methods and bindings of the Ember object. The create method also comes from the Ember object class and returns a new instance of that class. The bulb object is the new instance of the Ember object that we created.

There's more

To use the previous examples, we can create our own module and import it into our project. To do this, create a new myObject.js file in the app folder, as follows:

```js
// app/myObject.js
import Ember from 'ember';

export default function() {
  const Light = Ember.Object.extend({
    init() {
      alert('The isOn property is defaulted to ' + this.get('isOn'));
    },
    isOn: false
  });

  Light.reopen({
    color: 'yellow'
  });

  Light.reopenClass({
    wattage: 80
  });

  const bulb = Light.create();

  console.log(bulb.get('color'));
  console.log(Light.wattage);
}
```

This is the module that we can now import into any file of our Ember application. In the app folder, edit the app.js file. You'll need to add the following line at the top of the file:

```js
// app/app.js
import myObject from './myObject';
```

At the bottom, before the export, add the following line:

```js
myObject();
```

This will execute the myObject function that we created in the myObject.js file. After running the Ember server, you'll see the pop-up message saying the isOn property is defaulted to false.

Working with computed properties

In this recipe, we'll take a look at computed properties and how they can be used to display data, even if that data changes as the application is running.

How to do it

Let's create a new Ember.Object and add a computed property to it. Begin by creating a new description computed property.
This property will reflect the status of the isOn and color properties:

```js
const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',

  description: Ember.computed('isOn', 'color', function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  })
});
```

We can now create a new Light object and get the computed description property:

```js
const bulb = Light.create();

bulb.get('description'); //The yellow light is set to false
```

The preceding example creates a computed property that depends on the isOn and color properties. When the description function is called, it returns a string describing the state of the light. Computed properties observe changes and update dynamically whenever they occur. To see this in action, we can change the preceding example and set the isOn property to true:

```js
bulb.set('isOn', true);

bulb.get('description'); //The yellow light is set to true
```

The description has been automatically updated and will now display that the yellow light is set to true.

Chaining the Light object

Ember provides a nice feature that allows computed properties to be used inside other computed properties.
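The caching behavior behind computed properties can be sketched in plain JavaScript. The `Model` class below is a hypothetical illustration of the idea — a derived value recomputed only when a dependency has changed since the last read — and is not how Ember actually implements it:

```javascript
// Minimal sketch (NOT Ember's implementation) of a cached computed property.
class Model {
  constructor(props) {
    this.props = { ...props };
    this.dirty = true;       // force a compute on the first read
    this.cache = undefined;
    this.computeCount = 0;   // counts real recomputations, to show caching
  }
  get(key) { return this.props[key]; }
  set(key, value) {
    this.props[key] = value;
    this.dirty = true;       // a dependency changed: invalidate the cache
  }
  description() {
    if (this.dirty) {
      this.computeCount++;
      this.cache = `The ${this.get('color')} light is set to ${this.get('isOn')}`;
      this.dirty = false;
    }
    return this.cache;       // cached value when nothing changed
  }
}

const lamp = new Model({ isOn: false, color: 'yellow' });
console.log(lamp.description()); // "The yellow light is set to false"
lamp.description();              // served from cache; computeCount stays 1
lamp.set('isOn', true);
console.log(lamp.description()); // "The yellow light is set to true"
```

Reading `description()` twice without a `set` in between only computes once, which mirrors the "result is cached" behavior described later in the How it works section.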
In the previous example, we created a description property that output some basic information about the Light object. Let's add another property that gives a full description:

```js
const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',
  age: null,

  description: Ember.computed('isOn', 'color', function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  }),

  fullDescription: Ember.computed('description', 'age', function() {
    return this.get('description') + ' and the age is ' + this.get('age');
  })
});
```

The fullDescription function returns a string that concatenates the output from description with a new string that displays the age:

```js
const bulb = Light.create({ age: 22 });

bulb.get('fullDescription'); //The yellow light is set to false and the age is 22
```

In this example, during instantiation of the Light object, we set the age to 22. We can overwrite any property if required.

Alias

The Ember.computed.alias method allows us to create a property that is an alias for another property or object. Any call to get or set will behave as if the change were made to the original property:

```js
const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',
  age: null,

  description: Ember.computed('isOn', 'color', function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  }),

  fullDescription: Ember.computed('description', 'age', function() {
    return this.get('description') + ' and the age is ' + this.get('age');
  }),

  aliasDescription: Ember.computed.alias('fullDescription')
});

const bulb = Light.create({ age: 22 });

bulb.get('aliasDescription'); //The yellow light is set to false and the age is 22
```

The aliasDescription property displays the same text as fullDescription, since it's just an alias of that property. If we later made changes to any properties in the Light object, the alias would also observe those changes and be computed properly.
How it works

Computed properties are built on top of the observer pattern. Whenever an observation shows a state change, the output is recomputed. If no changes occur, the result is cached. In other words, computed properties are functions that get updated whenever any of their dependent values change. You can use one in the same way that you would use a static property. They are common and useful throughout Ember and its codebase. Keep in mind that a computed property will only update if it is in a template or function that is being used. If the function or template is not being called, nothing will occur. This helps with performance.

Working with Ember observers in Ember.js

Observers are fundamental to the Ember object model. In the next recipe, we'll take our light example, add an observer, and see how it operates.

How to do it

To begin, we'll add a new isOnChanged observer. This will only trigger when the isOn property changes:

```js
const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',
  age: null,

  description: Ember.computed('isOn', 'color', function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  }),

  fullDescription: Ember.computed('description', 'age', function() {
    return this.get('description') + ' and the age is ' + this.get('age');
  }),

  desc: Ember.computed.alias('description'),

  isOnChanged: Ember.observer('isOn', function() {
    console.log('isOn value changed');
  })
});

const bulb = Light.create({ age: 22 });

bulb.set('isOn', true); //console logs isOn value changed
```

The Ember.observer isOnChanged monitors the isOn property. If any change occurs to this property, isOnChanged is invoked.

Computed properties vs. observers

At first glance, it might seem that observers are the same as computed properties. In fact, they are very different. Computed properties can use the get and set methods and can be used in templates. Observers, on the other hand, just monitor property changes.
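The core observer mechanic — callbacks registered per key that fire when `set` changes that key — can be sketched in a few lines of plain JavaScript. The `observable` helper here is a hypothetical illustration, not Ember's API:

```javascript
// Plain-JS sketch of the observer idea — NOT Ember's implementation.
function observable(props) {
  const observers = {}; // key -> array of callbacks
  return {
    get(key) { return props[key]; },
    set(key, value) {
      props[key] = value;
      // Notify every observer registered for this key.
      (observers[key] || []).forEach((fn) => fn(value));
    },
    observe(key, fn) {
      (observers[key] = observers[key] || []).push(fn);
    }
  };
}

const lamp = observable({ isOn: false });
const log = [];
lamp.observe('isOn', (v) => log.push('isOn changed to ' + v));
lamp.set('isOn', true);
console.log(log); // ['isOn changed to true']
```

Note that, as in Ember, the callback fires synchronously inside `set` — which is exactly what leads to the synchronization pitfalls discussed below.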
They can neither be used in templates nor be accessed like properties, and they don't return any values. With that said, be careful not to overuse observers; in many cases, a computed property is the more appropriate solution. Also, if required, you can have one observer monitor multiple properties. Just use the following:

```js
Light.reopen({
  isAnythingChanged: Ember.observer('isOn', 'color', function() {
    console.log('isOn or color value changed');
  })
});

const bulb = Light.create({ age: 22 });

bulb.set('isOn', true);     // console logs isOn or color value changed
bulb.set('color', 'blue');  // console logs isOn or color value changed
```

The isAnythingChanged observer is invoked whenever the isOn or color property changes. The observer fires twice, as each property has changed once.

Synchronization issues with the Light object and observers

It's very easy to get observers out of sync. If a property that an observer watches changes, the observer is invoked as expected, but it may then read another property that hasn't been updated yet. Because everything happens synchronously, this can cause synchronization issues. The following example shows this behavior:

```js
Light.reopen({
  checkIsOn: Ember.observer('isOn', function() {
    console.log(this.get('fullDescription'));
  })
});

const bulb = Light.create({ age: 22 });

bulb.set('isOn', true);
```

When isOn is changed, it's not clear whether fullDescription, a computed property, has been updated yet. As observers work synchronously, it's difficult to tell what has been fired and changed. This can lead to unexpected behavior. To counter this, it's best to use the Ember.run.once method. This method is part of the Ember run loop, which is Ember's way of managing how code is executed.
Reopen the Light object and you can see the following occurring:

```js
Light.reopen({
  checkIsOn: Ember.observer('isOn', 'color', function() {
    Ember.run.once(this, 'checkChanged');
  }),

  checkChanged: Ember.observer('description', function() {
    console.log(this.get('description'));
  })
});

const bulb = Light.create({ age: 22 });

bulb.set('isOn', true);
bulb.set('color', 'blue');
```

The checkIsOn observer calls the checkChanged observer using Ember.run.once. This method is run only once per run loop. Normally, checkChanged would fire twice; however, since it's called using Ember.run.once, it only outputs once.

How it works

Ember observers are mixins from the Ember.Observable class. They work by monitoring property changes. When any change occurs, they are triggered. Keep in mind that they are not the same as computed properties and cannot be used in templates or with getters or setters.

Summary

In this article, you learned about classes and instances. You also learned about computed properties and how they can be used to display data.

Resources for Article:

Further resources on this subject:

- Introducing the Ember.JS framework [article]
- Building Reusable Components [article]
- Using JavaScript with HTML [article]


Angular 2 Components: What You Need to Know

David Meza
08 Jun 2016
10 min read
The Angular team introduced quite a few changes in version 2 of the framework, and components are one of the most important. If you are familiar with Angular 1 applications, components are in fact a form of directive, extended with template-oriented features. In addition, components are optimized for better performance and simpler configuration than directives, as Angular doesn't support all of a directive's features in them. Also, while a component is technically a directive, it is so distinctive and central to Angular 2 applications that you'll often find it treated as a separate ingredient in an application's architecture.

So, what is a component? In simple words, a component is a building block of an application that controls a part of your screen real estate, or your "view". It does one thing, and it does it well. For example, you may have a component to display a list of active chats in a messaging app (which, in turn, may have child components to display the details of the chat or the actual conversation). Or you may have an input field that uses Angular's two-way data binding to keep your markup in sync with your JavaScript code. Or, at the most elementary level, you may have a component that substitutes an HTML template with no special functionality, just because you wanted to break something complex down into smaller, more manageable parts.

Now, I don't believe too much in learning something by only reading about it, so let's get your hands dirty and write your own component to see some sample usage. I will assume that you already have TypeScript installed and have done the initial configuration required for any Angular 2 app.
If you haven't, you can check out how to do so by clicking on this link. You may have already seen a component at its most basic level:

```ts
import {Component} from 'angular2/core';

@Component({
  selector: 'my-app',
  template: '<h1>{{ title }}</h1>'
})
export class AppComponent {
  title = 'Hello World!';
}
```

That's it! That's all you really need to have a component. Three things are happening here:

- You are importing the Component class from the Angular 2 core package.
- You are using a TypeScript decorator to attach some metadata to your AppComponent class. If you don't know what a decorator is, it is simply a function that extends your class with Angular code so that it becomes an Angular component; otherwise, it would just be a plain class with no relation to the Angular framework. In the options, you defined a selector, which is the tag name used in the HTML code so that Angular can find where to insert your component, and a template, which is applied to the inner contents of the selector tag. You may notice that we also used interpolation to bind the component data and display the value of the public variable in the template.
- You are exporting your AppComponent class so that you can import it elsewhere (in this case, in your main script so that you can bootstrap your application).

That's a good start, but let's get into a more complex example that showcases other powerful features of Angular and TypeScript/ES2015. In the following example, I've decided to put everything into one component. However, if you'd like to stick to best practices and divide the code into different components and services, or if you get lost at any point, you can check out the finished/refactored example here. Without any further ado, let's make a quick page that displays a list of products.
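Since a decorator is, as described above, "simply a function that extends your class", the idea can be demonstrated without Angular at all. The sketch below is a hypothetical simplification — it only attaches metadata to the class and is not Angular's actual decorator machinery:

```javascript
// Simplified sketch of what a component decorator boils down to:
// a function that receives the class and attaches framework metadata.
// NOT Angular's real implementation.
function Component(options) {
  return function (target) {
    target.annotations = {
      selector: options.selector,
      template: options.template
    };
    return target;
  };
}

class AppComponent {
  constructor() {
    this.title = 'Hello World!';
  }
}

// Without native @decorator syntax, the decorator is applied manually:
const Decorated = Component({
  selector: 'my-app',
  template: '<h1>{{ title }}</h1>'
})(AppComponent);

console.log(Decorated.annotations.selector); // 'my-app'
```

A framework can then read `annotations` off the class to decide where to mount it and what to render — which is conceptually what happens when Angular processes `@Component`.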
Let's start with the index:

```html
<html>
  <head>
    <title>Products</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <script src="node_modules/es6-shim/es6-shim.min.js"></script>
    <script src="node_modules/systemjs/dist/system-polyfills.js"></script>
    <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script>
    <script src="node_modules/systemjs/dist/system.src.js"></script>
    <script src="node_modules/rxjs/bundles/Rx.js"></script>
    <script src="node_modules/angular2/bundles/angular2.dev.js"></script>
    <link rel="stylesheet" href="styles.css">
    <script>
      System.config({
        packages: {
          app: {
            format: 'register',
            defaultExtension: 'js'
          }
        }
      });
      System.import('app/main')
        .then(null, console.error.bind(console));
    </script>
  </head>
  <body>
    <my-app>Loading...</my-app>
  </body>
</html>
```

There's nothing out of the ordinary going on here. You are just importing all of the necessary scripts for your application to work, as demonstrated in the quick-start. The app/main.ts file should already look somewhat similar to this:

```ts
import {bootstrap} from 'angular2/platform/browser';
import {AppComponent} from './app.component';

bootstrap(AppComponent);
```

Here, we imported the bootstrap function from the Angular 2 package and the AppComponent class from the local directory, then initialized the application. First, create a Product class that defines the constructor and type definition of any products made. Create app/product.ts, as follows:

```ts
export class Product {
  id: number;
  price: number;
  name: string;
}
```

Next, you will create an app.component.ts file, which is where the magic happens. I've decided to put everything in here for demonstration purposes, but ideally, you would want to extract the products array into its own service, the HTML template into its own file, and the product details into their own component.
This is how the component will look:

```ts
import {Component} from 'angular2/core';
import {Product} from './product';

@Component({
  selector: 'my-app',
  template: `
    <h1>{{title}}</h1>
    <ul class="products">
      <li *ngFor="#product of products"
        [class.selected]="product === selectedProduct"
        (click)="onSelect(product)">
        <span class="badge">{{product.id}}</span> {{product.name}}
      </li>
    </ul>
    <div *ngIf="selectedProduct">
      <h2>{{selectedProduct.name}} details!</h2>
      <div><label>id: </label>{{selectedProduct.id}}</div>
      <div><label>Price: </label>{{selectedProduct.price | currency: 'USD': true }}</div>
      <div>
        <label>name: </label>
        <input [(ngModel)]="selectedProduct.name" placeholder="name"/>
      </div>
    </div>
  `,
  styleUrls: ['app/app.component.css']
})
export class AppComponent {
  title = 'My Products';
  products = PRODUCTS;
  selectedProduct: Product;

  onSelect(product: Product) {
    this.selectedProduct = product;
  }
}

const PRODUCTS: Product[] = [
  { "id": 1, "price": 45.12, "name": "TV Stand" },
  { "id": 2, "price": 25.12, "name": "BBQ Grill" },
  { "id": 3, "price": 43.12, "name": "Magic Carpet" },
  { "id": 4, "price": 12.12, "name": "Instant liquidifier" },
  { "id": 5, "price": 9.12, "name": "Box of puppies" },
  { "id": 6, "price": 7.34, "name": "Laptop Desk" },
  { "id": 7, "price": 5.34, "name": "Water Heater" },
  { "id": 8, "price": 4.34, "name": "Smart Microwave" },
  { "id": 9, "price": 93.34, "name": "Circus Elephant" },
  { "id": 10, "price": 87.34, "name": "Tinted Window" }
];
```

The app/app.component.css file will look something like this:

```css
.selected {
  background-color: #CFD8DC !important;
  color: white;
}
.products {
  margin: 0 0 2em 0;
  list-style-type: none;
  padding: 0;
  width: 15em;
}
.products li {
  position: relative;
  min-height: 2em;
  cursor: pointer;
  left: 0;
  background-color: #EEE;
  margin: .5em;
  padding: .3em 0;
  border-radius: 4px;
  font-size: 16px;
  overflow: hidden;
  white-space: nowrap;
  text-overflow: ellipsis;
  color: #3F51B5;
  display: block;
  width: 100%;
  -webkit-transition: all 0.3s ease;
  -moz-transition: all 0.3s ease;
  -o-transition: all 0.3s ease;
  -ms-transition: all 0.3s ease;
  transition: all 0.3s ease;
}
.products li.selected:hover {
  background-color: #BBD8DC !important;
  color: white;
}
.products li:hover {
  color: #607D8B;
  background-color: #DDD;
  left: .1em;
  color: #3F51B5;
  text-decoration: none;
  font-size: 1.2em;
  background-color: rgba(0,0,0,0.01);
}
.products .text {
  position: relative;
  top: -3px;
}
.products .badge {
  display: inline-block;
  font-size: small;
  color: white;
  padding: 0.8em 0.7em 0 0.7em;
  background-color: #607D8B;
  line-height: 1em;
  position: relative;
  left: -1px;
  top: 0;
  height: 2em;
  margin-right: .8em;
  border-radius: 4px 0 0 4px;
}
```

I'll explain what is happening:

- We imported Component so that we can decorate our new component, and Product so that we can create an array of products and have access to TypeScript type inference.
- We decorated our component with a "my-app" selector property, which finds <my-app></my-app> tags and inserts our component there. I decided to define the template in this file instead of using a URL so that I can demonstrate how handy the ES2015 template string syntax is (no more long strings or plus-separated strings).
- Finally, the styleUrls property uses an absolute file path, and any styles applied will only affect the template in this scope.

The actual component only has a few properties outside of the decorator configuration. It has a title that you can bind to the template, a products array to iterate over in the markup, a selectedProduct scope variable that initializes as undefined, and an onSelect method that runs every time you click on a list item. Finally, we define a constant PRODUCTS array (const because it's hardcoded and won't change at runtime) to mock the object that would usually be returned by a service after an external request.
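The template's `currency` pipe formats product prices for display. Outside a template, the same kind of formatting can be sketched with the standard `Intl.NumberFormat` API (this is an equivalent illustration, not what Angular's pipe calls internally):

```javascript
// Currency formatting with the standard Intl API, mirroring what the
// `currency: 'USD': true` pipe produces for the product prices above.
const usd = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD'
});

console.log(usd.format(45.12)); // "$45.12"  (the TV Stand's price)
console.log(usd.format(7.34));  // "$7.34"   (the Laptop Desk's price)
```

Creating the formatter once and reusing it for every row is also cheaper than constructing one per call.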
Also worth noting are the following:

- As you are using TypeScript, you can make inferences about what type of data your variables will hold. For example, you may have noticed that I declared the Product type whenever I knew that this is the only kind of object I want to allow for a variable or to be passed to a function.
- Angular 2 has different property prefixes, and if you would like to learn when to use each one, you can check out this Stack Overflow question.

That's it! You now have a somewhat more complex component with a particular functionality. As I previously mentioned, this could be refactored, and the result would look something like this:

```ts
import {Component, OnInit} from 'angular2/core';
import {Product} from './product';
import {ProductDetailComponent} from './product-detail.component';
import {ProductService} from './product.service';

@Component({
  selector: 'my-app',
  templateUrl: 'app/app.component.html',
  styleUrls: ['app/app.component.css'],
  directives: [ProductDetailComponent],
  providers: [ProductService]
})
export class AppComponent implements OnInit {
  title = 'Products';
  products: Product[];
  selectedProduct: Product;

  constructor(private _productService: ProductService) { }

  getProducts() {
    this._productService.getProducts().then(products => this.products = products);
  }

  ngOnInit() {
    this.getProducts();
  }

  onSelect(product: Product) {
    this.selectedProduct = product;
  }
}
```

In this example, you get your product data from a service and separate the product detail template into a child component, which is much more modular. I hope you've enjoyed reading this post.

About this author

David Meza is an AngularJS developer at the City of Raleigh. He is passionate about software engineering and learning new programming languages and frameworks. He is most familiar working with Ruby, Rails, and PostgreSQL on the backend and HTML5, CSS3, JavaScript, and AngularJS on the frontend. He can be found here.

Getting Started with Spring Boot

Packt
03 Jan 2017
17 min read
In this article by Greg Turnquist, author of the book Learning Spring Boot – Second Edition, we will cover the following topics:

- Introduction
- Creating a bare project using http://start.spring.io
- Seeing how to run our app straight inside our IDE with no standalone containers

(For more resources related to this topic, see here.)

Perhaps you've heard about Spring Boot? It has driven one of the biggest explosions in software development in years. Clocking millions of downloads per month, the community has grown rapidly since its debut in 2013.

I hope you're ready for some fun, because we are going to take things to the next level as we use Spring Boot to build a social media platform. We'll explore its many valuable features, from tools designed to speed up development efforts to production-ready support and cloud-native features. Despite some rapid-fire demos you might have caught on YouTube, Spring Boot isn't just for quick demos. Built atop the de facto standard toolkit for Java, the Spring Framework, Spring Boot will help us build this social media platform with lightning speed AND stability.

In this article, we'll get a quick kick-off with Spring Boot using the Java programming language. Maybe that makes you chuckle? People have been dinging Java for years as being slow, bulky, and not the means for agile shops. Together, we'll see how that is not the case.

At any time, if you're interested in a more visual medium, feel free to check out my Learning Spring Boot [Video] at https://www.packtpub.com/application-development/learning-spring-boot-video.

What is step #1 when we get underway with a project? We visit Stack Overflow and look for an example project to base ours on! Seriously, the time spent adapting another project's build file, picking dependencies, and filling in other details about our project adds up.
At the Spring Initializr (http://start.spring.io), we can enter minimal details about our app, pick our favorite build system and the version of Spring Boot we wish to use, and then choose our dependencies off a menu. Click the Download button, and we have a free-standing, ready-to-run application.

In this article, let's take a quick test drive and build a small web app. We can start by picking Gradle from the dropdown. Then, select 1.4.1.BUILD-SNAPSHOT as the version of Spring Boot we wish to use. Next, we need to pick our application's coordinates:

- Group - com.greglturnquist.learningspringboot
- Artifact - learning-spring-boot

Now comes the fun part. We get to pick the ingredients for our application like picking off a delicious menu. If we start typing, for example, "Web", into the Dependencies box, we'll see several options appear. To see all the available options, click on the Switch to the full version link toward the bottom.

There are lots of overrides, such as switching from JAR to WAR, or using an older version of Java. You can also pick Kotlin or Groovy as the primary language for your application. For starters, in this day and age, there is no reason to use anything older than Java 8. And JAR files are the way to go; WAR files are only needed when deploying Spring Boot to an old container.

To build our social media platform, we need a few ingredients, as shown:

- Web (embedded Tomcat + Spring MVC)
- WebSocket
- JPA (Spring Data JPA)
- H2 (embedded SQL data store)
- Thymeleaf template engine
- Lombok (to simplify writing POJOs)

The following diagram shows an overview of these ingredients:

With these items selected, click on Generate Project.

There are LOTS of other tools that leverage this site. For example, IntelliJ IDEA lets you create a new project inside the IDE, giving you the same options shown here. It invokes the web site's REST API and imports your new project. You can also interact with the site via cURL or any other REST-based tool.
Now let's unpack that ZIP file and see what we've got:

- a build.gradle build file
- a Gradle wrapper, so there's no need to install Gradle
- a LearningSpringBootApplication.java application class
- an application.properties file
- a LearningSpringBootApplicationTests.java test class

We built an empty Spring Boot project. Now what? Before we sink our teeth into writing code, let's take a peek at the build file. It's quite terse, but carries some key bits. Starting from the top:

buildscript {
  ext {
    springBootVersion = '1.4.1.BUILD-SNAPSHOT'
  }
  repositories {
    mavenCentral()
    maven { url "https://repo.spring.io/snapshot" }
    maven { url "https://repo.spring.io/milestone" }
  }
  dependencies {
    classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
  }
}

This contains the basis for our project:

- springBootVersion shows us we are using Spring Boot 1.4.1.BUILD-SNAPSHOT
- The Maven repositories it will pull from are listed next
- Finally, we see the spring-boot-gradle-plugin, a critical tool for any Spring Boot project

The first piece, the version of Spring Boot, is important. That's because Spring Boot comes with a curated list of 140 third-party library versions, extending well beyond the Spring portfolio and into some of the most commonly used libraries in the Java ecosystem. By simply changing the version of Spring Boot, we can upgrade all these libraries to newer versions known to work together.

There is an extra project, the Spring IO Platform (https://spring.io/platform), which includes an additional 134 curated versions, bringing the total to 274.

The repositories aren't as critical, but it's important to add the milestone and snapshot repositories if fetching a library not released to Maven Central or hosted on some vendor's local repository. Thankfully, the Spring Initializr does this for us based on the version of Spring Boot selected on the site.

Finally, we have the spring-boot-gradle-plugin (and there is a corresponding spring-boot-maven-plugin for Maven users).
This plugin is responsible for linking Spring Boot's curated list of versions with the libraries we select in the build file. That way, we don't have to specify the version numbers ourselves. Additionally, this plugin hooks into the build phase and bundles our application into a runnable über JAR, also known as a shaded or fat JAR.

Java doesn't provide a standardized way to load nested JAR files into the classpath. Spring Boot provides the means to bundle up third-party JARs inside an enclosing JAR file and properly load them at runtime. Read more at http://docs.spring.io/spring-boot/docs/1.4.1.BUILD-SNAPSHOT/reference/htmlsingle/#executable-jar.

With an über JAR in hand, we need only put it on a thumb drive and carry it to another machine, to a hundred virtual machines in the cloud or your data center, or anywhere else, and it simply runs wherever we can find a JVM.

Peeking a little further down in build.gradle, we can see the plugins that are enabled by default:

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'spring-boot'

- The java plugin indicates the various tasks expected for a Java project
- The eclipse plugin helps generate project metadata for Eclipse users
- The spring-boot plugin is where the actual spring-boot-gradle-plugin is activated

An up-to-date copy of IntelliJ IDEA can read a plain old Gradle build file fine without extra plugins.

Which brings us to the final ingredient used to build our application: dependencies.

Spring Boot starters

No application is complete without specifying dependencies. A valuable facet of Spring Boot is its virtual packages. These are published packages that don't contain any code, but instead simply list other dependencies.
The following list shows all the dependencies we selected on the Spring Initializr site:

dependencies {
  compile('org.springframework.boot:spring-boot-starter-data-jpa')
  compile('org.springframework.boot:spring-boot-starter-thymeleaf')
  compile('org.springframework.boot:spring-boot-starter-web')
  compile('org.springframework.boot:spring-boot-starter-websocket')
  compile('org.projectlombok:lombok')
  runtime('com.h2database:h2')
  testCompile('org.springframework.boot:spring-boot-starter-test')
}

As you'll notice, most of these packages are Spring Boot starters:

- spring-boot-starter-data-jpa pulls in Spring Data JPA, Spring JDBC, Spring ORM, and Hibernate
- spring-boot-starter-thymeleaf pulls in the Thymeleaf template engine along with Spring Web and the Spring dialect of Thymeleaf
- spring-boot-starter-web pulls in Spring MVC, Jackson JSON support, embedded Tomcat, and Hibernate's JSR-303 validators
- spring-boot-starter-websocket pulls in Spring WebSocket and Spring Messaging

These starter packages allow us to quickly grab the bits we need to get up and running. Spring Boot starters have gotten so popular that lots of other third-party library developers are crafting their own.

In addition to starters, we have three extra libraries:

- Project Lombok makes it dead simple to define POJOs without getting bogged down in getters, setters, and other details.
- H2 is an embedded database, allowing us to write tests, noodle out solutions, and get things moving before getting involved with an external database.
- spring-boot-starter-test pulls in Spring Boot Test, JSON Path, JUnit, AssertJ, Mockito, Hamcrest, JSON Assert, and Spring Test, all within test scope.

The value of this last starter, spring-boot-starter-test, cannot be overstated. With a single line, the most powerful test utilities are at our fingertips, allowing us to write unit tests, slice tests, and full-blown our-app-inside-embedded-Tomcat tests.
It's why this starter is included in all projects without checking a box on the Spring Initializr site.

Now, to get things off the ground, we need to shift focus to the tiny bit of code written for us by the Spring Initializr.

Running a Spring Boot application

The fabulous http://start.spring.io website created a tiny class, LearningSpringBootApplication, as shown in the following code:

package com.greglturnquist.learningspringboot;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class LearningSpringBootApplication {

  public static void main(String[] args) {
    SpringApplication.run(
      LearningSpringBootApplication.class, args);
  }
}

This tiny class is actually a fully operational web application!

- The @SpringBootApplication annotation tells Spring Boot, when launched, to scan recursively for Spring components inside this package and register them. It also tells Spring Boot to enable autoconfiguration, a process where beans are automatically created based on classpath settings, property settings, and other factors. Finally, it indicates that this class itself can be a source for Spring bean definitions.
- It holds a public static void main(), a simple method to run the application. There is no need to drop this code into an application server or servlet container. We can just run it straight up, inside our IDE. The amount of time saved by this feature, over the long haul, adds up fast.
- SpringApplication.run() points Spring Boot at the leap-off point: in this case, this very class. But it's possible to run other classes.

This little class is runnable. Right now! In fact, let's give it a shot.
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::  (v1.4.1.BUILD-SNAPSHOT)

2016-09-18 19:52:44.214: Starting LearningSpringBootApplication on ret...
2016-09-18 19:52:44.217: No active profile set, falling back to defaul...
2016-09-18 19:52:44.513: Refreshing org.springframework.boot.context.e...
2016-09-18 19:52:45.785: Bean 'org.springframework.transaction.annotat...
2016-09-18 19:52:46.188: Tomcat initialized with port(s): 8080 (http)
2016-09-18 19:52:46.201: Starting service Tomcat
2016-09-18 19:52:46.202: Starting Servlet Engine: Apache Tomcat/8.5.5
2016-09-18 19:52:46.323: Initializing Spring embedded WebApplicationCo...
2016-09-18 19:52:46.324: Root WebApplicationContext: initialization co...
2016-09-18 19:52:46.466: Mapping servlet: 'dispatcherServlet' to [/]
2016-09-18 19:52:46.469: Mapping filter: 'characterEncodingFilter' to:...
2016-09-18 19:52:46.470: Mapping filter: 'hiddenHttpMethodFilter' to: ...
2016-09-18 19:52:46.470: Mapping filter: 'httpPutFormContentFilter' to...
2016-09-18 19:52:46.470: Mapping filter: 'requestContextFilter' to: [/*]
2016-09-18 19:52:46.794: Building JPA container EntityManagerFactory f...
2016-09-18 19:52:46.809: HHH000204: Processing PersistenceUnitInfo [ name: default ...]
2016-09-18 19:52:46.882: HHH000412: Hibernate Core {5.0.9.Final}
2016-09-18 19:52:46.883: HHH000206: hibernate.properties not found
2016-09-18 19:52:46.884: javassist
2016-09-18 19:52:46.976: HCANN000001: Hibernate Commons Annotations {5...
2016-09-18 19:52:47.169: Using dialect: org.hibernate.dialect.H2Dialect
2016-09-18 19:52:47.358: HHH000227: Running hbm2ddl schema export
2016-09-18 19:52:47.359: HHH000230: Schema export complete
2016-09-18 19:52:47.390: Initialized JPA EntityManagerFactory for pers...
2016-09-18 19:52:47.628: Looking for @ControllerAdvice: org.springfram...
2016-09-18 19:52:47.702: Mapped "{[/error]}" onto public org.springfra...
2016-09-18 19:52:47.703: Mapped "{[/error],produces=[text/html]}" onto...
2016-09-18 19:52:47.724: Mapped URL path [/webjars/**] onto handler of...
2016-09-18 19:52:47.724: Mapped URL path [/**] onto handler of type [c...
2016-09-18 19:52:47.752: Mapped URL path [/**/favicon.ico] onto handle...
2016-09-18 19:52:47.778: Cannot find template location: classpath:/tem...
2016-09-18 19:52:48.229: Registering beans for JMX exposure on startup
2016-09-18 19:52:48.278: Tomcat started on port(s): 8080 (http)
2016-09-18 19:52:48.282: Started LearningSpringBootApplication in 4.57...

Scrolling through the output, we can see several things:

- The banner at the top gives us a readout of the version of Spring Boot. (BTW, you can create your own ASCII art banner by placing either banner.txt or banner.png into src/main/resources/)
- Embedded Tomcat is initialized on port 8080, indicating it's ready for web requests
- Hibernate is online with the H2 dialect enabled
- A few Spring MVC routes are registered, such as /error, /webjars, a favicon.ico, and a static resource handler
- And the wonderful Started LearningSpringBootApplication in 4.571 seconds message

Spring Boot uses embedded Tomcat, so there's no need to install a container on our target machine. Non-web apps don't even require Apache Tomcat. The JAR itself is the new container, which allows us to stop thinking in terms of old-fashioned servlet containers. Instead, we can think in terms of apps. All these factors add up to maximum flexibility in application deployment.

How does Spring Boot use embedded Tomcat, among other things? As mentioned earlier, Spring Boot has autoconfiguration, meaning it has Spring beans that are created based on different conditions. When Spring Boot sees Apache Tomcat on the classpath, it creates an embedded Tomcat instance along with several beans to support it.
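That "create a bean only when a certain class is on the classpath" idea can be sketched in a few lines of plain Java. Everything below (the ClasspathConditions class and its beanIfPresent method) is invented for illustration; Spring Boot's real mechanism is the @ConditionalOnClass annotation evaluated by the container, not code like this.

```java
// Illustration only: a hand-rolled version of the "conditional on classpath"
// idea behind Spring Boot auto-configuration. The names here are invented
// for this sketch and are not part of Spring Boot's API.
import java.util.Optional;
import java.util.function.Supplier;

public class ClasspathConditions {

    // Create a "bean" only when the named class can be loaded, mirroring
    // the decision @ConditionalOnClass makes inside the container.
    public static <T> Optional<T> beanIfPresent(String className, Supplier<T> factory) {
        try {
            Class.forName(className);          // is the library on the classpath?
            return Optional.of(factory.get()); // yes: build the bean
        } catch (ClassNotFoundException e) {
            return Optional.empty();           // no: silently skip it
        }
    }

    public static void main(String[] args) {
        // java.util.ArrayList is always present, so this "bean" is created...
        Optional<String> present =
            beanIfPresent("java.util.ArrayList", () -> "embedded-server-bean");
        // ...while a class that is missing yields no bean at all.
        Optional<String> absent =
            beanIfPresent("org.example.NoSuchServer", () -> "never-built");
        System.out.println(present.isPresent() + " " + absent.isPresent());
    }
}
```

Running the main method prints `true false`: the first bean is created because java.util.ArrayList is always loadable, while the second is skipped. Spring Boot applies the same classpath test to Tomcat, Spring MVC, H2, and the rest.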
When it spots Spring MVC on the classpath, it creates view resolution engines, handler mappers, and a whole host of other beans needed to support it, letting us focus on coding custom routes. With H2 on the classpath, it spins up an in-memory, embedded SQL data store. Spring Data JPA will cause Spring Boot to craft an EntityManager along with everything else needed to start speaking JPA, letting us focus on defining repositories.

At this stage, we have a running web application, albeit an empty one. There are no custom routes and no means to handle data. But we can add some real fast. Let's draft a simple REST controller:

package com.greglturnquist.learningspringboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HomeController {

  @GetMapping
  public String greeting(@RequestParam(required = false,
      defaultValue = "") String name) {
    return name.equals("") ? "Hey!" : "Hey, " + name + "!";
  }
}

Let's examine this tiny REST controller in detail:

- The @RestController annotation indicates that we don't want to render views, but instead write the results straight into the response body.
- @GetMapping is Spring's shorthand annotation for @RequestMapping(method = RequestMethod.GET, ...). In this case, it defaults the route to "/".
- Our greeting() method has one argument: @RequestParam(required = false, defaultValue = "") String name. It indicates that this value can be requested via an HTTP query (?name=Greg), that the query isn't required, and that, in case it's missing, an empty string is supplied.
- Finally, we return one of two messages depending on whether or not name is empty, using Java's classic ternary operator.

If we re-launch the LearningSpringBootApplication in our IDE, we'll see a new entry in the console:

2016-09-18 20:13:08.149: Mapped "{[],methods=[GET]}" onto public java....
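Before pinging the route in a browser, we can sanity-check the controller's response logic on its own. The sketch below lifts the ternary out of HomeController into a plain static method so it runs without Spring; the class name GreetingDemo is ours, not part of the project.

```java
// The greeting logic from HomeController, pulled out into a plain static
// method so it can be exercised without starting Spring. At runtime, Spring
// would pass in the ?name= query value, defaulting to "" when absent.
public class GreetingDemo {

    // Same ternary as the controller: an empty name gets the generic reply.
    public static String greeting(String name) {
        return name.equals("") ? "Hey!" : "Hey, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greeting(""));     // Hey!
        System.out.println(greeting("Greg")); // Hey, Greg!
    }
}
```

Spring's only job here is binding the query parameter and calling the method; the response text itself is pure Java.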
We can then ping our new route in the browser at http://localhost:8080 and http://localhost:8080?name=Greg. Try it out!

That's nice, but since we picked Spring Data JPA, how hard would it be to load some sample data and retrieve it from another route? (Spoiler alert: not hard at all.)

We can start out by defining a simple Chapter entity to capture book details, as shown in the following code:

package com.greglturnquist.learningspringboot;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import lombok.Data;

@Data
@Entity
public class Chapter {

  @Id @GeneratedValue
  private Long id;
  private String name;

  private Chapter() {
    // No one but JPA uses this.
  }

  public Chapter(String name) {
    this.name = name;
  }
}

This little POJO lets us capture details about a chapter of a book, as follows:

- The @Data annotation from Lombok will generate getters, setters, a toString() method, a constructor for all required fields (those marked final), an equals() method, and a hashCode() method.
- The @Entity annotation flags this class as suitable for storing in a JPA data store.
- The id field is marked with JPA's @Id and @GeneratedValue annotations, indicating that this is the primary key and that writing new rows into the corresponding table will create a PK automatically.
- Spring Data JPA will by default create a table named CHAPTER with two columns, ID and NAME.
- The key field is name, which is populated by the publicly visible constructor.
- JPA requires a no-arg constructor, so we have included one, but marked it private so no one but JPA may access it.

To interact with this entity and its corresponding table in H2, we could dig in and start using the autoconfigured EntityManager supplied by Spring Boot. But why do that, when we can declare a repository-based solution? To do so, we'll create an interface defining the operations we need.
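Before moving on to the repository, a quick aside on the @Data annotation used above. The class below spells out by hand roughly the boilerplate Lombok generates for a Chapter-like POJO. It is a dependency-free sketch: the name ManualChapter is ours, and Lombok's real output differs in detail (for example, it also emits a canEqual() method).

```java
import java.util.Objects;

// What Lombok's @Data spares us from writing: the getters, setters,
// equals(), hashCode(), and toString() for a Chapter-like POJO, by hand.
public class ManualChapter {
    private Long id;
    private String name;

    public ManualChapter(String name) { this.name = name; }

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ManualChapter)) return false;
        ManualChapter that = (ManualChapter) o;
        return Objects.equals(id, that.id) && Objects.equals(name, that.name);
    }

    @Override public int hashCode() { return Objects.hash(id, name); }

    @Override public String toString() {
        return "ManualChapter(id=" + id + ", name=" + name + ")";
    }

    public static void main(String[] args) {
        // Two chapters with the same state compare equal, as @Data guarantees.
        System.out.println(new ManualChapter("Quick start with Java")
            .equals(new ManualChapter("Quick start with Java"))); // true
    }
}
```

Seeing the boilerplate written out makes the case for @Data. Now, back to the repository interface.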
Check out this simple interface:

package com.greglturnquist.learningspringboot;

import org.springframework.data.repository.CrudRepository;

public interface ChapterRepository
    extends CrudRepository<Chapter, Long> {
}

This declarative interface creates a Spring Data repository as follows:

- CrudRepository extends Repository, a Spring Data Commons marker interface that signals Spring Data to create a concrete implementation while also capturing domain information.
- CrudRepository, also from Spring Data Commons, has some pre-defined CRUD operations (save, delete, deleteAll, findOne, findAll).
- It specifies the entity type (Chapter) and the type of the primary key (Long).
- Spring Data JPA will automatically wire up a concrete implementation of our interface.

Spring Data doesn't engage in code generation. Code generation has a sordid history of being out of date at some of the worst times. Instead, Spring Data uses proxies and other mechanisms to support all these operations. Never forget: the code you don't write has no bugs.

With Chapter and ChapterRepository defined, we can now pre-load the database, as shown in the following code:

package com.greglturnquist.learningspringboot;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LoadDatabase {

  @Bean
  CommandLineRunner init(ChapterRepository repository) {
    return args -> {
      repository.save(
        new Chapter("Quick start with Java"));
      repository.save(
        new Chapter("Reactive Web with Spring Boot"));
      repository.save(
        new Chapter("...and more!"));
    };
  }
}

This class will be automatically scanned in by Spring Boot and run in the following way:

- @Configuration marks this class as a source of beans.
- @Bean indicates that the return value of init() is a Spring bean; in this case, a CommandLineRunner.
- Spring Boot runs all CommandLineRunner beans after the entire application is up and running.
- This bean definition requests a copy of the ChapterRepository.
- Using Java 8's ability to coerce the args -> {} lambda function into a CommandLineRunner, we are able to write several save() operations, pre-loading our data.

With this in place, all that's left is to write a REST controller to serve up the data!

package com.greglturnquist.learningspringboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChapterController {

  private final ChapterRepository repository;

  public ChapterController(ChapterRepository repository) {
    this.repository = repository;
  }

  @GetMapping("/chapters")
  public Iterable<Chapter> listing() {
    return repository.findAll();
  }
}

This controller is able to serve up our data as follows:

- @RestController indicates this is another REST controller.
- Constructor injection is used to automatically load it with a copy of the ChapterRepository. With Spring, if there is only one constructor, there is no need to include an @Autowired annotation.
- @GetMapping tells Spring that this is the place to route /chapters calls. In this case, it returns the results of the findAll() call found in CrudRepository.

If we re-launch our application and visit http://localhost:8080/chapters, we can see our pre-loaded data served up as a nicely formatted JSON document:

It's not very elaborate, but this small collection of classes has helped us quickly define a slice of functionality. And if you'll notice, we spent zero effort configuring JSON converters, route handlers, embedded settings, or any other infrastructure. Spring Boot is designed to let us focus on functional needs, not low-level plumbing.

Summary

In this article, we introduced Spring Boot in brief and rapidly crafted a Spring MVC application using the Spring stack on top of Apache Tomcat, with little configuration on our end.
Resources for Article:

Further resources on this subject:

- Writing Custom Spring Boot Starters [article]
- Modernizing our Spring Boot app [article]
- Introduction to Spring Framework [article]

Getting Started with React and Bootstrap

Packt
13 Dec 2016
18 min read
In this article by Mehul Bhat and Harmeet Singh, the authors of the book Learning Web Development with React and Bootstrap, you will learn to build two simple real-time examples:

- Hello World! with ReactJS
- A simple static form application with React and Bootstrap

There are many different ways to build modern web applications with JavaScript and CSS, including a lot of different tool choices, and there is a lot of new theory to learn. This article will introduce you to ReactJS and Bootstrap, which you will likely come across as you learn about modern web application development. It will then take you through the theory behind the Virtual DOM and managing state to create interactive, reusable, and stateful components, and then push that into a Redux architecture. You will need to learn carefully about the concept of props and state management and where each belongs.

If you want to learn to code, then you have to write code, in whichever way you feel comfortable. Try to create small components/code samples, which will give you more clarity/understanding of any technology. Now, let's see how this article is going to make your life easier when it comes to Bootstrap and ReactJS.

Facebook has really changed the way we think about frontend UI development with the introduction of React. One of the main advantages of this component-based approach is that it is easy to understand, as the view is just a function of the properties and state.

We're going to cover the following topics:

- Setting up the environment
- ReactJS setup
- Bootstrap setup
- Why Bootstrap?
- Static form example with React and Bootstrap

(For more resources related to this topic, see here.)

ReactJS

React (sometimes called React.js or ReactJS) is an open-source JavaScript library that provides a view for data rendered as HTML. Components have typically been used to render React views that contain additional components specified as custom HTML tags.
React gives you a lightweight virtual DOM, powerful views without templates, unidirectional data flow, and explicit mutation. It is very methodical in updating the HTML document when the data changes, and provides a clean separation of components in a modern single-page application.

The examples below contrast normal HTML encapsulation with a ReactJS custom HTML tag.

JavaScript code:

<section>
  <h2>Add your Ticket</h2>
</section>
<script>
  var root = document.querySelector('section').createShadowRoot();
  root.innerHTML = '<style>h2{ color: red; }</style>' +
    '<h2>Hello World!</h2>';
</script>

ReactJS code:

var sectionStyle = {
  color: 'red'
};

var AddTicket = React.createClass({
  render: function() {
    return (<section><h2 style={sectionStyle}>
      Hello World!</h2></section>)
  }
})

ReactDOM.render(<AddTicket/>, mountNode);

As your app comes into existence and develops, it's advantageous to ensure that your components are used in the right manner. The React app consists of reusable components, which makes code reuse, testing, and separation of concerns easy.

React is not only the V in MVC; it has stateful components (a stateful component remembers everything within this.state). It handles mapping from input to state changes, and it renders components. In this sense, it does everything that an MVC does.

Let's look at React's component life cycle and its different levels. Observe the following screenshot:

React isn't an MVC framework; it's a library for building a composable user interface and reusable components. React is used at Facebook in production, and https://www.instagram.com/ is entirely built on React.

Setting up the environment

When we start to make an application with ReactJS, we need to do some setup, which just involves an HTML page and a few included files. First, we create a directory (folder) called Chapter 1. Open it up in any code editor of your choice.
Create a new file called index.html directly inside it and add the following HTML5 boilerplate code:

<!doctype html>
<html class="no-js" lang="">
  <head>
    <meta charset="utf-8">
    <title>ReactJS Chapter 1</title>
  </head>
  <body>
    <!--[if lt IE 8]>
      <p class="browserupgrade">You are using an <strong>outdated</strong>
      browser. Please <a href="http://browsehappy.com/">upgrade your
      browser</a> to improve your experience.</p>
    <![endif]-->
    <!-- Add your site or application content here -->
    <p>Hello world! This is HTML5 Boilerplate.</p>
  </body>
</html>

This is a standard HTML page that we can update once we have included the React and Bootstrap libraries.

Now we need to create a couple of folders inside the Chapter 1 folder, named images, css, and js (JavaScript), to keep your application manageable. Once you have completed the folder structure, it will look like this:

Installing ReactJS and Bootstrap

Once we have finished creating the folder structure, we need to install both of our frameworks, ReactJS and Bootstrap. It's as simple as including JavaScript and CSS files in your page. We can do this via a content delivery network (CDN), such as Google or Microsoft, but we are going to fetch the files manually in our application so we don't have to be dependent on the Internet while working offline.

Installing React

First, we have to go to this URL https://facebook.github.io/react/ and hit the download button. This will give you a ZIP file of the latest version of ReactJS that includes the ReactJS library files and some sample code. For now, we will only need two files in our application: react.min.js and react-dom.min.js from the build directory of the extracted folder.

Here are a few steps we need to follow:

Copy react.min.js and react-dom.min.js to your project directory, the Chapter 1/js folder, and open up your index.html file in your editor.
Now you just need to add the following scripts in your page's head tag section:

<script type="text/javascript" src="js/react.min.js"></script>
<script type="text/javascript" src="js/react-dom.min.js"></script>

Now we need to include the compiler in our project to build the code, because right now we are not using tools such as npm. We will download the file from the following CDN path, https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js, or you can reference the CDN path directly. The head tag section will look like this:

<script type="text/javascript" src="js/react.min.js"></script>
<script type="text/javascript" src="js/react-dom.min.js"></script>
<script type="text/javascript" src="js/browser.min.js"></script>

Here is what the final structure of your js folder will look like:

Bootstrap

Bootstrap is an open-source frontend framework maintained by Twitter for developing responsive websites and web applications. It includes HTML, CSS, and JavaScript code to build user interface components. It's a fast and easy way to develop a powerful, mobile-first user interface.

The Bootstrap grid system allows you to create responsive 12-column grids, layouts, and components. It includes predefined classes for easy layout options (fixed-width and full-width). Bootstrap has a dozen pre-styled reusable components and custom jQuery plugins, such as button, alerts, dropdown, modal, tooltip, tab, pagination, carousel, badges, icons, and much more.

Installing Bootstrap

Now, we need to install Bootstrap. Visit http://getbootstrap.com/getting-started/#download and hit the Download Bootstrap button. This includes the compiled and minified versions of the CSS and JS for our app; we just need the bootstrap.min.css style sheet and the fonts folder. This style sheet provides the look and feel of all of the components and the responsive layout structure for our application. Previous versions of Bootstrap included icons as images but, in version 3, icons have been replaced with fonts.
We can also customize the Bootstrap CSS style sheet as per the components used in your application:

- Extract the ZIP folder and copy the Bootstrap CSS from the css folder to your project's css folder.
- Now copy the fonts folder of Bootstrap into your project root directory.
- Open your index.html in your editor and add this link tag in your head section:

<link rel="stylesheet" href="css/bootstrap.min.css">

That's it. Now we can open up index.html again, but this time in your browser, to see what we are working with. Here is the code that we have written so far:

<!doctype html>
<html class="no-js" lang="">
  <head>
    <meta charset="utf-8">
    <title>ReactJS Chapter 1</title>
    <link rel="stylesheet" href="css/bootstrap.min.css">
    <script type="text/javascript" src="js/react.min.js"></script>
    <script type="text/javascript" src="js/react-dom.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
  </head>
  <body>
    <!--[if lt IE 8]>
      <p class="browserupgrade">You are using an <strong>outdated</strong>
      browser. Please <a href="http://browsehappy.com/">upgrade your
      browser</a> to improve your experience.</p>
    <![endif]-->
    <!-- Add your site or application content here -->
  </body>
</html>

Using React

So now we've got the ReactJS and Bootstrap style sheets, and with them we've initialized our app. Now let's start to write our first Hello World app using ReactDOM.render(). The first argument of the ReactDOM.render method is the component we want to render, and the second is the DOM node to which it should mount (append):

ReactDOM.render(
  ReactElement element,
  DOMElement container,
  [function callback]
)

In order to translate JSX to vanilla JavaScript, we wrap our React code in a <script type="text/babel"> tag, which actually performs the transformation in the browser.
Let's start out by putting one div tag in our body tag:

<div id="hello"></div>

Now, add the script tag with the React code:

<script type="text/babel">
  ReactDOM.render(
    <h1>Hello, world!</h1>,
    document.getElementById('hello')
  );
</script>

Let's open the HTML page in your browser. If you see Hello, world! in your browser then we are on the right track. In the preceding screenshot, you can see Hello, world! rendered in your browser. That's great. We have successfully completed our setup and built our first Hello World app. Here is the full code that we have written so far:

<!doctype html>
<html class="no-js" lang="">
  <head>
    <meta charset="utf-8">
    <title>ReactJS Chapter 1</title>
    <link rel="stylesheet" href="css/bootstrap.min.css">
    <script type="text/javascript" src="js/react.min.js"></script>
    <script type="text/javascript" src="js/react-dom.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
  </head>
  <body>
    <!--[if lt IE 8]>
      <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
    <![endif]-->
    <!-- Add your site or application content here -->
    <div id="hello"></div>
    <script type="text/babel">
      ReactDOM.render(
        <h1>Hello, world!</h1>,
        document.getElementById('hello')
      );
    </script>
  </body>
</html>

Static form with React and Bootstrap

We have completed our first Hello World app with React and Bootstrap, and everything looks good and works as expected. Now it's time to do more and create a static login form, applying the Bootstrap look and feel to it. Bootstrap is a great way to give your app a responsive grid system for different mobile devices and to apply fundamental styles to HTML elements with the inclusion of a few classes and divs.
The responsive grid system is an easy, flexible, and quick way to make your web application responsive and mobile-first; it scales appropriately up to 12 columns per device and viewport size. First, let's create an HTML structure that follows the Bootstrap grid system. Create a div and add a className of container (for fixed width) or container-fluid (for full width). Use the className attribute instead of class:

<div className="container-fluid"></div>

As we know, class and for are discouraged as XML attribute names. Moreover, these are reserved words in many JavaScript libraries, so to keep the difference clear, we use className and htmlFor instead. Next, create a div and add the className row. The row must be placed within .container-fluid:

<div className="container-fluid">
  <div className="row"></div>
</div>

Now create columns that must be immediate children of a row:

<div className="container-fluid">
  <div className="row">
    <div className="col-lg-6"></div>
  </div>
</div>

.row and .col-*-* are predefined classes available for quickly making grid layouts. Add the h1 tag for the title of the page:

<div className="container-fluid">
  <div className="row">
    <div className="col-sm-6">
      <h1>Login Form</h1>
    </div>
  </div>
</div>

Grid columns are created by specifying the number of columns (out of the 12 available) that a col-sm-* element should span. For example, if we want a four-column layout, we need to use col-sm-3 for each column to get equal columns.

col-sm-* Small devices
col-md-* Medium devices
col-lg-* Large devices

We are using the col-sm-* prefix to resize our columns for small devices.
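The arithmetic behind these classes can be sketched as a tiny helper (the function name is ours, purely illustrative): a col-*-N element spans N of the 12 available columns, that is, N/12 of the row's width.

```typescript
// Illustrative only: the percentage width a col-*-N class represents
// in Bootstrap's 12-column grid.
function columnWidthPercent(span: number, totalColumns: number = 12): number {
  if (span < 1 || span > totalColumns) {
    throw new Error(`span must be between 1 and ${totalColumns}`);
  }
  return (span / totalColumns) * 100;
}

console.log(columnWidthPercent(6)); // 50  (col-sm-6 fills half the row)
console.log(columnWidthPercent(3)); // 25  (four col-sm-3 columns per row)
```

This is why two col-sm-6 columns, or one col-sm-6 next to two col-sm-3 columns, exactly fill a row.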
Inside the columns, we need to wrap our form elements, label and input tags, into a div tag with the form-group class:

<div className="form-group">
  <label htmlFor="emailInput">Email address</label>
  <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
</div>

To get the Bootstrap styles, we need to add the form-control class to our input elements. If we need extra padding in our label tag, then we can add the control-label class to the label. Let's quickly add the rest of the elements. I am going to add a password input and a submit button. In previous versions of Bootstrap, form elements were usually wrapped in an element with the form-actions class. However, in Bootstrap 3, we just need to use the same form-group class instead of form-actions. Here is our complete HTML code:

<div className="container-fluid">
  <div className="row">
    <div className="col-lg-6">
      <form>
        <h1>Login Form</h1>
        <hr/>
        <div className="form-group">
          <label htmlFor="emailInput">Email address</label>
          <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
        </div>
        <div className="form-group">
          <label htmlFor="passwordInput">Password</label>
          <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
        </div>
        <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
      </form>
    </div>
  </div>
</div>

Now create a variable inside the script tag and assign this HTML to it:

var loginFormHTML = <div className="container-fluid">
  <div className="row">
    <div className="col-lg-6">
      <form>
        <h1>Login Form</h1>
        <hr/>
        <div className="form-group">
          <label htmlFor="emailInput">Email address</label>
          <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
        </div>
        <div className="form-group">
          <label htmlFor="passwordInput">Password</label>
          <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
        </div>
        <button type="submit" className="btn btn-default col-xs-offset-9
col-xs-3">Submit</button>
      </form>
    </div>
  </div>

We will pass this object to the ReactDOM.render() method instead of directly passing the HTML:

ReactDOM.render(loginFormHTML, document.getElementById('hello'));

Our form is ready. Now let's see how it looks in the browser: The compiler is unable to parse our HTML because we have not enclosed one of the div tags properly. You can see in our HTML that we have not closed the wrapper container-fluid at the end. Now close the wrapper tag at the end and open the file again in your browser.

Note: Whenever you hand-code (write) your HTML, please double-check your start and end tags. They should be written and closed properly; otherwise they will break your UI/frontend look and feel.

Here is the HTML after closing the div tag:

<!doctype html>
<html class="no-js" lang="">
  <head>
    <meta charset="utf-8">
    <title>ReactJS Chapter 1</title>
    <link rel="stylesheet" href="css/bootstrap.min.css">
    <script type="text/javascript" src="js/react.min.js"></script>
    <script type="text/javascript" src="js/react-dom.min.js"></script>
    <script src="js/browser.min.js"></script>
  </head>
  <body>
    <!-- Add your site or application content here -->
    <div id="loginForm"></div>
    <script type="text/babel">
      var loginFormHTML = <div className="container-fluid">
        <div className="row">
          <div className="col-lg-6">
            <form>
              <h1>Login Form</h1>
              <hr/>
              <div className="form-group">
                <label htmlFor="emailInput">Email address</label>
                <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
              </div>
              <div className="form-group">
                <label htmlFor="passwordInput">Password</label>
                <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
              </div>
              <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
            </form>
          </div>
        </div>
      </div>
      ReactDOM.render(loginFormHTML, document.getElementById('loginForm'));
    </script>
  </body>
</html>

Now you can check your page in the browser, and you will be able to see the form with the look and feel shown below. Now it's working fine and looks good. Bootstrap also provides two additional classes to make your elements larger and smaller: input-lg and input-sm. You can also check the responsive behavior by resizing the browser.

That looks great. Our small static login form application is ready, with responsive behavior. Some of the benefits are:

Rendering your component is very easy.
Reading a component's code is very easy with the help of JSX.
JSX also helps you check your layout as well as how components plug in with each other.
You can test your code easily, and it also allows integration with other tools for enhancement.
As we know, React is a view layer, so you can also use it with other JavaScript frameworks.

Summary

Our simple static login form application and Hello World examples are looking great and working exactly how they should, so let's recap what we've learned in this article. To begin with, we saw just how easy it is to get ReactJS and Bootstrap installed with the inclusion of JavaScript files and a style sheet. We also looked at how a React application is initialized and started building our first form application. The Hello World app and the form application that we created demonstrate some of React's and Bootstrap's basic features, such as the following:

ReactDOM.render
In-browser Babel transformation (browser.min.js)
Bootstrap

With Bootstrap, we worked towards having a responsive grid system for different mobile devices and applied the fundamental styles of HTML elements with the inclusion of a few classes and divs. We also saw the framework's new mobile-first responsive design in action, without cluttering up our markup with unnecessary classes or elements.

Resources for Article:

Further resources on this subject:
Getting Started with ASP.NET Core and Bootstrap 4 [article]
Frontend development with Bootstrap 4 [article]
Gearing Up for Bootstrap 4 [article]
Building Our First App – 7 Minute Workout

Packt
14 Nov 2016
27 min read
In this article by Chandermani Arora and Kevin Hennessy, the authors of the book Angular 2 By Example, we will build a new app in Angular, and in the process, develop a better understanding of the framework. This app will also help us explore some new capabilities of the framework. (For more resources related to this topic, see here.)

The topics that we will cover in this article include the following:

7 Minute Workout problem description: We detail the functionality of the app that we build.
Code organization: For our first real app, we will try to explain how to organize code, specifically Angular code.
Designing the model: One of the building blocks of our app is its model. We design the app model based on the app's requirements.
Understanding the data binding infrastructure: While building the 7 Minute Workout view, we will look at the data binding capabilities of the framework, which include property, attribute, class, style, and event bindings.

Let's get started! The first thing we will do is define the scope of our 7 Minute Workout app.

What is 7 Minute Workout?

We want everyone reading this article to be physically fit. Our purpose is to stimulate your grey matter. What better way to do it than to build an app that targets physical fitness! 7 Minute Workout is an exercise/workout plan that requires us to perform a set of twelve exercises in quick succession within a seven-minute time span. 7 Minute Workout has become quite popular due to its benefits and the short duration of the workout. We cannot confirm or refute the claims, but doing any form of strenuous physical activity is better than doing nothing at all. If you are interested in knowing more about the workout, then check out http://well.blogs.nytimes.com/2013/05/09/the-scientific-7-minute-workout/. The technicalities of the app include performing a set of 12 exercises, dedicating 30 seconds to each of the exercises. This is followed by a brief rest period before starting the next exercise.
For the app that we are building, we will be taking rest periods of 10 seconds each. So, the total duration comes out to be a little more than 7 minutes. Once the 7 Minute Workout app is ready, it will look something like this:

Downloading the code base

The code for this app can be downloaded from the GitHub site https://github.com/chandermani/angular2byexample dedicated to this article. Since we are building the app incrementally, we have created multiple checkpoints that map to GitHub branches such as checkpoint2.1, checkpoint2.2, and so on. During the narration, we will highlight the branch for reference. These branches will contain the work done on the app up to that point in time. The 7 Minute Workout code is available inside the repository folder named trainer. So let's get started!

Setting up the build

Remember that we are building on a modern platform for which browsers still lack support. Therefore, directly referencing script files in HTML is out of the question (while common, it's a dated approach that we should avoid anyway). Current browsers do not understand TypeScript; as a matter of fact, even ES 2015 (also known as ES6) is not supported. This implies that there has to be a process that converts code written in TypeScript into standard JavaScript (ES5), which browsers can work with. Hence, having a build setup for almost any Angular 2 app becomes imperative. Having a build process may seem like overkill for a small application, but it has some other advantages as well. If you are a frontend developer working on the web stack, you cannot avoid Node.js, the most widely used platform for web/JavaScript development. So, no prizes for guessing that the Angular 2 build setup is also supported on Node.js, with tools such as Grunt, Gulp, JSPM, and webpack. Since we are building on the Node.js platform, install Node.js before starting. While there are quite elaborate build setup options available online, we go for a minimal setup using Gulp.
The reason is that there is no one-size-fits-all solution out there. Also, the primary aim here is to learn about Angular 2 and not to worry too much about the intricacies of setting up and running a build. Some of the notable starter sites plus build setups created by the community are as follows:

angular2-webpack-starter: http://bit.ly/ng2webpack
angular2-seed: http://bit.ly/ng2seed
angular-cli (allows us to generate the initial code setup, including the build configurations, and has good scaffolding capabilities too): http://bit.ly/ng2-cli

A natural question arises if you are very new to Node.js or the overall build process: what does a typical Angular build involve? It depends! To get an idea about this process, it would be beneficial if we look at the build setup defined for our app. Let's set up the app's build locally then. Follow these steps to have the boilerplate Angular 2 app up and running:

Download the base version of this app from http://bit.ly/ng2be-base and unzip it to a location on your machine. If you are familiar with how Git works, you can just check out the branch base:

git checkout base

This code serves as the starting point for our app. Navigate to the trainer folder from the command line and execute these commands:

npm i -g gulp
npm install

The first command installs Gulp globally so that you can invoke the Gulp command-line tool from anywhere and execute Gulp tasks. A Gulp task is an activity that Gulp performs during the build execution. If we look at the Gulp build script (which we will do shortly), we realize that it is nothing but a sequence of tasks performed whenever a build occurs. The second command installs the app's dependencies (in the form of npm packages). Packages in the Node.js world are third-party libraries that are either used by the app or support the app's building process. For example, Gulp itself is a Node.js package. npm is a command-line tool for pulling these packages from a central repository.
Once Gulp is installed and npm pulls the dependencies from the npm store, we are ready to build and run the application. From the command line, enter the following command:

gulp play

This compiles and runs the app. If the build process goes fine, the default browser window/tab will open with a rudimentary "Hello World" page (http://localhost:9000/index.html). We are all set to begin developing our app in Angular 2! But before we do that, it would be interesting to know what has happened under the hood.

The build internals

Even if you are new to Gulp, looking at gulpfile.js gives you a fair idea about what the build process is doing. A Gulp build is a set of tasks performed in a predefined order. The end result of such a process is some form of packaged code that is ready to be run. And if we are building our apps using TypeScript/ES2015 or some other similar language that browsers do not understand natively, then we need an additional build step, called transpilation.

Code transpiling

As it stands in 2016, browsers still cannot run ES2015 code. While we are quick to embrace languages that hide the not-so-good parts of JavaScript (ES5), we are still limited by the browser's capabilities. When it comes to language features, ES5 is still the safest bet as all browsers support it. Clearly, we need a mechanism to convert our TypeScript code into plain JavaScript (ES5). Microsoft has a TypeScript compiler that does this job. The TypeScript compiler takes the TypeScript code and converts it into ES5-format code that can run in all browsers. This process is commonly referred to as transpiling, and since the TypeScript compiler does it, it's called a transpiler. Interestingly, transpilation can happen at both build/compile time and runtime:

Build-time transpilation: Transpilation as part of the build process takes the script files (in our case, TypeScript .ts files) and compiles them into plain JavaScript. Our build setup uses build-time transpilation.
Runtime transpilation: This happens in the browser at runtime. We include the raw language-specific script files (.ts in our case), and the TypeScript compiler, which is loaded in the browser beforehand, compiles these script files on the fly. While runtime transpilation simplifies the build setup process, as a recommendation it should be limited to development workflows only, considering the additional performance overhead involved in loading the transpiler and transpiling the code on the fly.

The process of transpiling is not limited to TypeScript. Every language targeted towards the web, such as CoffeeScript, ES2015, or any other language that is not inherently understood by a browser, needs transpilation. There are transpilers for most languages, and the prominent ones (other than TypeScript) are traceur and babel. To compile TypeScript files, we can install the TypeScript compiler manually from the command line using this:

npm install -g typescript

Once installed, we can compile any TypeScript file into ES5 format using the compiler (tsc). But for our build setup, this process is automated using the ts2js Gulp task (check out gulpfile.js). And if you are wondering when we installed TypeScript… well, we did it as part of the npm install step, when setting up the code for the first time. The gulp-typescript package downloads the TypeScript compiler as a dependency. With this basic understanding of transpilation, we can summarize what happens with our build setup:

The gulp play command kicks off the build process. This command tells Gulp to start the build process by invoking the play task.
Since the play task has a dependency on the ts2js task, ts2js is executed first.
The ts2js task compiles the TypeScript files (.ts) located in the src folder and outputs them to the dist folder at the root.
Post build, a static file server is started that serves all the app files, including static files (images, videos, and HTML) and script files (check the gulp.play task).
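The task ordering just described (play depends on ts2js, so ts2js runs first) can be sketched with a toy, synchronous task runner. The names below are our own, and real Gulp tasks are asynchronous and stream-based, so treat this only as a model of the dependency ordering:

```typescript
// Minimal, synchronous stand-in for Gulp's task/dependency model.
type Task = { deps: string[]; run: () => void };

const tasks = new Map<string, Task>();
const log: string[] = [];

function task(name: string, deps: string[], run: () => void): void {
  tasks.set(name, { deps, run });
}

function runTask(name: string, done: Set<string> = new Set()): void {
  if (done.has(name)) return;            // run each task at most once
  const t = tasks.get(name);
  if (!t) throw new Error("unknown task: " + name);
  t.deps.forEach(d => runTask(d, done)); // dependencies first
  t.run();
  done.add(name);
}

task("ts2js", [], () => log.push("compile .ts -> dist/*.js"));
task("play", ["ts2js"], () => log.push("serve app and watch files"));

runTask("play");
console.log(log); // ["compile .ts -> dist/*.js", "serve app and watch files"]
```

Running the play task first resolves its dependency chain, which is exactly why the compile step always precedes the server start in our build.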
Thenceforth, the build process keeps a watch on any script file changes you make (the gulp.watch task) and recompiles the code on the fly. livereload has also been set up for the app: any changes to the code refresh the browser running the app automatically. In case the browser refresh fails, we can always do a manual refresh. This is a rudimentary build setup required to run an Angular app. For complex build requirements, we can always look at the starter/seed projects that have a more complete and robust build setup, or build something of our own. Next, let's look at the boilerplate app code already there and the overall code organization.

Organizing code

This is how we are going to organize our code and other assets for the app: The trainer folder is the root folder for the app, and it has a folder (static) for the static content (such as images, CSS, audio files, and others) and a folder (src) for the app's source code. The organization of the app's source code is heavily influenced by the design of Angular and the Angular style guide (http://bit.ly/ng2-style-guide) released by the Angular team. The components folder hosts all the components that we create. We will be creating subfolders in this folder for every major component of the application. Each component folder will contain artifacts related to that component, which include its template, its implementation, and other related items. We will also keep adding more top-level folders (inside the src folder) as we build the application. If we look at the code now, the components/app folder has defined a root-level component, TrainerAppComponent, and a root-level module, AppModule. bootstrap.ts contains code to bootstrap/load the application module (AppModule). 7 Minute Workout uses Just In Time (JIT) compilation to compile Angular views. This implies that views are compiled just before they are rendered in the browser. Angular has a compiler running in the browser that compiles these views.
Angular also supports the Ahead Of Time (AoT) compilation model. With AoT, the views are compiled on the server side using a server version of the Angular compiler. The views returned to the browser are precompiled and ready to be used. For 7 Minute Workout, we stick to the JIT compilation model just because it is easy to set up as compared to AoT, which requires server-side tweaks and package installation. We highly recommend that you use AoT compilation for production apps due to the numerous benefits it offers. AoT can improve the application's initial load time and reduce its size too. Look at the AoT platform documentation (cookbook) at http://bit.ly/ng2-aot to understand how AoT compilation can benefit you. Time to start working on our first focus area, which is the app's model!

The 7 Minute Workout model

Designing the model for this app requires us to first detail the functional aspects of the 7 Minute Workout app, and then derive a model that satisfies those requirements. Based on the problem statement defined earlier, some of the obvious requirements are as follows:

Being able to start the workout.
Providing a visual clue about the current exercise and its progress. This includes the following:
Providing a visual depiction of the current exercise
Providing step-by-step instructions on how to do a specific exercise
The time left for the current exercise
Notifying the user when the workout ends.

Some other valuable features that we will add to this app are as follows:

The ability to pause the current workout.
Providing information about the next exercise to follow.
Providing audio clues so that the user can perform the workout without constantly looking at the screen. This includes:
A timer click sound
Details about the next exercise
Signaling that the exercise is about to start
Showing related videos for the exercise in progress and the ability to play them.

As we can see, the central theme for this app is workout and exercise.
Here, a workout is a set of exercises performed in a specific order for a particular duration. So, let's go ahead and define the model for our workout and exercise. Based on the requirements just mentioned, we will need the following details about an exercise:

The name. This should be unique.
The title. This is shown to the user.
The description of the exercise.
Instructions on how to perform the exercise.
Images for the exercise.
The name of the audio clip for the exercise.
Related videos.

With TypeScript, we can define the classes for our model. Create a folder called workout-runner inside the src/components folder and copy the model.ts file from the checkpoint2.1 branch folder workout-runner (http://bit.ly/ng2be-2-1-model-ts) to the corresponding local folder. model.ts contains the model definition for our app. The Exercise class looks like this:

export class Exercise {
  constructor(
    public name: string,
    public title: string,
    public description: string,
    public image: string,
    public nameSound?: string,
    public procedure?: string,
    public videos?: Array<string>) { }
}

TypeScript Tips: Passing constructor parameters with public or private is a shorthand for creating and initializing class members in one go. The ? suffix after nameSound, procedure, and videos implies that these are optional parameters.

For the workout, we need to track the following properties:

The name. This should be unique.
The title. This is shown to the user.
The exercises that are part of the workout.
The duration for each exercise.
The rest duration between two exercises.

So, the model class (WorkoutPlan) looks like this:

export class WorkoutPlan {
  constructor(
    public name: string,
    public title: string,
    public restBetweenExercise: number,
    public exercises: ExercisePlan[],
    public description?: string) { }

  totalWorkoutDuration(): number { … }
}

The totalWorkoutDuration function returns the total duration of the workout in seconds.
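The body of totalWorkoutDuration is elided above. A plausible implementation, assuming the 30-second exercises and 10-second rests described earlier (this is our sketch, not necessarily the book's exact code), sums the exercise durations and adds one rest period between consecutive exercises:

```typescript
// Sketch only: a minimal shape standing in for the full model classes.
interface ExercisePlanLike {
  duration: number; // seconds
}

class WorkoutPlanSketch {
  constructor(
    public restBetweenExercise: number,
    public exercises: ExercisePlanLike[]) { }

  // Total seconds = sum of exercise durations
  //               + one rest period between each pair of exercises.
  totalWorkoutDuration(): number {
    const exerciseTime = this.exercises.reduce((sum, e) => sum + e.duration, 0);
    const restCount = Math.max(this.exercises.length - 1, 0);
    return exerciseTime + restCount * this.restBetweenExercise;
  }
}

// 12 exercises of 30s with 10s rests: 12 * 30 + 11 * 10 = 470 seconds
const plan = new WorkoutPlanSketch(10, Array(12).fill({ duration: 30 }));
console.log(plan.totalWorkoutDuration()); // 470
```

470 seconds is just under 8 minutes, consistent with the "a little more than 7 minutes" figure quoted earlier.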
WorkoutPlan has a reference to another class in the preceding definition: ExercisePlan. It tracks the exercise and the duration of the exercise in a workout, which is quite apparent once we look at the definition of ExercisePlan:

export class ExercisePlan {
  constructor(
    public exercise: Exercise,
    public duration: number) { }
}

These three classes constitute our base model, and we will decide in the future whether or not we need to extend this model as we start implementing the app's functionality. Since we have started with a preconfigured and basic Angular app, you just need to understand how the app bootstrapping occurs.

App bootstrapping

Look at the src folder. There is a bootstrap.ts file with only the execution bit (other than imports):

platformBrowserDynamic().bootstrapModule(AppModule);

The bootstrapModule function call actually bootstraps the application by loading the root module, AppModule. The process is triggered by this call in index.html:

System.import('app').catch(console.log.bind(console));

The System.import statement sets off the app bootstrapping process by loading the first module from bootstrap.ts. Modules defined in the context of Angular 2 (using the @NgModule decorator) are different from the modules SystemJS loads. SystemJS modules are JavaScript modules, which can be in different formats adhering to the CommonJS, AMD, or ES2015 specifications. Angular modules are constructs used by Angular to segregate and organize its artifacts. Unless the context of discussion is SystemJS, any reference to a module implies an Angular module. Now we have details on how SystemJS loads our Angular app.

App loading with SystemJS

SystemJS starts loading the JavaScript module with the call to System.import('app') in index.html. SystemJS starts by loading bootstrap.ts first. The imports defined inside bootstrap.ts cause SystemJS to then load the imported modules. If these module imports have further import statements, SystemJS loads them too, recursively.
And finally, platformBrowserDynamic().bootstrapModule(AppModule); gets executed once all the imported modules are loaded. For the SystemJS import function to work, it needs to know where each module is located. We define this in the file systemjs.config.js and reference it in index.html, before the System.import script:

<script src="systemjs.config.js"></script>

This configuration file contains all of the necessary configuration for SystemJS to work correctly. Open systemjs.config.js; the app parameter passed to the import function points to the folder dist, as defined in the map object:

var map = { 'app': 'dist', ... }

And the next variable, packages, contains settings that tell SystemJS how to load a module from a package when no filename/extension is specified. For app, the default module is bootstrap.js:

var packages = { 'app': { main: 'bootstrap.js', defaultExtension: 'js' }, ... };

Are you wondering what the dist folder has to do with our application? Well, this is where our transpiled scripts end up. As we build our app in TypeScript, the TypeScript compiler converts the .ts script files in the src folder to JavaScript modules and deposits them into the dist folder. SystemJS then loads these compiled JavaScript modules. The transpiled code location has been configured as part of the build definition in gulpfile.js. Look for this excerpt in gulpfile.js:

return tsResult.js
  .pipe(sourcemaps.write())
  .pipe(gulp.dest('dist'))

The module specification used by our app can again be verified in gulpfile.js. Take a look at this line:

noImplicitAny: true, module: 'system', target: 'ES5',

These are TypeScript compiler options, one of them being module, that is, the target module definition format. The system module type is a module format designed to support the exact semantics of ES2015 modules within ES5. Once the scripts are transpiled and the module definitions created (in the target format), SystemJS can load these modules and their dependencies.
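The interplay between map and packages can be modeled with a toy resolver. This is our simplification of what SystemJS does with the configuration above, not its real algorithm:

```typescript
// Illustrative resolver mimicking the systemjs.config.js semantics shown above.
const moduleMap: Record<string, string> = { app: "dist" };
const packageConfig: Record<string, { main: string; defaultExtension: string }> = {
  app: { main: "bootstrap.js", defaultExtension: "js" }
};

function resolve(request: string): string {
  const base = moduleMap[request] ?? request; // 'app' -> 'dist'
  const pkg = packageConfig[request];
  if (pkg) {
    return `${base}/${pkg.main}`;             // bare package name -> its main file
  }
  return base;
}

console.log(resolve("app")); // "dist/bootstrap.js"
```

So System.import('app') ultimately fetches dist/bootstrap.js, the transpiled output of bootstrap.ts.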
It's time to get into the thick of the action; let's build our first component.

Our first component – WorkoutRunnerComponent

To implement WorkoutRunnerComponent, we need to outline the behavior of the application. What we are going to do in the WorkoutRunnerComponent implementation is as follows:

Start the workout.
Show the workout in progress and show the progress indicator.
After the time elapses for an exercise, show the next exercise.
Repeat this process until all the exercises are over.

Let's start with the implementation. Open the workout-runner folder in the src/components folder and add a new code file called workout-runner.component.ts to it. Add this chunk of code to the file:

import {WorkoutPlan, ExercisePlan, Exercise} from './model'

export class WorkoutRunnerComponent {
}

The import declaration allows us to reference the classes defined in the model.ts file in WorkoutRunnerComponent. We first need to set up the workout data. Let's do that by adding a constructor and related class properties to the WorkoutRunnerComponent class:

workoutPlan: WorkoutPlan;
restExercise: ExercisePlan;
constructor() {
  this.workoutPlan = this.buildWorkout();
  this.restExercise = new ExercisePlan(
    new Exercise("rest", "Relax!", "Relax a bit", "rest.png"),
    this.workoutPlan.restBetweenExercise);
}

The buildWorkout method on WorkoutRunnerComponent sets up the complete workout, as we will see shortly. We also initialize a restExercise variable to track even the rest periods as an exercise (note that restExercise is an object of type ExercisePlan). The buildWorkout function is a lengthy function, so it's better if we copy the implementation from the workout runner available in the Git branch checkpoint2.1 (http://bit.ly/ng2be-2-1-workout-runner-component-ts).
The buildWorkout code looks like this:

buildWorkout(): WorkoutPlan {
  let workout = new WorkoutPlan("7MinWorkout", "7 Minute Workout", 10, []);
  workout.exercises.push(
    new ExercisePlan(
      new Exercise(
        "jumpingJacks",
        "Jumping Jacks",
        "A jumping jack or star jump, also called side-straddle hop is a physical jumping exercise.",
        "JumpingJacks.png",
        "jumpingjacks.wav",
        `Assume an erect position, with feet together and arms at your side. …`,
        ["dmYwZH_BNd0", "BABOdJ-2Z6o", "c4DAnQ6DtF8"]),
      30));
  // (TRUNCATED) Other 11 workout exercise data.
  return workout;
}

This code builds the WorkoutPlan object and pushes the exercise data into the exercises array (an array of ExercisePlan objects), returning the newly built workout. The initialization is complete; now it's time to actually implement starting the workout. Add a start function to the WorkoutRunnerComponent implementation, as follows:

start() {
  this.workoutTimeRemaining = this.workoutPlan.totalWorkoutDuration();
  this.currentExerciseIndex = 0;
  this.startExercise(this.workoutPlan.exercises[this.currentExerciseIndex]);
}

Then declare the new variables used in the function at the top, with the other variable declarations:

workoutTimeRemaining: number;
currentExerciseIndex: number;

The workoutTimeRemaining variable tracks the total time remaining for the workout, and currentExerciseIndex tracks the currently executing exercise index. The call to startExercise actually starts an exercise. This is how the code for startExercise looks:

startExercise(exercisePlan: ExercisePlan) {
  this.currentExercise = exercisePlan;
  this.exerciseRunningDuration = 0;
  let intervalId = setInterval(() => {
    if (this.exerciseRunningDuration >= this.currentExercise.duration) {
      clearInterval(intervalId);
    } else {
      this.exerciseRunningDuration++;
    }
  }, 1000);
}

We start by initializing currentExercise and exerciseRunningDuration. The currentExercise variable tracks the exercise in progress and exerciseRunningDuration tracks its duration.
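The per-second bookkeeping inside setInterval can be isolated into a plain, synchronously testable tick function. The class below is our own sketch of that logic (the names are illustrative, not from the book's code), with the timer loop replaced by manual calls:

```typescript
// Synchronous sketch of the per-second logic inside startExercise.
class ExerciseProgress {
  exerciseRunningDuration = 0;

  constructor(public duration: number) { }

  // Returns true while the exercise is still running; returning false is
  // where the real code would call clearInterval.
  tick(): boolean {
    if (this.exerciseRunningDuration >= this.duration) {
      return false;
    }
    this.exerciseRunningDuration++;
    return true;
  }
}

// setInterval would invoke tick() once per second; here we drive it manually.
const progress = new ExerciseProgress(3);
let ticks = 0;
while (progress.tick()) {
  ticks++;
}
console.log(ticks, progress.exerciseRunningDuration); // 3 3
```

Separating the counting logic from the timer like this also makes it much easier to unit test than code that reaches for setInterval directly.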
These two variables also need to be declared at the top: currentExercise: ExercisePlan; exerciseRunningDuration: number; We use the setInterval JavaScript function with a delay of 1 second (1,000 milliseconds) to track the exercise progress by incrementing exerciseRunningDuration. The setInterval invokes the callback every second. The clearInterval call stops the timer once the exercise duration lapses. TypeScript Arrow functions The callback parameter passed to setInterval (()=>{…}) is a lambda function (or an arrow function in ES 2015). Lambda functions are short-form representations of anonymous functions, with added benefits. You can learn more about them at https://basarat.gitbooks.io/typescript/content/docs/arrow-functions.html. As of now, we have a WorkoutRunnerComponent class. We need to convert it into an Angular component and define the component view. Add the import for Component and a component decorator (highlighted code): import {WorkoutPlan, ExercisePlan, Exercise} from './model' import {Component} from '@angular/core'; @Component({ selector: 'workout-runner', template: ` <pre>Current Exercise: {{currentExercise | json}}</pre> <pre>Time Left: {{currentExercise.duration- exerciseRunningDuration}}</pre>` }) export class WorkoutRunnerComponent { You already know how to create an Angular component: you understand the role of the @Component decorator, what selector does, and how the template is used. The JavaScript generated for the @Component decorator contains enough metadata about the component. This allows the Angular framework to instantiate the correct component at runtime. Strings enclosed in backticks (` `) are a new addition to ES2015. Also called template literals, such string literals can be multi-line and allow expressions to be embedded inside (not to be confused with Angular expressions). Look at the MDN article here at http://bit.ly/template-literals for more details.
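Two ES2015 features used above deserve a quick, standalone illustration (the Counter class below is ours, purely for demonstration). Arrow functions are not just shorter: they capture the this of the enclosing class, which is exactly why this.exerciseRunningDuration works inside the setInterval callback. And backtick strings can span lines and embed expressions.

```typescript
class Counter {
  count = 0;

  start(times: number): void {
    // Arrow function: `this` is lexically bound to the Counter
    // instance, just like the component's setInterval callback.
    const tick = () => { this.count++; };
    for (let i = 0; i < times; i++) {
      tick();
    }
  }
}

const counter = new Counter();
counter.start(3);

// Template literal: multi-line, with an embedded expression.
const summary = `Counter finished.
Final count: ${counter.count}`;
```

With a classic function () {} callback instead of the arrow, this would not point at the Counter instance (in strict mode it would be undefined).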
The preceding template HTML will render the raw ExercisePlan object and the exercise time remaining. It has an interesting expression inside the first interpolation: currentExercise | json. The currentExercise property is defined in WorkoutRunnerComponent, but what about the | symbol and what follows it (json)? In the Angular 2 world, it is called a pipe. The sole purpose of a pipe is to transform/format template data. The json pipe here does JSON data formatting. To get a general sense of what the json pipe does, we can remove the json pipe plus the | symbol and render the template; we are going to do this next. As our app currently has only one module (AppModule), we add the WorkoutRunnerComponent declaration to it. Update app.module.ts by adding the highlighted code: import {WorkoutRunnerComponent} from '../workout-runner/workout-runner.component'; @NgModule({ imports: [BrowserModule], declarations: [TrainerAppComponent, WorkoutRunnerComponent], Now WorkoutRunnerComponent can be referenced in the root component so that it can be rendered. Modify src/components/app/app.component.ts as highlighted in the following code: @Component({ ... template: ` <div class="navbar ...> ... </div> <div class="container ...> <workout-runner></workout-runner> </div>` }) We have changed the root component template and added the workout-runner element to it. This will render the WorkoutRunnerComponent inside our root component. While the implementation may look complete, there is a crucial piece missing. Nowhere in the code do we actually start the workout. The workout should start as soon as we load the page. Component life cycle hooks are going to rescue us! Component life cycle hooks The life of an Angular component is eventful. Components get created, change state during their lifetime, and are finally destroyed. Angular provides some life cycle hooks/functions that the framework invokes (on the component) when such an event occurs.
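To demystify the json pipe a little: conceptually it boils down to a formatted JSON.stringify of the bound value. The helper below is our own stand-in, not Angular's implementation, but it produces output much like what {{currentExercise | json}} renders.

```typescript
// A rough stand-in for what a `| json`-style transform does.
function jsonPipe(value: unknown): string {
  return JSON.stringify(value, null, 2); // pretty-print with 2-space indent
}

// Shaped like the rest ExercisePlan bound in the template.
const currentExercise = {
  exercise: { name: "rest", title: "Relax!" },
  duration: 10,
};

const rendered = jsonPipe(currentExercise);
```

This is why dropping the pipe and rendering {{currentExercise}} shows something far less useful: without the formatting step, the object falls back to its default string representation.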
Consider these examples: When a component is initialized, Angular invokes ngOnInit When a component's input properties change, Angular invokes ngOnChanges When a component is destroyed, Angular invokes ngOnDestroy As developers, we can tap into these key moments and perform some custom logic inside the respective component. Angular has TypeScript interfaces for each of these hooks that can be applied to the component class to clearly communicate the intent. For example: class WorkoutRunnerComponent implements OnInit { ngOnInit (){ ... } ... The interface name can be derived by removing the prefix ng from the function names. The hook we are going to utilize here is ngOnInit. The ngOnInit function gets fired when the component's data-bound properties are initialized but before the view initialization starts. Add the ngOnInit function to the WorkoutRunnerComponent class with a call to start the workout: ngOnInit() { this.start(); } And implement the OnInit interface on WorkoutRunnerComponent; it defines the ngOnInit method: import {Component,OnInit} from '@angular/core'; … export class WorkoutRunnerComponent implements OnInit { There are a number of other life cycle hooks, including ngOnDestroy, ngOnChanges, and ngAfterViewInit, that components support; but we are not going to dwell on any of them here. Look at the developer guide (http://bit.ly/ng2-lifecycle) on Life cycle Hooks to learn more about other such hooks. Time to run our app! Open the command line, navigate to the trainer folder, and type this line: gulp play If there are no compilation errors and the browser automatically loads the app (http://localhost:9000/index.html), we should see the following output: The model data updates with every passing second! Now you'll understand why interpolations ({{ }}) are a great debugging tool. This will also be a good time to try rendering currentExercise without the json pipe (use {{currentExercise}}), and see what gets rendered. We are not done yet!
Wait long enough on the index.html page and you will realize that the timer stops after 30 seconds. The app does not load the next exercise data. Time to fix it! Update the code inside the setInterval if condition: if (this.exerciseRunningDuration >= this.currentExercise.duration) { clearInterval(intervalId); let next: ExercisePlan = this.getNextExercise(); if (next) { if (next !== this.restExercise) { this.currentExerciseIndex++; } this.startExercise(next); } else { console.log("Workout complete!"); } } The if condition if (this.exerciseRunningDuration >= this.currentExercise.duration) is used to transition to the next exercise once the time duration of the current exercise lapses. We use getNextExercise to get the next exercise and call startExercise again to repeat the process. If no exercise is returned by the getNextExercise call, the workout is considered complete. During exercise transitioning, we increment currentExerciseIndex only if the next exercise is not a rest exercise. Remember that the original workout plan does not have a rest exercise. For the sake of consistency, we have created a rest exercise and are now swapping between rest and the standard exercises that are part of the workout plan. Therefore, currentExerciseIndex does not change when the next exercise is rest. Let's quickly add the getNextExercise function too. Add the function to the WorkoutRunnerComponent class: getNextExercise(): ExercisePlan { let nextExercise: ExercisePlan = null; if (this.currentExercise === this.restExercise) { nextExercise = this.workoutPlan.exercises[this.currentExerciseIndex + 1]; } else if (this.currentExerciseIndex < this.workoutPlan.exercises.length - 1) { nextExercise = this.restExercise; } return nextExercise; } The WorkoutRunnerComponent.getNextExercise returns the next exercise that needs to be performed. 
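Because the rest/exercise alternation is pure branching logic, it can be exercised outside Angular. The sketch below re-implements the same decision rules against a plain array (a three-exercise workout of our own invention) and walks a full workout to show the resulting order.

```typescript
type Plan = { name: string };

const restPlan: Plan = { name: "rest" };
const exercises: Plan[] = [
  { name: "jumpingJacks" },
  { name: "wallSit" },
  { name: "pushUp" },
];

// Same branching as WorkoutRunnerComponent.getNextExercise: after
// rest comes the next real exercise; after a real exercise comes
// rest, unless we just finished the last one.
function getNextExercise(current: Plan, currentIndex: number): Plan | null {
  if (current === restPlan) {
    return exercises[currentIndex + 1] ?? null;
  } else if (currentIndex < exercises.length - 1) {
    return restPlan;
  }
  return null;
}

// Walk the whole workout, recording the execution order. The index
// only advances on real exercises, mirroring the component.
const order: string[] = [];
let current: Plan | null = exercises[0];
let index = 0;
while (current) {
  order.push(current.name);
  const next: Plan | null = getNextExercise(current, index);
  if (next && next !== restPlan) {
    index++;
  }
  current = next;
}
```

The recorded order ends up as jumpingJacks, rest, wallSit, rest, pushUp: rest is interleaved between exercises but never trails the final one.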
Note that the object returned by getNextExercise is an ExercisePlan object that internally contains the exercise details and the duration for which the exercise runs. The implementation is quite self-explanatory. If the current exercise is rest, take the next exercise from the workoutPlan.exercises array (based on currentExerciseIndex); otherwise, the next exercise is rest, given that we are not on the last exercise (the else if condition check). With this, we are ready to test our implementation. So go ahead and refresh index.html. Exercises should flip after every 10 or 30 seconds. Great! The current build setup automatically compiles any changes made to the script files when the files are saved; it also refreshes the browser after these changes. But just in case the UI does not update or things do not work as expected, refresh the browser window. If you are having a problem with running the code, look at the Git branch checkpoint2.1 for a working version of what we have done thus far. Or if you are not using Git, download the snapshot of checkpoint2.1 (a zip file) from http://bit.ly/ng2be-checkpoint2-1. Refer to the README.md file in the trainer folder when setting up the snapshot for the first time. We are done with the component. Summary We started this article with the aim of creating a complex Angular app. The 7 Minute Workout app fitted the bill, and you learned a lot about the Angular framework while building it. We started by defining the functional specifications of the 7 Minute Workout app. We then focused our efforts on defining the code structure for the app. To build the app, we started off by defining the model of the app. Once the model was in place, we started the actual implementation, by building an Angular component. Angular components are nothing but classes that are decorated with a framework-specific decorator, @Component. We now have a basic 7 Minute Workout app.
For a better user experience, we have added a number of small enhancements to it too, but we are still missing some good-to-have features that would make our app more usable. Resources for Article: Further resources on this subject: Angular.js in a Nutshell [article] Introduction to JavaScript [article] Create Your First React Element [article]
Setting Up the Environment
Packt
04 Jan 2017
14 min read
In this article by Sohail Salehi, the author of the book Angular 2 Services, we consider the two fundamental questions to ask when a new development tool is announced or launched: how different is the new tool from other competitor tools, and how much has it improved compared to its own previous versions? If we are going to invest our time in learning a new framework, common sense says we need to make sure we get a good return on our investment. There are so many good articles out there about the pros and cons of each framework. To me, choosing Angular 2 boils down to three aspects: The foundation: Angular is introduced and supported by Google and targeted at "evergreen" modern browsers. This means we, as developers, don't need to look out for hacky solutions in each browser upgrade anymore. The browser will always be updated to the latest version available, letting Angular worry about the new changes and leaving us out of it. This way we can focus more on our development tasks. The community: Think about the community as an asset. The bigger the community, the wider the range of solutions to a particular problem. Looking at the statistics, the Angular community is still way ahead of the others, and the good news is that this community is leaning towards being more involved and contributing more on all levels. The solution: If you look at the previous JS frameworks, you will see most of them focusing on solving a problem for a browser first, and then for mobile devices. The argument for that could be simple: JS wasn't meant to be a language for mobile development. But things have changed to a great extent over the recent years and people now use mobile devices more than before. I personally believe a complex native mobile application – one implemented in Java or C – is more performant than its equivalent implemented in JS. But the thing here is that not every mobile application needs to be complex.
So business owners have started asking questions like: why do I need a machine-gun to kill a fly? With that question in mind, Angular 2 chose a different approach. It solves the performance challenges faced by mobile devices first. In other words, if your Angular 2 application is fast enough on mobile environments, then it is lightning fast inside the "evergreen" browsers. So that is what we are going to do in this article. First we are going to learn about Angular 2 and the main problem it is going to solve. Then we talk a little bit about the JavaScript history and the differences between Angular 2 and AngularJS 1. Introducing "The Sherlock Project" is next, and finally we install the tools and libraries we need to implement this project. Introducing Angular 2 The previous JS frameworks we've used already have a fluid and easy workflow towards building web applications rapidly. But what we, as developers, struggle with is the technical debt. In simple words, we could quickly build a web application with an impressive UI. But as the product kept growing and the change requests started kicking in, we had to deal with all the maintenance nightmares, which force a long list of maintenance tasks and heavy costs on the businesses. Basically, the framework that used to be an amazing asset turned into a hairy liability (or technical debt if you like). One of the major "revamps" in Angular 2 is the removal of a lot of modules, resulting in a lighter and faster core. For example, if you are coming from an Angular 1.x background and don't see $scope or $log in the new version, don't panic; they are still available to you via other means. But there is no need to add overhead to the loading time if we are not going to use all the modules. So taking the modules out of the core results in a better performance. So to answer the question, one of the main issues Angular 2 addresses is performance.
This is done through a lot of structural changes. There is no backward compatibility We don't have backward compatibility. If you have some Angular projects implemented with the previous version (v1.x), depending on the complexity of the project, I wouldn't recommend migrating to the new version. Most likely, you will end up hammering a lot of changes into your migrated Angular 2 project, and at the end you will realize it would have been more cost-effective to just create a new project based on Angular 2 from scratch. Please keep in mind that the previous versions of AngularJS and Angular 2 share just a name; they have huge differences in their nature, and that is the price we pay for a better performance. Previous knowledge of AngularJS 1.x is not necessary You might be wondering if you need to know AngularJS 1.x before diving into Angular 2. Absolutely not. To be more specific, it might be even better if you didn't have any experience using Angular at all. This is because your mind wouldn't be preoccupied with obsolete rules and syntaxes. For example, we will see a lot of annotations in Angular 2, which might make you feel uncomfortable if you come from an Angular 1 background. Also, there are different types of dependency injections and child injectors, which are totally new concepts that have been introduced in Angular 2. Moreover, there are new features for templating and data-binding which help to improve loading time by asynchronous processing. The relationship between ECMAScript, AtScript and TypeScript The current edition of ECMAScript (ES5) is the one which is widely accepted among all well-known browsers. You can think of it as the traditional JavaScript. Whatever code is written in ES5 can be executed directly in the browsers. The problem is that most modern JavaScript frameworks contain features which require more than the traditional JavaScript capabilities. That is why ES6 was introduced.
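To make the transpiling idea concrete, here is a small piece of ES6 next to (roughly) what a transpiler emits for ES5-only browsers. The ES5 shown in the comment is illustrative; exact output varies between TypeScript, Babel, and their settings.

```typescript
// ES6/TypeScript source: an arrow function plus a template literal.
const greet = (name: string): string => `Hello, ${name}!`;

// A transpiler rewrites this along these lines for ES5 browsers:
//   var greet = function (name) { return "Hello, " + name + "!"; };

const message = greet("Angular 2");
```

Both forms behave identically at runtime; the transpiled version simply avoids syntax that ES5-only engines cannot parse.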
With this edition – and any future ECMAScript editions – we will be able to empower JavaScript with the features we need. Now, the challenge is running the new code in the current browsers. Most browsers nowadays recognize only standard JavaScript code. So we need a mediator to transform ES6 to ES5. That mediator is called a transpiler, and the technical term for the transformation is transpiling. There are many good transpilers out there and you are free to choose whatever you feel comfortable with. Apart from TypeScript, you might want to consider Babel (babeljs.io) as your main transpiler. Google originally planned to use AtScript to implement Angular 2, but later they joined forces with Microsoft and introduced TypeScript as the official transpiler for Angular 2. The following figure summarizes the relationship between various editions of ECMAScript, AtScript and TypeScript. For more details about JavaScript, ECMAScript and how they evolved during the past decade, visit the following link: https://en.wikipedia.org/wiki/ECMAScript Setting up tools and getting started! It is important to get the foundation right before installing anything else. Depending on your operating system, install Node.js and its package manager, npm. You can find a detailed installation manual on the Node.js official website: https://nodejs.org/en/ Make sure both Node.js and npm are installed globally (they are accessible system-wide) and have the right permissions. At the time of writing, npm comes with Node.js out of the box. But in case their policy changes in the future, you can always download npm and follow the installation process from the following link: https://npmjs.com The next stop would be the IDE. Feel free to choose anything that you are comfortable with. Even a simple text editor will do. I am going to use WebStorm because of its embedded TypeScript syntax support and Angular 2 features, which speed up the development process.
Moreover, it is lightweight enough to handle the project we are about to develop. You can download it from here: https://jetbrains.com/webstorm/download We are going to use simple objects and arrays as a placeholder for the data. But at some stage we will need to persist the data in our application. That means we need a database. We will use Google's Firebase Realtime Database for our needs. It is fast, it doesn't require downloading or setting up anything locally, and moreover it responds instantly to your requests and aligns perfectly with Angular's two-way data-binding. For now just leave the database as it is. You don't need to create any connections or database objects. Setting up the seed project The final requirement to get started would be an Angular 2 seed project to shape the initial structure of our project. If you look at the public source code repositories, you can find several versions of these seeds. But I prefer the official one, for two reasons: Custom-made seeds usually come with a personal twist depending on the developer's taste. Although sometimes that might be a great help, since we are going to build everything from scratch and learn the fundamental concepts, they are not favorable to our project. The official seeds are usually minimal. They are very slim and don't contain an overwhelming amount of third-party packages and environmental configurations. Speaking about packages, you might be wondering what happened to the other JavaScript packages we needed for this application. We didn't install anything else other than Node and npm. The next section will answer this question. Setting up an Angular 2 project in WebStorm Assuming you have installed WebStorm, fire up the IDE and check out a new project from a git repository. Now set the repository URL to: https://github.com/angular/angular2-seed.git and save the project in a folder called "the sherlock project" as shown in the figure below: Hit the Clone button and open the project in WebStorm.
Next, click on the package.json file and observe the dependencies. As you see, this is a very lean seed with minimal configurations. It contains the Angular 2 plus required modules to get the project up and running. The first thing we need to do is install all required dependencies defined inside the package.json file. Right click on the file and select the “run npm install” option. Installing the packages for the first time will take a while. In the mean time, explore the devDependencies section of package.json in the editor. As you see, we have all the required bells and whistles to run the project including TypeScript, web server and so on to start the development: "devDependencies": { "@types/core-js": "^0.9.32", "@types/node": "^6.0.38", "angular2-template-loader": "^0.4.0", "awesome-typescript-loader": "^1.1.1", "css-loader": "^0.23.1", "raw-loader": "^0.5.1", "to-string-loader": "^1.1.4", "typescript": "^2.0.2", "webpack": "^1.12.9", "webpack-dev-server": "^1.14.0", "webpack-merge": "^0.8.4" }, We also have some nifty scripts defined inside package.json that automate useful processes. For example, to start a webserver and see the seed in action we can simply execute following command in a terminal: $ npm run server or we can right click on package.json file and select “Show npm Scripts”. This will open another side tab in WebStorm and shows all available scripts inside the current file. Basically all npm related commands (which you can run from command-line) are located inside the package.json file under the scripts key. That means if you have a special need, you can create your own script and add it there. You can also modify the current ones. Double click on start script and it will run the web server and loads the seed application on port 3000. 
That means if you visit http://localhost:3000 you will see the seed application in your browser: If you are wondering where the port number comes from, look into the package.json file and examine the server key under the scripts section:    "server": "webpack-dev-server --inline --colors --progress --display-error-details --display-cached --port 3000 --content-base src", There is one more thing before we move on to the next topic. If you open any .ts file, WebStorm will ask you if you want it to transpile the code to JavaScript. If you say No once, it will never show up again. We don't need WebStorm to transpile for us because the start script already contains a transpiler which takes care of all transformations for us. Front-end developers versus back-end developers Recently, I had an interesting conversation with a couple of colleagues of mine which is worth sharing here in this article. One of them is an avid front-end developer and the other is a seasoned back-end developer. You guessed what I'm going to talk about: the debate between back-end/front-end developers and who is the better half. We have seen these kinds of debates between back-end and front-end people in development communities long enough. But the interesting thing which – in my opinion – will show up more often in the next few months (years) is a fading border between the ends (front-end/back-end). It feels like the reason that some traditional front-end developers are holding up their guard against new changes in Angular 2 is not just because the syntax has changed, thus causing a steep learning curve, but mostly because they now have to deal with concepts which have existed natively in back-end development for many years. Hence, the reason that back-end developers are becoming more open to the changes introduced in Angular 2 is mostly because these changes seem natural to them.
Annotations or child dependency injections, for example, are not a big deal to back-enders, as much as they bother the front-enders. I won't be surprised to see a noticeable shift in both camps in the years to come. Probably we will see more back-enders who are willing to use Angular as a good candidate for some – if not all – of their back-end projects, and probably we will see more front-enders taking Object-Oriented concepts and best practices more seriously. Given that JavaScript was originally a functional scripting language, they probably will try to catch up with the other camp as fast as they can. There is no comparison here and I am not saying which camp has an advantage over the other one. My point is, before modern front-end frameworks, JavaScript was open to being used in quick and dirty inline scripts to solve problems quickly. While this is a very convenient approach, it causes serious problems when you want to scale a web application. Imagine the time and effort you might have to spend finding all of those dependent pieces of code and re-factoring them to reflect the required changes. When it comes to scalability, we need a full separation between layers, and that requires developers to move away from traditional JavaScript and embrace more OOP best practices in their day-to-day development tasks. That is what has been practiced in all modern front-end frameworks, and Angular 2 takes it to the next level by completely restructuring the model-view-* concept and opening doors to future features which eventually will be a native part of any web browser. Introducing "The Sherlock Project" During the course of this journey we are going to dive into all new Angular 2 concepts by implementing a semi-AI project called "The Sherlock Project". This project will basically be about collecting facts, evaluating and ranking them, and making a decision about how truthful they are.
To achieve this goal, we will implement a couple of services and inject them into the project wherever they are needed. We will discuss one aspect of the project and will focus on one related service. At the end, all services will come together to act as one big project. Summary This article covered a brief introduction to Angular 2. We saw where Angular 2 comes from, what problems it is going to solve and how we can benefit from it. We talked about the project that we will be creating with Angular 2. We also saw which other tools are needed for our project and how to install and configure them. Finally, we cloned an Angular 2 seed project, installed all its dependencies and started a web server to examine the application output in a browser. Resources for Article: Further resources on this subject: Introduction to JavaScript [article] Third Party Libraries [article] API with MongoDB and Node.js [article]

Getting Started with React
Packt
24 Feb 2016
7 min read
In this article by Vipul Amler and Prathamesh Sonpatki, authors of the book ReactJS by Example - Building Modern Web Applications with React, we will learn how web development has seen a huge advent of Single Page Applications (SPAs) in the past couple of years. Early development was simple—reload a complete page to perform a change in the display or perform a user action. The problem with this was a huge round-trip time for the complete request to reach the web server and back to the client. Then came AJAX, which sent a request to the server and could update parts of the page without reloading the current page. Moving in the same direction, we saw the emergence of SPAs: wrapping up the heavy frontend content and delivering it to the client browser just once, while maintaining a small channel for communication with the server based on any event; this is usually complemented by a thin API on the web server. The growth in such apps has been complemented by JavaScript libraries and frameworks such as Ext JS, KnockoutJS, BackboneJS, AngularJS, EmberJS, and more recently, React and Polymer. Let's take a look at how React fits in this ecosystem and get introduced to it in this article. What is React? ReactJS tries to solve the problem from the View layer. It can very well be defined and used as the V in any of the MVC frameworks. It's not opinionated about how it should be used. It creates abstract representations of views. It breaks down parts of the view into Components. These components encompass both the logic to handle the display of the view and the view itself. They can contain data that is used to render the state of the app. To avoid the complexity of interactions and the subsequent render processing required, React does a full render of the application. It maintains a simple flow of work. React is founded on the idea that DOM manipulation is an expensive operation and should be minimized.
It also recognizes that optimizing DOM manipulation by hand will result in a lot of boilerplate code, which is error-prone, boring, and repetitive. React solves this by giving the developer a virtual DOM to render to instead of the actual DOM. It finds the difference between the real DOM and the virtual DOM and conducts the minimum number of DOM operations required to achieve the new state. React is also declarative. When the data changes, React conceptually hits the refresh button and knows to only update the changed parts. This simple flow of data, coupled with dead simple display logic, makes development with ReactJS straightforward and simple to understand. Who uses React? If you've used any of the services such as Facebook, Instagram, Netflix, Alibaba, Yahoo, E-Bay, Khan-Academy, AirBnB, Sony, and Atlassian, you've already come across and used React on the Web. In just under a year, React has seen adoption from major Internet companies in their core products. In its first-ever conference, React also announced the development of React Native. React Native allows the development of mobile applications using React. It transpiles React code to native application code, such as Objective-C for iOS applications. At the time of writing this, Facebook already uses React Native in its Groups iOS app. In this article, we will be following a conversation between two developers, Mike and Shawn. Mike is a senior developer at Adequate Consulting and Shawn has just joined the company. Mike will be mentoring Shawn and conducting pair programming with him. When Shawn meets Mike and ReactJS It's a bright day at Adequate Consulting. It's also Shawn's first day at the company. Shawn had joined Adequate to work on its amazing products and also because it uses and develops exciting new technologies. After onboarding at the company, Shelly, the CTO, introduced Shawn to Mike. Mike, a senior developer at Adequate, is a jolly man who loves exploring new things.
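Before we follow Mike and Shawn, the virtual DOM idea from the paragraphs above is worth a toy sketch. The diff below compares two flat lists of "virtual nodes" and emits only the operations a renderer would need; this is a deliberately naive illustration of the concept, not React's actual reconciliation algorithm.

```typescript
// Toy virtual nodes: plain strings standing in for rendered content.
type VNode = string;

// Diff two flat virtual trees and return the minimal list of
// operations needed to turn the old tree into the new one.
function diff(oldTree: VNode[], newTree: VNode[]): string[] {
  const ops: string[] = [];
  const len = Math.max(oldTree.length, newTree.length);
  for (let i = 0; i < len; i++) {
    if (oldTree[i] === newTree[i]) continue; // unchanged: touch nothing
    if (oldTree[i] === undefined) ops.push(`insert "${newTree[i]}" at ${i}`);
    else if (newTree[i] === undefined) ops.push(`remove at ${i}`);
    else ops.push(`replace at ${i} with "${newTree[i]}"`);
  }
  return ops;
}

const ops = diff(
  ["h1:Recent Changes", "p:Loading"],
  ["h1:Recent Changes", "p:42 activities"]
);
// Only the changed node produces an operation; the heading is untouched.
```

Even though the application conceptually re-renders in full, only one real DOM operation is needed here: that is the saving React's diffing buys.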
"So Shawn, here's Mike", said Shelly. "He'll be mentoring you as well as pairing with you on development. We follow pair programming, so expect a lot of it with him. He's an excellent help." With that, Shelly took leave. "Hey Shawn!" Mike began, "are you all set to begin?" "Yeah, all set! So what are we working on?" "Well, we are about to start working on an app using https://openlibrary.org/. Open Library is a collection of the world's classic literature. It's an open, editable library catalog for all the books. It's an initiative under https://archive.org/ and lists free book titles. We need to build an app to display the most recent changes in the records on Open Library. You can call this the Activities page. Many people contribute to Open Library. We want to display the changes made by these users to the books, the addition of new books, edits, and so on, as shown in the following screenshot: "Oh nice! What are we using to build it?" "Open Library provides us with a neat REST API that we can consume to fetch the data. We are just going to build a simple page that displays the fetched data and formats it for display. I've been experimenting with and using ReactJS for this. Have you used it before?" "Nope. However, I have heard about it. Isn't it the one from Facebook and Instagram?" "That's right. It's an amazing way to define our UI. As the app isn't going to have much logic on the server or perform any complex display, it is an easy option to use." "As you've not used it before, let me give you a quick introduction." "Have you tried services such as JSBin and JSFiddle before?" "No, but I have seen them." "Cool. We'll be using one of these, therefore, we don't need anything set up on our machines to start with." "Let's try on your machine", Mike instructed. "Fire up http://jsbin.com/?html,output" "You should see something similar to the tabs and panes to code on, and their output in the adjacent pane."
"Go ahead and make sure that the HTML, JavaScript, and Output tabs are selected so that we can see all three panes, edit HTML and JS, and see the corresponding output."

"That's nice."

"Yeah, the good thing about this is that you don't need to perform any setup. Did you notice the Auto-run JS option? Make sure it's selected. This option causes JSBin to reload our code and show its output so that we don't need to keep clicking Run with JS to execute it and see its output."

"OK."

Requiring React library

"All right then! Let's begin. Go ahead and change the title of the page to, say, React JS Example. Next, we need to set up and require the React library in our file."

"React's homepage is located at http://facebook.github.io/react/. Here, we'll also find the downloads available to us so that we can include them in our project. There are different ways to include and use the library. We can make use of Bower or install via npm. We can also just include it as an individual download, directly available from the fb.me domain. There is a development version, which is the full, unminified version of the library, and a production version, which is its minified counterpart. There is also an add-ons version. We'll take a look at this later, though."

"Let's start by using the development version, which is the unminified version of the React source. Add the following to the file header:"

<script src="http://fb.me/react-0.13.0.js"></script>

"Done."

"Awesome, let's see how this looks."

<!DOCTYPE html>
<html>
<head>
  <script src="http://fb.me/react-0.13.0.js"></script>
  <meta charset="utf-8">
  <title>React JS Example</title>
</head>
<body>
</body>
</html>

Summary

In this article, we started with React and built our first component. In the process, we studied React's top-level API for constructing components and elements.
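For reference, the first component that the summary refers to is abridged from this excerpt; with the script tag above in place, a first render with React 0.13's top-level API would look roughly like the following (the heading text is just an illustration):

```html
<body>
  <div id="app"></div>
  <script>
    // React 0.13 exposes createElement and render as top-level functions
    // (ReactDOM was only split out later, in 0.14).
    var element = React.createElement('h1', null, 'Hello React');
    React.render(element, document.getElementById('app'));
  </script>
</body>
```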
Resources for Article: Further resources on this subject: Create Your First React Element [article] An Introduction to ReactJs [article] An Introduction to Reactive Programming [article]
Andrea Falzetti
20 Dec 2016
6 min read

How to build a JavaScript Microservices platform

Microservices is one of the most popular topics in software development these days, as are JavaScript and chat bots. In this post, I share my experience in designing and implementing a platform using a microservices architecture. I will also talk about which tools were picked for this platform and how they work together. Let's start by giving you some context about the project.

The challenge

The startup I am working with had the idea of building a platform to help people with depression by tracking and managing the habits that keep them healthy, positive, and productive. The final product is an iOS app with a conversational UI, similar to a chat bot, and probably not very intelligent in version 1.0, but still with some feelings!

Technology stack

For this project, we decided to use Node.js for building the microservices, React Native for the mobile app, ReactJS for the Admin Dashboard, and Elasticsearch and Kibana for logging and monitoring the applications. And yes, we do like JavaScript!

Node.js Microservices Toolbox

There are many definitions of a microservice, but I am assuming we agree on a common statement that describes a microservice as an independent component that performs certain actions within your systems, or in a nutshell, a part of your software that solves a specific problem that you have. I got interested in microservices this year, especially when I found out there was a Node.js toolkit called Seneca that helps you organize and scale your code. Unfortunately, my enthusiasm didn't last long, as I faced a first issue: the learning curve to approach Seneca was too high for this project. However, even though I ended up not using it, I wanted to include it here because many people are successfully using it, and I think you should be aware of it and at least consider looking at it. Instead, we decided to go for a simpler way.
We split our project into small Node applications and, using pm2, we deploy our microservices with a pm2 configuration file called ecosystem.json. As explained in the documentation, this is a good way of keeping your deployment simple, organized, and monitored. If you like control dashboards, graphs, and colored progress bars, you should look at pm2 Keymetrics; it offers a nice overview of all your processes.

We also found it extremely useful to create a GitHub machine user, which essentially is a normal GitHub account, with its own attached SSH key, which grants access to the repositories that contain the project's code. Additionally, we created a node user on our virtual machine with that SSH key loaded. All of the microservices run under this node user, which has access to the code base through the machine user's SSH key. In this way, we can easily pull the latest code and deploy new versions. We finally attached our own SSH keys to the node user so each developer can log in as the node user via ssh:

ssh node@<IP>
cd ./project/frontend-api
git pull

without being prompted for a password or authorization token from GitHub, and then, using pm2, restart the microservice:

pm2 restart frontend-api

Another very important element of our microservices architecture is the API Gateway. After evaluating different options, including AWS API Gateway, HAProxy, Varnish, and Tyk, we decided to use Kong, an open source API layer that offers modularity via plugins (everybody loves plugins!) and scalability features. We picked Kong because it supports WebSockets as well as HTTP; unfortunately, not all of the alternatives did.
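For illustration, a minimal ecosystem.json along the lines described above might look like this (the app names and paths are hypothetical, not the project's real ones):

```json
{
  "apps": [
    {
      "name": "frontend-api",
      "script": "./frontend-api/index.js",
      "env": { "NODE_ENV": "production", "PORT": "3001" }
    },
    {
      "name": "chat-bot",
      "script": "./chat-bot/index.js",
      "env": { "NODE_ENV": "production", "PORT": "3002" }
    }
  ]
}
```

Running pm2 start ecosystem.json then brings up every app listed in the file, while pm2 restart frontend-api restarts a single one.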
Using Kong in combination with nginx, we mapped our microservices under a single domain name, http://api.example.com, and exposed each microservice under a specific path:

api.example.com/chat-bot
api.example.com/admin-backend
api.example.com/frontend-api
…

This allows us to run the microservices on separate servers on different ports while having one single gate for the clients consuming our APIs. Finally, the API Gateway is responsible for allowing only authenticated requests to pass through, so this is a great way of protecting the microservices, because the gateway is the only public component and all of the microservices run in a private network.

What does a microservice look like, and how do they talk to each other?

We started by creating a microservice-boilerplate package that includes Express to expose some APIs, Passport to allow only authorized clients to use them, winston for logging their activities, and mocha and chai for testing the microservice. We then created an index.js file that initializes the Express app with a default route, /api/ping. This returns a simple JSON containing the message 'pong', which we use to know whether the service is down. One alternative is to get the status of the process using pm2:

pm2 list
pm2 status <microservice-pid>

Whenever we want to create a new microservice, we start from this boilerplate code. It saves a lot of time, especially if the microservices have a similar shape. The main communication channel between microservices is HTTP via API calls. We are also using WebSockets to allow faster communication between some parts of the platform. We decided to use socket.io, a very simple and efficient way of implementing WebSockets. I recommend creating a Node package that contains the business logic, including the objects, models, prototypes, and common functions, such as read and write methods for the database.
Using this approach allows you to include the package in each microservice, with the benefit of having just one place to update if something needs to change.

Conclusions

In this post, I covered the tools used for building a microservices architecture in Node.js. I hope you have found this useful.

About the author

Andrea Falzetti is an enthusiastic Full Stack Developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.
Packt
09 Feb 2016
22 min read

Testing in Node and Hapi

In this article by John Brett, the author of the book Getting Started with Hapi.js, we are going to explore the topic of testing in node and hapi. We will look at what is involved in writing a simple test using hapi's test runner, lab; how to test hapi applications; techniques to make testing easier; and finally, how to achieve the all-important 100% code coverage. (For more resources related to this topic, see here.)

The benefits and importance of testing code

Technical debt is developmental work that must be done before a particular job is complete, or else it will make future changes much harder to implement later on. A codebase without tests is a clear indication of technical debt. Let's explore this statement in more detail.

Even very simple applications will generally comprise:

- Features, which the end user interacts with
- Shared services, such as authentication and authorization, that features interact with

These will all generally depend on some direct persistent storage or API. Finally, to implement most of these features and services, we will use libraries, frameworks, and modules, regardless of language. So, even for simpler applications, we have already arrived at a few dependencies to manage, where a change that causes a break in one place could possibly break everything up the chain.

So let's take a common use case, in which a new version of one of your dependencies is released. This could be a new hapi version, a smaller library, your persistent storage engine, MySQL, MongoDB, or even an operating system or language version. SemVer, as mentioned previously, attempts to mitigate this somewhat, but you are taking someone at their word when they say that they have adhered to this correctly, and SemVer is not used everywhere. So, in the case of a break-causing change, will the current application work with this new dependency version? What will fail? What percentage of tests fail? What's the risk if we don't upgrade?
Will support eventually be dropped, including security patches? Without a good automated test suite, these have to be answered by manual testing, which is a huge waste of developer time. Development progress stops here every time these tasks have to be done, meaning that these types of tasks are rarely done, building further technical debt. Apart from this, humans are proven to be poor at repetitive tasks, prone to error, and I know I personally don't enjoy testing manually, which makes me poor at it. I view repetitive manual testing like this as time wasted, as these questions could easily be answered by running a test suite against the new dependency so that developer time could be spent on something more productive. Now, let's look at a worse and even more common example: a security exploit has been identified in one of your dependencies. As mentioned previously, if it's not easy to update, you won't do it often, so you could be on an outdated version that won't receive this security update. Now you have to jump multiple versions at once and scramble to test them manually. This usually means many quick fixes, which often just cause more bugs. In my experience, code changes under pressure are what deteriorate the structure and readability in a codebase, lead to a much higher number of bugs, and are a clear sign of poor planning. A good development team will, instead of looking at what is currently available, look ahead to what is in beta and will know ahead of time if they expect to run into issues. The questions asked will be: Will our application break in the next version of Chrome? What about the next version of node? Hapi does this by running the full test suite against future versions of node in order to alert the node community of how planned changes will impact hapi and the node community as a whole. This is what we should all aim to do as developers. 
A good test suite has even bigger advantages when working in a team or when adding new developers to a team. Most development teams start out small and grow, meaning all the knowledge of the initial development needs to be passed on to new developers joining the team. So, how do tests lead to a benefit here? For one, tests are great documentation of how parts of the application work for other members of a team. When trying to communicate a problem in an application, a failing test is a perfect illustration of what and where the problem is. When working as a team, for every code change from yourself or another member of the team, you're faced with the preceding problem of changing a dependency. Do we just test the code that was changed? What about the code that depends on the changed code? Is it going to be manual testing again? If this is the case, how much time in a week would be spent on manual testing versus development? Often, with changes, existing functionality can be broken along with new functionality; this is called a regression. Having a good test suite highlights this and makes it much easier to prevent.

These are the questions and topics that need to be answered when discussing the importance of tests. Writing tests can also improve code quality. For one, identifying dead code is much easier when you have a good testing suite. If you find that you can only get 90% code coverage, what does the extra 10% do? Is it used at all if it's unreachable? Does it break other parts of the application if removed? Writing tests will often improve your skills in writing easily testable code. Software applications usually grow to be complex pretty quickly—it happens, but we always need to be active in dealing with this, or software complexity will win. A good test suite is one of the best tools we have to tackle this.
The preceding is not an exhaustive list of the importance or benefits of writing tests for your code, but hopefully it has convinced you of the importance of having a good testing suite. So, now that we know why we need to write good tests, let's look at hapi's test runner, lab, and its assertion library, code, and how, along with some tools from hapi, they make the process of writing tests much easier and a more enjoyable experience.

Introducing hapi's testing utilities

The test runner in the hapi ecosystem is called lab. If you're not familiar with test runners, they are command-line interface tools for you to run your testing suite. Lab was inspired by a similar test tool called mocha, if you are familiar with it, and in fact was initially begun as a fork of the mocha codebase. But, as hapi's needs diverged from the original focus of mocha, lab was born.

The assertion library commonly used in the hapi ecosystem is code. An assertion library forms the part of a test that performs the actual checks to judge whether a test case has passed or not, for example, checking that the value of a variable is true after an action has been taken. Let's look at our first test script; then, we can take a deeper look at lab and code, how they function under the hood, and some of the differences they have with other commonly used libraries, such as mocha and chai.

Installing lab and code

You can install lab and code the same as any other module on npm:

npm install lab code --save-dev

Note the --save-dev flag added to the install command here. Remember your package.json file, which describes an npm module? This flag adds the modules to the devDependencies section of your npm module. These are dependencies that are required for the development and testing of a module but are not required for using the module. The reason why these are separated is that when we run npm install in an application codebase, it only installs the dependencies and devDependencies of package.json in that directory.
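For illustration, after the preceding install, package.json would gain entries along these lines (the version ranges shown here are hypothetical):

```json
"devDependencies": {
  "code": "^1.4.0",
  "lab": "^5.17.0"
}
```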
For all the modules installed, only their dependencies are installed, not their development dependencies. This is because we only want to download the dependencies required to run that application; we don't need to download all the development dependencies for every module. The npm install command installs all the dependencies and devDependencies of package.json in the current working directory, and only the dependencies of the other installed modules, not their devDependencies. To install the development dependencies of a particular module, navigate to the root directory of the module and run npm install.

After you have installed lab, you can then run it with the following:

./node_modules/lab/bin/lab test.js

This is quite long to type every time, but fortunately, due to a handy feature of npm called npm scripts, we can shorten it. If you look at the package.json generated by npm init in the first chapter, depending on your version of npm, you may see the following (some code removed for brevity):

...
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1"
},
...

Scripts are a list of commands related to the project; they can be for testing purposes, as we will see in this example; to start an application; for build steps; and to start extra servers, among many other options. They offer huge flexibility in how these are combined to manage scripts related to a module or application, and I could spend a chapter, or even a book, on just these, but they are outside the scope of this book, so let's just focus on what is important to us here.

To get a list of available scripts for a module application, in the module directory, simply run:

$ npm run

To then run a listed script, such as test, you can just use:

$ npm run test

As you can see, this gives a very clean API for scripts and documentation for each of them in the project's package.json. From this point on in this book, all code snippets will use npm scripts to test or run any examples.
We should strive to use these in our projects to simplify and document commands related to applications and modules, for ourselves and others. Let's now add the ability to run a test file to our package.json file. This just requires modifying the scripts section to be the following:

...
"scripts": {
  "test": "./node_modules/lab/bin/lab ./test/index.js"
},
...

It is common practice in node to place all tests in a project within the test directory. A handy addition to note here is that when calling a command with npm run, the bin directory of every module in your node_modules directory is added to PATH when running these scripts, so we can actually shorten this script to:

...
"scripts": {
  "test": "lab ./test/index.js"
},
...

This type of module install is considered to be local, as the dependency is local to the application directory it is being run in. While I believe this is how we should all install our modules, it is worth pointing out that it is also possible to install a module globally. This means that when installing something like lab, it is immediately added to PATH and can be run from anywhere. We do this by adding a -g flag to the install, as follows:

$ npm install lab code -g

This may appear handier than having to add npm scripts or run commands locally outside of an npm script, but it should be avoided where possible. Often, installing globally requires sudo to run, meaning you are taking a script from the Internet and allowing it to have complete access to your system. Hopefully, the security concerns here are obvious. Other than that, different projects may use different versions of test runners, assertion libraries, or build tools, which can have unknown side effects and cause debugging headaches.
The only time I would use globally installed modules is for command-line tools that I may use outside a particular project—for example, a node-based terminal IDE such as slap (https://www.npmjs.com/package/slap) or a process manager such as PM2 (https://www.npmjs.com/package/pm2)—but never with sudo! Now that we are familiar with installing lab and code and the different ways of running them inside and outside of npm scripts, let's look at writing our first test script and take a more in-depth look at lab and code.

Our First Test Script

Let's take a look at what a simple test script in lab looks like using the code assertion library:

const Code = require('code');                       [1]
const Lab = require('lab');                         [1]
const lab = exports.lab = Lab.script();             [2]

lab.experiment('Testing example', () => {           [3]

    lab.test('fails here', (done) => {              [4]

        Code.expect(false).to.be.true();            [4]
        return done();                              [4]
    });                                             [4]

    lab.test('passes here', (done) => {             [4]

        Code.expect(true).to.be.true();             [4]
        return done();                              [4]
    });                                             [4]
});

This script, even though small, includes a number of new concepts, so let's go through it with reference to the numbers in the preceding code:

[1]: Here, we just include the code and lab modules, as we would any other node module.

[2]: As mentioned before, it is common convention to place all test files within the test directory of a project. However, there may be JavaScript files in there that aren't tests and therefore should not be tested. To avoid this, we inform lab of which files are test scripts by calling Lab.script() and assigning the value to lab and exports.lab.

[3]: The lab.experiment() function (aliased lab.describe()) is just a way to group tests neatly. In test output, tests will have the experiment string prefixed to the message, for example, "Testing example fails here". This is optional, however.

[4]: These are the actual test cases. Here, we define the name of the test and pass a callback function with the parameter function done().
We see code in action here for managing our assertions. And finally, we call the done() function when finished with our test case. Things to note here: lab tests are always asynchronous. In every test, we have to call done() to finish the test; there is no counting of function parameters or checking whether synchronous functions have completed in order to ensure that a test is finished. Although this requires the boilerplate of calling the done() function at the end of every test, it means that all tests, synchronous or asynchronous, have a consistent structure.

In Chai, which was originally used for hapi, some of the assertions, such as .ok, .true, and .false, use properties instead of functions for assertions, while assertions like .equal() and .above() use functions. This type of inconsistency leads to us easily forgetting that an assertion should be a method call and hence omitting the (). This means that the assertion is never called and the test may pass as a false positive. Code's API is more consistent in that every assertion is a function call. Here is a comparison of the two:

Chai:
expect('hello').to.equal('hello');
expect(foo).to.exist;

Code:
expect('hello').to.equal('hello');
expect(foo).to.exist();

Notice the difference in the second exist() assertion. In Chai, you see the property form of the assertion, while in Code, you see the required function call. Through this, lab can make sure all assertions within a test case are called before done is complete, or it will fail the test.

So let's try running our first test script. As we have already updated our package.json script, we can run our test with the following command:

$ npm run test

This will generate output showing the test results. There are a couple of things to note from this. Tests run are symbolized with a . or an X, depending on whether they pass or not. You can get a list of the full test titles by adding the -v or --verbose flag to our npm test script command.
There are lots of flags to customize the running and output of lab, so I recommend using the full labels for each of these, for example, --verbose and --lint instead of -v and -l, in order to save you the time spent referring back to the documentation each time. You may have noticed the No global variable leaks detected message at the bottom. Lab assumes that the global object won't be polluted and checks that no extra properties have been added after running tests. Lab can be configured to not run this check or to whitelist certain globals. Details of this are in the lab documentation available at https://github.com/hapijs/lab.

Testing approaches

The structure used above is just one of many known approaches to building a test suite; BDD (Behavior-Driven Development) is another, and, like most test runners in node, lab is unopinionated about how you structure your tests. Details of how to structure your tests in a BDD style can again be found easily in the lab documentation.

Testing with hapi

As I mentioned before, testing is considered paramount in the hapi ecosystem, with every module in the ecosystem having to maintain 100% code coverage at all times, as with all module dependencies. Fortunately, hapi provides us with some tools to make the testing of hapi apps much easier through a module called Shot, which simulates network requests to a hapi server.
Let's take the example of a Hello World server and write a simple test for it:

const Code = require('code');
const Lab = require('lab');
const Hapi = require('hapi');
const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

    const server = new Hapi.Server();
    server.connection();

    server.route({
        method: 'GET',
        path: '/',
        handler: function (request, reply) {

            return reply('Hello World\n');
        }
    });

    server.inject('/', (res) => {

        Code.expect(res.statusCode).to.equal(200);
        Code.expect(res.result).to.equal('Hello World\n');
        done();
    });
});

Now that we are more familiar with what a test script looks like, most of this will look familiar. However, you may have noticed we never started our hapi server. This means the server was never started and no port was assigned, but thanks to the shot module (https://github.com/hapijs/shot), we can still make requests against it using the server.inject API. Not having to start a server means less setup and teardown before and after tests, and it means that a test suite can run quicker, as fewer resources are required. server.inject can still be used with the same API whether the server has been started or not.

Code coverage

As I mentioned earlier in the article, having 100% code coverage is paramount in the hapi ecosystem and, in my opinion, hugely important for any application to have. Without a code coverage target, writing tests can feel like an empty or unrewarding task where we don't know how many tests are enough or how much of our application or module has been covered. With any task, we should know what our goal is; testing is no different, and this is what code coverage gives us. Even with 100% coverage, things can still go wrong, but it means that at the very least, every line of code has been considered and has at least one test covering it.
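As a concrete sketch, an npm test script that enforces coverage could combine the lab flags discussed in this article like so (the path matches our earlier script; the flag names are from the lab CLI):

```json
"scripts": {
  "test": "lab ./test/index.js --coverage --threshold 100"
}
```

With this in place, npm run test fails not only on failing assertions but also whenever coverage drops below 100%.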
I've found from working on modules for hapi that trying to achieve 100% code coverage actually gamifies the process of writing tests, making it a more enjoyable experience overall. Fortunately, lab has code coverage integrated, so we don't need to rely on an extra module to achieve this. It's as simple as adding the --coverage or -c flag to our test script command. Under the hood, lab will then build an abstract syntax tree so it can evaluate which lines are executed, thus producing our coverage, which will be added to the console output when we run tests. The code coverage tool will also highlight which lines are not covered by tests, which is extremely useful in identifying where to focus your testing effort.

It is also possible to enforce a minimum threshold for the percentage of code coverage required to pass a suite of tests with lab, through the --threshold or -t flag followed by an integer. This is used for all the modules in the hapi ecosystem, and all thresholds are set to 100. Having a threshold of 100% for code coverage makes it much easier to manage changes to a codebase. When any update or pull request is submitted, the test suite is run against the changes, so we can know that all tests have passed and all code is covered before we even look at what has been changed in the proposed submission. There are services that even automate this process for us, such as TravisCI (https://travis-ci.org/). It's also worth knowing that the coverage report can be displayed in a number of formats. For a full list of these reporters with explanations, I suggest reading the lab documentation available at https://github.com/hapijs/lab.

Let's now look at what's involved in getting 100% coverage for our previous example. First of all, we'll move our server code to a separate file, which we will place in the lib folder and call index.js.
It's worth noting here that it's good testing practice, and also the typical module structure in the hapi ecosystem, to place all module code in a folder called lib and the associated tests for each file within lib in a folder called test, preferably with a one-to-one mapping like we have done here, where all the tests for lib/index.js are in test/index.js. When trying to find out how a feature within a module works, the one-to-one mapping makes it much easier to find the associated tests and see examples of it in use. So, having separated our server from our tests, let's look at what our two files now look like; first, ./lib/index.js:

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection();

server.route({
    method: 'GET',
    path: '/',
    handler: function (request, reply) {

        return reply('Hello World\n');
    }
});

module.exports = server;

The main change here is that we export our server at the end for another file to acquire and start it if necessary. Our test file at ./test/index.js will now look like this:

const Code = require('code');
const Lab = require('lab');
const server = require('../lib/index.js');
const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

    server.inject('/', (res) => {

        Code.expect(res.statusCode).to.equal(200);
        Code.expect(res.result).to.equal('Hello World\n');
        done();
    });
});

Finally, for us to test our code coverage, we update our npm test script to include the coverage flag --coverage or -c. The final example of this is in the second example of the source code of Chapter 4, Adding Tests and the Importance of 100% Coverage, which is supplied with this book. If you run this, you'll find that we actually already have 100% of the code covered with this one test. An interesting exercise here would be to find out which versions of hapi this code functions correctly with. At the time of writing, this code was written for hapi version 11.x.x on node.js version 4.0.0.
Will it work if run with hapi version 9 or 10? You can test this now by installing an older version with the help of the following command:

$ npm install hapi@10

This will give you an idea of how easy it can be to check whether your codebase works with different versions of libraries. If you have some time, it would be interesting to see how this example runs on different versions of node (hint: it breaks on any version earlier than 4.0.0). In this example, we got 100% code coverage with one test. Unfortunately, we are rarely this fortunate when we increase the complexity of our codebase, and so will the complexity of our tests increase, which is where knowledge of writing testable code comes in. This is something that comes with practice, by writing tests while writing application or module code.

Linting

Also built into lab is linting support. Linting enforces a code style that is adhered to, which can be specified through an .eslintrc or .jshintrc file. By default, lab will enforce the hapi style guide rules. The idea of linting is that all code will have the same structure, making it much easier to spot bugs and keep code tidy. As JavaScript is a very flexible language, linters are used regularly to forbid bad practices such as global or unused variables. To enable the lab linter, simply add the linter flag to the test command, which is --lint or -L. I generally stick with the default hapi style guide rules, as they are chosen to promote easy-to-read code that is easily testable and forbids many bad practices. However, it's easy to customize the linting rules used; for this, I recommend referring to the lab documentation.

Summary

In this article, we covered testing in node and hapi and how testing and code coverage are paramount in the hapi ecosystem. We saw justification for their need in application development and where they can make us more productive developers. We also introduced the test runner and code assertion libraries lab and code in the ecosystem.
We saw how to use them to write simple tests, and how to use the tools provided in lab and hapi to test hapi applications. We also learned about some of the extra features baked into lab, such as code coverage and linting. We looked at how to test the code coverage of an application and get it to 100%, and how the hapi ecosystem applies the hapi style guide to all modules using lab's linting integration.
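For reference, a package.json scripts entry combining the coverage and linting flags discussed above might look like the following sketch (the exact flags depend on your lab version, so check the lab documentation for the release you are using):

```
{
  "scripts": {
    "test": "lab -c -L"
  }
}
```

With this in place, npm test runs the suite, reports coverage, and lints the codebase in one step.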
App Development Using React Native vs. Android/iOS
Manuel Nakamurakare
03 Mar 2016
6 min read
Until two years ago, I had exclusively done Android native development. I had never developed iOS apps, but that changed last year, when my company decided that I had to learn iOS development. I was super excited at first, but all that excitement started to fade away as I started developing our iOS app and I quickly saw how my productivity was declining. I realized I had to basically re-learn everything I learnt in Android: the framework, the tools, the IDE, etc. I am a person who likes going to meetups, so suddenly I started going to both Android and iOS meetups. I needed to keep up-to-date with the latest features in both platforms. It was very time-consuming and at the same time somewhat frustrating since I was feeling my learning pace was not fast enough. Then, React Native for iOS came out. We didn’t start using it until mid 2015. We started playing around with it and we really liked it. What is React Native? React Native is a technology created by Facebook. It allows developers to use JavaScript in order to create mobile apps in both Android and iOS that look, feel, and are native. A good way to explain how it works is to think of it as a wrapper of native code. There are many components that have been created that are basically wrapping the native iOS or Android functionality. React Native has been gaining a lot of traction since it was released because it has basically changed the game in many ways. Two Ecosystems One reason why mobile development is so difficult and time consuming is the fact that two entirely different ecosystems need to be learned. If you want to develop an iOS app, then you need to learn Swift or Objective-C and Cocoa Touch. If you want to develop Android apps, you need to learn Java and the Android SDK. I have written code in the three languages, Swift, Objective C, and Java. I don’t really want to get into the argument of comparing which of these is better. 
However, what I can say is that they are different and learning each of them takes a considerable amount of time. A similar thing happens with the frameworks: Cocoa Touch and the Android SDK. Of course, with each of these frameworks, there is also a big bag of other tools such as testing tools, libraries, packages, etc. And we are not even considering that developers need to stay up-to-date with the latest features each ecosystem offers. On the other hand, if you choose to develop on React Native, you will, most of the time, only need to learn one set of tools. It is true that there are many things that you will need to get familiar with: JavaScript, Node, React Native, etc. However, it is only one set of tools to learn. Reusability Reusability is a big thing in software development. Whenever you are able to reuse code that is a good thing. React Native is not meant to be a write once, run everywhere platform. Whenever you want to build an app for them, you have to build a UI that looks and feels native. For this reason, some of the UI code needs to be written according to the platform's best practices and standards. However, there will always be some common UI code that can be shared together with all the logic. Being able to share code has many advantages: better use of human resources, less code to maintain, less chance of bugs, features in both platforms are more likely to be on parity, etc. Learn Once, Write Everywhere As I mentioned before, React Native is not meant to be a write once, run everywhere platform. As the Facebook team that created React Native says, the goal is to be a learn once, write everywhere platform. And this totally makes sense. Since all of the code for Android and iOS is written using the same set of tools, it is very easy to imagine having a team of developers building the app for both platforms. 
This is not something that will usually happen when doing native Android and iOS development because there are very few developers that do both. I can even go farther and say that a team that is developing a web app using React.js will not have a very hard time learning React Native development and start developing mobile apps. Declarative API When you build applications using React Native, your UI is more predictable and easier to understand since it has a declarative API as opposed to an imperative one. The difference between these approaches is that when you have an application that has different states, you usually need to keep track of all the changes in the UI and modify them. This can become a complex and very unpredictable task as your application grows. This is called imperative programming. If you use React Native, which has declarative APIs, you just need to worry about what the current UI state looks like without having to keep track of the older ones. Hot Reloading The usual developer routine when coding is to test changes every time some code has been written. For this to happen, the application needs to be compiled and then installed in either a simulator or a real device. In case of React Native, you don’t, most of the time, need to recompile the app every time you make a change. You just need to refresh the app in the simulator, emulator, or device and that’s it. There is even a feature called Live Reload that will refresh the app automatically every time it detects a change in the code. Isn’t that cool? Open Source React Native is still a very new technology; it was made open source less than a year ago. It is not perfect yet. It still has some bugs, but, overall, I think it is ready to be used in production for most mobile apps. There are still some features that are available in the native frameworks that have not been exposed to React Native but that is not really a big deal. 
I can tell from experience that it is somewhat easy to do in case you are familiar with native development. Also, since React Native is open source, there is a big community of developers helping to implement more features, fix bugs, and help people. Most of the time, if you are trying to build something that is common in mobile apps, it is very likely that it has already been built. As you can see, I am really bullish on React Native. I miss native Android and iOS development, but I really feel excited to be using React Native these days. I really think React Native is a game-changer in mobile development and I cannot wait until it becomes the to-go platform for mobile development!
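To make the declarative versus imperative contrast described above concrete, here is a plain-JavaScript sketch. The function names are invented for illustration; this is not React Native's API, just the underlying idea of describing UI as a function of state.

```javascript
// Imperative: track what changed and update every dependent piece by hand.
function imperativeUpdate(ui, newCount) {
  if (newCount !== ui.count) {
    ui.count = newCount;
    // Easy to forget one of these bookkeeping steps as the UI grows.
    ui.label = 'Count: ' + newCount;
  }
  return ui;
}

// Declarative: describe what the UI looks like for a given state,
// without caring what the previous UI state was.
function declarativeRender(state) {
  return { count: state.count, label: 'Count: ' + state.count };
}

console.log(declarativeRender({ count: 3 }).label); // Count: 3
```

With the declarative version, there is no older state to keep track of: rendering the current state always produces a consistent UI description.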
React Dashboard and Visualizing Data
Xavier Bruhiere
26 Nov 2015
8 min read
I spent the last six months working on data analytics and machine learning to feed my curiosity and prepare for my new job. It is a challenging mission and I chose to give up for a while on my current web projects to stay focused. Back then, I was coding a dashboard for an automated trading system, powered by an exciting new framework from Facebook: React. In my opinion, Web Components were the way to go and React seemed gentler with my brain than, say, Polymer. One just needed to carefully design component boundaries, properties and states and bam, you got a reusable piece of web to plug anywhere. Beautiful. This is quite a naive way to put it of course but, for an MVP, it actually kind of worked. Fast forward to last week, I needed a new dashboard to monitor various metrics from my shiny new infrastructure. Specialized requirements kept me away from a full-fledged solution like the InfluxDB and Grafana combo, so I naturally stared at my old code. Well, it turned out I did not reuse a single line of code. Since the last time I spent in web development, new tools, frameworks and methodologies had taken over the world: es6 (and transpilers), isomorphic applications, one-way data flow, hot reloading, module bundlers, ... Even starter kits are remarkably complex (at least for me) and I got overwhelmed. But those new toys are also truly empowering and I persevered. In this post, we will learn to leverage them, build the simplest dashboard possible and pave the way toward modern, real-time metrics monitoring.

Tooling & Motivations

I think the point of so much tooling is productivity and complexity management. New single page applications usually involve a significant number of moving parts: front and backend development, data management, scaling, appealing UX, ... Isomorphic webapps with nodejs and es6 try to harmonize this workflow, sharing one readable language across the stack.
Node already sells the "javascript everywhere" argument but here, it goes even further, with code that can be executed both on the server and in the browser, indifferently. Team work and reusability are improved, as well as SEO (Search Engine Optimization) when rendering HTML on the server side. Yet, an application's codebase can turn into a massive mess and that's where Web Components come in handy. Providing clear contracts between modules, a developer is able to focus on a subpart of the UI with an explicit definition of its parameters and states. This level of abstraction makes the application much easier to navigate, maintain and reuse. Working with React gives a sense of clarity with components as Javascript objects. Lifecycle and behavior are explicitly detailed by pre-defined hooks, while properties and states are distinct attributes. We still need to glue all of those components and their dependencies together. That's where npm, Webpack and Gulp join the party. Npm is the de facto package manager for nodejs, and more and more for frontend development. What's more, it can run scripts for you and spare you from using a task runner like Gulp. Webpack, meanwhile, bundles pretty much anything thanks to its loaders. Feed it an entrypoint which requires your js, jsx, css, whatever ... and it will transform and package them for the browser. Given the steep learning curve of modern full-stack development, I hope you can see the value of these tools. The last pieces I would like to introduce for our little project are metrics-graphics and react-sparklines (which I won't actually describe, but which is worth noting for our purposes). Both are neat frameworks to visualize data and play nicely with React, as we are going to see now.

Graph Component

When building component-based interfaces, the first things to define are which subparts of the UI those components are. Since we are starting with a spartan implementation, we are only going to define a Graph.
// Graph.jsx

// new es6 import syntax
import React from 'react';
// graph renderer
import MG from 'metrics-graphics';

export default class Graph extends React.Component {

  // called after the `render` method below
  componentDidMount () {
    // use d3 to load data from metrics-graphics samples;
    // an arrow function keeps `this` bound to the component
    d3.json('node_modules/metrics-graphics/examples/data/confidence_band.json', (data) => {
      data = MG.convert.date(data, 'date');
      MG.data_graphic({
        title: this.props.title,
        data: data,
        format: 'percentage',
        width: 600,
        height: 200,
        right: 40,
        target: '#confidence',
        show_secondary_x_label: false,
        show_confidence_band: ['l', 'u'],
        x_extended_ticks: true
      });
    });
  }

  render () {
    // render the element targeted by the graph
    return <div id="confidence"></div>;
  }
}

This code, a trendy combination of es6 and jsx, defines in the DOM a standalone graph from the json data in confidence_band.json, which I stole from the Mozilla official examples. Now let's actually mount and render the DOM in the main entrypoint of the application (mentioned above with Webpack).

// main.jsx

// tell webpack to bundle style along with the javascript
import 'metrics-graphics/dist/metricsgraphics.css';
import 'metrics-graphics/examples/css/metricsgraphics-demo.css';
import 'metrics-graphics/examples/css/highlightjs-default.css';

import React from 'react';
import Graph from './components/Graph';

function main() {
  // it is recommended to not directly render on body
  var app = document.createElement('div');
  document.body.appendChild(app);
  // key/value pairs are available under `this.props` hash within the component
  React.render(<Graph title="Keep calm and build a dashboard" />, app);
}

main();

Now that we have defined the web page in plain javascript, it's time for our tools to take over and actually build it.

Build workflow

This is mostly a matter of configuration. First, create the following structure.

$ tree .
.
├── app
│   ├── components
│   │   ├── Graph.jsx
│   ├── main.jsx
├── build
└── package.json

Where package.json is defined like below.

{
  "name": "react-dashboard",
  "scripts": {
    "build": "TARGET=build webpack",
    "dev": "TARGET=dev webpack-dev-server --host 0.0.0.0 --devtool eval-source --progress --colors --hot --inline --history-api-fallback"
  },
  "devDependencies": {
    "babel-core": "^5.6.18",
    "babel-loader": "^5.3.2",
    "css-loader": "^0.15.1",
    "html-webpack-plugin": "^1.5.2",
    "node-libs-browser": "^0.5.2",
    "react-hot-loader": "^1.2.7",
    "style-loader": "^0.12.3",
    "webpack": "^1.10.1",
    "webpack-dev-server": "^1.10.1",
    "webpack-merge": "^0.1.2"
  },
  "dependencies": {
    "metrics-graphics": "^2.6.0",
    "react": "^0.13.3"
  }
}

A quick npm install will download every package we need for development and production. Two scripts are even defined: one builds a static version of the site, the other serves a dynamic one that is updated when file changes are detected. This formidable feature becomes essential once tasted. But we have yet to configure Webpack to enjoy it.
var path = require('path');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var webpack = require('webpack');
var merge = require('webpack-merge');

// discern development server from static build
var TARGET = process.env.TARGET;
// webpack prefers absolute paths
var ROOT_PATH = path.resolve(__dirname);

// common environments configuration
var common = {
  // input the main.jsx we wrote earlier
  entry: [path.resolve(ROOT_PATH, 'app/main')],
  // import requirements with the following extensions
  resolve: {
    extensions: ['', '.js', '.jsx']
  },
  // define the single bundle file output by the build
  output: {
    path: path.resolve(ROOT_PATH, 'build'),
    filename: 'bundle.js'
  },
  module: {
    // also support css loading from main.jsx
    loaders: [
      { test: /\.css$/, loaders: ['style', 'css'] }
    ]
  },
  plugins: [
    // automatically generate a standard index.html to attach the React app to
    new HtmlWebpackPlugin({
      title: 'React Dashboard'
    })
  ]
};

// production specific configuration
if (TARGET === 'build') {
  module.exports = merge(common, {
    module: {
      // compile es6 jsx to standard es5
      loaders: [
        {
          test: /\.jsx?$/,
          loader: 'babel?stage=1',
          include: path.resolve(ROOT_PATH, 'app')
        }
      ]
    },
    // optimize output size
    plugins: [
      new webpack.DefinePlugin({
        'process.env': {
          // This has effect on the react lib size
          'NODE_ENV': JSON.stringify('production')
        }
      }),
      new webpack.optimize.UglifyJsPlugin({
        compress: {
          warnings: false
        }
      })
    ]
  });
}

// development specific configuration
if (TARGET === 'dev') {
  module.exports = merge(common, {
    module: {
      // also transpile javascript, but use react-hot-loader too,
      // to automagically update the web page on changes
      loaders: [
        {
          test: /\.jsx?$/,
          loaders: ['react-hot', 'babel?stage=1'],
          include: path.resolve(ROOT_PATH, 'app')
        }
      ]
    }
  });
}

Webpack configuration can be hard to swallow at first but, given the huge amount of transformations to operate, this style scales very well. Plus, once set up, the development environment becomes remarkably productive.
To convince yourself, run webpack-dev-server and reach localhost:8080/assets/bundle.js in your browser. Tweak the title argument in main.jsx, save the file and watch the browser update itself. We are ready to build new components and extend our modular dashboard.

Conclusion

We condensed in a few paragraphs a lot of what makes the current web ecosystem effervescent. I strongly encourage the reader to deepen their knowledge on those matters and consider this post for what it is: an introduction. Web components, like micro-services, are fun, powerful and bleeding edge. But they are also complex, fast-moving and unstable. The tooling, especially, is impressive. Take the time to master it and craft something cool!

About the Author

Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.
Using React.js without JSX
Richard Feldman
30 Jun 2014
6 min read
React.js was clearly designed with JSX in mind; however, there are plenty of good reasons to use React without it. Using React as a standalone library lets you evaluate the technology without having to spend time learning a new syntax. Some teams—including my own—prefer to have their entire frontend code base in one compile-to-JavaScript language, such as CoffeeScript or TypeScript. Others might find that adding another JavaScript library to their dependencies is no big deal, but adding a compilation step to the build chain is a deal-breaker. There are two primary drawbacks to eschewing JSX. One is that it makes using React significantly more verbose. The other is that the React docs use JSX everywhere; examples demonstrating vanilla JavaScript are few and far between. Fortunately, both drawbacks are easy to work around.

Translating documentation

The first code sample you see in the React documentation includes this JSX snippet:

/** @jsx React.DOM */
React.renderComponent(
  <h1>Hello, world!</h1>,
  document.getElementById('example')
);

Suppose we want to see the vanilla JS equivalent. Although the code samples on the React homepage include a helpful Compiled JS tab, the samples in the docs—not to mention React examples you find elsewhere on the Web—will not. Fortunately, React's Live JSX Compiler can help. To translate the above JSX into vanilla JS, simply copy and paste it into the left side of the Live JSX Compiler. The output on the right should look like this:

/** @jsx React.DOM */
React.renderComponent(
  React.DOM.h1(null, "Hello, world!"),
  document.getElementById('example')
);

Pretty similar, right? We can discard the comment, as it only represents a necessary directive in JSX. When writing React in vanilla JS, it's just another comment that will be disregarded as usual. Take a look at the call to React.renderComponent.
Here we have a plain old two-argument function, which takes a React DOM element (in this case, the one returned by React.DOM.h1) as its first argument, and a regular DOM element (in this case, the one returned by document.getElementById('example')) as its second. jQuery users should note that the second argument will not accept jQuery objects, so you will have to extract the underlying DOM element with $("#example")[0] or something similar. The React.DOM object has a method for every supported tag. In this case we're using h1, but we could just as easily have used h2, div, span, input, a, p, or any other supported tag. The first argument to these methods is optional; it can either be null (as in this case), or an object specifying the element's attributes. This argument is how you specify things like class, ID, and so on. The second argument is either a string, in which case it specifies the object's text content, or a list of child React DOM elements. Let's put this together with a more advanced example, starting with the vanilla JS:

React.DOM.form({className: "commentForm"},
  React.DOM.input({type: "text", placeholder: "Your name"}),
  React.DOM.input({type: "text", placeholder: "Say something..."}),
  React.DOM.input({type: "submit", value: "Post"})
)

For the most part, the attributes translate as you would expect: type, value, and placeholder do exactly what they would do if used in HTML. The one exception is className, which you use in place of the usual class. The above is equivalent to the following JSX:

/** @jsx React.DOM */
<form className="commentForm">
  <input type="text" placeholder="Your name" />
  <input type="text" placeholder="Say something..." />
  <input type="submit" value="Post" />
</form>

This JSX is a snippet found elsewhere in the React docs, and again you can view its vanilla JS equivalent by pasting it into the Live JSX Compiler.
Note that you can include pure JSX here without any surrounding JavaScript code (unlike the JSX playground), but you do need the /** @jsx React.DOM */ comment at the top of the JSX side. Without the comment, the compiler will simply output the JSX you put in.

Simple DSLs to make things concise

Although these two implementations are functionally identical, clearly the JSX version is more concise. How can we make the vanilla JS version less verbose? A very quick improvement is to alias the React.DOM object:

var R = React.DOM;

R.form({className: "commentForm"},
  R.input({type: "text", placeholder: "Your name"}),
  R.input({type: "text", placeholder: "Say something..."}),
  R.input({type: "submit", value: "Post"}))

You can take it even further with a tiny bit of DSL:

var R = React.DOM;
var form = R.form;
var input = R.input;

form({className: "commentForm"},
  input({type: "text", placeholder: "Your name"}),
  input({type: "text", placeholder: "Say something..."}),
  input({type: "submit", value: "Post"})
)

This is more verbose in terms of lines of code, but if you have a large DOM to set up, the extra up-front declarations can make the rest of the file much nicer to read. In CoffeeScript, a DSL like this can tidy things up even further:

{form, input} = React.DOM

form {className: "commentForm"}, [
  input type: "text", placeholder: "Your name"
  input type: "text", placeholder: "Say something..."
  input type: "submit", value: "Post"
]

Note that in this example, the form's children are passed as an array rather than as a list of extra arguments (which, in CoffeeScript, allows you to omit commas after each line). React DOM element constructors support either approach. (Also note that CoffeeScript coders who don't mind mixing languages can use the coffee-react compiler or set up a custom build chain that allows for inline JSX in CoffeeScript sources instead.)

Takeaways

No matter your particular use case, there are plenty of ways to effectively use React without JSX.
Thanks to the Live JSX Compiler's ability to quickly translate documentation code samples, and the ease with which you can set up a simple DSL to reduce verbosity, there really is very little overhead to using React as a JavaScript library like any other.

About the author

Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He's built a framework that performantly renders hundreds of thousands of shapes in the HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between.
What is Flux?
Packt
27 Apr 2016
27 min read
In this article by Adam Boduch, author of Flux Architecture, we cover the basic idea of Flux. Flux is supposed to be this great new way of building complex user interfaces that scale well. At least that's the general messaging around Flux, if you're only skimming the Internet literature. But how do we define this great new way of building user interfaces? What makes it superior to other, more established frontend architectures? The aim of this article is to cut through the sales bullet points and explicitly spell out what Flux is, and what it isn't, by looking at the patterns that Flux provides. And since Flux isn't a software package in the traditional sense, we'll go over the conceptual problems that we're trying to solve with Flux. Finally, we'll close the article by walking through the core components found in any Flux architecture, and we'll install the Flux npm package and write a hello world Flux application right away. Let's get started.

Flux is a set of patterns

We should probably get the harsh reality out of the way first—Flux is not a software package. It's a set of architectural patterns for us to follow. While this might sound disappointing to some, don't despair—there's good reason for not implementing yet another framework. Throughout the course of this book, we'll see the value of Flux existing as a set of patterns instead of a de facto implementation. For now, we'll go over some of the high-level architectural patterns put in place by Flux.

Data entry points

With traditional approaches to building frontend architectures, we don't put much thought into how data enters the system. We might entertain the idea of data entry points, but not in any detail. For example, with MVC (Model View Controller) architectures, the controller is supposed to control the flow of data. And for the most part, it does exactly that.
On the other hand, the controller is really just about controlling what happens after it already has the data. How does the controller get data in the first place? Consider the following illustration:

At first glance, there's nothing wrong with this picture. The data flow, represented by the arrows, is easy to follow. But where does the data originate? For example, the view can create new data and pass it to the controller in response to a user event. A controller can create new data and pass it to another controller, depending on the composition of our controller hierarchy. What about the controller in question—can it create data itself and then use it? In a diagram such as this one, these questions don't have much virtue. But if we're trying to scale an architecture to have hundreds of these components, the points at which data enters the system become very important. Since Flux is used to build architectures that scale, it considers data entry points an important architectural pattern.

Managing state

State is one of those realities we need to cope with in frontend development. Unfortunately, we can't compose our entire application of pure functions with no side effects, for two reasons. First, our code needs to interact with the DOM interface, in one way or another. This is how the user sees changes in the UI. Second, we don't store all our application data in the DOM (at least we shouldn't do this). As time passes and the user interacts with the application, this data will change. There's no cut-and-dried approach to managing state in a web application, but there are several ways to limit the amount of state changes that can happen, and enforce how they happen. For example, pure functions don't change the state of anything; they can only create new data. Pure functions have no side effects, because no data changes state as a result of calling them.
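As a concrete illustration of the pure-function point above, here is a minimal sketch (the book's original figure is not reproduced here; this example is my own):

```javascript
// A pure function: it never mutates its arguments, it only returns new data.
function addItem(list, item) {
  return list.concat(item); // concat creates a new array
}

const before = ['a'];
const after = addItem(before, 'b');

console.log(before); // [ 'a' ]      -- the input is unchanged: no side effects
console.log(after);  // [ 'a', 'b' ] -- new data was created instead
```

Because addItem cannot change existing state, any state change in the system has to happen somewhere else, which is exactly the kind of constraint Flux exploits.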
So why is this a desirable trait, if state changes are inevitable? The idea is to enforce where state changes happen. For example, perhaps we only allow certain types of components to change the state of our application data. This way, we can rule out several sources as the cause of a state change. Flux is big on controlling where state changes happen. Later on in the article, we'll see how Flux stores manage state changes. What's important about how Flux manages state is that it's handled at an architectural layer. Contrast this with an approach that lays out a set of rules that say which component types are allowed to mutate application data—things get confusing. With Flux, there's less room for guessing where state changes take place.

Keeping updates synchronous

Complementary to data entry points is the notion of update synchronicity. That is, in addition to managing where the state changes originate from, we have to manage the ordering of these changes relative to other things. If the data entry points are the what of our data, then synchronously applying state changes across all the data in our system is the when. Let's think about why this matters for a moment. In a system where data is updated asynchronously, we have to account for race conditions. Race conditions can be problematic because one piece of data can depend on another, and if they're updated in the wrong order, we see cascading problems from one component to another. Take a look at this diagram, which illustrates this problem:

When something is asynchronous, we have no control over when that something changes state. So, all we can do is wait for the asynchronous updates to happen, and then go through our data and make sure all of our data dependencies are satisfied. Without tools that automatically handle these dependencies for us, we end up writing a lot of state-checking code. Flux addresses this problem by ensuring that the updates that take place across our data stores are synchronous.
This means that the scenario illustrated in the preceding diagram isn't possible. Here's a better visualization of how Flux handles the data synchronization issues that are typical of JavaScript applications today:

Information architecture

It's easy to forget that we work in information technology and that we should be building technology around information. In recent times, however, we seem to have moved in the other direction, where we're forced to think about implementation before we think about information. More often than not, the data exposed by the sources used by our application doesn't have what the user needs. It's up to our JavaScript to turn this raw data into something consumable by the user. This is our information architecture. Does this mean that Flux is used to design information architectures as opposed to a software architecture? This isn't the case at all. In fact, Flux components are realized as true software components that perform actual computations. The trick is that Flux patterns enable us to think about information architecture as a first-class design consideration. Rather than having to sift through all sorts of components and their implementation concerns, we can make sure that we're getting the right information to the user. Once our information architecture takes shape, the larger architecture of our application follows, as a natural extension to the information we're trying to communicate to our users. Producing information from data is the difficult part. We have to distill many sources of data into not only information, but information that's also of value to the user. Getting this wrong is a huge risk for any project. When we get it right, we can then move on to the specific application components, like the state of a button widget, and so on. Flux architectures keep data transformations confined to their stores. A store is an information factory—raw data goes in and new information comes out.
Stores control how data enters the system, the synchronicity of state changes, and they define how the state changes. When we go into more depth on stores as we progress through the book, we'll see how they're the pillars of our information architecture.

Flux isn't another framework

Now that we've explored some of the high-level patterns of Flux, it's time to revisit the question: what is Flux again? Well, it is just a set of architectural patterns we can apply to our frontend JavaScript applications. Flux scales well because it puts information first. Information is the most difficult aspect of software to scale; Flux tackles information architecture head on. So, why aren't the Flux patterns implemented as a framework? That way, Flux would have a canonical implementation for everyone to use; and like any other large-scale open source project, the code would improve over time as the project matures. The main problem is that Flux operates at an architectural level. It's used to address information problems that prevent a given application from scaling to meet user demand. If Facebook decided to release Flux as yet another JavaScript framework, it would likely have the same types of implementation issues that plague other frameworks out there. For example, if some component in a framework isn't implemented in a way that best suits the project we're working on, then it's not so easy to implement a better alternative, without hacking the framework to bits. What's nice about Flux is that Facebook decided to leave the implementation options on the table. They do provide a few Flux component implementations, but these are reference implementations. They're functional, but the idea is that they're a starting point for us to understand the mechanics of how things such as dispatchers are expected to work. We're free to implement the same Flux architectural patterns as we see fit. Flux isn't a framework. Does this mean we have to implement everything ourselves? No, we do not.
In fact, developers are implementing Flux frameworks and releasing them as open source projects. Some Flux libraries stick more closely to the Flux patterns than others. These implementations are opinionated, and there's nothing wrong with using them if they're a good fit for what we're building. The Flux patterns aim to solve generic conceptual problems with JavaScript development, so you'll learn what they are before diving into Flux implementation discussions. Flux solves conceptual problems If Flux is simply a collection of architectural patterns instead of a software framework, what sort of problems does it solve? In this section, we'll look at some of the conceptual problems that Flux addresses from an architectural perspective. These include unidirectional data flow, traceability, consistency, component layering, and loosely coupled components. Each of these conceptual problems pose a degree of risk to our software, in particular, the ability to scale it. Flux helps us get out in front of these issues as we're building the software. Data flow direction We're creating an information architecture to support the feature-rich application that will ultimately sit on top of this architecture. Data flows into the system, and will eventually reach an endpoint, terminating the flow. It's what happens in between the entry point and the termination point that determines the data flow within a Flux architecture. This is illustrated here: Data flow is a useful abstraction, because it's easy to visualize data as it enters the system and moves from one point to another. Eventually, the flow stops. But before it does, several side effects happen along the way. It's that middle block in the preceding diagram that's concerning, because we don't know exactly how the data-flow reached the end. Let's say that our architecture doesn't pose any restrictions on data flow. Any component is allowed to pass data to any other component, regardless of where that component lives. 
Let's try to visualize this setup: As you can see, our system has clearly defined entry and exit points for our data. This is good because it means that we can confidently say that the data flows through our system. The problem with this picture is with how the data flows between the components of the system. There's no direction, or rather, it's multidirectional. This isn't a good thing. Flux is a unidirectional data flow architecture. This means that the preceding component layout isn't possible. The question is—why does this matter? At times, it might seem convenient to be able to pass data around in any direction, that is, from any component to any other component. This in and of itself isn't the issue—passing data alone doesn't break our architecture. However, when data moves around our system in more than one direction, there's more opportunity for components to fall out of sync with one another. This simply means that if data doesn't always move in the same direction, there's always the possibility of ordering bugs. Flux enforces the direction of data flows, and thus eliminates the possibility of components updating themselves in an order that breaks the system. No matter what data has just entered the system, it'll always flow through the system in the same order as any other data, as illustrated here: Predictable root cause With data entering our system and flowing through our components in one direction, we can more easily trace any effect to its cause. In contrast, when a component sends data to any other component residing in any architectural layer, it's a lot more difficult to figure out how the data reached its destination. Why does this matter? Debuggers are sophisticated enough that we can easily traverse any level of complexity during runtime. The problem with this notion is that it presumes we only need to trace what's happening in our code for the purposes of debugging. Flux architectures have inherently predictable data flows. 
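To make this predictability concrete, here's a minimal sketch: because every piece of data enters through the same dispatch entry point, a single registered callback can observe the entire flow. The dispatcher here is a hand-rolled stand-in with illustrative names, not the flux package itself.

```javascript
// A tiny stand-in dispatcher: one entry point for all data.
const dispatcher = {
  callbacks: [],
  register(callback) { this.callbacks.push(callback); },
  dispatch(action) { this.callbacks.forEach((cb) => cb(action)); }
};

// One tracing callback sees every action, in the order it happened,
// which is what makes root causes easy to find.
const trace = [];
dispatcher.register((action) => trace.push(action.type));

dispatcher.dispatch({ type: 'FETCH_SESSION' });
dispatcher.dispatch({ type: 'FILTER_USERS' });
// trace is now ['FETCH_SESSION', 'FILTER_USERS']
```

There's nowhere else for data to come from, so this one callback is a complete audit trail of the system.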
This is important for a number of design activities and not just debugging. Programmers working on Flux applications will begin to intuitively sense what's going to happen. Anticipation is key, because it lets us avoid design dead-ends before we hit them. When the cause and effect are easy to tease out, we can spend more time focusing on building application features—the things the customers care about. Consistent notifications The direction in which we pass data from component to component in Flux architectures should be consistent. In terms of consistency, we also need to think about the mechanism used to move data around our system. For example, publish/subscribe (pub/sub) is a popular mechanism used for inter-component communication. What's neat about this approach is that our components can communicate with one another, and yet, we're able to maintain a level of decoupling. In fact, this is fairly common in frontend development because component communication is largely driven by user events. These events can be thought of as fire-and-forget. Any other components that want to respond to these events in some way need to take it upon themselves to subscribe to the particular event. While pub/sub does have some nice properties, it also poses architectural challenges, in particular, scaling complexities. For example, let's say that we've just added several new components for a new feature. Well, in which order do these components receive update messages relative to pre-existing components? Do they get notified after all the pre-existing components? Should they come first? This presents a data dependency scaling issue. The other challenge with pub/sub is that the events that get published are often fine-grained to the point where we'll want to subscribe and later unsubscribe from the notifications. 
This leads to consistency challenges because trying to code lifecycle changes when there's a large number of components in the system is difficult and presents opportunities for missed events. The idea with Flux is to sidestep the issue by maintaining a static inter-component messaging infrastructure that issues notifications to every component. In other words, programmers don't get to pick and choose the events their components will subscribe to. Instead, they have to figure out which of the events that are dispatched to them are relevant, ignoring the rest. Here's a visualization of how Flux dispatches events to components: The Flux dispatcher sends the event to every component; there's no getting around this. Instead of trying to fiddle with the messaging infrastructure, which is difficult to scale, we implement logic within the component to determine whether or not the message is of interest. It's also within the component that we can declare dependencies on other components, which helps influence the ordering of messages. Simple architectural layers Layers can be a great way to organize an architecture of components. For one thing, it's an obvious way to categorize the various components that make up our application. For another thing, layers serve as a means to put constraints around communication paths. This latter point is especially relevant to Flux architectures since it's imperative that data flow in one direction. It's much easier to apply constraints to layers than it is to individual components. Here is an illustration of Flux layers: This diagram isn't intended to capture the entire data flow of a Flux architecture, just how data flows between the main three layers. It also doesn't give any detail about what's in the layers. Don't worry, the next section gives introductory explanations of the types of Flux components and the communication that happens between the layers is the focus of this entire book. 
As you can see, the data flows from one layer to the next, in one direction. Flux only has a few layers, and as our applications scale in terms of component counts, the layer count remains fixed. This puts a cap on the complexity involved with adding new features to an already large application. In addition to constraining the layer count and the data flow direction, Flux architectures are strict about which layers are actually allowed to communicate with one another. For example, the action layer could communicate with the view layer, and we would still be moving in one direction. We would still have the layers that Flux expects. However, skipping a layer like this is prohibited. By ensuring that each layer only communicates with the layer directly beneath it, we can rule out bugs introduced by doing something out-of-order. Loosely coupled rendering One decision made by the Flux designers that stands out is that Flux architectures don't care how UI elements are rendered. That is to say, the view layer is loosely coupled to the rest of the architecture. There are good reasons for this. Flux is an information architecture first, and a software architecture second. We start with the former and graduate toward the latter. The challenge with view technology is that it can exert a negative influence on the rest of the architecture. For example, one view technology has a particular way of interacting with the DOM. Then, if we've already decided on this technology, we'll end up letting it influence the way our information architecture is structured. This isn't necessarily a bad thing, but it can lead to us making concessions about the information we ultimately display to our users. What we should really be thinking about is the information itself and how this information changes over time. What actions are involved that bring about these changes? How is one piece of data dependent on another piece of data? 
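To make the view-agnostic point concrete, here's a hedged sketch in which the same store state can be consumed by any rendering technology. The store and renderer names are illustrative, not from the book's code.

```javascript
// A store exposing state and change notifications, with no opinion
// about how that state gets rendered.
function createCounterStore() {
  let state = { count: 0 };
  const listeners = [];
  return {
    increment() {
      state = { count: state.count + 1 };
      listeners.forEach((listener) => listener(state));
    },
    getState: () => state,
    subscribe(listener) { listeners.push(listener); }
  };
}

// Two unrelated "view technologies" plug into the same store.
const store = createCounterStore();
const rendered = [];
store.subscribe((state) => rendered.push(`text: ${state.count}`));   // a plain-text view
store.subscribe((state) => rendered.push(`<b>${state.count}</b>`));  // an HTML view

store.increment();
// rendered is now ['text: 1', '<b>1</b>']
```

The information architecture (the store) stays the same whichever renderer we plug in; that's the loose coupling being described.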
Flux naturally removes itself from the browser technology constraints of the day so that we can focus on the information first. It's easy to plug views into our information architecture as it evolves into a software product. Flux components In this section, we'll begin our journey into the concepts of Flux. These concepts are the essential ingredients used in formulating a Flux architecture. While there are no detailed specifications for how these components should be implemented, they nevertheless lay the foundation of our implementation. This is a high-level introduction to the components we'll be implementing throughout this book. Action Actions are the verbs of the system. In fact, it's helpful if we derive the name of an action directly from a sentence. These sentences are typically statements of functionality; something we want the application to do. Here are some examples:

- Fetch the session
- Navigate to the settings page
- Filter the user list
- Toggle the visibility of the details section

These are simple capabilities of the application, and when we implement them as part of a Flux architecture, actions are the starting point. These human-readable action statements often require other new components elsewhere in the system, but the first step is always an action. So, what exactly is a Flux action? At its simplest, an action is nothing more than a string—a name that helps identify the purpose of the action. More typically, actions consist of a name and a payload. Don't worry about the payload specifics just yet—as far as actions are concerned, they're just opaque pieces of data being delivered into the system. Put differently, actions are like mail parcels. The entry point into our Flux system doesn't care about the internals of the parcel, only that they get to where they need to go. 
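As a quick sketch of the two forms just described (the property names here are illustrative; Flux doesn't mandate a particular shape for actions):

```javascript
// Simplest form: just a name identifying the action's purpose.
const toggleDetails = { type: 'TOGGLE_DETAILS' };

// More typical: a name plus an opaque payload -- the "mail parcel".
// The dispatcher never inspects the payload; only stores do.
const filterUsers = {
  type: 'FILTER_USERS',
  payload: { query: 'al' }
};
```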
Here's an illustration of actions entering a Flux system: This diagram might give the impression that actions are external to Flux when in fact, they're an integral part of the system. The reason this perspective is valuable is because it forces us to think about actions as the only means to deliver new data into the system. Golden Flux Rule: If it's not an action, it can't happen. Dispatcher The dispatcher in a Flux architecture is responsible for distributing actions to the store components (we'll talk about stores next). A dispatcher is actually kind of like a broker—if actions want to deliver new data to a store, they have to talk to the broker, so it can figure out the best way to deliver them. Think about a message broker in a system like RabbitMQ. It's the central hub where everything is sent before it's actually delivered. Here is a diagram depicting a Flux dispatcher receiving actions and dispatching them to stores: In a Flux application, there's only one dispatcher. It can be thought of more as a pseudo layer than an explicit layer. We know the dispatcher is there, but it's not essential to this level of abstraction. What we're concerned about at an architectural level is making sure that when a given action is dispatched, we know that it's going to make its way to every store in the system. Having said that, the dispatcher's role is critical to how Flux works. It's the place where store callback functions are registered. And it's how data dependencies are handled. Stores tell the dispatcher about other stores that they depend on, and it's up to the dispatcher to make sure these dependencies are properly handled. Golden Flux Rule: The dispatcher is the ultimate arbiter of data dependencies. Store Stores are where state is kept in a Flux application. Typically, this means the application data that's sent to the frontend from the API. However, Flux stores take this a step further and explicitly model the state of the entire application. 
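Jumping ahead slightly, here's a hedged sketch of that encapsulation: the store's registered handler is the only code that can transform its state. The dispatcher is a minimal stand-in and the names are illustrative, not the flux package's own API.

```javascript
// A minimal stand-in dispatcher.
const dispatcher = {
  callbacks: [],
  register(callback) { this.callbacks.push(callback); },
  dispatch(action) { this.callbacks.forEach((cb) => cb(action)); }
};

function createUserStore() {
  // Private state -- no external code can reach in and mutate it.
  let state = { users: [] };

  dispatcher.register((action) => {
    // The store alone decides whether and how an action changes state.
    if (action.type === 'ADD_USER') {
      state = { users: state.users.concat(action.payload) };
    }
  });

  // Read-only access for the outside world.
  return { getState: () => state };
}

const userStore = createUserStore();
dispatcher.dispatch({ type: 'ADD_USER', payload: 'Ada' });
// userStore.getState().users is now ['Ada']
```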
For now, just know that stores are where state that matters can be found. Other Flux components don't have state—they have implicit state at the code level, but we're not interested in this from an architectural point of view. Actions are the delivery mechanism for new data entering the system. The term new data doesn't imply that we're simply appending it to some collection in a store. All data entering the system is new in the sense that it hasn't been dispatched as an action yet—it could in fact result in a store changing state. Let's look at a visualization of an action that results in a store changing state: The key aspect of how stores change state is that there's no external logic that determines a state change should happen. It's the store, and only the store, that makes this decision and then carries out the state transformation. This is all tightly encapsulated within the store. This means that when we need to reason about a particular piece of information, we need not look any further than the stores. They're their own boss—they're self-employed. Golden Flux Rule: Stores are where state lives, and only stores themselves can change this state. View The last Flux component we're going to look at in this section is the view, and it technically isn't even a part of Flux. At the same time, views are obviously a critical part of our application. Views are almost universally understood as the part of our architecture that's responsible for displaying data to the user—it's the last stop as data flows through our information architecture. For example, in MVC architectures, views take model data and display it. In this sense, views in a Flux-based application aren't all that different from MVC views. Where they differ markedly is with regard to handling events. Let's take a look at the following diagram: Here we can see the contrasting responsibilities of a Flux view, compared with a view component found in your typical MVC architecture. 
The two view types have similar types of data flowing into them—application data used to render the component and events (often user input). What's different between the two types of views is what flows out of them. The typical view doesn't really have any constraints on how its event handler functions communicate with other components. For example, in response to a user clicking a button, the view could directly invoke behavior on a controller, change the state of a model, or it might query the state of another view. On the other hand, the Flux view can only dispatch new actions. This keeps our single entry point into the system intact and consistent with other mechanisms that want to change the state of our store data. In other words, an API response updates state in the exact same way as a user clicking a button does. Given that views should be restricted in terms of how data flows out of them (besides DOM updates) in a Flux architecture, you would think that views should be an actual Flux component. This would make sense insofar as it would make actions the only possible option for views. However, there's also no reason we can't enforce this constraint without formalizing views as Flux components, with the benefit being that Flux remains entirely focused on creating information architectures. Keep in mind, however, that Flux is still in its infancy. There's no doubt going to be external influences as more people start adopting Flux. Maybe Flux will have something to say about views in the future. Until then, views exist outside of Flux but are constrained by the unidirectional nature of Flux. Golden Flux Rule: The only way data flows out of a view is by dispatching an action. Installing the Flux package We'll get some of our boilerplate code setup tasks out of the way too, since we'll be using a similar setup throughout the book. We'll skip going over Node + NPM installation since it's sufficiently covered in great detail all over the Internet. We'll assume Node is installed and ready to go from this point forward. 
The first NPM package we'll need installed is Webpack. This is an advanced module bundler that's well suited for modern JavaScript applications, including Flux-based applications. We'll want to install this package globally so that the webpack command gets installed on our system:

npm install webpack -g

With Webpack in place, we can build each of the code examples that ship with this book. However, our project does require a couple of local NPM packages, and these can be installed as follows:

npm install flux babel-core babel-loader babel-preset-es2015 --save-dev

The --save-dev option adds these development dependencies to our package.json file, if one exists. This is just to get started—it isn't necessary to manually install these packages to run the code examples in this book. The examples you've downloaded already come with a package.json, so to install the local dependencies, simply run the following from within the same directory as the package.json file:

npm install

Now the webpack command can be used to build the example. Alternatively, if you plan on playing with the code, which is obviously encouraged, try running webpack --watch. This latter form of the command will monitor for changes to the files used in the build, and run the build whenever they change. This is indeed a simple hello world to get us off to a running start, in preparation for the remainder of the book. We've taken care of all the boilerplate setup tasks by installing Webpack and its supporting modules. Let's take a look at the code now. We'll start by looking at the markup that's used.

<!doctype html>
<html>
  <head>
    <title>Hello Flux</title>
    <script src="main-bundle.js" defer></script>
  </head>
  <body></body>
</html>

Not a lot to it, is there? There isn't even content within the body tag. The important part is the main-bundle.js script—this is the code that's built for us by Webpack. Let's take a look at this code now:

// Imports the "flux" module. 
import * as flux from 'flux';

// Creates a new dispatcher instance. "Dispatcher" is
// the only useful construct found in the "flux" module.
const dispatcher = new flux.Dispatcher();

// Registers a callback function, invoked every time
// an action is dispatched.
dispatcher.register((e) => {
  var p;

  // Determines how to respond to the action. In this case,
  // we're simply creating new content using the "payload"
  // property. The "type" property determines how we create
  // the content.
  switch (e.type) {
    case 'hello':
      p = document.createElement('p');
      p.textContent = e.payload;
      document.body.appendChild(p);
      break;
    case 'world':
      p = document.createElement('p');
      p.textContent = `${e.payload}!`;
      p.style.fontWeight = 'bold';
      document.body.appendChild(p);
      break;
    default:
      break;
  }
});

// Dispatches a "hello" action.
dispatcher.dispatch({
  type: 'hello',
  payload: 'Hello'
});

// Dispatches a "world" action.
dispatcher.dispatch({
  type: 'world',
  payload: 'World'
});

As you can see, there's not much to this hello world Flux application. In fact, the only Flux-specific component this code creates is a dispatcher. It then dispatches a couple of actions, and the handler function that's registered with the dispatcher processes them. Don't worry that there are no stores or views in this example. The idea is that we've got the basic Flux NPM package installed and ready to go. 
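One dispatcher feature the hello world doesn't exercise is dependency ordering. Below is a hand-rolled sketch of how a waitFor-style mechanism can force one registered callback to run before another; it mimics the idea behind the flux package's Dispatcher.waitFor, but the implementation and the store names are illustrative, not the package's actual code.

```javascript
// A miniature dispatcher with waitFor-style dependency handling.
class MiniDispatcher {
  constructor() {
    this.callbacks = new Map();
    this.nextId = 0;
    this.handled = new Set();
    this.pendingAction = null;
  }
  register(callback) {
    const id = `cb_${this.nextId++}`;
    this.callbacks.set(id, callback);
    return id;
  }
  waitFor(ids) {
    // Run the listed callbacks first, so a dependent store sees their
    // updated state before computing its own.
    for (const id of ids) {
      if (!this.handled.has(id)) {
        this.handled.add(id);
        this.callbacks.get(id)(this.pendingAction);
      }
    }
  }
  dispatch(action) {
    this.pendingAction = action;
    this.handled = new Set();
    for (const [id, callback] of this.callbacks) {
      if (!this.handled.has(id)) {
        this.handled.add(id);
        callback(action);
      }
    }
    this.pendingAction = null;
  }
}

const dispatcher = new MiniDispatcher();
const order = [];
let userStoreId; // forward reference, assigned below

// This callback registers first, but declares a dependency.
dispatcher.register(() => {
  dispatcher.waitFor([userStoreId]); // run the user store's handler first
  order.push('sessionStore');
});
userStoreId = dispatcher.register(() => order.push('userStore'));

dispatcher.dispatch({ type: 'LOAD' });
// order is ['userStore', 'sessionStore'] despite registration order
```

This is exactly the kind of ordering problem that ad hoc pub/sub struggles with, and that a single dispatcher arbitrates cleanly.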
Perhaps the most important defining aspect of Flux is the conceptual problems it solves—things like unidirectional data flow. This is a major reason that there's no de facto Flux implementation. We wrapped the article up by walking through the setup of our build components used throughout the book. To test that the packages are all in place, we created a very basic hello world Flux application. Resources for Article: Further resources on this subject: Reactive Programming and the Flux Architecture [article] Advanced React [article] Qlik Sense's Vision [article]

Packt
28 Dec 2016
9 min read

MVVM and Data Binding

In this article by Steven F. Daniel, author of the book Mastering Xamarin UI Development, you will learn how to build stunning, maintainable, cross-platform mobile application user interfaces with the power of Xamarin. In this article, we will cover the following topics: Understanding the MVVM pattern architecture Implement the MVVM ViewModels within the app (For more resources related to this topic, see here.) Understanding the MVVM pattern architecture In this section we will be taking a look at the MVVM pattern architecture and the communication between the components that make up the architecture. The MVVM design pattern is designed to control the separation between the user interfaces (Views), the ViewModels that contain the actual binding to the Model, and the models that contain the actual structure of the entities representing information stored on a database or from a web service. The following screenshot shows the communication between each of the components contained within the MVVM design pattern architecture: The MVVM design pattern is divided into three main areas, as you can see from the preceding screenshot and these are explained in the following table: MVVM type Description Model The Model is basically a representation of business related entities used by an application, and is responsible for fetching data from either a database, or web service, and then de-serialized to the entities contained within the Model. View The View component of the MVVM model basically represents the actual screens that make up the application, along with any custom control components, and control elements, such as buttons, labels, and text fields. The Views contained within the MVVM pattern are platform-specific and are dependent on the platform APIs that are used to render the information that is contained within the application's user interface. ViewModel The ViewModel essentially controls, and manipulates the Views by acting as their main data context. 
The ViewModel contains a series of properties that are bound to the information contained within each Model, and those properties are then bound to each of the Views to represent this information within the user interface. ViewModels can also contain command objects that provide action-based events that can trigger the execution of event methods that occur within the View. For example, when the user taps on a toolbar item, or a button. ViewModels generally implement the INotifyPropertyChanged interface. Such a class fires a PropertyChanged event whenever one of their properties change. The data binding mechanism in Xamarin.Forms attaches a handler to this PropertyChanged event so it can be notified when a property changes and keep the target updated with the new value. Now that you have a good understanding of the components that are contained within MVVM design pattern architecture, we can begin to create our entity models and update our user interface files. In Xamarin.Forms, the term View is used to describe form controls, such as buttons and labels, and uses the term Page to describe the user interface or screen. Whereas, in MVVM, Views are used to describe the user interface, or screen. Implementing the MVVM ViewModels within your app In this section, we will begin by setting up the basic structure for our TrackMyWalks solution to include the folder that will be used to represent our ViewModels. Let's take a look at how we can achieve this, by following the steps: Launch the Xamarin Studio application and ensure that the TrackMyWalks solution is loaded within the Xamarin Studio IDE. 
Next, create a new folder within the TrackMyWalks PCL project, called ViewModels as shown in the following screenshot: Creating the WalkBaseViewModel for the TrackMyWalks app In this section, we will begin by creating a base MVVM ViewModel that will be used by each of our ViewModels when we create these, and then the Views (pages) will implement those ViewModels and use them as their BindingContext. Let's take a look at how we can achieve this, by following the steps: Create an empty class within the ViewModels folder, shown in the following screenshot: Next, choose the Empty Class option located within the General section, and enter in WalkBaseViewModel for the name of the new class file to create, as shown in the following screenshot: Next, click on the New button to allow the wizard to proceed and create the new empty class file, as shown in the preceding screenshot. Up until this point, all we have done is create our WalkBaseViewModel class file. This abstract class will act as the base ViewModel class that will contain the basic functionality that each of our ViewModels will inherit from. As we start to build the base class, you will see that it contains a couple of members and it will implement the INotifyPropertyChangedInterface,. As we progress through this article, we will build to this class, which will be used by the TrackMyWalks application. To proceed with creating the base ViewModel class, perform the following step as shown: Ensure that the WalkBaseViewModel.cs file is displayed within the code editor, and enter in the following code snippet: // // WalkBaseViewModel.cs // TrackMyWalks Base ViewModel // // Created by Steven F. Daniel on 22/08/2016. // Copyright © 2016 GENIESOFT STUDIOS. All rights reserved. 
// using System.ComponentModel; using System.Runtime.CompilerServices; namespace TrackMyWalks.ViewModels { public abstract class WalkBaseViewModel : INotifyPropertyChanged { protected WalkBaseViewModel() { } public event PropertyChangedEventHandler PropertyChanged; protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null) { var handler = PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(propertyName)); } } } } In the preceding code snippet, we begin by creating a new abstract class for our WalkBaseViewModel that implements from the INotifyPropertyChanged interface class, that allows the View or page to be notified whenever properties contained within the ViewModel have changed. Next, we declare a variable PropertyChanged that inherits from the PropertyChangedEventHandler that will be used to indicate whenever properties on the object have changed. Finally, within the OnPropertyChanged method, this will be called when it has determined that a change has occurred on a property within the ViewModel from a child class. The INotifyPropertyChanged interface is used to notify clients, typically binding clients, when the value of a property has changed. Implementing the WalksPageViewModel In the previous section, we built our base class ViewModel for our TrackMyWalks application, and this will act as the main class that will allow our View or pages to be notified whenever changes to properties within the ViewModel have been changed. In this section, we will need to begin building the ViewModel for our WalksPage,. This model will be used to store the WalkEntries, which will later be used and displayed within the ListView on the WalksPage content page. Let's take a look at how we can achieve this, by following the steps: First, create a new class file within the ViewModels folder called WalksPageViewModel, as you did in the previous section, entitled Creating the WalkBaseViewModel located within this article. 
Next, ensure that the WalksPageViewModel.cs file is displayed within the code editor, and enter in the following code snippet: // // WalksPageViewModel.cs // TrackMyWalks ViewModels // // Created by Steven F. Daniel on 22/08/2016. // Copyright © 2016 GENIESOFT STUDIOS. All rights reserved. // using System.Collections.ObjectModel; using TrackMyWalks.Models; namespace TrackMyWalks.ViewModels { public class WalksPageViewModel : WalkBaseViewModel { ObservableCollection<WalkEntries> _walkEntries; public ObservableCollection<WalkEntries> walkEntries { get { return _walkEntries; } set { _walkEntries = value; OnPropertyChanged(); } } In the above code snippet, we begin by ensuring that our ViewModel inherits from the WalkBaseViewModel class. Next, we create an ObservableCollection variable _walkEntries which is very useful when you want to know when the collection has changed, and an event is triggered that will tell the user what entries have been added or removed from the WalkEntries model. In our next step, we create the ObservableCollection constructor WalkEntries, that is defined within the System.Collections.ObjectModel class, and accepts a List parameter containing our WalkEntries model. The WalkEntries property will be used to bind to the ItemSource property of the ListView within the WalksMainPage. Finally, we define the getter (get) and setter (set) methods that will return and set the content of our _walkEntries when it has been determined when a property has been modified or not. Next, locate the WalksPageViewModel class constructor, and enter the following highlighted code sections:         public WalksPageViewModel() { walkEntries = new ObservableCollection<WalkEntries>() { new WalkEntries { Title = "10 Mile Brook Trail, Margaret River", Notes = "The 10 Mile Brook Trail starts in the Rotary Park near Old Kate, a preserved steam " + "engine at the northern edge of Margaret River. 
", Latitude = -33.9727604, Longitude = 115.0861599, Kilometers = 7.5, Distance = 0, Difficulty = "Medium", ImageUrl = "http://trailswa.com.au/media/cache/media/images/trails/_mid/" + "FullSizeRender1_600_480_c1.jpg" }, new WalkEntries { Title = "Ancient Empire Walk, Valley of the Giants", Notes = "The Ancient Empire is a 450 metre walk trail that takes you around and through some of " + "the giant tingle trees including the most popular of the gnarled veterans, known as " + "Grandma Tingle.", Latitude = -34.9749188, Longitude = 117.3560796, Kilometers = 450, Distance = 0, Difficulty = "Hard", ImageUrl = "http://trailswa.com.au/media/cache/media/images/trails/_mid/" + "Ancient_Empire_534_480_c1.jpg" }, }; } } } In the preceding code snippet, we began by creating a new ObservableCollection for our walkEntries method and then added each of the walk list items that we would like to store within our model. As each item is added, the ObservableCollection, constructor is called, and the setter (set) method is invoked to add the item, and then the INotifyPropertyChanged event will be triggered to notify that a change has occurred. Summary In this article, you learned about the MVVM pattern architecture; we also implemented the MVVM ViewModels within the app. Additionally, we created and implemented the WalkBaseViewModel for the TrackMyWalks application. Resources for Article: Further resources on this subject: A cross-platform solution with Xamarin.Forms and MVVM architecture [article] Building a Gallery Application [article] Heads up to MvvmCross [article]