How-To Tutorials - Front-End Web Development

Recursive directives

Packt | 22 Dec 2014 | 13 min read
In this article by Matt Frisbie, the author of AngularJS Web Application Development Cookbook, we will see recursive directives. The power of directives can also be effectively applied when consuming data in a more unwieldy format. Consider the case in which you have a JavaScript object that exists in some sort of recursive tree structure. The view that you will generate for this object will also reflect its recursive nature and will have nested HTML elements that match the underlying data structure.

Getting ready

Suppose you had a recursive data object in your controller as follows (app.js):

```js
angular.module('myApp', [])
.controller('MainCtrl', function($scope) {
  $scope.data = {
    text: 'Primates',
    items: [
      {
        text: 'Anthropoidea',
        items: [
          {
            text: 'New World Anthropoids'
          },
          {
            text: 'Old World Anthropoids',
            items: [
              {
                text: 'Apes',
                items: [
                  { text: 'Lesser Apes' },
                  { text: 'Greater Apes' }
                ]
              },
              { text: 'Monkeys' }
            ]
          }
        ]
      },
      { text: 'Prosimii' }
    ]
  };
});
```

How to do it…

As you might imagine, iteratively constructing a view, or only partially using directives to accomplish this, will become extremely messy very quickly. Instead, it would be better if you were able to create a directive that would seamlessly break apart the data recursively, and define and render the sub-HTML fragments cleanly. By cleverly using directives and the $compile service, this exact directive functionality is possible.

The ideal directive in this scenario will be able to handle the recursive object without any additional parameters or outside assistance in parsing and rendering the object. So, in the main view, your directive will look something like this:

```html
<recursive value="nestedObject"></recursive>
```

The directive accepts an isolate scope '=' binding to the parent scope object, which will remain structurally identical as the directive descends through the recursive object.

The $compile service

You will need to inject the $compile service in order to make the recursive directive work. The reason for this is that each level of the directive can instantiate directives inside it and convert them from an uncompiled template to real DOM material.

The angular.element() method

The angular.element() method can be thought of as the jQuery $() equivalent. It accepts a string template or DOM fragment and returns a jqLite object that can be modified, inserted, or compiled for your purposes. If the jQuery library is present when the application is initialized, AngularJS will use that instead of jqLite.
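To make the relationship between these two services concrete, here is a minimal, hypothetical sketch of how angular.element() and $compile are commonly combined inside a directive's link function (the template string and the name property are illustrative and not part of this recipe):

```js
// Hypothetical illustration: compile a template string against a scope.
// Assumes this runs inside link(scope, el, attrs) of a directive that
// injected $compile, and that scope.name already exists.
var fragment = angular.element('<span>Hello, {{ name }}!</span>'); // jqLite-wrapped DOM
var linkFn = $compile(fragment); // returns a linking function
var liveDom = linkFn(scope);     // bind the fragment to the given scope
el.append(liveDom);              // insert the data-bound DOM into the page
```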
If you use the AngularJS template cache, retrieved templates will already exist as if you had called the angular.element() method on the template text.

The $templateCache

Inside a directive, it's possible to create a template using angular.element() and a string of HTML, similar to an underscore.js template. However, this is completely unnecessary and quite unwieldy compared to AngularJS templates. When you declare a template and register it with AngularJS, it can be accessed through the injected $templateCache, which acts as a key-value store for your templates. The recursive template is as follows:

```html
<script type="text/ng-template" id="recursive.html">
  <span>{{ val.text }}</span>
  <button ng-click="delSubtree()">delete</button>
  <ul ng-if="isParent" style="margin-left:30px">
    <li ng-repeat="item in val.items">
      <tree val="item" parent-data="val.items"></tree>
    </li>
  </ul>
</script>
```

The <span> and <button> elements are present at each instance of a node; they present the data at that node as well as an interface to the click event (which we will define in a moment) that will destroy the node and all its children. Following these, the conditional <ul> element renders only if the isParent flag is set in the scope, and it repeats through the items array, recursing into the child data and creating new instances of the directive. The recursive reference inside the template is this line:

```html
<tree val="item" parent-data="val.items"></tree>
```

Not only does the directive take a val attribute for the local node data, it also takes a parent-data attribute, which is the point of scope indirection that allows the tree structure. To make more sense of this, examine the following directive code (app.js):

```js
.directive('tree', function($compile, $templateCache) {
  return {
    restrict: 'E',
    scope: {
      val: '=',
      parentData: '='
    },
    link: function(scope, el, attrs) {
      scope.isParent = angular.isArray(scope.val.items);
      scope.delSubtree = function() {
        if (scope.parentData) {
          scope.parentData.splice(
            scope.parentData.indexOf(scope.val),
            1
          );
        }
        scope.val = {};
      };
      el.replaceWith(
        $compile(
          $templateCache.get('recursive.html')
        )(scope)
      );
    }
  };
});
```

With all of this in place, if you provide the recursive directive with the data object from the beginning of this article, it will result in the following (presented here without the auto-added AngularJS comments and directives):

(index.html – uncompiled)

```html
<div ng-app="myApp">
  <div ng-controller="MainCtrl">
    <tree val="data"></tree>
  </div>
  <script type="text/ng-template" id="recursive.html">
    <span>{{ val.text }}</span>
    <button ng-click="delSubtree()">delete</button>
    <ul ng-if="isParent" style="margin-left:30px">
      <li ng-repeat="item in val.items">
        <tree val="item" parent-data="val.items"></tree>
      </li>
    </ul>
  </script>
</div>
```

The recursive nature of the directive templates enables nesting, and when compiled using the recursive data object located in the wrapping controller, it will compile into the following HTML:

(index.html – compiled)

```html
<div ng-controller="MainCtrl">
  <span>Primates</span>
  <button ng-click="delSubtree()">delete</button>
  <ul ng-if="isParent" style="margin-left:30px">
    <li ng-repeat="item in val.items">
      <span>Anthropoidea</span>
      <button ng-click="delSubtree()">delete</button>
      <ul ng-if="isParent" style="margin-left:30px">
        <li ng-repeat="item in val.items">
          <span>New World Anthropoids</span>
          <button ng-click="delSubtree()">delete</button>
        </li>
        <li ng-repeat="item in val.items">
          <span>Old World Anthropoids</span>
          <button ng-click="delSubtree()">delete</button>
          <ul ng-if="isParent" style="margin-left:30px">
            <li ng-repeat="item in val.items">
              <span>Apes</span>
              <button ng-click="delSubtree()">delete</button>
              <ul ng-if="isParent" style="margin-left:30px">
                <li ng-repeat="item in val.items">
                  <span>Lesser Apes</span>
                  <button ng-click="delSubtree()">delete</button>
                </li>
                <li ng-repeat="item in val.items">
                  <span>Greater Apes</span>
                  <button ng-click="delSubtree()">delete</button>
                </li>
              </ul>
            </li>
            <li ng-repeat="item in val.items">
              <span>Monkeys</span>
              <button ng-click="delSubtree()">delete</button>
            </li>
          </ul>
        </li>
      </ul>
    </li>
    <li ng-repeat="item in val.items">
      <span>Prosimii</span>
      <button ng-click="delSubtree()">delete</button>
    </li>
  </ul>
</div>
```

JSFiddle: http://jsfiddle.net/msfrisbie/ka46yx4u/

How it works…

The isolate scope defined through the nested directives described in the previous section allows all or part of the recursive object to be bound through parentData to the appropriate directive instance, all the while maintaining the nested connectedness afforded by the directive hierarchy. When a parent node is deleted, the lower directives are still bound to the data object, and the removal propagates through cleanly.

The meatiest and most important part of this directive is, of course, the link function. Here, the link function determines whether the node has any children (which is simply a check for the existence of an array in the local data node) and declares the delete method, which removes the relevant portion from the recursive object and cleans up the local node. Up until this point, there haven't been any recursive calls, and there shouldn't need to be. If your directive is constructed correctly, AngularJS data binding and its inherent template management will take care of the template cleanup for you. This leads to the final statement of the link function, which is broken up here for readability:

```js
el.replaceWith(
  $compile(
    $templateCache.get('recursive.html')
  )(scope)
);
```

Recall that in a link function, the second parameter is the jqLite-wrapped DOM object that the directive is linking—here, the <tree> element. This exposes a subset of jQuery object methods, including replaceWith(), which you will use here. The top-level instance of the directive will be replaced by the recursively-defined template, and this replacement carries down through the tree.

At this point, you should have an idea of how the recursive structure comes together. The element parameter needs to be replaced with a recursively-compiled template, and for this, you will employ the $compile service. This service accepts a template as a parameter and returns a function that you invoke with the current scope inside the directive's link function. The template is retrieved from $templateCache by the recursive.html key, and then it is compiled. When the compiler reaches the nested <tree> directive, the recursive directive is realized all the way down through the data in the recursive object.
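Because the tree is driven entirely by data binding, growing the data grows the view. As a hedged illustration (this helper is hypothetical and not part of the recipe), a controller method could push a new node into the data and the nested <tree> directives would render it automatically:

```js
// Hypothetical addition to MainCtrl: growing the tree at runtime.
$scope.addChild = function(node, text) {
  node.items = node.items || [];
  node.items.push({ text: text }); // the nested <tree> directives re-render automatically
};
```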
Summary

This article demonstrated the power of constructing a directive to convert a complex data object into a large DOM object. Relevant portions can be broken into individual templates, handled with distributed directive logic, and combined in an elegant fashion to maximize modularity and reusability.

Further resources on this subject: Working with Live Data and AngularJS, Angular Zen, AngularJS Project.

Text and appearance bindings and form field bindings

Packt | 25 May 2015 | 14 min read
In this article by Andrey Akinshin, the author of Getting Started with Knockout.js for .NET Developers, we will look at the various bindings offered by Knockout.js. Knockout.js provides a huge number of useful HTML data bindings to control text and its appearance. In this section, we take a brief look at the most common bindings:

The text binding
The html binding
The css binding
The style binding
The attr binding
The visible binding

The text binding

The text binding is one of the most useful bindings. It allows us to bind the text of an element (for example, a span) to a property of the ViewModel. Let's create an example in which a person has a single firstName property.

The Model will be as follows:

```js
var person = { firstName: "John" };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.firstName = ko.observable(person.firstName);
};
```

The View will be as follows:

```html
The first name is <span data-bind="text: firstName"></span>.
```

It is a very simple example. The Model (the person object) has only the firstName property, with the initial value John. In the ViewModel, we created the firstName property, which is represented by ko.observable. The View contains a span element with a single data binding: the text property of the span binds to the firstName property of the ViewModel. In this example, any change to personViewModel.firstName will entail an automatic update of the text in the span element. If we run the example, we will see a single text line: The first name is John.

Let's upgrade our example by adding an age property for the person. In the View, we will print young person for an age less than 18, or adult person for an age greater than or equal to 18 (PersonalPage-Binding-Text2.html).

The Model will be as follows:

```js
var person = { firstName: "John", age: 30 };
```

The ViewModel will be as follows:

```js
var personViewModel = function() {
  var self = this;
  self.firstName = ko.observable(person.firstName);
  self.age = ko.observable(person.age);
};
```

The View will be as follows:

```html
<span data-bind="text: firstName"></span> is
<span data-bind="text: age() >= 18 ? 'adult' : 'young'"></span> person.
```

This example uses an expression binding in the View. The second span element binds its text property to a JavaScript expression. In this case, we will see the text John is adult person because we set age to 30 in the Model.

Note that it is bad practice to write expressions such as age() >= 18 directly inside the binding value. The better way is to define a so-called computed observable property that contains the boolean expression and to use the name of the defined property instead of the expression. We will discuss this method later.
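As a hedged preview of that later discussion, a computed observable version of this example might look like the following sketch (the ageStatus property name is invented, not taken from the book):

```js
// Hypothetical sketch: moving the age() >= 18 expression into a computed observable.
var PersonViewModel = function() {
  var self = this;
  self.firstName = ko.observable("John");
  self.age = ko.observable(30);
  // Re-evaluates automatically whenever self.age changes.
  self.ageStatus = ko.computed(function() {
    return self.age() >= 18 ? 'adult' : 'young';
  });
};
// View: <span data-bind="text: ageStatus"></span>
```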
The html binding

In some cases, we may want to use HTML tags inside our data binding. However, if we include HTML tags in the text binding, the tags will be shown in their raw form: when you try to display a link with the text binding, the HTML will be encoded, so the user will see special characters instead of a link. We should use the html binding to render tags, as shown in the following example:

The Model will be as follows:

```js
var person = {
  about: "John's favorite site is <a href='http://www.packtpub.com'>PacktPub</a>."
};
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.about = ko.observable(person.about);
};
```

The View will be as follows:

```html
<span data-bind="html: about"></span>
```

Thanks to the html binding, the about message will be displayed correctly and the <a> tag will be transformed into a hyperlink.

The css binding

The html binding is a good way to include HTML tags in the binding value, but it is bad practice to use it for styling. Instead, we should use the css binding for this aim. Let's consider the following example:

The Model will be as follows:

```js
var person = { favoriteColor: "red" };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.favoriteColor = ko.observable(person.favoriteColor);
};
```

The View will be as follows:

```html
<style type="text/css">
  .redStyle   { color: red; }
  .greenStyle { color: green; }
</style>
<div data-bind="css: { redStyle: favoriteColor() == 'red', greenStyle: favoriteColor() == 'green' }">
  John's favorite color is <span data-bind="text: favoriteColor"></span>.
</div>
```

In the View, there are two CSS classes: redStyle and greenStyle. In the Model, we use favoriteColor to define the favorite color of our person. The expression binding for the div element applies the redStyle CSS class for red and greenStyle for green. It uses the favoriteColor observable property as a function to get its value. If favoriteColor were not an observable, the data binding would just be favoriteColor == 'red'; of course, in that case, when favoriteColor changed, the DOM would not be updated because it would not be notified.

The style binding

In some cases, we do not have access to the CSS, but we still need to change the style of the View. For example, the CSS files may be part of an application theme that we do not have the rights to modify. The style binding helps us in such a case:

The Model will be as follows:

```js
var person = { favoriteColor: "red" };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.favoriteColor = ko.observable(person.favoriteColor);
};
```

The View will be as follows:

```html
<div data-bind="style: { color: favoriteColor() }">
  John's favorite color is <span data-bind="text: favoriteColor"></span>.
</div>
```

This example is analogous to the previous one, with the only difference being that we use the style binding instead of the css binding.

The attr binding

The attr binding is also a good way to work with DOM elements. It allows us to set the value of any attribute of an element. Let's look at the following example:

The Model will be as follows:

```js
var person = { favoriteUrl: "http://www.packtpub.com" };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.favoriteUrl = ko.observable(person.favoriteUrl);
};
```

The View will be as follows:

```html
John's favorite site is <a data-bind="attr: { href: favoriteUrl() }">PacktPub</a>.
```

The href attribute of the <a> element binds to the favoriteUrl property of the ViewModel via the attr binding.

The visible binding

The visible binding allows us to show or hide elements according to the ViewModel. Let's consider an example with a div element that is shown depending on a conditional binding:

The Model will be as follows:

```js
var person = { favoriteSite: "PacktPub" };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.favoriteSite = ko.observable(person.favoriteSite);
};
```

The View will be as follows:

```html
<div data-bind="visible: favoriteSite().length > 0">
  John's favorite site is <span data-bind="text: favoriteSite"></span>.
</div>
```

In this example, the div element with information about John's favorite site will be shown only if the information was defined.
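One connecting detail that all of the Model/ViewModel/View snippets in this article take for granted: none of the bindings do anything until Knockout is activated against the ViewModel. A minimal sketch of that wiring (the variable name is illustrative):

```js
// Hypothetical wiring for any of the examples above: activate Knockout once,
// binding the ViewModel instance to the document.
var viewModel = new PersonViewModel();
ko.applyBindings(viewModel);
```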
Form field bindings

Forms are important parts of many web applications. In this section, we will learn about a number of data bindings for working with form fields:

The value binding
The click binding
The submit binding
The event binding
The checked binding
The enable binding
The disable binding
The options binding
The selectedOptions binding

The value binding

Very often, forms use the input, select, and textarea elements to enter text. Knockout.js allows us to work with such text via the value binding, as shown in the following example:

The Model will be as follows:

```js
var person = { firstName: "John" };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.firstName = ko.observable(person.firstName);
};
```

The View will be as follows:

```html
<form>
  The first name is <input type="text" data-bind="value: firstName" />.
</form>
```

The value property of the text input element binds to the firstName property of the ViewModel.

The click binding

We can add a function as an event handler for the onclick event with the click binding. Let's consider the following example:

The Model will be as follows:

```js
var person = { age: 30 };
```

The ViewModel will be as follows:

```js
var personViewModel = function() {
  var self = this;
  self.age = ko.observable(person.age);
  self.growOld = function() {
    var previousAge = self.age();
    self.age(previousAge + 1);
  };
};
```

The View will be as follows:

```html
<div>
  John's age is <span data-bind="text: age"></span>.
  <button data-bind="click: growOld">Grow old</button>
</div>
```

We have the Grow old button in the View. The click property of this button binds to the growOld function of the ViewModel. This function increases the age of the person by one year. Because the age property is an observable, the text in the span element will automatically be updated to 31.

The submit binding

Typically, the submit event is the last operation when working with a form. Knockout.js supports the submit binding to add the corresponding event handler. Of course, you could also use the click binding on the submit button, but that is not quite the same thing, because there are alternative ways to submit a form. For example, a user can press the Enter key while typing into a textbox. Let's update the previous example with the submit binding:

The Model will be as follows:

```js
var person = { age: 30 };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.age = ko.observable(person.age);
  self.growOld = function() {
    var previousAge = self.age();
    self.age(previousAge + 1);
  };
};
```

The View will be as follows:

```html
<div>
  John's age is <span data-bind="text: age"></span>.
  <form data-bind="submit: growOld">
    <button type="submit">Grow old</button>
  </form>
</div>
```

The only new thing is moving the link to the growOld function to the submit binding of the form.

The event binding

The event binding also helps us interact with the user. This binding allows us to add an event handler to an element for events such as keypress, mouseover, or mouseout. In the following example, we use this binding to control the visibility of a div element according to the mouse position:

The Model will be as follows:

```js
var person = { };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.aboutEnabled = ko.observable(false);
  self.showAbout = function() {
    self.aboutEnabled(true);
  };
  self.hideAbout = function() {
    self.aboutEnabled(false);
  };
};
```

The View will be as follows:

```html
<div>
  <div data-bind="event: { mouseover: showAbout, mouseout: hideAbout }">
    Mouse over to view the information about John.
  </div>
  <div data-bind="visible: aboutEnabled">
    John's favorite site is <a href='http://www.packtpub.com'>PacktPub</a>.
  </div>
</div>
```

In this example, the Model is empty because the web page doesn't have any state outside of the runtime context. The single property aboutEnabled makes sense only while the application is running. In such a case, we can omit the corresponding property in the Model and work only with the ViewModel. In particular, we work with a single ViewModel property, aboutEnabled, which defines the visibility of the div. There are two event bindings, mouseover and mouseout, and they link the mouse behavior to the value of aboutEnabled with the help of the showAbout and hideAbout ViewModel functions.

The checked binding

Many forms contain checkboxes (<input type="checkbox" />). We can work with their values with the help of the checked binding, as shown in the following example:

The Model will be as follows:

```js
var person = { isMarried: false };
```

The ViewModel will be as follows:

```js
var personViewModel = function() {
  var self = this;
  self.isMarried = ko.observable(person.isMarried);
};
```

The View is as follows:

```html
<form>
  <input type="checkbox" data-bind="checked: isMarried" />
  Is married
</form>
```

The View contains the Is married checkbox. Its checked property binds to the Boolean isMarried property of the ViewModel.

The enable and disable bindings

Good usability practice suggests setting the enabled state of some elements (such as input, select, and textarea) according to the form state. Knockout.js provides the enable binding for this purpose. Let's consider the following example:

The Model is as follows:

```js
var person = { isMarried: false, wife: "" };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.isMarried = ko.observable(person.isMarried);
  self.wife = ko.observable(person.wife);
};
```

The View will be as follows:

```html
<form>
  <p>
    <input type="checkbox" data-bind="checked: isMarried" />
    Is married
  </p>
  <p>
    Wife's name:
    <input type="text" data-bind="value: wife, enable: isMarried" />
  </p>
</form>
```

The View contains the checkbox from the previous example. Only in the case of a married person can we write the name of his wife. This behavior is provided by the enable binding of the text input element. The disable binding works in exactly the opposite way; it allows you to avoid negative expressions in some bindings.

The options binding

If the Model contains a collection, then we need a select element to display it. The options binding allows us to link such an element to the data, as shown in the following example:

The Model is as follows:

```js
var person = { children: ["Jonnie", "Jane", "Richard", "Mary"] };
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.children = person.children;
};
```

The View will be as follows:

```html
<form>
  <select multiple="multiple" data-bind="options: children"></select>
</form>
```

In the preceding example, the Model contains the children array. The View represents this array with the help of a multiple select element. Note that, in this example, children is a non-observable array, so changes to the ViewModel do not affect the View. The code is shown only to demonstrate the options binding.
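If you did want the collection itself to be reactive, the usual approach is to wrap the array in ko.observableArray. A hedged sketch of that variant (not code from the book):

```js
// Hypothetical variant: an observable array, so that adding or removing
// children updates the select element automatically.
var PersonViewModel = function() {
  var self = this;
  self.children = ko.observableArray(["Jonnie", "Jane", "Richard", "Mary"]);
};
var vm = new PersonViewModel();
ko.applyBindings(vm);
vm.children.push("Robert"); // the new option appears in the select element
```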
The selectedOptions binding

In addition to the options binding, we can use the selectedOptions binding to work with the selected items of a select element. Let's look at the following example:

The Model will be as follows:

```js
var person = {
  children: ["Jonnie", "Jane", "Richard", "Mary"],
  selectedChildren: ["Jonnie", "Mary"]
};
```

The ViewModel will be as follows:

```js
var PersonViewModel = function() {
  var self = this;
  self.children = person.children;
  self.selectedChildren = person.selectedChildren;
};
```

The View will be as follows:

```html
<form>
  <select multiple="multiple" data-bind="options: children, selectedOptions: selectedChildren"></select>
</form>
```

The selectedChildren property of the ViewModel defines the set of selected items in the select element. Note that, as in the previous example, selectedChildren is a non-observable array; the preceding code only shows the use of the selectedOptions binding. In a real-world application, the value of the selectedChildren binding will usually be an observable array.

Summary

In this article, we have looked at examples that illustrate the use of bindings for various purposes.

Further resources on this subject: So, what is Ext JS?, Introducing a feature of IntroJs, Top features of KnockoutJS.

Why Meteor Rocks!

Packt | 08 Jul 2015 | 23 min read
In this article by Isaac Strack, the author of Getting Started with Meteor.js JavaScript Framework - Second Edition, we look at some of the amazing features that have contributed to Meteor's success. Meteor is a disruptive (in a good way!) technology. It enables a new type of web application that is faster, easier to build, and takes advantage of modern techniques, such as Full Stack Reactivity, Latency Compensation, and Data On The Wire.

This article explains how web applications have changed over time, why that matters, and how Meteor specifically enables modern web apps through the above-mentioned techniques. By the end of this article, you will have learned:

What a modern web application is
What Data On The Wire means and how it's different
How Latency Compensation can improve your app experience
Templates and Reactivity—programming the reactive way!

Modern web applications

Our world is changing. With continual advancements in displays, computing, and storage capacities, things that weren't even possible a few years ago are now not only possible but critical to the success of a good application. The Web in particular has undergone significant change.

The origin of the web app (client/server)

From the beginning, web servers and clients have mimicked the dumb terminal approach to computing, where a server with significantly more processing power than a client performs operations on data (writing records to a database, math calculations, text searches, and so on), transforms the data and renders it (turning a database record into HTML, and so on), and then serves the result to the client, where it is displayed for the user. In other words, the server does all the work, and the client acts as more of a display, or a dumb terminal. The design pattern for this is called…wait for it…the client/server design pattern.

This design pattern, borrowed from the dumb terminals and mainframes of the 60s and 70s, was the beginning of the Web as we know it, and it has continued to be the design pattern that we think of when we think of the Internet.

The rise of the machines (MVC)

Before the Web (and ever since), desktops were able to run a program such as a spreadsheet or a word processor without needing to talk to a server. This type of application could do everything it needed to, right there on the big and beefy desktop machine. During the early 90s, desktop computers got even more beefy. At the same time, the Web was coming alive, and people started having the idea that a hybrid between the beefy desktop application (a fat app) and the connected client/server application (a thin app) would produce the best of both worlds. This kind of hybrid app—quite the opposite of a dumb terminal—was called a smart app.

Many business-oriented smart apps were created, but the easiest examples are found in computer games. Massively Multiplayer Online games (MMOs), first-person shooters, and real-time strategies are smart apps where information (the data model) is passed between machines through a server. The client in this case does a lot more than just display the information. It performs most of the processing (or acts as a controller) and transforms the data into something to be displayed (the view). This design pattern is simple but very effective. It's called the Model View Controller (MVC) pattern.
The model is essentially the data for an application. In the context of a smart app, the model is provided by a server. The client makes requests to the server for data and stores that data as the model. Once the client has a model, it performs actions/logic on that data and then prepares it to be displayed on the screen. This part of the application (talking to the server, modifying the data model, and preparing data for display) is called the controller. The controller sends commands to the view, which displays the information. The view also reports back to the controller when something happens on the screen (a button click, for example). The controller receives the feedback, performs the logic, and updates the model. Lather, rinse, repeat!

Since web browsers were built to be "dumb clients", the idea of using a browser as a smart app back then was out of the question. Instead, smart apps were built on frameworks such as Microsoft .NET, Java, or Macromedia (now Adobe) Flash. As long as you had the framework installed, you could visit a web page to download/run a smart app. Sometimes you could run the app inside the browser, and sometimes you would download it first, but either way, you were running a new type of web app where the client application could talk to the server and share the processing workload.

The browser grows up

Beginning in the early 2000s, a new twist on the MVC pattern started to emerge. Developers started to realize that, for connected/enterprise "smart apps", there was actually a nested MVC pattern. The server code (controller) was performing business logic against the database (model) through the use of business objects and then sending processed/rendered data to the client application (a "view"). The client was receiving this data from the server and treating it as its own personal "model". The client would then act as a proper controller, perform logic, and send the information to the view to be displayed on the screen. So, the "view" for the server MVC was the "model" for the client MVC.

As browser technologies (HTML and JavaScript) matured, it became possible to create smart apps that used this nested MVC design pattern directly inside an HTML web page. This pattern makes it possible to run a full-sized application using only JavaScript. There is no longer any need to download multiple frameworks or separate apps. You can now get the same functionality from visiting a URL as you previously could by buying a packaged product.

A giant Meteor appears!

Meteor takes modern web apps to the next level. It enhances and builds upon the nested MVC design pattern by implementing three key features:

Data On The Wire, through the Distributed Data Protocol (DDP)
Latency Compensation, with mini databases
Full Stack Reactivity, with Blaze and Tracker

Let's walk through these concepts to see why they're valuable, and then we'll apply them to our Lending Library application.

Data On The Wire

The concept of Data On The Wire is very simple and in tune with the nested MVC pattern; instead of having a server process everything, render content, and then send HTML across the wire, why not just send the data across the wire and let the client decide what to do with it? This concept is implemented in Meteor using the Distributed Data Protocol, or DDP. DDP has a JSON-based syntax and sends messages similar to the REST protocol. Additions, deletions, and changes are all sent across the wire and handled by the receiving service/client/device.
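To make that message flow concrete, here is a rough, illustrative sketch of the kind of JSON messages DDP exchanges (the collection name, document id, and field values are invented for illustration; the authoritative list of message types lives in the DDP specification):

```js
// Illustrative DDP messages, shown as JavaScript object literals.
// The client subscribes to a publication:
var subMessage = { msg: "sub", id: "1", name: "lists", params: [] };

// ...and the server streams document-level changes for that subscription:
var addedMessage   = { msg: "added",   collection: "lists", id: "abc123", fields: { Category: "Games" } };
var changedMessage = { msg: "changed", collection: "lists", id: "abc123", fields: { Category: "Board Games" } };
var removedMessage = { msg: "removed", collection: "lists", id: "abc123" };
```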
Since DDP uses WebSockets rather than HTTP, the data can be pushed whenever changes occur. But the true beauty of DDP lies in the generic nature of the communication. It doesn't matter what kind of system sends or receives data over DDP—it can be a server, a web service, or a client app—they all use the same protocol to communicate. This means that none of the systems know (or care) whether the other systems are clients or servers. With the exception of the browser, any system can be a server, and without exception, any server can act as a client. All the traffic looks the same and can be treated in a similar manner.

In other words, the traditional concept of having a single server for a single client goes away. You can hook multiple servers together, each serving a discrete purpose, or you can have a client connect to multiple servers, interacting with each one differently. Think about what you can do with a system like that: imagine multiple systems all coming together to create, for example, a health monitoring system. Some systems are built with C++, some with Arduino, some with…well, we don't really care. They all speak DDP. They send and receive data on the wire and decide individually what to do with that data. Suddenly, very difficult and complex problems become much easier to solve. DDP has been implemented in pretty much every major programming language, allowing you true freedom to architect an enterprise application.

Latency Compensation

Meteor employs a very clever technique called mini databases. A mini database is a "lite" version of a normal database that lives in memory on the client side. Instead of the client sending requests to a server, it can make changes directly to the mini database on the client. This mini database then automatically syncs with the server (using DDP, of course), which has the actual database. Out of the box, Meteor uses MongoDB and Minimongo.

When the client notices a change, it first executes that change against the client-side Minimongo instance. The client then goes on its merry way and lets the Minimongo handlers communicate with the server over DDP. If the server accepts the change, it sends out a "changed" message to all connected clients, including the one that made the change. If the server rejects the change, or if a newer change has come in from a different client, the Minimongo instance on the client is corrected, and any affected UI elements are updated as a result.

All of this doesn't seem very groundbreaking, but here's the thing—it's all asynchronous, and it's done using DDP. This means that the client doesn't have to wait until it gets a response back from the server. It can immediately update the UI based on what is in the Minimongo instance. What if the change was illegal, or other changes have come in from the server? This is not a problem, as the client is updated as soon as it gets word from the server.

Now, what if you have a slow internet connection, or your connection goes down temporarily? In a normal client/server environment, you couldn't make any changes, or the screen would take a while to refresh while the client waits for permission from the server. However, Meteor compensates for this. Since changes are immediately applied to Minimongo, the UI gets updated immediately. So, if your connection is down, it won't cause a problem: all the changes you make are reflected in your UI, based on the data in Minimongo.
When your connection comes back, all the queued changes are sent to the server, and the server sends authorized changes back to the client. Basically, Meteor lets the client take things on faith. If there's a problem, the data coming in from the server will fix it, but for the most part, the changes you make will be ratified and broadcast by the server immediately.

Coding this type of behavior in Meteor is crazy easy (although you can make it more complex, and therefore more controlled, if you like):

```js
lists = new Mongo.Collection("lists");
```

This one line declares that there is a lists data model. Both the client and the server will have a version of it, but they treat their versions differently. The client will subscribe to changes announced by the server and update its model accordingly. The server will publish changes, listen to change requests from the client, and update its model (its master copy) based on those change requests. Wow, one line of code that does all that! Of course, there is more to it, but that's beyond the scope of this article, so we'll move on.

To better understand Meteor data synchronization, see the Publish and subscribe section of the Meteor documentation at http://docs.meteor.com/#/full/meteor_publish.

Full Stack Reactivity

Reactivity is integral to every part of Meteor. On the client side, Meteor has the Blaze library, which uses HTML templates and JavaScript helpers to detect changes and render the data in your UI. Whenever there is a change, the helpers re-run themselves and add, delete, and change UI elements, as appropriate, based on the structure found in the templates. These functions that re-run themselves are called reactive computations.

On both the client and the server, Meteor also offers reactive computations without a UI. Called the Tracker library, these helpers also detect any data changes and re-run themselves accordingly. Because both the client and the server are JavaScript-based, you can use the Tracker library anywhere. This is described as isomorphic or full stack reactivity, because you're using the same language (and in some cases the same code!) on both the client and the server.
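As a small, hedged illustration of a Tracker-based reactive computation that has nothing to do with a template (the counter key is invented, and this snippet is not part of the Lending Library code):

```js
// Hypothetical client-side example: a computation that re-runs whenever
// the reactive Session value it reads changes.
Session.setDefault('counter', 0);

Tracker.autorun(function () {
  console.log('counter is now', Session.get('counter'));
});

Session.set('counter', 1); // the autorun re-runs and logs: counter is now 1
```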
Re-running functions on data changes has a really amazing benefit for you, the programmer: you get to write code declaratively, and Meteor takes care of the reactive part automatically. Just tell Meteor how you want the data displayed, and Meteor will manage any and all data changes.

This declarative style is usually accomplished through the use of templates. Templates work their magic through the use of view data bindings. Without getting too deep, a view data binding is a shared piece of data that will be displayed differently if the data changes. Let's look at a very simple data binding—one for which you don't technically need Meteor—to illustrate the point. Let's perform the following set of steps to understand the concept in detail.

In LendLib.html, you will see an HTML-based template expression:

```html
<div id="categories-container">
  {{> categories}}
</div>
```

This expression is a placeholder for an HTML template that is found just below it:

```html
<template name="categories">
  <h2 class="title">my stuff</h2>
  ...
```

So, {{> categories}} is basically saying, "put whatever is in the template categories right here," and the HTML template with the matching name is providing that.

If you want to see how data changes will affect the display, change the h2 tag to an h4 tag and save the change:

```html
<template name="categories">
  <h4 class="title">my stuff</h4>
```

You'll see the effect in your browser (my stuff will become itsy bitsy). That's view data binding at work. Change the h4 tag back to an h2 tag and save the change, unless you like the change. No judgment here...okay, maybe a little bit of judgment. It's ugly, and tiny, and hard to read. Seriously, you should change it back before someone sees it and makes fun of you!

Alright, now that we know what a view data binding is, let's see how Meteor uses it. Inside the categories template in LendLib.html, you'll find even more templates:

```html
<template name="categories">
  <h4 class="title">my stuff</h4>
  <div id="categories" class="btn-group">
    {{#each lists}}
      <div class="category btn btn-primary">
        {{Category}}
      </div>
    {{/each}}
  </div>
</template>
```

Meteor uses a template language called Spacebars to provide instructions inside templates. These instructions are called expressions, and they let us do things such as add HTML for every record in a collection, insert the values of properties, and control layouts with conditional statements.

The first Spacebars expression is part of a pair and is a for-each statement. {{#each lists}} tells the interpreter to perform the action below it (in this case, to make a new div element) for each item in the lists collection. lists is the piece of data, and {{#each lists}} is the placeholder.

Inside the {{#each lists}} expression, there is one more Spacebars expression:

```html
{{Category}}
```

Since this expression is found inside the #each expression, it is considered a property. That is to say, {{Category}} is the same as saying this.Category, where this is the current item in the for-each loop. So, the placeholder is saying, "add the value of the Category property for the current record."

Now, if we look in LendLib.js, we will see the reactive values (called reactive contexts) behind the templates:

```js
lists : function () {
  return lists.find(...
```

Here, Meteor is declaring a template helper named lists. The helper, lists, is found inside the template helpers belonging to categories. The lists helper happens to be a function that returns all the data in the lists collection, which we defined previously. Remember this line?

```js
lists = new Mongo.Collection("lists");
```

This lists collection is returned by the above-mentioned helper. When there is a change to the lists collection, the helper gets updated and the template's placeholder is changed as well. Let's see this in action. On your web page pointing to http://localhost:3000, open the browser console and enter the following line:

```js
> lists.insert({Category:"Games"});
```

This will update the lists data collection. The template will see this change and update the HTML code/placeholder. Each of the placeholders will run one additional time for the new entry in lists, and you'll see the new Games category appear on the screen.

When the lists collection was updated, the Template.categories.lists helper detected the change and re-ran itself (recomputed). This changed the contents of the code meant to be displayed in the {{> categories}} placeholder, and since the contents changed, the affected part of the template was re-run. Now, take a minute here and think about how little we had to do to get this reactive computation to run: we simply created a template instructing Blaze how we want the lists data collection to be displayed, and we put in a placeholder.
This is simple, declarative programming at its finest!

Let's create some templates

We'll now see a real-life example of reactive computations and work on our Lending Library at the same time. Adding categories through the console has been a fun exercise, but it's not a long-term solution. Let's make it so that we can do that on the page instead, as follows.

Open LendLib.html and add a new button just before the {{#each lists}} expression:

```html
<div id="categories" class="btn-group">
  <div class="category btn btn-primary" id="btnNewCat">
    <span class="glyphicon glyphicon-plus"></span>
  </div>
  {{#each lists}}
```

This will add a plus button to the page. Now, we want to change the button into a text field when we click on it. So let's build that functionality by using the reactive pattern. We will make it based on the value of a variable in the template. Add the following {{#if…else}} conditionals around our new button:

```html
<div id="categories" class="btn-group">
  {{#if new_cat}}
  {{else}}
    <div class="category btn btn-primary" id="btnNewCat">
      <span class="glyphicon glyphicon-plus"></span>
    </div>
  {{/if}}
  {{#each lists}}
```

The first line, {{#if new_cat}}, checks to see whether new_cat is true or false. If it's false, the {{else}} section is triggered, and it means that we haven't yet indicated that we want to add a new category, so we should display the button with the plus sign. In this case, since we haven't defined it yet, new_cat will always be false, and so the display won't change. Now, let's add the HTML code to display when we want to add a new category:

```html
{{#if new_cat}}
  <div class="category form-group" id="newCat">
    <input type="text" id="add-category" class="form-control" value="" />
  </div>
{{else}}
  ...
{{/if}}
```

There's the smallest bit of CSS we need to take care of as well. Open ~/Documents/Meteor/LendLib/LendLib.css and add the following declaration:

```css
#newCat {
  max-width: 250px;
}
```

Okay, so now we've added an input field, which will show up when new_cat is true. The input field won't show up unless new_cat is set to true; so, for now, it's hidden. So, how do we make new_cat equal to true?

Save your changes if you haven't already done so, and open LendLib.js. First, we'll declare a Session variable, just below our Meteor.isClient check, at the top of the file:

```js
if (Meteor.isClient) {
  // We are declaring the 'adding_category' flag
  Session.set('adding_category', false);
```

Now, we'll declare the new template helper new_cat, which will be a function returning the value of adding_category. We need to place the new helper in the Template.categories.helpers() method, just below the declaration for lists:

```js
Template.categories.helpers({
  lists: function () {
    ...
  },
  new_cat: function(){
    //returns true if adding_category has been assigned
    //a value of true
    return Session.equals('adding_category', true);
  }
});
```

Note the comma (,) on the line above new_cat. It's important that you add that comma, or your code will not execute.

Save these changes, and you'll see that nothing has changed. Ta-da! In reality, this is exactly as it should be, because we haven't done anything to change the value of adding_category yet. Let's do this now.

First, we'll declare our click event handler, which will change the value of our Session variable. To do this, add the following code just below the Template.categories.helpers() block:

```js
Template.categories.helpers({
  ...
});

Template.categories.events({
  'click #btnNewCat': function (e, t) {
    Session.set('adding_category', true);
    Tracker.flush();
    focusText(t.find("#add-category"));
  }
});
```

Now, let's take a look at this code. The following line declares that events will be found in the categories template:

```js
Template.categories.events({
```

The next line tells us that we're looking for a click event on the HTML element with an id="btnNewCat" attribute (which we already created in LendLib.html):

```js
'click #btnNewCat': function (e, t) {
```

Next, we set the Session variable adding_category = true, flush the DOM (to clear up anything wonky), and then set the focus onto the input box with the id="add-category" attribute:

```js
Session.set('adding_category', true);
Tracker.flush();
focusText(t.find("#add-category"));
```

There is one last thing to do, and that is to quickly add the focusText() helper function. To do this, just before the closing bracket for the if (Meteor.isClient) block, add the following code:

```js
/////Generic Helper Functions/////
//this function puts our cursor where it needs to be.
function focusText(i) {
  i.focus();
  i.select();
};
} //<------closing bracket for if(Meteor.isClient){}
```

Now, when you save the changes and click on the plus button, you will see the input box. Fancy!

However, it's still not useful yet, so let's pause for a second and reflect on what just happened: we created a conditional template in the HTML page that will show either an input box or a plus button, depending on the value of a variable. This variable is a reactive variable, called a reactive context. This means that if we change the value of the variable (as we do with the click event handler), the view automatically updates, because the new_cat helper function (a reactive computation) will re-run. Congratulations, you've just used Meteor's reactive programming model!

To really bring this home, let's add a change to the lists collection (which is also a reactive context, remember?) and figure out a way to hide the input field when we're done. First, we need to add a listener for the keyup event. Or, to put it another way, we want to listen for when the user types something in the box and hits Enter. When this happens, we want to add a category based on what the user typed. To do this, let's first declare the event handler. Just after the click handler for #btnNewCat, let's add another event handler:

```js
'click #btnNewCat': function (e, t) {
  ...
},
'keyup #add-category': function (e, t) {
  if (e.which === 13)
  {
    var catVal = String(e.target.value || "");
    if (catVal)
    {
      lists.insert({Category: catVal});
      Session.set('adding_category', false);
    }
  }
}
```

We add a "," character at the end of the first click handler, and then add the keyup event handler. Now, let's check each of the lines in the preceding code.

This line checks to see whether we hit the Enter/Return key:

```js
if (e.which === 13)
```

These lines check to see whether the input field has any value in it:

```js
var catVal = String(e.target.value || "");
if (catVal)
```

If it does, we want to add an entry to the lists collection:

```js
lists.insert({Category: catVal});
```

Then, we want to hide the input box, which we can do by simply modifying the value of adding_category:

```js
Session.set('adding_category', false);
```

There is one more thing to add and then we'll be done. When we click away from the input box, we want to hide it and bring back the plus button.
We already know how to do this reactively, so let's add a quick function that changes the value of adding_category. To do this, add one more comma after the keyup event handler and insert the following event handler:

```js
'keyup #add-category': function (e, t) {
  ...
},
'focusout #add-category': function(e, t) {
  Session.set('adding_category', false);
}
```

Save your changes, and let's see this in action! In your web browser, on http://localhost:3000, click on the plus sign, add the word Clothes, and hit Enter. The new Clothes category should now appear on the page. Feel free to add more categories if you like. Also, experiment by clicking on the plus button, typing something in, and then clicking away from the input field.

Summary

In this article, you learned about the history of web applications and saw how we've moved from a traditional client/server model to a nested MVC design pattern. You learned what smart apps are, and you also saw how Meteor has taken smart apps to the next level with Data On The Wire, Latency Compensation, and Full Stack Reactivity. You saw how Meteor uses templates and helpers to automatically update content, using reactive variables and reactive computations. Lastly, you added more functionality to the Lending Library: you made a button and an input field for adding categories, and you did it all using reactive programming rather than directly editing the HTML code.

Further resources on this subject: Building the next generation Web with Meteor, Quick start - creating your first application, Meteor.js JavaScript Framework: Why Meteor Rocks!

Object-Oriented JavaScript with Backbone Classes

Packt | 03 Jun 2015 | 9 min read
In this article by Jeremy Walker, the author of the book Backbone.js Essentials, we will explore the following topics:

The differences between JavaScript's class system and the class systems of traditional object-oriented languages
How new, this, and prototype enable JavaScript's class system
extend, Backbone's much easier mechanism for creating subclasses

JavaScript's class system

Programmers who use JavaScript can use classes to encapsulate units of logic in the same way as programmers of other languages. However, unlike those languages, JavaScript relies on a less popular form of inheritance known as prototype-based inheritance. Since Backbone classes are, at their core, just JavaScript classes, they too rely on the prototype system and can be subclassed in the same way as any other JavaScript class.

For instance, let's say you wanted to create your own Book subclass of the Backbone Model class, with additional logic that Model doesn't have, such as book-related properties and methods. Here's how you can create such a class using only JavaScript's native object-oriented capabilities:

```js
// Define Book's initializer
var Book = function() {
  // define Book's default properties
  this.currentPage = 1;
  this.totalPages = 1;
}
// Define Book's parent class
Book.prototype = new Backbone.Model();
// Define a method of Book
Book.prototype.turnPage = function() {
  this.currentPage += 1;
  return this.currentPage;
}
```

If you've never worked with prototypes in JavaScript, the preceding code may look a little intimidating. Fortunately, Backbone provides a much easier and more readable mechanism for creating subclasses. However, since that system is built on top of JavaScript's native system, it's important to first understand how the native system works. This understanding will be helpful later when you want to do more complex class-related tasks, such as calling a method defined on a parent class.

The new keyword

The new keyword is a relatively simple but extremely useful part of JavaScript's class system. The first thing that you need to understand about new is that it doesn't create objects in the same way as other languages. In JavaScript, every variable is either a function, an object, or a primitive, which means that when we refer to a class, what we're really referring to is a specially designed initialization function. Creating this class-like function is as simple as defining a function that modifies this and then using the new keyword to call that function.

Normally, when you call a function, its this is obvious. For instance, when you call the turnPage method of a book object, the this inside turnPage will be set to that book object, as shown here:

```js
var simpleBook = {currentPage: 3, pages: 60};
simpleBook.turnPage = function() {
  this.currentPage += 1;
  return this.currentPage;
}
simpleBook.turnPage(); // == 4
```

Calling a function that isn't attached to an object (in other words, a function that is not a method) results in this being set to the global scope. In a web browser, this means the window object:

```js
var testGlobalThis = function() {
  alert(this);
}
testGlobalThis(); // alerts window
```

When we use the new keyword before calling an initialization function, three things happen (well, actually four, but we'll wait to explain the fourth one until we explain prototypes):

JavaScript creates a brand new object ({}) for us
JavaScript sets the this inside the initialization function to the newly created object
After the function finishes, JavaScript ignores the normal return value and instead returns the object that was created

As you can see, although the new keyword is simple, it's nevertheless important because it allows you to treat initialization functions as if they really are actual classes. At the same time, it does so without violating the JavaScript principle that all variables must either be a function, object, or primitive.

Prototypal inheritance

That's all well and good, but if JavaScript has no true concept of classes, how can we create subclasses? As it turns out, every object in JavaScript has two special properties to solve this problem: prototype and __proto__ (hidden). These two properties are, perhaps, the most commonly misunderstood aspects of JavaScript, but once you learn how they work, they are actually quite simple to use.

When you call a method on an object or try to retrieve a property, JavaScript first checks whether the object has the method or property defined in the object itself. In other words, if you define a method such as this one:

```js
book.turnPage = function() {
  this.currentPage += 1;
};
```

JavaScript will use that definition first when you call turnPage. In real-world code, however, you will almost never want to put methods directly in your objects, for two reasons. First, doing so results in duplicate copies of those methods, as each instance of your class will have its own separate copy. Second, adding methods in this way requires an extra step, and that step can be easily forgotten when you create new instances.

If the object doesn't have a turnPage method defined in it, JavaScript will next check the object's hidden __proto__ property. If this __proto__ object doesn't have a turnPage method, then JavaScript will look at the __proto__ property on the object's __proto__. If that doesn't have the method, JavaScript continues to check the __proto__ of the __proto__ of the __proto__ and keeps checking each successive __proto__ until it has exhausted the chain. This is similar to single-class inheritance in more traditional object-oriented languages, except that instead of going through a class chain, JavaScript uses a prototype chain. Just as in an object-oriented language, we wind up with only a single copy of each method, but instead of the method being defined on the class itself, it's defined on the class's prototype.

In a future version of JavaScript (ES6), it will be possible to work with the __proto__ object directly, but for now, the only way to actually see the __proto__ property is to use your browser's debugging tool (for instance, the Chrome Developer Tools debugger). This means that you can't use this line of code:

```js
book.__proto__.turnPage();
```

Also, you can't use the following code:

```js
book.__proto__ = {
  turnPage: function() {
    this.currentPage += 1;
  }
};
```

But, if you can't manipulate __proto__ directly, how can you take advantage of it? Fortunately, it is possible to manipulate __proto__, but you can only do this indirectly, by manipulating prototype.
Do you remember I mentioned that the new keyword actually does four things? The fourth thing is that it sets the __proto__ property of the new object it creates to the prototype property of the initialization function. In other words, if you want to add a turnPage method to every new instance of Book that you create, you can assign this turnPage method to the prototype property of the Book initialization function. For example:

var Book = function() {};
Book.prototype.turnPage = function() {
  this.currentPage += 1;
};
var book = new Book();
book.turnPage(); // this works because book.__proto__ == Book.prototype

Since these concepts often cause confusion, let's briefly recap:

Every object has a prototype property and a hidden __proto__ property
An object's __proto__ property is set to the prototype property of its constructor when it is first created and cannot be changed
Whenever JavaScript can't find a property or method on an object, it checks each step of the __proto__ chain until it finds one or until it runs out of chain

Extending Backbone classes

With that explanation out of the way, we can finally get down to the workings of Backbone's subclassing system, which revolves around Backbone's extend method. To use extend, you simply call it from the class that your new subclass will be based on, and extend will return the new subclass. This new subclass will have its __proto__ property set to the prototype property of its parent class, allowing objects created with the new subclass to access all the properties and methods of the parent class. Take an example of the following code snippet:

var Book = Backbone.Model.extend();
// Book.prototype.__proto__ == Backbone.Model.prototype;
var book = new Book();
book.destroy();

In the preceding example, the last line works because JavaScript will look up the __proto__ chain, find the Model method destroy, and use it. In other words, all the functionality of our original class has been inherited by our new class.

But of course, extend wouldn't be exciting if all it could do was make exact clones of the parent classes, which is why extend takes a properties object as its first argument. Any properties or methods on this object will be added to the new class's prototype. For instance, let's try making our Book class a little more interesting by adding a property and a method:

var Book = Backbone.Model.extend({
  currentPage: 1,
  turnPage: function() {
    this.currentPage += 1;
  }
});
var book = new Book();
book.currentPage; // == 1
book.turnPage(); // increments book.currentPage by one

The extend method also allows you to create static properties or methods, or in other words, properties or methods that live on the class rather than on objects created from that class. These static properties and methods are passed in as the second classProperties argument to extend. Here's a quick example of how to add a static method to our Book class:

var Book = Backbone.Model.extend({}, {
  areBooksGreat: function() {
    alert("yes they are!");
  }
});
Book.areBooksGreat(); // alerts "yes they are!"
var book = new Book();
book.areBooksGreat(); // fails because static methods must be called on a class

As you can see, there are several advantages to Backbone's approach to inheritance over the native JavaScript approach. First, the word prototype did not appear even once in any of the previously mentioned code; while you still need to understand how prototype works, you don't have to think about it just to create a class.
Another benefit is that the entire class definition is contained within a single extend call, keeping all of the class's parts together visually. Also, when we use extend, the various pieces of logic that make up the class are ordered the same way as in most other programming languages, defining the super class first and then the initializer and properties, instead of the other way around.

Summary

In this article, we explored how JavaScript's native class system works and how the new, this, and prototype keywords/properties form the basis of it. We also learned how Backbone's extend method makes creating new subclasses much more convenient, as well as how to use apply and call to invoke parent methods (or to preserve the desired this value when providing callback functions).

Resources for Article: Further resources on this subject: Testing Backbone.js Application [Article] Building an app using Backbone.js [Article] Organizing Backbone Applications - Structure, Optimize, and Deploy [Article]


Layout with Ext.NET

Packt
30 Jan 2013
16 min read
(For more resources related to this topic, see here.) Border layout The Border layout is perhaps one of the more popular layouts. While quite complex at first glance, it is popular because it turns out to be quite flexible to design and to use. It offers common elements often seen in complex web applications, such as an area for header content, footer content, a main content area, plus areas to either side. All are separately scrollable and resizable if needed, among other benefits. In Ext speak, these areas are called Regions, and are given names of North, South, Center, East, and West regions. Only the Center region is mandatory. It is also the one without any given dimensions; it will resize to fit the remaining area after all the other regions have been set. A West or East region must have a width defined, and North or South regions must have a height defined. These can be defined using the Width or Height property (in pixels) or using the Flex property which helps provide ratios. Each region can be any Ext.NET component; a very common option is Panel or a subclass of Panel. There are limits, however: for example, a Window is intended to be floating so cannot be one of the regions. This offers a lot of flexibility and can help avoid nesting too many Panels in order to show other components such as GridPanels or TabPanels, for example. Here is a screenshot showing a simple Border layout being applied to the entire page (that is, the viewport) using a 2-column style layout: We have configured a Border layout with two regions; a West region and a Center region. The Border layout is applied to the whole page (this is an example of using it with Viewport. Here is the code: <%@ Page Language="C#" %> <!DOCTYPE html> <html> <head runat="server"> <title>Border Layout Example</title> </head> <body> <ext:ResourceManager runat="server" Theme="Gray" /> <ext:Viewport runat="server" Layout="border"> <Items> <ext:Panel Region="West" Split="true" Title="West" Width="200" Collapsible="true" /> <ext:Panel Region="Center" Title="Center content" /> </Items> </ext:Viewport> </body> </html> The code has a Viewport configured with a Border layout via the Layout property. Then, into the Items collection two Panels are added, for the West and Center regions. The value of the Layout property is case insensitive and can take variations, such as Border, border, borderlayout, BorderLayout, and so on. As regions of a Border layout we can also configure options such as whether you want split bars, whether Panels are collapsible, and more. Our example uses the following: The West region Panel has been configured to be collapsible (using Collapsible="true"). This creates a small button in the title area which, when clicked, will smoothly animate the collapse of that region (which can then be clicked again to open it). When collapsed, the title area itself can also be clicked which will float the region into appearance, rather than permanently opening it (allowing the user to glimpse at the content and mouse away to close the region). This floating capability can be turned off by using Floatable="false" on the Panel. Split="true" gives a split bar with a collapse button between the regions. 
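Ext.NET components are ultimately rendered as Ext JS components in the browser. Purely as an illustration (this is a hand-written sketch, not the exact script Ext.NET generates), the two-region layout above corresponds roughly to the following client-side Ext JS configuration:

Ext.onReady(function () {
    // Rough client-side equivalent of the two-region Border layout above
    Ext.create('Ext.container.Viewport', {
        layout: 'border',
        items: [
            {
                region: 'west',
                title: 'West',
                width: 200,
                split: true,
                collapsible: true
            },
            {
                region: 'center',
                title: 'Center content'
            }
        ]
    });
});

Seeing the generated side this way can help when inspecting layout issues in the browser's developer tools, but day to day you work only with the server-side markup or code-behind shown in this article.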
This next example shows a more complex Border layout where all regions are used: The markup used for the previous is very similar to the first example, so we will only show the Viewport portion: <ext:Viewport runat="server" Layout="border"> <Items> <ext:Panel Region="North" Split="true" Title="North" Height="75" Collapsible="true" /> <ext:Panel Region="West" Split="true" Title="West" Width="150" Collapsible="true" /> <ext:Panel runat="server" Region="Center" Title="Center content" /> <ext:Panel Region="East" Split="true" Title="East" Width="150" Collapsible="true" /> <ext:Panel Region="South" Split="true" Title="South" Height="75" Collapsible="true" /> </Items> </ext:Viewport> Although each Panel has a title set via the Title property, it is optional. For example, you may want to omit the title from the North region if you want an application header or banner bar, where the title bar could be superfluous. Different ways to create the same components The previous examples were shown using the specific Layout="Border" markup. However, there are a number of ways this can be marked up or written in code. For example, You can code these entirely in markup as we have seen You can create these entirely in code You can use a mixture of markup and code to suit your needs Here are some quick examples: Border layout from code This is the code version of the first two-panel Border layout example: <%@ Page Language="C#" %> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { var viewport = new Viewport { Layout = "border", Items = { new Ext.Net.Panel { Region = Region.West, Title = "West", Width = 200, Collapsible = true, Split = true }, new Ext.Net.Panel { Region = Region.Center, Title = "Center content" } } }; this.Form.Controls.Add(viewport); } </script> <!DOCTYPE html> <html> <head runat="server"> <title>Border Layout Example</title> </head> <body> <form runat="server"> <ext:ResourceManager runat="server" Theme="Gray" /> </form> </body> </html> There are a number of things going on here worth mentioning: The appropriate panels have been added to the Viewport's Items collection Finally, the Viewport is added to the page via the form's Controls Collection If you are used to programming with ASP.NET, you normally add a control to the Controls collection of an ASP.NET control. However, when Ext.NET controls add themselves to each other, it is usually done via the Items collection. This helps create a more optimal initialization script. This also means that only Ext.NET components participate in the layout logic. There is also the Content property in markup (or ContentControls property in code-behind) which can be used to add non-Ext.NET controls or raw HTML, though they will not take part in the layout. It is important to note that configuring Items and Content together should be avoided, especially if a layout is set on the parent container. This is because the parent container will only use the Items collection. Some layouts may hide the Content section altogether or have other undesired results. In general, use only one at a time, not both because the Viewport is the outer-most control; it is added to the Controls collection of the form itself. Another important thing to bear in mind is that the Viewport must be the only top-level visible control. That means it cannot be placed inside a div, for example it must be added directly to the body or to the <form runat="server"> only. In addition, there should not be any sibling controls (except floating widgets, like Window). 
Mixing markup and code The same 2-panel Border layout can also be mixed in various ways. For example: <%@ Page Language="C#" %> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { this.WestPanel.Title = "West"; this.WestPanel.Split = true; this.WestPanel.Collapsible = true; this.Viewport1.Items.Add(new Ext.Net.Panel { Region = Region.Center, Title = "Center content" }); } </script> <!DOCTYPE html> <html> <head runat="server"> <title>Border Layout Example</title> </head> <body> <ext:ResourceManager runat="server" /> <ext:Viewport ID="Viewport1" runat="server" Layout="Border"> <Items> <ext:Panel ID="WestPanel" runat="server" Region="West" Width="200" /> </Items> </ext:Viewport> </body> </html> In the previous example, the Viewport and the initial part of the West region have been defined in markup. The Center region Panel has been added via code and the rest of the West Panel's properties have been set in code-behind. As with most ASP. NET controls, you can mix and match these as you need. Loading layout items via User Controls A powerful capability that Ext.NET provides is being able to load layout components from User Controls. This is achieved by using the UserControlLoader component. Consider this example: <ext:Viewport runat="server" Layout="Border"> <Items> <ext:UserControlLoader Path="WestPanel.ascx" /> <ext:Panel Region="Center" /> </Items> </ext:Viewport> In this code, we have replaced the West region Panel that was used in earlier examples with a UserControlLoader component and set the Path property to load a user control in the same directory as this page. That user control is very simple for our example: <%@ Control Language="C#" %> <ext:Panel runat="server" Region="West" Split="true" Title="West" Width="200" Collapsible="true" /> In other words, we have simply moved our Panel from our earlier example into a user control and loaded that instead. Though a small example, this demonstrates some useful reuse capability. Also note that although we used the UserControlLoader in this Border layout example, it can be used anywhere else as needed, as it is an Ext.NET component. The containing component does not have to be a Viewport Note also that the containing component does not have to be a Viewport. It can be any other appropriate container, such as another Panel or a Window. Let's do just that: <ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false"> <Items> <ext:Panel Region="West" Split="true" Title="West" Width="150" Collapsible="true" /> <ext:Panel Region="Center" Title="Center content" /> </Items> </ext:Window> The container has changed from a Viewport to a Window (with dimensions). It will produce this: More than one item with the same region In previous versions of Ext JS and Ext.NET you could only have one component in a given region, for example, only one North region Panel, one West region Panel, and so on. New to Ext.NET 2 is the ability to have more than one item in the same region. This can be very flexible and improve performance slightly. This is because in the past if you wanted the appearance of say multiple West columns, you would need to create nested Border layouts (which is still an option of course). But now, you can simply add two components to a Border layout and give them the same region value. Nested Border layouts are still possible in case the flexibility is needed (and helps make porting from an earlier version easier). 
First, here is an example using nested Border layouts to achieve three vertical columns: <ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false"> <Items> <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" /> <ext:Panel Region="Center" Layout="Border" Border="false"> <Items> <ext:Panel Region="West" Split="true" Title="Inner West" Width="100" Collapsible="true" /> <ext:Panel Region="Center" Title="Inner Center" /> </Items> </ext:Panel> </Items> </ext:Window> This code will produce the following output: The previous code is only a slight variation of the example preceding it, but has a few notable changes: The Center region Panel has itself been given the layout as Border. This means that although this is a Center region for the window that it is a part of, this Panel is itself another Border layout. The nested Border layout then has two further Panels, an additional West region and an additional Center region. Note, the Title has also been removed from the outer Center region so that when they are rendered, they line up to look like three Panels next to each other. Here is the same example, but without using a nested border Panel and instead, just adding another West region Panel to the containing Window: <ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false"> <Items> <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" /> <ext:Panel Region="West" Split="true" Title="Inner West" Width="100" Collapsible="true" /> <ext:Panel Region="Center" Title="Center content" Border="false" /> </Items> </ext:Window> Regions are not limited to Panels only A common problem with layouts is to start off creating more deeply nested controls than needed and the example earlier shows that it is not always needed. Multiple items with the same region helps to prevent nesting Border Layouts unnecessarily. Another inefficiency typical with the Border layout usage is using too many containing Panels in each region. For example, there may be a Center region Panel which then contains a TabPanel. However, as TabPanel is a subclass of Panel it can be given a region directly, therefore avoiding an unnecessary Panel to contain the TabPanel: <ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="False"> <Items> <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="True" /> <ext:TabPanel Region="Center"> <Items> <ext:Panel Title="First Tab" /> <ext:Panel Title="Second Tab" /> </Items> </ext:TabPanel> </Items> </ext:Window> This code will produce the following output: The differences with the nested Border layout example shown earlier are: The outer Center region has been changed from Panel to TabPanel. TabPanels manage their own items' layout so Layout="Border" is removed. The TabPanel also has Border="false" taken out (so it is true by default). The inner Panels have had their regions, Split, and other border related attributes taken out. This is because they are not inside a nested Border layout now; they are tabs. Other Panels, such as TreePanel or GridPanel, can also be used as we will see. Something that can be fiddly from time to time is knowing which borders to take off and which ones to keep when you have nested layouts and controls like this. There is a logic to it, but sometimes a quick bit of trial and error can also help figure it out! 
As a programmer this sounds minor and unimportant, but usually you want to prevent the borders becoming too thick, as aesthetically it can be off-putting, whereas just the right amount of borders can help make the application look clean and professional. You can always give components a class via the Cls property and then in CSS you can fine tune the borders (and other styles of course) as you need. Weighted regions Another feature new to Ext.NET 2 is that regions can be given weighting to influence how they are rendered and spaced out. Prior versions would require nested Border layouts to achieve this. To see how this works, consider this example to put a South region only inside the Center Panel: To achieve this output, if we used the old way—the nested Border layouts—we would do something like this: <ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false"> <Items> <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" /> <ext:Panel Region="Center" Layout="Border" Border="false"> <Items> <ext:Panel Region="Center" Title="Center" /> <ext:Panel Region="South" Split="true" Title="South" Height="100" Collapsible="true" /> </Items> </ext:Panel> </Items> </ext:Window> In the preceding code, we make the Center region itself be a Border layout with an inner Center region and a South region. This way the outer West region takes up all the space on the left. If the South region was part of the outer Border layout, then it would span across the entire bottom area of the window. But the same effect can be achieved using weighting. This means you do not need nested Border layouts; the three Panels can all be items of the containing window, which means a few less objects being created on the client: <ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false"> <Items> <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" Weight="10" /> <ext:Panel Region="Center" Title="Center" /> <ext:Panel Region="South" Split="true" Title="South" Height="100" Collapsible="true" /> </Items> </ext:Window> The way region weights work is that the region with the highest weight is assigned space from the border before other regions. If more than one region has the same weight as another, they are assigned space based on their position in the owner's Items collection (that is first come, first served). In the preceding code, we set the Weight property to 10 to the West region only, so it is rendered first and, thus, takes up all the space it can before the other two are rendered. This allows for many flexible options and Ext.NET has an example where you can configure different values to see the effects of different weights: http://examples.ext.net/#/Layout/BorderLayout/Regions_Weights/ As the previous examples show, there are many ways to define the layout, offering you more flexibility, especially if generating from code-behind in a very dynamic way. Knowing that there are so many ways to define the layout, we can now speed up our look at many other types of layouts. Summary This article covered one of the numerous layout options available in Ext.NET, that is, the Border layout, to help you organize your web applications. Resources for Article : Further resources on this subject: Your First ASP.NET MVC Application [Article] Customizing and Extending the ASP.NET MVC Framework [Article] Tips & Tricks for Ext JS 3.x [Article]


Transforming data in the service

Packt
20 Aug 2014
4 min read
This article, written by Jim Lavin, the author of the book AngularJS Services, will cover ways to transform data. Sometimes, you need to return a subset of your data for a directive or controller, or you need to translate your data into another format for use by an external service. This can be handled in several different ways; you can use AngularJS filters or you could use an external library such as underscore or lodash.

(For more resources related to this topic, see here.)

How often you need to do such transformations will help you decide on which route you take. If you are going to transform data just a few times, it isn't necessary to add another library to your application; however, if you are going to do it often, using a library such as underscore or lodash will be a big help. We are going to limit our discussion to using AngularJS filters to handle transforming our data.

Filters are an often-overlooked component in the AngularJS arsenal. Often, developers will end up writing a lot of methods in a controller or service to filter an array of objects that are iterated over in an ngRepeat directive, when a simple filter could have easily been written and applied to the ngRepeat directive, removing the excess code from the service or controller.

First, let's look at creating a filter that will reduce your data based on a property on the object, which is one of the simplest filters to create. This filter is designed to be used as an option to the ngRepeat directive to limit the number of items displayed by the directive. The following fermentableType filter expects an array of fermentable objects as the input parameter and a type value to filter as the arg parameter. If the fermentable's type value matches the arg parameter passed into the filter, it is pushed onto the resultant array, which will in turn cause the object to be included in the set provided to the ngRepeat directive.

angular.module('brew-everywhere').filter('fermentableType', function () {
  return function (input, arg) {
    var result = [];
    angular.forEach(input, function (item) {
      if (item.type === arg) {
        result.push(item);
      }
    });
    return result;
  };
});

To use the filter, you include it in your partial in an ngRepeat directive as follows:

<table class="table table-bordered">
  <thead>
    <tr>
      <th>Name</th>
      <th>Type</th>
      <th>Potential</th>
      <th>SRM</th>
      <th>Amount</th>
      <th>&nbsp;</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-repeat="fermentable in fermentables | fermentableType:'Grain'">
      <td class="col-xs-4">{{fermentable.name}}</td>
      <td class="col-xs-2">{{fermentable.type}}</td>
      <td class="col-xs-2">{{fermentable.potential}}</td>
      <td class="col-xs-2">{{fermentable.color}}</td>
    </tr>
  </tbody>
</table>

The result of calling fermentableType with the value Grain is that only those fermentable objects that have a type property with a value of Grain are displayed. Using filters to reduce an array of objects can be as simple or complex as you like.

The next filter we are going to look at is one that uses an object to reduce the fermentable object array based on properties in the passed-in object. The following filterFermentable filter expects an array of fermentable objects as an input and an object that defines the various properties and their required values that are needed to return a matching object. To build the resulting array of objects, you walk through each object and compare each property with those of the object passed in as the arg parameter. If all the properties match, the object is added to the array and it is returned.
angular.module('brew-everywhere').filter('filterFermentable', function () {
  return function (input, arg) {
    var result = [];
    angular.forEach(input, function (item) {
      var add = true;
      for (var key in arg) {
        if (item.hasOwnProperty(key)) {
          if (item[key] !== arg[key]) {
            add = false;
          }
        }
      }
      if (add) {
        result.push(item);
      }
    });
    return result;
  };
});
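Although the article applies these filters inside ngRepeat expressions, the same filters can also be invoked from JavaScript through the $filter service. The controller name and sample data below are made up purely for illustration:

angular.module('brew-everywhere').controller('FermentableListCtrl',
  function ($scope, $filter) {
    // hypothetical sample data; only the property names matter here
    $scope.fermentables = [
      { name: 'Pale Malt', type: 'Grain', color: 3 },
      { name: 'Crystal 60', type: 'Grain', color: 60 },
      { name: 'Honey', type: 'Sugar', color: 1 }
    ];

    // equivalent to "fermentables | fermentableType:'Grain'" in a template
    $scope.grains = $filter('fermentableType')($scope.fermentables, 'Grain');

    // filterFermentable takes an object of property/value pairs to match
    $scope.paleGrains = $filter('filterFermentable')($scope.fermentables,
      { type: 'Grain', color: 3 });
  });

This keeps the filtering logic in one place, whether it is used declaratively in a partial or programmatically in a controller or service.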

Oracle APEX Plug-ins

Packt
16 Dec 2010
6 min read
Oracle APEX 4.0 Cookbook Over 80 great recipes to develop and deploy fast, secure, and modern web applications with Oracle Application Express 4.0 Create feature-rich web applications in APEX 4.0 Integrate third-party applications like Google Maps into APEX by using web services Enhance APEX applications by using stylesheets, Plug-ins, Dynamic Actions, AJAX, JavaScript, BI Publisher, and jQuery Hands-on examples to make the most out of the possibilities that APEX has to offer Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible   Introduction In APEX 4.0, Oracle introduced the plug-in. A plug-in is an extension to the existing functionality of APEX. The idea behind plug-ins is to make life easier for developers. Plug-ins are reusable and can be exported and imported. In this way, it is possible to create functionality which is available to all APEX developers. It is also possible to install and use them without having knowledge of what is inside the plug-in. APEX is actually a program that converts your settings from the APEX builder to HTML and JavaScript. For example, if you created a text item in the APEX builder, APEX converts this to the following code (simplified): <input type="text" id="P12_NAME" name="P12_NAME" value="your name"> When you create an item type plug-in, you actually take over this conversion task of APEX and you generate the HTML and JavaScript code yourself by using PL/SQL procedures. That offers a lot of flexibility because now you can make this code generic so that it can be used for more items. The same goes for region type plug-ins. A region is a container for forms, reports, and such. The region can be a div or a HTML table. By creating a region type plug-in, you create a region yourself with the possibility to add more functionality to the region. There are four types of plug-in: Item type plug-ins Region type plug-ins Dynamic action plug-ins Process type plug-ins In this article, we will discuss all four types of plug-in. Creating an item type plug-in In an item type plug-in you create an item with the possibility of extending its functionality. To demonstrate this, we will make a text field with a tooltip. This functionality is already available in APEX 4.0 by adding the following code to the HTML form element attributes text field in the Element section of the text field: onmouseover="toolTip_enable(event,this,'A tooltip')" But you have to do this for every item that should contain a tooltip. This can be made more easy by creating an item type plug-in with a built-in tooltip. And if you create an item of type plug-in, you will be asked to enter some text for the tooltip. Getting ready For this recipe, you can use an existing page with a region where you can put some text items on. How to do it... Go to Shared Components | User Interface | Plug-ins. Click on the Create button. In the name section, enter a name in the name text field. In this case, we enter tooltip. In the internal name text field, enter an internal name. It is advised to use your company's domain address reversed to ensure the name is unique when you decide to share this plug-in. So, for example, you can use com.packtpub.apex.tooltip. 
In the source section, enter the following code to the PL/SQL code textarea: function render_simple_tooltip ( p_item in apex_plugin.t_page_item , p_plugin in apex_plugin.t_plugin , p_value in varchar2 , p_is_readonly in boolean , p_is_printer_friendly in boolean ) return apex_plugin.t_page_item_render_result is l_result apex_plugin.t_page_item_render_result; begin if apex_application.g_debug then apex_plugin_util.debug_page_item ( p_plugin => p_plugin , p_page_item => p_item , p_value => p_value , p_is_readonly => p_is_readonly , p_is_printer_friendly => p_is_printer_friendly); end if; -- sys.htp.p('<input type="text" id="'||p_item.name||'" name="'||p_item.name||'" class="text_field" onmouseover="toolTip_enable(event,this,'||''''||p_item.attribute_01||''''||')">'); -- return l_result; end render_simple_tooltip; This function uses the sys.htp.p function to put a text item on the screen. On the text item, the onmouseover event calls the function tooltip_enable(). This function is an APEX function and can be used to put a tooltip on an item. The arguments of the function are mandatory. The function starts with the option to show debug information. This can be very useful when you have created a plug-in and it doesn't work. After the debug information the htp.p function puts the text item on the screen, including the call to tooltip_enable. You can also see that the call to tooltip_enable uses p_item.attribute_01. This is a parameter that you can use to pass a value to the plug-in. That is the following step in this recipe. The function ends with the return of l_result. This variable is of type apex_plugin.t_page_item_render_result. For the other types of plug-in there are also dedicated return types, for example, t_region_render_result. Click on the Create button. The next step is to define the parameter (attribute) for this plug-in. In the Custom Attributes section, click the Add Attribute button. In the name section, enter a name in the label text field, for example tooltip. Ensure that the attribute text field contains the value 1. In the settings section, set the type to text. Click on the Create button. In the callbacks section, enter render_simple_tooltip into the render function name text field. Click on the Apply changes button. The plug-in is ready now. The next step is to create an item of type tooltip plug-in. Go to a page with a region where you want to use an item with a tooltip. In the items section, click on the add icon to create a new item. Select Plug-ins. Now you will get a list of available plug-ins. Select the one we just created, tooltip. Click on Next. In the item name text field, enter a name for the item, for example tt_item. In the region select list, select the region you want to put the item in. Click Next. In the next step, you will get a new option. It's the attribute you created with the plug-in. Enter the tooltip text here. Click Next. In the last step, leave everything as it is and click the Create item button. You are ready now. Run the page. When you move your mouse pointer over the new item, you will see the tooltip.
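The plug-in attaches the tooltip through the inline onmouseover attribute that the render function emits. If you prefer to keep the generated markup free of inline handlers, the same APEX JavaScript function can be wired up from script instead. This snippet is only an illustration, not part of the recipe, and the item ID P12_NAME is just an example:

// Attach the tooltip behaviour after the page loads, assuming an item
// with the ID P12_NAME exists on the page.
window.onload = function () {
    var item = document.getElementById('P12_NAME');
    if (item) {
        item.onmouseover = function (e) {
            // same call the plug-in renders inline: toolTip_enable(event, this, text)
            toolTip_enable(e || window.event, item, 'A tooltip');
        };
    }
};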


Users, Profiles, and Connections in Elgg

Packt
23 Oct 2009
8 min read
Connecting to Friends and Users I hope you're convinced how important friends are to a social network. Initially, you'll have to manually invite your friends over to join. I say initially, because membership on a social network is viral. Once your friends are registered members of your network, they can also bring in their own friends. This means that soon your friends would have invited their own friends as well. Chances are that you might not know these friends of your friends. So, Elgg not only allows you to invite friends from outside, but also connect with users already on the network. Let's understand these situations in real-life terms. You invite your friends over to a party with you at your new Star Trek themed club. That's what you'll do with Elgg, initially. So your friends like the place and next time around they bring in more friends from work. These friends of friends from work talk about your place with their friends and so on, until you're hosting a bunch of people in the club that you haven't ever met in your life. You overhear some people discussing Geordi La Forge, your favorite character from the show. You invite them over for drinks. That's connecting with users already on the network. So let's head on over to Elgg and invite some friends! Inviting Friends to Join There are two ways of inviting users to join your network. Either send them an email with a link to join the website, or let Elgg handle sending them emails. If you send them emails, you can include a direct link to the registration page. This link is also on the front page of your network, which every visitor will see. It asks visitors to register an account if they like what's on the network. Let Elgg Handle Registration This is the most popular method of inviting users to join the network. It's accessible not only to you, but also to your friends once they've registered with the network. To allow Elgg to send emails on your behalf, you'll have to be logged into Elgg. Once you login, click on the Your Network button on the top navigation bar. This will take you to a page, which links to tools that'll help you connect with others. The last link in this bar (Invite a Friend) does exactly what it says. When you click on this link, it'll explain to you some benefits of inviting friends over. The page has three fields; Their name: Enter the name of the friend you're sending the invitation to. Their email address: Very important. This is the address to where the invitation is sent. An optional message: Elgg sends an email composed using a template. If you want to add a personal message to Elgg's email, you can do so here. In the email, which Elgg sends on behalf of the network's administrator, that means you, it displays the optional message (if you've sent one), along with a link to the registration page. The invitation is valid for seven days, after which the registration link in the email isn't valid. When your friends click on the registration form, it asks them to enter their: Name: This is your friend's real name. When he arrives here by clicking the link in the email, this field already has the same name as the one in the email. Of course, your friend can choose to change it if he pleases. Username: The name your friend wants to use to log in to the network. Elgg automatically suggests one based on your friend's real name. Password: The last two fields ask your friend to enter (and then re-enter to confirm) a password. This is used along with the username to authenticate him on the system. 
Once your friends enter all the details and click on join, Elgg creates an account for them, logs them in, and dispatches a message to them containing the log in details for reference. Build a Profile The first thing a new user has to do on the network is to create his profile. If you haven't yet built up a profile yourself, now is a good time. To recap, your profile is your digital self. By filling in a form, Elgg helps you define yourself in terms that'll help other members find and connect to you. This is again where socializing using Elgg outscores socializing in real life. You can find people with similar tastes, likes, and dislikes, as soon as you enter the network. So let's steam ahead and create a digital you. The Various Profile Options Once you are logged into your Elgg network, select the Your Profile option from the top navigation-bar. In the page that opens, click the first link, Edit this profile. This opens up a form, divided into five tabs—Basic details, Location, Contact, Employment, and Education. Each tab helps you fill in details regarding that particular area. You don't necessarily have to fill in each and every detail. And you definitely don't have to fill them all in one go. Each tab has a Save your profile button at the end. When you press this button, Elgg updates your profile instantaneously. You can fill in as much detail as you want, and keep coming back to edit your profile, and append new information. Let's look at the various tabs: Basic details: Although filling information in any tab is optional, I'd advise you to fill in all details in this tab. This will make it easy, for you to find others, and for others to find you. The tab basically asks you to introduce yourself, list your interests, your likes, your dislikes, your goals in life, and your main skills. Location: This tab requests information that'll help members reach you physically. Fill in your street address, town, state, postal code, and country. Contact: Do you want members to contact you outside your Elgg network? This tab requests both physical as well as electronic means which members can use to get in touch with you. Physical details include your work, home, and mobile telephone number. Electronic details include your email address, your personal, and official websites. Elgg can also list information to help users connect to you on instant messenger. It supports ICQ, MSN, AIM, Skype, and Jabber. Employment: List your occupation, the industry, and company you work in, your job title, and description. Elgg also lets you list your career goals and suggests you do so to "let colleagues and potential employers know what you'd like to get out of your career. Education: Here you can specify your level of education, and which high school, university or college you attended, and the degree you hold. As you can clearly see, Elgg's profiling options are very diverse and detailed. Rather than serve the sole purpose of describing you to the visitors, the profile also helps you find new friends as well, as we'll see later in this article. What is FOAF? While filling the profile, you must have noticed an Upload a FOAF file area down at the bottom of all tabs. FOAF or Friend of a Friend is a project (http://www.foaf-project.org/) to help create "machine-readable pages that describe people, the links between them, and the things they create, and do". 
The FOAF file includes lots of details about you, and if you have already created a FOAF profile, Elgg can use that to pick out information describing you from in there. You can modify the information once it's imported into Elgg, if you feel the need to do so. The FOAF-a-Matic tool (http://www.ldodds.com/foaf/foaf-a-matic.en.html) is a simple Web-based program you can use to create a FOAF profile. A Face for Your Profile Once you have created your digital self, why not give it a face as well. The default Elgg picture with a question mark doesn't look like you! To upload your picture, head over to Your Profile and select the Change site picture link. From this page, click Browse to find and select the picture on your computer. Put in an optional description, and then choose to make it your default icon. When you click the Upload new icon button, Elgg will upload the picture. Once the upload completes, Elgg will display the picture. Click the Save button to replace Elgg's default icon with this picture.   Elgg will automatically resize your picture to fit into its small area. You should use a close-up of yourself, otherwise the picture will lose clarity when resizing. If you don't like the picture when it appears on the website, or you want to replace it with a new one, simply tick the Delete check-box associated with the picture you don't like. When you click Save, Elgg will revert to the default question-mark guy.


Getting Started with PrimeFaces

Packt
04 Apr 2013
14 min read
Setting up and configuring the PrimeFaces library PrimeFaces is a lightweight JSF component library with one JAR file, which needs no configuration and does not contain any required external dependencies. To start with the development of the library, all we need is to get the artifact for the library. Getting ready You can download the PrimeFaces library from http://primefaces.org/downloads.html and you need to add the primefaces-{version}.jar file to your classpath. After that, all you need to do is import the namespace of the library, which is necessary to add the PrimeFaces components to your pages, to get started. If you are using Maven (for more information on installing Maven, please visit http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html), you can retrieve the PrimeFaces library by defining the Maven repository in your Project Object Model (POM) file as follows: <repository> <id>prime-repo</id> <name>PrimeFaces Maven Repository</name> <url>http://repository.primefaces.org</url> </repository> Add the dependency configuration as follows: <dependency> <groupId>org.primefaces</groupId> <artifactId>primefaces</artifactId> <version>3.4</version> </dependency> At the time of writing this book, the latest and most stable version of PrimeFaces was 3.4. To check out whether this is the latest available or not, please visit http://primefaces.org/downloads.html The code in this book will work properly with PrimeFaces 3.4. In prior versions or the future versions, some methods, attributes, or components' behaviors may change. How to do it... In order to use PrimeFaces components, we need to add the namespace declarations into our pages. The namespace for PrimeFaces components is as follows: For PrimeFaces Mobile, the namespace is as follows: That is all there is to it. Note that the p prefix is just a symbolic link and any other character can be used to define the PrimeFaces components. Now you can create your first page with a PrimeFaces component as shown in the following code snippet: <html > <f:view contentType="text/html"> <h:head /> <h:body> <h:form> <p:spinner /> </h:form> </h:body> </f:view> </html> This will render a spinner component with an empty value as shown in the following screenshot: A link to the working example for the given page is given at the end of this recipe. How it works... When the page is requested, the p:spinner component is rendered with the renderer implemented by the PrimeFaces library. Since the spinner component is a UI input component, the request-processing lifecycle will get executed when the user inputs data and performs a post back on the page. For the first page, we also needed to provide the contentType parameter for f:view, since the WebKit-based browsers, such as Google Chrome and Apple Safari, request the content type application/xhtml+xml by default. This would overcome unexpected layout and styling issues that might occur. There's more... PrimeFaces only requires Java 5+ runtime and a JSF 2.x implementation as mandatory dependencies. There are some optional libraries for certain features. 
Dependency           Version      Type       Description
JSF runtime          2.0 or 2.1   Required   Apache MyFaces or Oracle Mojarra
iText                2.1.7        Optional   DataExporter (PDF)
Apache POI           3.7          Optional   DataExporter (Excel)
Rome                 1.0          Optional   FeedReader
commons-fileupload   1.2.1        Optional   FileUpload
commons-io           1.4          Optional   FileUpload

Please ensure that you have only one JAR file of PrimeFaces or a specific PrimeFaces Theme in your classpath in order to avoid any issues regarding resource rendering. Currently PrimeFaces supports the web browsers IE 7, 8, or 9, Safari, Firefox, Chrome, and Opera.

PrimeFaces Cookbook Showcase application

This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. When the server is running, the showcase for the recipe is available at http://localhost:8080/primefaces-cookbook/views/chapter1/yourFirstPage.jsf

AJAX basics with Process and Update

PrimeFaces provides a partial page rendering (PPR) and view-processing feature based on standard JSF 2 APIs to enable choosing what to process in the JSF lifecycle and what to render in the end with AJAX. PrimeFaces AJAX Framework is based on standard server-side APIs of JSF 2. On the client side, rather than using the client-side API implementations of JSF implementations, such as Mojarra and MyFaces, PrimeFaces scripts are based on the jQuery JavaScript library.

How to do it...

We can create a simple page with a command button to update a string property with the current time in milliseconds on the server side and an output text to show the value of that string property, as follows:

<p:commandButton update="display" action="#{basicPPRController.updateValue}" value="Update" />
<h:outputText id="display" value="#{basicPPRController.value}"/>

If we would like to update multiple components with the same trigger mechanism, we can provide the IDs of the components to the update attribute, separating them with a space, a comma, or both, as follows:

<p:commandButton update="display1,display2" />
<p:commandButton update="display1 display2" />
<p:commandButton update="display1,display2 display3" />

In addition, there are reserved keywords that are used for a partial update. We can also make use of these keywords along with the IDs of the components, as described in the following table:

Keyword   Description
@this     The component that triggers the PPR is updated
@parent   The parent of the PPR trigger is updated
@form     The encapsulating form of the PPR trigger is updated
@none     PPR does not change the DOM with AJAX response
@all      The whole document is updated as in non-AJAX requests

We can also update a component that resides in a different naming container from the component that triggers the update. In order to achieve this, we need to specify the absolute component identifier of the component that needs to be updated. An example for this could be the following:

<h:form id="form1">
  <p:commandButton update=":form2:display" action="#{basicPPRController.updateValue}" value="Update" />
</h:form>
<h:form id="form2">
  <h:outputText id="display" value="#{basicPPRController.value}"/>
</h:form>

public String updateValue() {
  value = String.valueOf(System.currentTimeMillis());
  return null;
}

PrimeFaces also provides partial processing, which executes the JSF lifecycle phases—Apply Request Values, Process Validations, Update Model, and Invoke Application—for determined components with the process attribute.
This provides the ability to do group validation on the JSF pages easily. Mostly group-validation needs arise in situations where different values need to be validated in the same form, depending on an action that gets executed. By grouping components for validation, errors that would arise from other components when the page has been submitted can be overcome easily. Components like commandButton, commandLink, autoComplete, fileUpload, and many others provide this attribute to process partially instead of the whole view. Partial processing could become very handy in cases when a drop-down list needs to be populated upon a selection on another drop down and when there is an input field on the page with the required attribute set to true. This approach also makes immediate subforms and regions obsolete. It will also prevent submission of the whole page, thus this will result in lightweight requests. Without partially processing the view for the drop downs, a selection on one of the drop downs will result in a validation error on the required field. An example for this is shown in the following code snippet: <h:outputText value="Country: " /> <h:selectOneMenu id="countries" value="#{partialProcessingController. country}"> <f:selectItems value="#{partialProcessingController.countries}" /> <p:ajax listener= "#{partialProcessingController.handleCountryChange}" event="change" update="cities" process="@this"/> </h:selectOneMenu> <h:outputText value="City: " /> <h:selectOneMenu id="cities" value="#{partialProcessingController. city}"> <f:selectItems value="#{partialProcessingController.cities}" /> </h:selectOneMenu> <h:outputText value="Email: " /> <h:inputText value="#{partialProcessingController.email}" required="true" /> With this partial processing mechanism, when a user changes the country, the cities of that country will be populated in the drop down regardless of whether any input exists for the email field. How it works... As seen in partial processing example for updating a component in a different naming container, <p:commandButton> is updating the <h:outputText> component that has the ID display, and absolute client ID :form2:display, which is the search expression for the findComponent method. An absolute client ID starts with the separator character of the naming container, which is : by default. The <h:form>, <h:dataTable>, composite JSF components along with <p:tabView>, <p:accordionPanel>, <p:dataTable>, <p:dataGrid>, <p:dataList>, <p:carousel>, <p:galleria>, <p:ring>, <p:sheet>, and <p:subTable> are the components that implement the NamingContainer interface. The findComponent method, which is described at http://docs.oracle.com/javaee/6/api/javax/faces/component/UIComponent.html, is used by both JSF core implementation and PrimeFaces. There's more... JSF uses : (a colon) as the separator for the NamingContainer interface. The client IDs that will be rendered in the source page will be like :id1:id2:id3. If needed, the configuration of the separator can be changed for the web application to something other than the colon with a context parameter in the web.xml file of the web application, as follows: <context-param> <param-name>javax.faces.SEPARATOR_CHAR</param-name> <param-value>_</param-value> </context-param> It's also possible to escape the : character, if needed, in the CSS files with the character, as :. The problem that might occur with the colon is that it's a reserved keyword for the CSS and JavaScript frameworks, like jQuery, so it might need to be escaped. 
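The same escaping concern applies when selecting JSF client IDs with jQuery on the client side. As a small illustration (reusing the form2:display client ID from the naming-container example above), the colon has to be escaped with two backslashes inside a selector string:

// Selecting the outputText with client ID "form2:display" from jQuery:
jQuery('#form2\\:display').fadeOut();

// Building the selector from a plain client ID by hand:
var clientId = 'form2:display';
jQuery('#' + clientId.replace(/:/g, '\\:')).fadeIn();

PrimeFaces also provides a client-side helper for this, PrimeFaces.escapeClientId, which returns the escaped selector for a given client ID.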
PrimeFaces Cookbook Showcase application This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following: Basic Partial Page Rendering is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/basicPPR.jsf Updating Component in Different Naming Container is available at http://localhost:8080/primefaces-cookbook/views/chapter1/ componentInDifferentNamingContainer.jsf A Partial Processing example at http://localhost:8080/primefacescookbook/ views/chapter1/partialProcessing.jsf Internationalization (i18n) and Localization (L10n) Internationalization (i18n) and Localization (L10n) are two important features that should be provided in the web application's world to make it accessible globally. With Internationalization, we are emphasizing that the web application should support multiple languages; and with Localization, we are stating that the texts, dates, or any other fields should be presented in the form specific to a region. PrimeFaces only provides the English translations. Translations for the other languages should be provided explicitly. In the following sections, you will find the details on how to achieve this. Getting ready For Internationalization, first we need to specify the resource bundle definition under the application tag in faces-config.xml, as follows: <application> <locale-config> <default-locale>en</default-locale> <supported-locale>tr_TR</supported-locale> </locale-config> <resource-bundle> <base-name>messages</base-name> <var>msg</var> </resource-bundle> </application> A resource bundle would be a text file with the .properties suffix that would contain the locale-specific messages. So, the preceding definition states that the resource bundle messages_{localekey}.properties file will reside under classpath and the default value of localekey is en, which is English, and the supported locale is tr_TR, which is Turkish. For projects structured by Maven, the messages_{localekey}.properties file can be created under the src/main/resources project path. How to do it... For showcasing Internationalization, we will broadcast an information message via FacesMessage mechanism that will be displayed in the PrimeFaces growl component. We need two components, the growl itself and a command button, to broadcast the message. <p:growl id="growl" /> <p:commandButton action="#{localizationController.addMessage}" value="Display Message" update="growl" /> The addMessage method of localizationController is as follows: public String addMessage() { addInfoMessage("broadcast.message"); return null; } That uses the addInfoMessage method, which is defined in the static MessageUtil class as follows: public static void addInfoMessage(String str) { FacesContext context = FacesContext.getCurrentInstance(); ResourceBundle bundle = context.getApplication(). getResourceBundle(context, "msg"); String message = bundle.getString(str); FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_INFO, message, "")); } Localization of components, such as calendar and schedule, can be achieved by providing the locale attribute. By default, locale information is retrieved from the view's locale and it can be overridden by a string locale key or the java.util.Locale instance. Components such as calendar and schedule use a shared PrimeFaces.locales property to display labels. 
PrimeFaces only provides English translations, so in order to localize the calendar we need to put corresponding locales into a JavaScript file and include the scripting file to the page. The content for the German locale of the Primefaces.locales property for calendar would be as shown in the following code snippet. For the sake of the recipe, only the German locale definition is given and the Turkish locale definition is omitted. PrimeFaces.locales['de'] = { closeText: 'Schließen', prevText: 'Zurück', nextText: 'Weiter', monthNames: ['Januar', 'Februar', 'März', 'April', 'Mai', 'Juni', 'Juli', 'August', 'September', 'Oktober', 'November', 'Dezember'], monthNamesShort: ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun', 'Jul', 'Aug', 'Sep', 'Okt', 'Nov', 'Dez'], dayNames: ['Sonntag', 'Montag', 'Dienstag', 'Mittwoch', 'Donnerstag', 'Freitag', 'Samstag'], dayNamesShort: ['Son', 'Mon', 'Die', 'Mit', 'Don', 'Fre', 'Sam'], dayNamesMin: ['S', 'M', 'D', 'M ', 'D', 'F ', 'S'], weekHeader: 'Woche', FirstDay: 1, isRTL: false, showMonthAfterYear: false, yearSuffix: '', timeOnlyTitle: 'Nur Zeit', timeText: 'Zeit', hourText: 'Stunde', minuteText: 'Minute', secondText: 'Sekunde', currentText: 'Aktuelles Datum', ampm: false, month: 'Monat', week: 'Woche', day: 'Tag', allDayText: 'Ganzer Tag' }; Definition of the calendar components with the locale attribute would be as follows: <p:calendar showButtonPanel="true" navigator="true" mode="inline" id="enCal"/> <p:calendar locale="tr" showButtonPanel="true" navigator="true" mode="inline" id="trCal"/> <p:calendar locale="de" showButtonPanel="true" navigator="true" mode="inline" id="deCal"/> They will be rendered as follows: How it works... For Internationalization of the Faces message, the addInfoMessage method retrieves the message bundle via the defined variable msg. It then gets the string from the bundle with the given key by invoking the bundle.getString(str) method. Finally, the message is added by creating a new Faces message with severity level FacesMessage.SEVERITY_INFO. There's more... For some components, Localization could be accomplished by providing labels to the components via attributes, such as with p:selectBooleanButton. <p:selectBooleanButton value="#{localizationController.selectedValue}" onLabel="#{msg['booleanButton.onLabel']}" offLabel="#{msg['booleanButton.offLabel']}" /> The msg variable is the resource bundle variable that is defined in the resource bundle definition in Faces configuration file. The English version of the bundle key definitions in the messages_en.properties file that resides under classpath would be as follows: booleanButton.onLabel=Yes booleanButton.offLabel=No PrimeFaces Cookbook Showcase application This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following: Internationalization is available at http://localhost:8080/primefacescookbook/ views/chapter1/internationalization.jsf Localization of the calendar component is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/localization.jsf Localization with resources is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/localizationWithResources. jsf For already translated locales of the calendar, see https://code.google.com/archive/p/primefaces/wikis/PrimeFacesLocales.wiki
Oracle WebCenter 11g: Portlets

Packt
21 Sep 2010
6 min read
(For more resources on Oracle, see here.) Portlets, JSR-168 specification Specification JSR-168, which defines the Java technologies, gives us a precise definition of Java portlets: Portlets are web components—like Servlets—specifically designed to be aggregated in the context of a composite page. Usually, many Portlets are invoked to in the single request of a Portal page. Each Portlet produces a fragment of markup that is combined with the markup of other Portlets, all within the Portal page markup. You can see more detail of this specification on the following page: http://jcp.org/en/jsr/detail?id=168 While the definition makes a comparison with servlets, it is important to note that the portlets cannot be accessed directly through a URL; instead, it is necessary to use a page-like container of portlets. Consequently, we might consider portlets as tiny web applications that return dynamic content (HTML, WML) into a region of a Portal page. Graphically, we could view a page with portlets as follows: Additionally, we must emphasize that the portlets are not isolated from the rest of the components in the pages, but can also share information and respond to events that occur in other components or portlets. WSRP specification The WSRP specification allows exposing portlets as Web services. For this purpose, clients access portlets through an interface (*. wsdl) and get graphic content associated. Optionally, the portlet might be able to interact directly with the user through events ocurring on them. This way of invoking offers the following advantages: The portals that share a portlet centralize their support in a single point. The portlet integration with the portal is simple and requires no programming. The use of portlets, hosted on different sites, helps to reduce the load on servers. WebCenter portlets Portlets can be built in different ways, and the applications developed with Oracle WebCenter can consume any of these types of portlets. JSF Portlets: This type of portlet is based on a JSF application, which is used to create a portlet using a JSF Portlet Bridge. Web Clipping: Using this tool, we can build portlets declaratively using only a browser. These portlets show content from other sites. OmniPortlet: These portlets can retrieve information from different types of data sources (XML, CSV, database, and so on) to expose different ways of presenting things, such as tables, forms, charts, and so on. Content Presenter: This allows you to drop content from UCM on the page and display this content in any way you like or using a template. Ensemble: This is a way to "mashup" or produce portlets or "pagelets" of information that can be displayed on the page. Programmatic Portlets: Obviously, in addition to the previous technologies that facilitate the construction of portlets, it is also possible to build in a programmatic way. When we build in this way, we reach a high degree of personalization and control. However, we need specialized Java knowledge in order to program in this way. As we can see, there are several ways in which we can build a portlet; however, in order to use the rich components that the ADF Faces framework offers, we will focus on JSF Portlets. Developing a portlet using ADF The portlet that we will build will have a chart, which shows the status of the company's requests. To do this, we must create a model layer that represents our business logic and exposes this information in a page. Therefore, we are going to do the following steps: Create an ADF application. 
Develop business components. Create a chart page. Generate a portlet using the page. Deploy the portlet. In this example, we use a page for the construction of a portlet; however, ADF also offers the ability to create portlets based on a flow of pages through the use of ADF TaskFlows. You can find more information on the following link: http://download.oracle.com/docs/cd/E15523_01/web.1111/b31974/taskflows.htm#BABDJEDD Creating an ADF application To create the application, do the following steps: Go to JDeveloper. In the menu, choose the option File | New to start the wizard for creating applications. In the window displayed, choose the Application category and choose the Fusion Web Application ADF option and press the OK button. Next, enter the following properties for creating the application: Name: ParacasPortlet Directory: c:ParacasPortlet Application Package Prefix : com.paracasportlet Click Finish to create the application. Developing business components Before starting this activity, make sure you have created a connection to the database. In the project Palette, right-click on Project Model, and choose New. On the next page, select the category Business Tier | ADF Business Components and choose Business Components from Tables. Next, press the OK button. In the following page, you configure the connection to the database. At this point, select the connection db_paracas and press the OK button. In order to build a page with a chart, we need to create a read-only view. For this reason, don't change anything, just press the Next button. (Move the mouse over the image to enlarge.) In this next step, we can create updateable views. But, we don't need this type of component. So, don't change anything. Click the Next button. Now, we need to allow the creation of read-only views. We will use this kind of component in our page; therefore select the table REQUEST, as shown next and press Next. Our next step will allow the creation of an application module. This component is necessary to display the read-only view in the whole application. Keep this screen with the suggested values and click the Finish button. Check the Application Navigator. You must have your components arranged in the same way as shown in the following screenshot: Our query must determine the number of requests for status. Therefore, it will be necessary to make some changes in the created component. To start, double-click on the view RequestView, select the Query category, and click on the Edit SQL Query option as shown in the following screenshot: In the window shown, modify the SQL as shown next and click the OK button. SELECT Request.STATUS, COUNT(*) COUNT_STATUSFROM REQUEST RequestGROUP BY Request.STATUS We only use the attributes Status and CountStatus. For this reason, choose the Attributes category, select the attributes that are not used, and press Delete selected attribute(s) as shown in the following screenshot: Save all changes and verify that the view is similar to that shown next:
IBM Lotus Domino: Adding Style to Form and Page Elements

Packt
29 Mar 2011
6 min read
  IBM Lotus Domino: Classic Web Application Development Techniques A step-by-step guide for web application development and quick tips to enhance applications using Lotus Domino Most of the CSS rules you write for an application relate to design elements on forms and pages. Suggestions and examples in this section just scratch the surface of CSS possibilities. Browse the Web for additional ideas. Here we focus on the mechanics of how elements are styled, rather than on specific recommendations about what looks good, which is largely a matter of taste. Use color effectively Use pleasing, complementary colors. If your organization requires a specific set of colors, then, of course, find out what that palette is and conform to it as much as possible. Color tastes change over the years, primary colors dominating at times and lighter pastels in vogue at others. Here are a few generalities to consider: Use white or very light colors for backgrounds Use stronger colors such as dark red to make important elements stand out Use no more than three or four colors on a form Use black or dark gray text on a light background for lengthy text passages If you have paid little attention to the matter of color in your applications, do some web work on the subject. Once you select a color scheme, provide some samples to your customers for their opinions and suggestions. Style text Typography is a complex topic with a rich history and strong opinions. For web application design purposes, consider using web-safe fonts which are likely to be available on most or all personal computers. If you use a font that is not available to a browser, then text is rendered with a default font. Fonts with serifs are usually considered easier to read on paper, and less so as web page text. Experiment with the following fonts: Bookman Old Style Cambria Garamond Georgia Times New Roman Common fonts without serifs (sans serif) are considered easier to read on the Web. Some examples include: Arial Calibri Helvetica MS Sans Serif Tahoma Trebuchet MS Verdana Mono-spaced fonts are useful when you want text to line up—columns of numbers in a table, perhaps: Courier New Courier Establish a common font style with CSS rules applied to the body type selector or to a main division using a type selector, a class selector, or an ID selector: body { color: #555555; font-family: Verdana; font-size: 8pt; } Style headings and labels If headings and labels are bracketed with HTML heading tags (for example, <h1> or <h2gt;), they can be styled with type selectors: h1 { color: Blue; font-family: Arial; font-size: 18pt; font-weight: bold; } If headings and labels are bracketed with <span> tags, use CSS classes: <span class="highlight1">October News</span> Underline links in text but not in menus When browsers and the Web first appeared in the early 1990's, hyperlinks were a novelty. To distinguish a link from normal text, the convention developed to underscore the text containing the link, and often the link text was colored blue. There is no magic associated with underscoring and making text blue—it was just the convention adopted at the time. Today links in text passages are usually distinguished from adjacent text with color, weight or underscoring. In a menu, however, each item is understood to be a hotspot link. Underscores and blue text are not required. So if you feel like underscoring a link, do so if the link appears within some text, but don't underscore links in menus. 
At the same time, refrain from highlighting important text with underscoring, which implies that that text is a hyperlink. Use another highlighting technique; italics, bold, or an alternate color work well for this purpose. Style fields Fields can be styled with CSS either with the Style attribute in Field Properties or with CSS rules. The key to understanding how CSS rules can be applied to fields is to understand that fields are translated to the Web using <input> tags. Here is how a simple text field translates into HTML: <input name="FirstName" value=""> Here is how a radio button field translates: <input name="%%Surrogate_Gender" type="hidden" value="1"> <label><input type="radio" name="Gender" value="M">M</label><br> <label><input type="radio" name="Gender" value="F">F</label><br> CSS rules can be defined for the <input> tag, an ID, or a class. For example, assume that a CSS class named requiredtext is defined. If that class name is entered in the Class attribute of Field Properties, the resulting HTML might look like this: <input name="FirstName" value="" class="requiredtext"> CSS style rules coded for the requiredtext class are applied to the field. Highlight required fields Required fields are validated, most likely with JavaScript code, so that complete and good data is saved into the database when a document is submitted. If entered values fail validation, the user is presented with a message of some sort that identifies the problem and requests correction. Web forms typically identify which fields are required. Any of several techniques can be used. Required field labels can be styled with a more prominent color or a special marker such as an asterisk or a checkmark can be positioned near the field. Required fields also can be co-located and set apart using the <fieldset> and <legend> tags. If a field value fails validation, it is common practice to provide an error message and then to set the focus into the field; the cursor is positioned in the field to facilitate an immediate correction. As the cursor can be difficult to spot on a busy form, it is also possible to change the background color of the incorrect field as a way of drawing the user's attention to the field. In this illustration, the background color of the field has been changed to yellow: Implementing this technique requires writing a small JavaScript function that changes the background color of the field, and then calling that function when field validation fails.
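The small JavaScript function described above is not shown in the excerpt; here is a minimal sketch of the technique. The function name and the document.forms[0] lookup are illustrative assumptions — address the field through whatever reference your validation code already uses. The yellow highlight matches the illustration mentioned in the text.

function highlightInvalidField(fieldName) {
    // Look up the <input> generated for the Domino field on the page's form
    var field = document.forms[0][fieldName];
    if (field) {
        // Change the background color to draw the user's attention to the field
        field.style.backgroundColor = "yellow";
        // Position the cursor in the field to facilitate an immediate correction
        field.focus();
    }
}

A validation routine could then call it when a check fails, for example:

if (document.forms[0].FirstName.value == "") {
    alert("First Name is required.");
    highlightInvalidField("FirstName");
    return false;
}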
Reporting

Packt
19 Dec 2013
4 min read
(For more resources related to this topic, see here.) Creating a pie chart First, we made the component test CT for display purposes, but now let's create the CT to make it run. We will use the Direct function, so let's prepare that as well. In reality we've done this already. Duplicate a different app.html and change the JavaScript file like we have done before. Please see the source file for the code: 03_making_a_pie_chart/ct/dashboard/pie_app.html. Implementing the Direct function Next, prepare the Direct function to read the data. First, it's the config.php file that defines the API. Let's gather them together and implement the four graphs (source file: 04_implement_direct_function/php/config.php). .... 'MyAppDashBoard'=>array( 'methods'=>array( 'getPieData'=>array( 'len'=>0 ), 'getBarData'=>array( 'len'=>0 ), 'getLineData'=>array( 'len'=>0 ), 'getRadarData'=>array( 'len'=>0 ) ) .... Next, let's create the following methods to acquire data for the various charts: getPieData getBarData getLineData getRadarData First, implement the getPieData method for the pie chart. We'll implement the Direct method to get the data for the pie chart. Please see the actual content for the source code (source file: 04_implement_direct_function/php/classes/ MyAppDashBoard.php ). This is acquiring valid quotation and bill data items. With the data to be sent back to the client, set the array in items and set up the various names and data in a key array. You will now combine the definitions in the next model. Preparing the store for the pie chart Charts need a store, so let's define the store and model (source file: 05_prepare_the_store_for_the_pie_chart/app/model/ Pie.js). We'll create the MyApp.model.Pie class that has the name and data fields. Connect this with the data you set with the return value of the Direct function. If you increased the number of fields inside the model you just defined, make sure to amend the return field values, otherwise it won't be applied to the chart, so be careful. We'll use the model we made in the previous step and implement the store (source file: 05_prepare_the_store_for_the_pie_chart/app/model/ Pie.js). Ext.define('MyApp.store.Pie', { extend: 'Ext.data.Store', storeId: 'DashboardPie', model: 'MyApp.model.Pie', proxy: { type: 'direct', directFn: 'MyAppDashboard.getPieData', reader: { type: 'json', root: 'items' } } }) Then, define the store using the model we made and set up the Direct function we made earlier in the proxy. Creating the View We have now prepared the presentation data. Now, let's quickly create the view to display it (source file: 06_making_the_view/app/view/dashboard/Pie.js). Ext.define('MyApp.view.dashboard.Pie', { extend: 'Ext.panel.Panel', alias : 'widget.myapp-dashboard-pie', title: 'Pie Chart', layout: 'fit', requires: [ 'Ext.chart.Chart', 'MyApp.store.Pie' ], initComponent: function() { var me = this, store; store = Ext.create('MyApp.store.Pie'); Ext.apply(me, { items: [{ xtype: 'chart', store: store, series: [{ type: 'pie', field: 'data', showInLegend: true, label: { field: 'name', display: 'rotate', contrast: true, font: '18px Arial' } }] }] }); me.callParent(arguments); } }); Implementing the controller With the previous code, data is not being read by the store and nothing is being displayed. 
In the same way that reading was performed with onShow, let's implement the controller (source file: 06_making_the_view/app/controller/DashBoard.js): Ext.define('MyApp.controller.dashboard.DashBoard', { extend: 'MyApp.controller.Abstract', screenName: 'dashboard', init: function() { var me = this; me.control({ 'myapp-dashboard': { 'myapp-show': me.onShow, 'myapp-hide': me.onHide } }); }, onShow: function(p) { p.down('myapp-dashboard-pie chart').store.load(); }, onHide: function() { } }); With the charts we create from now on, as we create them it would be good to add the reading process to onShow. Let's take a look at our pie chart which appears as follows: Summary You must agree this is starting to look like an application! The dashboard is the first screen you see right after logging in. Charts are extremely effective in order to visually check a large and complicated amount of data. If you keep adding panels as and when you feel it's needed, you'll increase its practicability. This sample will become a customizable base for you to use in future projects. Resources for Article: Further resources on this subject: So, what is Ext JS? [Article] Buttons, Menus, and Toolbars in Ext JS [Article] Displaying Data with Grids in Ext JS [Article]
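For reference back to the pie chart recipe above: the DashboardPie store's reader is configured with root 'items', and the MyApp.model.Pie model declares name and data fields, so the structure returned by the getPieData Direct method needs roughly the following shape. The category names and counts are made-up sample values, and on the wire Ext Direct wraps this result in its own RPC envelope:

{
    "items": [
        { "name": "Quotation", "data": 35 },
        { "name": "Bill", "data": 65 }
    ]
}

Each entry becomes one slice of the pie, with name used for the legend label and data for the slice size.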
Handling Authentication

Packt
13 Dec 2013
9 min read
(for more resources related to this topic, see here.) Understanding Authentication methods In a world where security on the Internet is such a big issue, the need for great authentication methods is something that cannot be missed. Therefore, Zend Framework 2 provides a range of authentication methods that suits everyone's needs. Getting ready To make full use of this, I recommend a working Zend Framework 2 skeleton application to be set up. How to do it… The following is a list of authentication methods—or as they are called adapters—that are readily available in Zend Framework 2. We will provide a small overview of the adapter, and instructions on how you can use it. The DbTable adapter Constructing a DbTable adapter is pretty easy, if we take a look at the following constructor: public function __construct( // The ZendDbAdapterAdapter DbAdapter $zendDb, // The table table name to query on $tableName = null, // The column that serves as 'username' $identityColumn = null, // The column that serves as 'password' $credentialColumn = null, // Any optional treatment of the password before // checking, such as MD5(?), SHA1(?), etcetera $credentialTreatment = null ); The HTTP adapter After constructing the object we need to define the FileResolver to make sure there are actually user details parsed in. Depending on what we configured in the accept_schemes option, the FileResolver can either be set as a BasicResolver, a DigestResolver, or both. Let's take a quick look at how to set a FileResolver as a DigestResolver or BasicResolver (we do this in the /module/Application/src/Application/Controller/IndexController.php file): <?php namespace Application; // Use the FileResolver, and also the Http // authentication adapter. use ZendAuthenticationAdapterHttpFileResolver; use ZendAuthenticationAdapterHttp; use ZendMvcControllerAbstractActionController; class IndexController extends AbstractActionController { public function indexAction() { // Create a new FileResolver and read in our file to use // in the Basic authentication $basicResolver = new FileResolver(); $basicResolver->setFile( '/some/file/with/credentials.txt' ); // Now create a FileResolver to read in our Digest file $digestResolver = new FileResolver(); $digestResolver->setFile( '/some/other/file/with/credentials.txt' ); // Options doesn't really matter at this point, we can // fill them in to anything we like $adapter = new Http($options); // Now set our DigestResolver/BasicResolver, depending // on our $options set $adapter->setBasicResolver($basicResolver); $adapter->setDigestResolver($digestResolver); } } How it works… After two short examples, let's take a look at the other adapters available. The DbTable adapter Let's begin with probably the most used adapter of them all, the DbTable adapter. This adapter connects to a database and pulls the requested username/password combination from a table and, if all went well, it will return to you an identity, which is nothing more than the record that matched the username details. To instantiate the adapter, it requires a ZendDbAdapterAdapter in its constructor to connect with the database with the user details; there are also a couple of other options that can be set. 
Let's take a look at the definition of the constructor: The second (tableName) option speaks for itself as it is just the table name, which we need to use to get our users, the third and the fourth (identityColumn, credentialColumn) options are logical and they represent the username and password (or what we use) columns in our table. The last option, the credentialTreatment option, however, might not make a lot of sense. The credentialTreatment tells the adapter to treat the credentialColumn with a function before trying to query it. Examples of this could be to use the MD5 (?) function, PASSWORD (?), or SHA1 (?) function, if it was a MySQL database, but obviously this can differ per database as well. To give a small example on how the SQL can look like (the actual adapter builds this query up differently) with and without a credential treatment, take a look at the following examples: With credential treatment: SELECT * FROM `users` WHERE `username` = 'some_user' AND `password` = MD5('some_password'); Without credential treatment: SELECT * FROM `users` WHERE `username` = 'some_user' AND `password` = 'some_password'; When defining the treatment we should always include a question mark for where the password needs to come, for example, MD5 (?) would create MD5 ('some_password'), but without the question mark it would not insert the password. Lastly, instead of giving the options through the constructor, we can also use the setter methods for the properties: setTableName(), setIdentityColumn(), setCredentialColumn(), and setCredentialTreatment(). The HTTP adapter The HTTP authentication adapter is an adapter that we have probably all come across at least once in our Internet lives. We can recognize the authentication when we go to a website and there is a pop up showing where we can fill in our usernames and passwords to continue. This form of authentication is very basic, but still very effective in certain implementations, and therefore, a part of Zend Framework 2. There is only one big massive but to this authentication, and that is that it can (when using the basic authentication) send the username and password clear text through the browser (ouch!). There is however a solution to this problem and that is to use the Digest authentication, which is also supported by this adapter. If we take a look at the constructor of this adapter, we would see the following code line: public function __construct(array $config); The constructor accepts a load of keys in its config parameter, which are as follows: accept_schemes: This refers to what we want to accept authentication wise; this can be basic, digest, or basic digest. realm: This is a description of the realm we are in, for example Member's area. This is for the user only and is only to describe what the user is logging in for. digest_domains: These are URLs for which this authentication is working for. So if a user logs in with his details on any of the URLs defined, they will work. The URLs should be defined in a space-separated (weird, right?) list, for example /members/area /members/login. nonce_timeout: This will set the number of seconds the nonce (the hash users login with when we are using Digest authentication) is valid. Note, however, that nonce tracking and stale support are not implemented in Version 2.2 yet, which means it will authenticate again every time the nonce times out. use_opaque: This is either true or false (by default is true) and tells our adapter to send the opaque header to the client. 
The opaque header is a string sent by the server, which needs to be returned back on authentication. This does not work sometimes on Microsoft Internet Explorer browsers though, as they seem to ignore that header. Ideally the opaque header is an ever-changing string, to reduce predictability, but ZF 2 doesn't randomize the string and always returns the same hash. algorithm: This includes the algorithm to use for the authentication, it needs to be a supported algorithm that is defined in the supportedAlgos property. At the moment there is only MD5 though. proxy_auth: This boolean (by default is false) tells us if the authentication used is a proxy Authentication or not. It should be noted that there is a slight difference in files when using either Digest or Basic. Although both files have the same layout, they cannot be used interchangeably as the Digest requires the credentials to be MD5 hashed, while the Basic requires the credentials to be plain text. There should also always be a new line after every credential, meaning that the last line in the credential file should be empty. The layout of a credential file is as follows: username:realm:credentials For example: some_user:My Awesome Realm:clear text password Instead of a FileResolver, one can also use the ApacheResolver which can be used to read out htpasswd generated files, which comes in handy when there is already such a file in place. The Digest adapter The Digest adapter is basically the Http adapter without any Basic authentication. As the idea behind it is the same as the Http adapter, we will just go on and talk about the constructor, as that is a bit different in implementation: public function __construct($filename = null, $realm = null, $identity = null, $credential = null); As we can see the following options can be set when constructing the object: filename: This is the direct filename of the file to use with the Digest credentials, so no need to use a FileResolver with this one. realm: This identifies to the user what he/she is logging on to, for example My Awesome Realm or The Dragonborn's lair. As we are immediately trying to log on when constructing this, it does need to correspond with the credential file. identity: This is the username we are trying to log on with, and again it needs to resemble a user that is defined in the credential file to work. credential: This is the Digest password we try to log on with, and this again needs to match the password exactly like the one in the credential file. We can then, for example, just run $digestAdapter->getIdentity() to find out if we are successfully authenticated or not, resulting in NULL if we are not, and resulting in the identity column value if we are. The LDAP adapter Using the LDAP authentication is obviously a little more difficult to explain, so we will not go in to that full as that would take quite a while. What we will do is show the constructor of the LDAP adapter and explain its various options. However, if we want to know more about setting up an LDAP connection, we should take a look at the documentation of ZF2, as it is explained in there very well: public function __construct(array $options = array(), $identity = null, $credential = null); The options parameter in the construct refers to an array of configuration options that are compatible with the ZendLdapLdap configuration. There are literally dozens of options that can be set here so we advice to go and look at the LDAP documentation of ZF2 to know more about that. 
The next two parameters identity and credential are respectively the username and password again, so that explains itself really. Once you have set up the connection with the LDAP there isn't much left to do but to get the identity and see whether we were successfully validated or not. About Authentication Authentication in Zend Framework 2 works through specific adapters, which are always an implementation of the ZendAuthenticationAdapterAdapterInterface and thus, always provides the methods defined in there. However, the methods of Authentication are all different, and strong knowledge of the methods displayed previously is always a requirement. Some work through the browser, like the Http and Digest adapter, and others just require us to create a whole implementation like the LDAP and the DbTable adapter.
Introduction to React Native

Eugene Safronov
23 Sep 2015
7 min read
React is an open-sourced JavaScript library made by Facebook for building UI applications. The project has a strong emphasis on the component-based approach and utilizes the full power of JavaScript for constructing all elements. The React Native project was introduced during the first React conference in January 2015. It allows you to build native mobile applications using the same concepts from React. In this post I am going to explain the main building blocks of React Native through the example of an iOS demo application. I assume that you have previous experience in writing web applications with React. Setup Please go through getting started section on the React Native website if you would like to build an application on your machine. Quick start When all of the necessary tools are installed, let's initialize the new React application with the following command: react-native init LastFmTopArtists After the command fetches the code and the dependencies, you can open the new project (LastFmTopArtists/LastFmTopArtists.xcodeproj) in Xcode. Then you can build and run the app with cmd+R. You will see a similar screen on the iOS simulator: You can make changes in index.ios.js, then press cmd+R and see instant changes in the simulator. Demo app In this post I will show you how to build a list of popular artists using the Last.fm api. We will display them with help of ListView component and redirect on the artist page using WebView. First screen Let's start with adding a new screen into our application. For now it will contain dump text. Create file ArtistListScreen with the following code: var React = require('react-native'); var { ListView, StyleSheet, Text, View, } = React; class ArtistListScreen extendsReact.Component { render() { return ( <View style={styles.container}> <Text>Artist list would be here</Text> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: 'white', marginTop: 64 } }) module.exports = ArtistListScreen; Here are some things to note: I declare react components with ES6 Classes syntax. ES6 Destructuring assignment syntax is used for React objects declaration. FlexBox is a default layout system in React Native. Flex values can be either integers or doubles, indicating the relative size of the box. So, when you have multiple elements they will fill the relative proportion of the view based on their flex value. ListView is declared but will be used later. From index.ios.js we call ArtistListScreen using NavigatorIOS component: var React = require('react-native'); var ArtistListScreen = require('./ArtistListScreen'); var { AppRegistry, NavigatorIOS, StyleSheet } = React; var LastFmArtists = React.createClass({ render: function() { return ( <NavigatorIOS style={styles.container} initialRoute={{ title: "last.fm Top Artists", component: ArtistListScreen }} /> ); } }); var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: 'white', }, }); Switch to iOS Simulator, refresh with cmd+R and you will see: ListView After we have got the empty screen, let's render some mock data in a ListView component. This component has a number of performance improvements such as rendering of only visible elements and removing which are off screen. 
The new version of ArtistListScreen looks like the following: class ArtistListScreen extendsReact.Component { constructor(props) { super(props) this.state = { isLoading: false, dataSource: newListView.DataSource({ rowHasChanged: (row1, row2) => row1 !== row2 }) } } componentDidMount() { this.loadArtists(); } loadArtists() { this.setState({ dataSource: this.getDataSource([{name: 'Muse'}, {name: 'Radiohead'}]) }) } getDataSource(artists: Array<any>): ListView.DataSource { returnthis.state.dataSource.cloneWithRows(artists); } renderRow(artist) { return ( <Text>{artist.name}</Text> ); } render() { return ( <View style={styles.container}> <ListView dataSource={this.state.dataSource} renderRow={this.renderRow.bind(this)} automaticallyAdjustContentInsets={false} /> </View> ); } } Side notes: The DataSource is an interface that ListView is using to determine which rows have changed over the course of updates. ES6 constructor is an analog of getInitialState. The end result of the changes: Api token The Last.fm web api is free to use but you will need a personal api token in order to access it. At first it is necessary to join Last.fm and then get an API account. Fetching real data I assume you have successfully set up the API account. Let's call a real web service using fetch API: const API_KEY='put token here'; const API_URL = 'http://ws.audioscrobbler.com/2.0/?method=geo.gettopartists&country=ukraine&format=json&limit=40'; const REQUEST_URL = API_URL + '&api_key=' + API_KEY; loadArtists() { this.setState({ isLoading: true }); fetch(REQUEST_URL) .then((response) => response.json()) .catch((error) => { console.error(error); }) .then((responseData) => { this.setState({ isLoading: false, dataSource: this.getDataSource(responseData.topartists.artist) }) }) .done(); } After a refresh, the iOS simulator should display: ArtistCell Since we have real data, it is time to add artist's images and rank them on the display. Let's move artist cell display logic into separate component ArtistCell: 'use strict'; var React = require('react-native'); var { Image, View, Text, TouchableHighlight, StyleSheet } = React; class ArtistCell extendsReact.Component { render() { return ( <View> <View style={styles.container}> <Image source={{uri: this.props.artist.image[2]["#text"]}} style={styles.artistImage} /> <View style={styles.rightContainer}> <Text style={styles.rank}>## {this.props.artist["@attr"].rank}</Text> <Text style={styles.name}>{this.props.artist.name}</Text> </View> </View> <View style={styles.separator}/> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, flexDirection: 'row', justifyContent: 'center', alignItems: 'center', padding: 5 }, artistImage: { height: 84, width: 126, marginRight: 10 }, rightContainer: { flex: 1 }, name: { textAlign: 'center', fontSize: 14, color: '#999999' }, rank: { textAlign: 'center', marginBottom: 2, fontWeight: '500', fontSize: 16 }, separator: { height: 1, backgroundColor: '#E3E3E3', flex: 1 } }) module.exports = ArtistCell; Changes in ArtistListScreen: // declare new component var ArtistCell = require('./ArtistCell'); // use it in renderRow method: renderRow(artist) { return ( <ArtistCell artist={artist} /> ); } Press cmd+R in iOS Simulator: WebView The last piece of the application would be to open a web page by clicking in ListView. 
Declare new component WebView: 'use strict'; var React = require('react-native'); var { View, WebView, StyleSheet } = React; class Web extendsReact.Component { render() { return ( <View style={styles.container}> <WebView url={this.props.url}/> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: '#F6F6EF', flexDirection: 'column', }, }); Web.propTypes = { url: React.PropTypes.string.isRequired }; module.exports = Web; Then by using TouchableHighlight we will call onOpenPage from ArtistCell: class ArtistCell extendsReact.Component { render() { return ( <View> <TouchableHighlight onPress={this.props.onOpenPage} underlayColor='transparent'> <View style={styles.container}> <Image source={{uri: this.props.artist.image[2]["#text"]}} style={styles.artistImage} /> <View style={styles.rightContainer}> <Text style={styles.rank}>## {this.props.artist["@attr"].rank}</Text> <Text style={styles.name}>{this.props.artist.name}</Text> </View> </View> </TouchableHighlight> <View style={styles.separator}/> </View> ); } } Finally open web page from ArtistListScreen component: // declare new component var WebView = require('WebView'); class ArtistListScreen extendsReact.Component { // will be called on touch from ArtistCell openPage(url) { this.props.navigator.push({ title: 'Web View', component: WebView, passProps: {url} }); } renderRow(artist) { return ( <ArtistCell artist={artist} // specify artist's url on render onOpenPage={this.openPage.bind(this, artist.url)} /> ); } } Now a touch on any cell in ListView will load a web page for selected artist: Conclusion You can explore source code of the app on Github repo. For me it was a real fun to play with React Native. I found debugging in Chrome and error stack messages extremely easy to work with. By using React's component-based approach you can build complex UI without much effort. I highly recommend to explore this technology for rapid prototyping and maybe for your next awesome project. Useful links Building a flashcard app with React Native Examples of React Native apps React Native Videos Video course on React Native Want more JavaScript? Visit our dedicated page here. About the author Eugene Safronov is a software engineer with a proven record of delivering high quality software. He has an extensive experience building successful teams and adjusting development processes to the project’s needs. His primary focuses are Web (.NET, node.js stacks) and cross-platform mobile development (native and hybrid). He can be found on Twitter @sejoker.
Ruby with MongoDB for Web Development

Packt
23 Jul 2012
13 min read
Creating documents Let's first see how we can create documents in MongoDB. As we have briefly seen, MongoDB deals with collections and documents instead of tables and rows. Time for action – creating our first document Suppose we want to create the book object having the following schema: book = { name: "Oliver Twist", author: "Charles Dickens", publisher: "Dover Publications", published_on: "December 30, 2002", category: ['Classics', 'Drama'] }   On the Mongo CLI, we can add this book object to our collection using the following command: > db.books.insert(book)   Suppose we also add the shelf collection (for example, the floor, the row, the column the shelf is in, the book indexes it maintains, and so on that are part of the shelf object), which has the following structure: shelf : { name : 'Fiction', location : { row : 10, column : 3 }, floor : 1 lex : { start : 'O', end : 'P' }, }   Remember, it's quite possible that a few years down the line, some shelf instances may become obsolete and we might want to maintain their record. Maybe we could have another shelf instance containing only books that are to be recycled or donated. What can we do? We can approach this as follows: The SQL way: Add additional columns to the table and ensure that there is a default value set in them. This adds a lot of redundancy to the data. This also reduces the performance a little and considerably increases the storage. Sad but true! The NoSQL way: Add the additional fields whenever you want. The following are the MongoDB schemaless object model instances: > db.book.shelf.find() { "_id" : ObjectId("4e81e0c3eeef2ac76347a01c"), "name" : "Fiction", "location" : { "row" : 10, "column" : 3 }, "floor" : 1 } { "_id" : ObjectId("4e81e0fdeeef2ac76347a01d"), "name" : "Romance", "location" : { "row" : 8, "column" : 5 }, "state" : "window broken", "comments" : "keep away from children" } What just happened? You will notice that the second object has more fields, namely comments and state. When fetching objects, it's fine if you get extra data. That is the beauty of NoSQL. When the first document is fetched (the one with the name Fiction), it will not contain the state and comments fields but the second document (the one with the name Romance) will have them. Are you worried what will happen if we try to access non-existing data from an object, for example, accessing comments from the first object fetched? This can be logically resolved—we can check the existence of a key, or default to a value in case it's not there, or ignore its absence. This is typically done anyway in code when we access objects. Notice that when the schema changed we did not have to add fields in every object with default values like we do when using a SQL database. So there is no redundant information in our database. This ensures that the storage is minimal and in turn the object information fetched will have concise data. So there was no redundancy and no compromise on storage or performance. But wait! There's more. NoSQL scores over SQL databases The way many-to-many relations are managed tells us how we can do more with MongoDB that just cannot be simply done in a relational database. The following is an example: Each book can have reviews and votes given by customers. We should be able to see these reviews and votes and also maintain a list of top voted books. If we had to do this in a relational database, this would be somewhat like the relationship diagram shown as follows: (get scared now!) 
The vote_count and review_count fields are inside the books table that would need to be updated every time a user votes up/down a book or writes a review. So, to fetch a book along with its votes and reviews, we would need to fire three queries to fetch the information: SELECT * from book where id = 3; SELECT * from reviews where book_id = 3; SELECT * from votes where book_id = 3; We could also use a join for this: SELECT * FROM books JOIN reviews ON reviews.book_id = books.id JOIN votes ON votes.book_id = books.id; In MongoDB, we can do this directly using embedded documents or relational documents. Using MongoDB embedded documents Embedded documents, as the name suggests, are documents that are embedded in other documents. This is one of the features of MongoDB and this cannot be done in relational databases. Ever heard of a table embedded inside another table? Instead of four tables and a complex many-to-many relationship, we can say that reviews and votes are part of a book. So, when we fetch a book, the reviews and the votes automatically come along with the book. Embedded documents are analogous to chapters inside a book. Chapters cannot be read unless you open the book. Similarly embedded documents cannot be accessed unless you access the document. For the UML savvy, embedded documents are similar to the contains or composition relationship. Time for action – embedding reviews and votes In MongoDB, the embedded object physically resides inside the parent. So if we had to maintain reviews and votes we could model the object as follows: book : { name: "Oliver Twist", reviews : [ { user: "Gautam", comment: "Very interesting read" }, { user: "Harry", comment: "Who is Oliver Twist?" } ] votes: [ "Gautam", "Tom", "Dick"] } What just happened? We now have reviews and votes inside the book. They cannot exist on their own. Did you notice that they look similar to JSON hashes and arrays? Indeed, they are an array of hashes. Embedded documents are just like hashes inside another object. There is a subtle difference between hashes and embedded objects as we shall see later on in the book. Have a go hero – adding more embedded objects to the book Try to add more embedded objects such as orders inside the book document. It works! order = { name: "Toby Jones" type: "lease", units: 1, cost: 40 } Fetching embedded objects We can fetch a book along with the reviews and the votes with it. This can be done by executing the following command: > var book = db.books.findOne({name : 'Oliver Twist'}) > book.reviews.length 2 > book.votes.length 3 > book.reviews [ { user: "Gautam", comment: "Very interesting read" }, { user: "Harry", comment: "Who is Oliver Twist?" } ] > book.votes [ "Gautam", "Tom", "Dick"] This does indeed look simple, doesn't it? By fetching a single object, we are able to get the review and vote count along with the data. Use embedded documents only if you really have to! Embedded documents increase the size of the object. So, if we have a large number of embedded documents, it could adversely impact performance. Even to get the name of the book, the reviews and the votes are fetched. Using MongoDB document relationships Just like we have embedded documents, we can also set up relationships between different documents. Time for action – creating document relations The following is another way to create the same relationship between books, users, reviews, and votes. This is more like the SQL way. 
book: { _id: ObjectId("4e81b95ffed0eb0c23000002"), name: "Oliver Twist", author: "Charles Dickens", publisher: "Dover Publications", published_on: "December 30, 2002", category: ['Classics', 'Drama'] } Every document that is created in MongoDB has an object ID associated with it. In the next chapter, we shall soon learn about object IDs in MongoDB. By using these object IDs we can easily identify different documents. They can be considered as primary keys. So, we can also create the reviews collection and the votes collection as follows: users: [ { _id: ObjectId("8d83b612fed0eb0bee000702"), name: "Gautam" }, { _id : ObjectId("ab93b612fed0eb0bee000883"), name: "Harry" } ] reviews: [ { _id: ObjectId("5e85b612fed0eb0bee000001"), user_id: ObjectId("8d83b612fed0eb0bee000702"), book_id: ObjectId("4e81b95ffed0eb0c23000002"), comment: "Very interesting read" }, { _id: ObjectId("4585b612fed0eb0bee000003"), user_id : ObjectId("ab93b612fed0eb0bee000883"), book_id: ObjectId("4e81b95ffed0eb0c23000002"), comment: "Who is Oliver Twist?" } ] votes: [ { _id: ObjectId("6e95b612fed0eb0bee000123"), user_id : ObjectId("8d83b612fed0eb0bee000702"), book_id: ObjectId("4e81b95ffed0eb0c23000002"), }, { _id: ObjectId("4585b612fed0eb0bee000003"), user_id : ObjectId("ab93b612fed0eb0bee000883"), } ] What just happened? Hmm!! Not very interesting, is it? It doesn't even seem right. That's because it isn't the right choice in this context. It's very important to know how to choose between nesting documents and relating them. In your object model, if you will never search by the nested document (that is, look up for the parent from the child), embed it. Just in case you are not sure about whether you would need to search by an embedded document, don't worry too much – it does not mean that you cannot search among embedded objects. You can use Map/Reduce to gather the information. Comparing MongoDB versus SQL syntax This is a good time to sit back and evaluate the similarities and dissimilarities between the MongoDB syntax and the SQL syntax. Let's map them together: SQL commands NoSQL (MongoDB) equivalent SELECT * FROM books db.books.find() SELECT * FROM books WHERE id = 3; db.books.find( { id : 3 } ) SELECT * FROM books WHERE name LIKE 'Oliver%' db.books.find( { name : /^Oliver/ } ) SELECT * FROM books WHERE name like '%Oliver%' db.books.find( { name : /Oliver/ } ) SELECT * FROM books WHERE publisher = 'Dover Publications' AND published_date = "2011-8-01" db.books.find( { publisher : "Dover Publications", published_date : ISODate("2011-8-01") } ) SELECT * FROM books WHERE published_date > "2011-8-01" db.books.find ( { published_date : { $gt : ISODate("2011-8-01") } } ) SELECT name FROM books ORDER BY published_date db.books.find( {}, { name : 1 } ).sort( { published_date : 1 } ) SELECT name FROM books ORDER BY published_date DESC db.books.find( {}, { name : 1 } ).sort( { published_date : -1 } ) SELECT votes.name from books JOIN votes where votes.book_id = books.id db.books.find( { votes : { $exists : 1 } }, { votes.name : 1 } ) Some more notable comparisons between MongoDB and relational databases are: MongoDB does not support joins. Instead it fires multiple queries or uses Map/Reduce. We shall soon see why the NoSQL faction does not favor joins. SQL has stored procedures. MongoDB supports JavaScript functions. MongoDB has indexes similar to SQL. MongoDB also supports Map/Reduce functionality. MongoDB supports atomic updates like SQL databases. Embedded or related objects are used sometimes instead of a SQL join. 
MongoDB collections are analogous to SQL tables. MongoDB documents are analogous to SQL rows. Using Map/Reduce instead of join We have seen this mentioned a few times earlier—it's worth jumping into it, at least briefly. Map/Reduce is a concept that was introduced by Google in 2004. It's a way of distributed task processing. We "map" tasks to works and then "reduce" the results. Understanding functional programming Functional programming is a programming paradigm that has its roots from lambda calculus. If that sounds intimidating, remember that JavaScript could be considered a functional language. The following is a snippet of functional programming: $(document).ready( function () { $('#element').click( function () { # do something here }); $('#element2').change( function () { # do something here }) }); We can have functions inside functions. Higher-level languages (such as Java and Ruby) support anonymous functions and closures but are still procedural functions. Functional programs rely on results of a function being chained to other functions. Building the map function The map function processes a chunk of data. Data that is fed to this function could be accessed across a distributed filesystem, multiple databases, the Internet, or even any mathematical computation series! function map(void) -> void The map function "emits" information that is collected by the "mystical super gigantic computer program" and feeds that to the reducer functions as input. MongoDB as a database supports this paradigm making it "the all powerful" (of course I am joking, but it does indeed make MongoDB very powerful). Time for action – writing the map function for calculating vote statistics Let's assume we have a document structure as follows: { name: "Oliver Twist", votes: ['Gautam', 'Harry'] published_on: "December 30, 2002" } The map function for such a structure could be as follows: function() { emit( this.name, {votes : this.votes} ); } What just happened? The emit function emits the data. Notice that the data is emitted as a (key, value) structure. Key: This is the parameter over which we want to gather information. Typically it would be some primary key, or some key that helps identify the information. For the SQL savvy, typically the key is the field we use in the GROUP BY clause. Value: This is a JSON object. This can have multiple values and this is the data that is processed by the reduce function. We can call emit more than once in the map function. This would mean we are processing data multiple times for the same object. Building the reduce function The reduce functions are the consumer functions that process the information emitted from the map functions and emit the results to be aggregated. For each emitted data from the map function, a reduce function emits the result. MongoDB collects and collates the results. This makes the system of collection and processing as a massive parallel processing system giving the all mighty power to MongoDB. The reduce functions have the following signature: function reduce(key, values_array) -> value Time for action – writing the reduce function to process emitted information This could be the reduce function for the previous example: function(key, values) { var result = {votes: 0} values.forEach(function(value) { result.votes += value.votes; }); return result; } What just happened? reduce takes an array of values – so it is important to process an array every time. There are various options to Map/Reduce that help us process data. 
Let's analyze this function in more detail: function(key, values) { var result = {votes: 0} values.forEach(function(value) { result.votes += value.votes; }); return result; } The variable result has a structure similar to what was emitted from the map function. This is important, as we want the results from every document in the same format. If we need to process more results, we can use the finalize function (more on that later). The result function has the following structure: function(key, values) { var result = {votes: 0} values.forEach(function(value) { result.votes += value.votes; }); return result; } The values are always passed as arrays. It's important that we iterate the array, as there could be multiple values emitted from different map functions with the same key. So, we processed the array to ensure that we don't overwrite the results and collate them.
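The excerpt defines the map and reduce functions but does not show how to run them. The following is a minimal mongo shell sketch of invoking them against the books collection. The output collection name book_vote_stats is an illustrative assumption, and the map here emits votes.length (rather than the raw votes array) so that the numeric accumulation in reduce stays consistent when the same key is emitted more than once.

var map = function() {
    // Emit the number of votes for each book
    emit(this.name, { votes: this.votes ? this.votes.length : 0 });
};

var reduce = function(key, values) {
    var result = { votes: 0 };
    values.forEach(function(value) {
        result.votes += value.votes;
    });
    return result;
};

// Run the job and write the aggregated counts to a collection
db.books.mapReduce(map, reduce, { out: "book_vote_stats" });

// List the top voted books, as mentioned at the start of the example
db.book_vote_stats.find().sort({ "value.votes": -1 });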