
Media Queries with Less

Packt
21 Oct 2014
9 min read
In this article by Alex Libby, author of Learning Less.js, we'll see how Less can make creating media queries a cinch; we will cover the following topics:

- How media queries work
- What's wrong with CSS?
- Creating a simple example

Introducing media queries

If you've ever spent time creating content for sites, particularly for display on a mobile platform, then you might have come across media queries. For those of you who are new to the concept, media queries are a means of tailoring the content that is displayed on screen when the viewport is resized to a smaller size. Historically, websites were always built at a static size—with more and more people viewing content on smartphones and tablets, this made them harder to view, as scrolling around a page can be a tiresome process! Thankfully, this became less of an issue with the advent of media queries—they help us control what should or should not be displayed when viewing content on a particular device. Almost all modern browsers offer native support for media queries—the only exception being IE Version 8 or below, where they are not supported natively.

Media queries always begin with @media and consist of two parts, as in this example:

@media only screen and (min-width: 530px) and (max-width: 949px) { ... }

- The first part, only screen, determines the media type where a rule should apply—in this case, it will only show the rule if we're viewing content on screen; content viewed when printed can easily be different.
- The second part, or media feature, (min-width: 530px) and (max-width: 949px), means the rule will only apply between a screen size set at a minimum of 530px and a maximum of 949px. This will rule out any smartphones and will apply to larger tablets, laptops, or PCs.

There are literally dozens of combinations of media queries to suit a variety of needs—for some good examples, visit http://cssmediaqueries.com/overview.html, where you can see an extensive list, along with an indication of whether each query is supported in the browser you normally use.
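As an aside of my own (not from the article), a media feature with both bounds behaves like a simple inclusive range test, which can be sketched in TypeScript:

```typescript
// Hypothetical helper mirroring the media feature
// (min-width: 530px) and (max-width: 949px) described above.
function matchesTabletRange(viewportWidth: number): boolean {
  // A query with both bounds applies only when the viewport
  // width falls inside the inclusive range.
  return viewportWidth >= 530 && viewportWidth <= 949;
}

console.log(matchesTabletRange(480));  // smartphone width: false
console.log(matchesTabletRange(768));  // tablet width: true
console.log(matchesTabletRange(1024)); // desktop width: false
```

In a browser, the same check is what the CSS engine performs for you on every resize.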
Media queries are perfect for dynamically adjusting your site to work in multiple browsers—indeed, they are an essential part of a responsive web design. While browsers support media queries, there are some limitations we need to consider; let's take a look at these now.

The limitations of CSS

If we spend any time working with media queries, there are some limitations we need to consider; these apply equally whether we are writing in Less or plain CSS:

- Not every browser supports media features uniformly; to see the differences, visit http://cssmediaqueries.com/overview.html using different browsers.
- Current thinking is that a range of breakpoints has to be provided; this can result in a lot of duplication and a constant battle to keep up with numerous different screen sizes!
- The @media keyword is not supported in IE8 or below; you will need to use JavaScript or jQuery to achieve the same result, or a library such as Modernizr to provide a graceful fallback option.
- Writing media queries will tie your design to a specific display size; this increases the risk of duplication, as you might want the same element to appear in multiple breakpoints but have to write individual rules to cover each breakpoint.

Breakpoints are the points where your design will break if it is resized larger or smaller than a particular set of dimensions. The traditional thinking is that we have to provide different style rules for different breakpoints within our style sheets. While this is valid, ironically it is something we should not follow! The reason for this is the potential proliferation of breakpoint rules that you might need to add, just to manage a site. With care, planning, and a design-based breakpoints mindset, we can often get away with fewer rules: a single breakpoint can frequently cover a range of sizes without the need for more. The key to the process is to start small, then increase the size of your display.
As soon as your design breaks (this is where your first breakpoint is), add a query to fix it, and then keep doing this until you reach your maximum size. Okay, so we've seen what media queries are; let's change tack and look at what you need to consider when working with clients, before getting down to writing the queries in code.

Creating a simple example

The best way to see how media queries work is in the form of a simple demo. In this instance, we have a simple set of requirements in terms of what should be displayed at each size:

- We need to cater for four different sizes of content
- The small version must show the editors as plain-text e-mail links, with no decoration
- For medium-sized screens, we will add an icon before the link
- On large screens, we will add an e-mail address after the e-mail links
- On extra-large screens, we will combine the medium and large breakpoints together, so both icons and e-mail addresses are displayed

In all instances, we will have a simple container in which there will be some dummy text and a list of editors. The media queries we create will control the appearance of the editor list, depending on the window size of the browser being used to display the content. Next, add the following code to a new document.
We'll go through it section by section, starting with the variables created for our media queries:

@small: ~"(max-width: 699px) and (min-width: 520px)";
@medium: ~"(max-width: 1000px) and (min-width: 700px)";
@large: ~"(min-width: 1001px)";
@xlarge: ~"(min-width: 1151px)";

Next comes some basic styles to define margins, font sizes, and styles:

* { margin: 0; padding: 0; }
body { font: 14px Georgia, serif; }
h3 { margin: 0 0 8px 0; }
p { margin: 0 25px; }

We need to set sizes for each area within our demo, so go ahead and add the following styles:

#fluid-wrap {
  width: 70%;
  margin: 60px auto;
  padding: 20px;
  background: #eee;
  overflow: hidden;
}

#main-content {
  width: 65%;
  float: right;
}

#sidebar {
  width: 35%;
  float: left;
  ul { list-style: none; }
  ul li a {
    color: #900;
    text-decoration: none;
    padding: 3px 0;
    display: block;
  }
}

Now that the basic styles are set, we can add our media queries—beginning with the query catering for small screens, where we simply display an e-mail logo:

@media @small {
  #sidebar ul li a {
    padding-left: 21px;
    background: url(../img/email.png) left center no-repeat;
  }
}

The medium query comes next; here, we add the word Email before the e-mail address instead:

@media @medium {
  #sidebar ul li a:before {
    content: "Email: ";
    font-style: italic;
    color: #666;
  }
}

In the large media query, we switch to showing the name first, followed by the e-mail (the latter extracted from the data-email attribute):

@media @large {
  #sidebar ul li a:after {
    content: " (" attr(data-email) ")";
    font-size: 11px;
    font-style: italic;
    color: #666;
  }
}

We finish with the extra-large query, where we use the e-mail address format shown in the large media query, but add an e-mail logo to it:

@media @xlarge {
  #sidebar ul li a {
    padding-left: 21px;
    background: url(../img/email.png) left center no-repeat;
  }
}

Save the file as simple.less. Now that our files are prepared, let's preview the results in a browser.
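As a side note of my own (not from the article), the four ranges defined by these variables can overlap—an extra-large width also satisfies @large—so several @media blocks can apply at once. A small TypeScript sketch makes it easy to check which queries apply at a given width:

```typescript
// Hypothetical model of the four breakpoint ranges defined by the
// Less variables above; the names and bounds mirror them directly.
const breakpoints: { name: string; min: number; max: number }[] = [
  { name: "small",  min: 520,  max: 699 },
  { name: "medium", min: 700,  max: 1000 },
  { name: "large",  min: 1001, max: Infinity },
  { name: "xlarge", min: 1151, max: Infinity },
];

function activeQueries(width: number): string[] {
  // A width can satisfy several queries at once, just as a
  // browser applies every matching @media block.
  return breakpoints
    .filter(bp => width >= bp.min && width <= bp.max)
    .map(bp => bp.name);
}

console.log(activeQueries(600));  // ["small"]
console.log(activeQueries(1200)); // ["large", "xlarge"]
```

This overlap is deliberate in the demo: the @xlarge rule adds the logo on top of whatever the @large rule already contributed.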
For this, I recommend that you use the Responsive Design View within Firefox (activated by pressing Ctrl + Shift + M). Once activated, resize the view to 416 x 735; here, we can see that only the name is displayed as an e-mail link. Increasing the size to 544 x 735 adds an e-mail logo, while still keeping the same name/e-mail format as before. If we increase it further to 716 x 735, the e-mail logo changes to the word Email, as seen in the following screenshot. Let's increase the size even further, to 1029 x 735; the format changes again, to a name/e-mail link followed by an e-mail address in parentheses. In our final change, increase the size to 1182 x 735. Here, we can see the previous style being used, but with the addition of an e-mail logo.

These screenshots illustrate perfectly how you can resize your screen and still maintain a suitable layout for each device you decide to support; let's take a moment to consider how the code works. The accepted practice for developers is to work "mobile first": create the smallest view so it is perfect, then increase the size of the screen and adjust the content until the maximum size is reached. This works perfectly well for new sites, but the principle might have to be reversed if a mobile view is being retrofitted to an existing site. In our instance, we've produced the content for a full-size screen first.

From a Less perspective, there is nothing here that is new—we've used nesting for the #sidebar div, but otherwise the rest of this part of the code is standard CSS. The magic happens in two parts. Immediately at the top of the file, we've set a number of Less variables, which encapsulate the media definition strings we use in the queries. Here, we've created four definitions, ranging from @small (for devices between 520px and 699px) right through to @xlarge for widths of 1151px or more.
We then take each of the variables and use them within each query as appropriate; for example, the @small query is set as shown in the following code:

@media @small {
  #sidebar ul li a {
    padding-left: 21px;
    background: url(../img/email.png) left center no-repeat;
  }
}

In the preceding code, we have standard CSS style rules to display an e-mail logo before the name/e-mail link. Each of the other queries follows exactly the same principle; they will each compile into valid CSS rules when run through Less.

Summary

Media queries have rapidly become a de facto part of responsive web design. We started our journey through media queries with a brief introduction, followed by a review of some of the limitations that we must work around and the considerations to bear in mind when working with clients. We then covered how to create a simple media query.
Introduction to TypeScript

Packt
20 Oct 2014
16 min read
One of the primary benefits of compiled languages is that they provide a plainer syntax for the developer to work with before the code is eventually converted to machine code. TypeScript is able to bring this advantage to JavaScript development by wrapping several different patterns into language constructs that allow us to write better code. Every explicit type annotation that is provided is simply syntactic sugar that will be removed during compilation, but not before its constraints are analyzed and any errors are caught. In this article by Christopher Nance, the author of TypeScript Essentials, we will explore this type system in depth. We will also discuss the different language structures that TypeScript introduces, and we will look at how these structures are emitted by the compiler into plain JavaScript. This article contains a detailed look at each of these concepts:

- Types
- Functions
- Interfaces
- Classes

Types

Type annotations put a specific set of constraints on the variables being created. These constraints allow the compiler and development tools to better assist in the proper use of the object. This includes a list of functions, variables, and properties available on the object. If a variable is created and no type is provided for it, TypeScript will attempt to infer the type from the context in which it is used. For instance, in the following code, we do not explicitly declare the variable hello as a string; however, since it is created with an initial value, TypeScript is able to infer that it should always be treated as a string:

var hello = "Hello There";

The ability of TypeScript to do this contextual typing provides development tools with the ability to enhance the development experience in a variety of ways. The type information allows our IDE to warn us of potential errors in our code, or to provide intelligent code completion and suggestions.
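The inference described above can be exercised directly; this small sketch is my own, not from the book:

```typescript
// No annotation here: the compiler infers `hello: string` from
// its initializer, so string members are available immediately.
var hello = "Hello There";
var shouted = hello.toUpperCase();

// Re-assigning a number would now be a compile-time error:
// hello = 42; // Type 'number' is not assignable to type 'string'

console.log(shouted); // "HELLO THERE"
```

The constraint exists purely at compile time; the emitted JavaScript is just the two variable declarations.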
As you can see from the following screenshot, Visual Studio is able to provide a list of methods and properties associated with string objects, as well as their type information. When an object's type is not given and cannot be inferred from its initialization, it will be treated as the Any type. The Any type is the base type for all other types in TypeScript. It can represent any JavaScript value, and the minimum amount of type checking is performed on objects of type Any. Every other type that exists in TypeScript falls into one of three categories: primitive types, object types, or type parameters.

TypeScript's primitive types closely mirror those of JavaScript. The TypeScript primitive types are as follows:

- Number: var myNum: number = 2;
- Boolean: var myBool: boolean = true;
- String: var myString: string = "Hello";
- Void: function(): void { var x = 2; }
- Null: if (x != null) { alert(x); }
- Undefined: if (x != undefined) { alert(x); }

All of these types correspond directly to JavaScript's primitive types except for Void. The Void type is meant to represent the absence of a value: a function that returns no value has a return type of void.

Object types are the most common types you will see in TypeScript, and they are made up of references to classes, interfaces, and anonymous object types. Object types consist of a complex set of members, which fall into one of four categories: properties, call signatures, constructor signatures, or index signatures.

Type parameters are used when referencing generic types or calling generic functions. They keep code generic enough to be used on a multitude of objects while limiting those objects to a specific set of constraints. An early example of generics that we can cover is arrays. Arrays exist just as they do in JavaScript, with an extra set of type constraints placed upon them.
The array object itself carries the type constraints and methods that come with being an object of the Array type; the second piece of information that comes from an array declaration is the type of the objects contained in the array. There are two ways to explicitly type an array; otherwise, the contextual typing system will attempt to infer the type information:

var array1: string[] = [];
var array2: Array<string> = [];

Both of these examples are completely legal ways of declaring an array. They both generate the same JavaScript output and they both provide the same type information. The first example is a shorthand type literal using the [ and ] characters to create the array type. The resulting JavaScript for each of these arrays is shown as follows:

var array1 = [];
var array2 = [];

Despite all of the type annotations and compile-time checking, TypeScript compiles to plain JavaScript and therefore adds absolutely no overhead to the runtime speed of your applications. All of the type annotations are removed from the final code, providing us with both a much richer development experience and a clean finished product.

Functions

If you are at all familiar with JavaScript, you will be very familiar with the concept of functions. TypeScript adds type annotations to the parameter list as well as the return type. Due to the new constraints being placed on the parameter list, the concept of function overloads was also included in the language specification. TypeScript also takes advantage of JavaScript's arguments object and provides syntax for rest parameters. Let's take a look at a function declaration in TypeScript:

function add(x: number, y: number): number {
    return x + y;
}

As you can see, we have created a function called add. It takes two parameters that are both of the type number, one of the primitive types, and it returns a number. This function is useful in its current form, but it is a little limited in overall functionality.
What if we want to add a third number to the first two? Then we have to call our function multiple times. TypeScript provides a way to declare optional parameters for functions. We can modify our function to take a third parameter, z, that will get added to the first two numbers, as shown in the following code:

function add(x: number, y: number, z?: number) {
    if (z !== undefined) {
        return x + y + z;
    }
    return x + y;
}

As you can see, we have a third named parameter now, but this one is followed by ?. This tells the compiler that this parameter is not required for the function to be called. Optional parameters tell the compiler not to generate an error if the parameter is not provided when the function is called. In JavaScript, this compile-time checking is not performed, meaning an exception could occur at runtime because each missing parameter will have a value of undefined. It is the responsibility of the developer to write code that verifies a value exists before attempting to use it.

So now we can add three numbers together, and we haven't broken any of our previous code that relied on the add method only taking two parameters. This has added a little bit more functionality, but I think it would be nice to extend this code to operate on multiple types. We know that strings can be added together just the same as numbers can, so why not use the same method? In its current form, though, passing strings to the add function will result in compilation errors. We will modify the function's definition to take not only numbers but strings as well, as shown in the following code:

function add(x: string, y: string): string;
function add(x: number, y: number): number;
function add(x: any, y: any): any {
    return x + y;
}

As you can see, we now have two declarations of the add function: one for strings, one for numbers, and then we have the final implementation using the any type.
The signature of the actual function implementation is not included in the function's type definition, though. Attempting to call our add method with anything other than a number or a string will fail at compile time; however, the overloads have no effect on the generated JavaScript. All of the type annotations are stripped out, as are the overloads, and all we are left with is a very simple JavaScript method:

function add(x, y) {
    return x + y;
}

Great, so now we have a multipurpose add function that can take two values and combine them together, for either strings or numbers. This still feels a little limited in overall functionality, though. What if we wanted to add an indeterminate number of values together? We would have to call our add method over and over again until we eventually had only one value. Thankfully, TypeScript includes rest parameters, which are essentially an unbounded list of optional parameters. The following code shows how to modify our add functions to include a rest parameter:

function add(arg1: string, ...args: string[]): string;
function add(arg1: number, ...args: number[]): number;
function add(arg1: any, ...args: any[]): any {
    var total = arg1;
    for (var i = 0; i < args.length; i++) {
        total += args[i];
    }
    return total;
}

A rest parameter can only be the final parameter in a function's declaration. The TypeScript compiler recognizes the syntax of this final parameter and generates an extra bit of JavaScript that builds a shifted array from the JavaScript arguments object available inside the function.
The resulting JavaScript code shows the loop that the compiler has added to create the array that represents our indeterminate list of parameters:

function add(arg1) {
    var args = [];
    for (var _i = 0; _i < (arguments.length - 1); _i++) {
        args[_i] = arguments[_i + 1];
    }
    var total = arg1;
    for (var i = 0; i < args.length; i++) {
        total += args[i];
    }
    return total;
}

Now adding numbers and strings together is very simple and is completely type-safe. If you attempt to mix the different parameter types, a compile error will occur. The first two of the following statements are legal calls to our add function; however, the third is not, because the objects being passed in are not of the same type:

alert(add("Hello ", "World!"));
alert(add(3, 5, 9, 120, 42));
//Error
alert(add(3, "World!"));

We are still very early into our exploration of TypeScript, but the benefits are already very apparent. There are still a few features of functions that we haven't covered yet, but we need to learn more about the language first. Next, we will discuss the interface construct and the benefits it provides at absolutely no cost.

Interfaces

Interfaces are a key piece of creating large-scale software applications. They are a way of describing complex types for any object. Despite their usefulness, they have absolutely no runtime consequences, because JavaScript does not include any sort of runtime type checking. Interfaces are analyzed at compile time and then omitted from the resulting JavaScript. Interfaces create a contract for developers to use when developing new objects or writing methods to interact with existing ones. Interfaces are named types that contain a list of members. Let's look at an example of an interface:

interface IPoint {
    x: number;
    y: number;
}

As you can see, we use the interface keyword to start the interface declaration. Then we give the interface a name that we can easily reference from our code.
Interfaces can be named anything, for example, foo or bar; however, a simple naming convention will improve the readability of the code. Interfaces will be given the format I<name> and object types will just use <name>, for example, IFoo and Foo. An interface's declaration body contains just a list of members and functions and their types. Interface members can only be instance members of an object; using the static keyword in an interface declaration will result in a compile error.

Interfaces have the ability to inherit from base types. This interface inheritance allows us to extend existing interfaces into an enhanced version, as well as merge separate interfaces together. To create an inheritance chain, interfaces use the extends clause. The extends clause is followed by a comma-separated list of types that the interface will merge with.

interface IAdder {
    add(arg1: number, ...args: number[]): number;
}

interface ISubtractor {
    subtract(arg1: number, ...args: number[]): number;
}

interface ICalculator extends IAdder, ISubtractor {
    multiply(arg1: number, ...args: number[]): number;
    divide(arg1: number, arg2: number): number;
}

Here, we see three interfaces:

- IAdder, which defines a type that must implement the add method that we wrote earlier
- ISubtractor, which defines a new method called subtract that any object typed with ISubtractor must define
- ICalculator, which extends both IAdder and ISubtractor, as well as defining two new methods that perform operations a calculator would be responsible for, which an adder or subtractor wouldn't perform

These interfaces can now be referenced in our code as type parameters or type declarations. Interfaces cannot be directly instantiated, and attempting to reference the members of an interface by using its type name directly will result in an error. In the following function declaration, the ICalculator interface is used to restrict the object type that can be passed to the function.
The compiler can now examine the function body, infer all of the type information associated with the calculator parameter, and warn us if the object used does not implement this interface:

function performCalculations(calculator: ICalculator, num1, num2) {
    calculator.add(num1, num2);
    calculator.subtract(num1, num2);
    calculator.multiply(num1, num2);
    calculator.divide(num1, num2);
    return true;
}

The last thing that you need to know about interface definitions is that their declarations are open-ended and will implicitly merge together if they have the same type name. Our ICalculator interface could have been split into two separate declarations, with each one adding its own list of base types and its own list of members. The resulting type definition from the following declaration is equivalent to the declaration we saw previously:

interface ICalculator extends IAdder {
    multiply(arg1: number, ...args: number[]): number;
}

interface ICalculator extends ISubtractor {
    divide(arg1: number, arg2: number): number;
}

Creating large-scale applications requires code that is flexible and reusable. Interfaces are a key component of keeping TypeScript as flexible as plain JavaScript while still allowing us to take advantage of the type checking provided at compile time. Your code doesn't have to be dependent on existing object types, and it will be ready for any new object types that might be introduced. The TypeScript compiler also implements a duck typing system that allows us to create objects on the fly while keeping type safety.
The following example shows how we can pass objects that don't explicitly implement an interface, but contain all of the required members, to a function:

function addPoints(p1: IPoint, p2: IPoint): IPoint {
    var x = p1.x + p2.x;
    var y = p1.y + p2.y;
    return { x: x, y: y };
}

//Valid
var newPoint = addPoints({ x: 3, y: 4 }, { x: 5, y: 1 });
//Error
var newPoint2 = addPoints({ x: 1 }, { x: 4, y: 3 });

Classes

In the next version of JavaScript, ECMAScript 6, a standard has been proposed for the definition of classes. TypeScript brings this concept to the current versions of JavaScript. Classes consist of a variety of different properties and members, which can be either public or private, and either static or instance members.

Definitions

Creating classes in TypeScript is essentially the same as creating interfaces. Let's create a very simple Point class that keeps track of an x and a y position for us:

class Point {
    public x: number;
    public y: number;
    constructor(x: number, y = 0) {
        this.x = x;
        this.y = y;
    }
}

As you can see, defining a class is very simple: use the keyword class and then provide a name for the new type. Then you create a constructor for the object, with any parameters you wish to provide upon creation. Our Point class requires two values that represent a location on a plane. The constructor is completely optional; if a constructor implementation is not provided, the compiler will automatically generate one that takes no parameters and initializes any instance members.

We provided a default value for the property y. This default value tells the compiler to generate an extra JavaScript statement compared to only giving it a type. It also allows TypeScript to treat parameters with default values as optional parameters: if the parameter is not provided, the parameter's value is assigned to the default value you supply. This provides a simple method for ensuring that you are always operating on instantiated objects.
The best part is that default values are available for all functions, not just constructors. Now let's examine the JavaScript output for the Point class:

var Point = (function () {
    function Point(x, y) {
        if (typeof y === "undefined") { y = 0; }
        this.x = x;
        this.y = y;
    }
    return Point;
})();

As you can see, a new object is created and assigned to the result of an immediately invoked anonymous function that contains the definition of the Point class. As we will see later, any public methods or static members will be added to the inner Point function's prototype.

JavaScript closures are a very important concept in understanding TypeScript. Classes, modules, and enums in TypeScript all compile into JavaScript closures. Closures are a construct of the JavaScript language that provide a way of creating a private state for a specific segment of code. When a closure is created, it contains two things: a function, and the state of the environment when the function was created. The function is returned to the caller of the closure, and the state is used when the function is called. For more information about JavaScript closures and the module pattern, visit http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html.

In the generated code, the optional parameter is accounted for by checking its type and initializing it if a value is not available. You can also see that both the x and y properties are added to the new instance and assigned to the values that were passed into the constructor.

Summary

This article has discussed the different language constructs in TypeScript: types, functions, interfaces, and classes.
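The closure mechanism described above is easier to see in a hand-written miniature; this sketch is my own illustration, not from the book:

```typescript
// A closure: the returned function keeps access to `count`,
// which stays private to each counter instance.
function makeCounter(): () => number {
  var count = 0; // private state captured by the closure
  return function () {
    count++;
    return count;
  };
}

var counterA = makeCounter();
var counterB = makeCounter();
console.log(counterA()); // 1
console.log(counterA()); // 2
console.log(counterB()); // 1 -- each closure holds separate state
```

The compiled Point class uses the same trick: the immediately invoked function creates a private scope, and whatever it returns becomes the public surface.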
Handle Web Applications

Packt
20 Oct 2014
13 min read
In this article by Ivo Balbaert, author of Dart Cookbook, we will cover the following recipes:

- Sanitizing HTML
- Using a browser's local storage
- Using an application cache to work offline
- Preventing an onSubmit event from reloading the page

Sanitizing HTML

We've all heard of (or perhaps even experienced) cross-site scripting (XSS) attacks, where evil-minded attackers try to inject client-side script or SQL statements into web pages. This could be done to gain access to session cookies or database data, or to get elevated access privileges to sensitive page content. Verifying an HTML document and producing a new HTML document that preserves only whatever tags are designated safe is called sanitizing the HTML.

How to do it...

Look at the web project sanitization. Run the following script and see how the text content and default sanitization work:

var elem1 = new Element.html('<div class="foo">content</div>');
document.body.children.add(elem1);
var elem2 = new Element.html('<script class="foo">evil content</script><p>ok?</p>');
document.body.children.add(elem2);

The text content and ok? from elem1 and elem2 are displayed, but the console gives the message Removing disallowed element <SCRIPT>. So a script is removed before it can do harm.

Sanitize using HtmlEscape, which is mainly used with user-generated content:

import 'dart:convert' show HtmlEscape;

In main(), use the following code:

var unsafe = '<script class="foo">evil content</script><p>ok?</p>';
var sanitizer = const HtmlEscape();
print(sanitizer.convert(unsafe));

This prints the following output to the console:

&lt;script class=&quot;foo&quot;&gt;evil content&lt;&#x2F;script&gt;&lt;p&gt;ok?&lt;&#x2F;p&gt;

Sanitize using node validation.
The following code forbids the use of a <p> tag in node1; only <a> tags are allowed:

var html_string = '<p class="note">a note aside</p>';
var node1 = new Element.html(
    html_string,
    validator: new NodeValidatorBuilder()
      ..allowElement('a', attributes: ['href'])
);

The console prints the following output:

Removing disallowed element <p>
Breaking on exception: Bad state: No elements

A NullTreeSanitizer for no validation is used as follows:

final allHtml = const NullTreeSanitizer();

class NullTreeSanitizer implements NodeTreeSanitizer {
    const NullTreeSanitizer();
    void sanitizeTree(Node node) {}
}

It can also be used as follows:

var elem3 = new Element.html('<p>a text</p>');
elem3.setInnerHtml(html_string, treeSanitizer: allHtml);

How it works...

First, some very good news: Dart automatically sanitizes all methods through which HTML elements are constructed, such as new Element.html(), Element.innerHtml(), and a few others. With them, you can build HTML hardcoded, but also through string interpolation, which entails more risk. The default sanitization removes all scriptable elements and attributes.

If you want to escape all characters in a string so that they are transformed into HTML special characters (such as &#x2F; for a /), use the class HtmlEscape from dart:convert, as shown in the second step. The default behavior is to escape apostrophes, greater-than/less-than signs, quotes, and slashes.

If your application is putting untrusted HTML into variables, it is strongly advised to use a validation scheme that only covers the syntax you expect users to feed into your app. This is possible because Element.html() has the following optional arguments:

Element.html(String html, {NodeValidator validator, NodeTreeSanitizer treeSanitizer})

In step 3, only <a> was an allowed tag. By adding more allowElement rules in cascade, you can allow more tags. Using allowHtml5() permits all HTML5 tags.
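The HtmlEscape defaults described above (apostrophes, angle brackets, quotes, and slashes) can be approximated in other languages too; as a rough sketch of the same idea, written in TypeScript rather than Dart and not part of the recipe, an equivalent escaper might look like this:

```typescript
// Escapes the characters Dart's HtmlEscape handles by default:
// & < > " ' /  (ampersand first, so entities aren't double-escaped)
function htmlEscape(unsafe: string): string {
  return unsafe
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;")
    .replace(/\//g, "&#x2F;");
}

console.log(htmlEscape('<script class="foo">evil</script>'));
// &lt;script class=&quot;foo&quot;&gt;evil&lt;&#x2F;script&gt;
```

Escaping turns markup into inert text; it is a different strategy from node validation, which removes disallowed elements entirely.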
If you want to remove all control in some cases (perhaps you are dealing with known safe HTML and need to bypass sanitization for performance reasons), you can add the class NullTreeSanitizer to your code, which has no control at all and defines an object allHtml, as shown in step 4. Then, use setInnerHtml() with an optional named attribute treeSanitizer set to allHtml. Using a browser's local storage Local storage (also called the Web Storage API) is widely supported in modern browsers. It enables the application's data to be persisted locally (on the client side) as a map-like structure: a dictionary of key-value string pairs, in fact using JSON strings to store and retrieve data. It provides our application with an offline mode of functioning when the server is not available to store the data in a database. Local storage does not expire, but every application can only access its own data up to a certain limit depending on the browser. In addition, of course, different browsers can't access each other's stores. How to do it... Look at the following example, the local_storage.dart file: import 'dart:html';  Storage local = window.localStorage;  void main() { var job1 = new Job(1, "Web Developer", 6500, "Dart Unlimited") ; Perform the following steps to use the browser's local storage: Write to a local storage with the key Job:1 using the following code: local["Job:${job1.id}"] = job1.toJson; ButtonElement bel = querySelector('#readls'); bel.onClick.listen(readShowData); } A click on the button checks to see whether the key Job:1 can be found in the local storage, and, if so, reads the data in. 
This is then shown in the data <div>: readShowData(Event e) {    var key = 'Job:1';    if(local.containsKey(key)) { // read data from local storage:    String job = local[key];    querySelector('#data').appendText(job); } }   class Job { int id; String type; int salary; String company; Job(this.id, this.type, this.salary, this.company); String get toJson => '{ "type": "$type", "salary": "$salary", "company": "$company" } '; } The following screenshot depicts how data is stored in and retrieved from local storage: How it works... You can store data with a certain key in the local storage from the Window class using window.localStorage[key] = data; (both key and data are Strings). You can retrieve it with var data = window.localStorage[key];. In our code, we used the abbreviation Storage local = window.localStorage; because local is a map. You can check the existence of this piece of data in the local storage with containsKey(key). In Chrome (and in other browsers via Developer Tools), you can verify this by navigating to Extra | Tools | Resources | Local Storage (as shown in the previous screenshot). window.localStorage also has a length property; you can query whether it is empty with isEmpty, and you can loop through all stored values using the following code: for(var key in window.localStorage.keys) { String value = window.localStorage[key]; // more code }
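The serialize-on-write, parse-on-read pattern used in the Job example does not depend on the browser at all. Here is a small JavaScript sketch of the same pattern, with the backing map injectable so the logic can run (and be tested) outside a browser; the makeJobStore name and its method names are illustrative assumptions, not part of any Dart or Web Storage API:

```javascript
// Sketch of the local-storage pattern: values are serialized to JSON
// strings on write and parsed on read, as the Dart example does with
// job1.toJson. `storage` is any string-to-string map; in a browser you
// would pass window.localStorage.
function makeJobStore(storage) {
  return {
    save(job) {
      storage['Job:' + job.id] = JSON.stringify(job);
    },
    load(id) {
      const key = 'Job:' + id;
      // containsKey(key) in Dart corresponds to an `in` check here.
      return key in storage ? JSON.parse(storage[key]) : null;
    },
  };
}

// Usage with a plain object standing in for window.localStorage:
const store = makeJobStore({});
store.save({ id: 1, type: 'Web Developer', salary: 6500, company: 'Dart Unlimited' });
console.log(store.load(1).type); // Web Developer
```

Injecting the storage map is what makes the read/write logic testable without a browser; the same shape works unchanged against the real window.localStorage.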
However, it can only store strings and is a blocking (synchronous) API; this means that it can temporarily pause your web page from responding while it is doing its job storing or reading large amounts of data such as images. Moreover, it has a space limit of 5 MB (this varies with browsers); you can't detect when you are nearing this limit and you can't ask for more space. When the limit is reached, an error occurs so that the user can be informed. These properties make local storage only useful as a temporary data storage tool; this means that it is better than cookies, but not suited for a reliable, database kind of storage. Web storage also has another way of storing data called sessionStorage used in the same way, but this limits the persistence of the data to only the current browser session. So, data is lost when the browser is closed or another application is started in the same browser window. Using an application cache to work offline When, for some reason, our users don't have web access or the website is down for maintenance (or even broken), our web-based applications should also work offline. The browser cache is not robust enough to be able to do this, so HTML5 has given us the mechanism of ApplicationCache. This cache tells the browser which files should be made available offline. The effect is that the application loads and works correctly, even when the user is offline. The files to be held in the cache are specified in a manifest file, which has a .mf or .appcache extension. How to do it... Look at the appcache application; it has a manifest file called appcache.mf. The manifest file can be specified in every web page that has to be cached. This is done with the manifest attribute of the <html> tag: <html manifest="appcache.mf"> If a page has to be cached and doesn't have the manifest attribute, it must be specified in the CACHE section of the manifest file. 
The manifest file has the following (minimum) content: CACHE MANIFEST # 2012-09-28:v3  CACHE: Cached1.html appcache.css appcache.dart http://dart.googlecode.com/svn/branches/bleeding_edge/dart/client/dart.js  NETWORK: *  FALLBACK: / offline.html Run cached1.html. This displays the This page is cached, and works offline! text. Change the text to This page has been changed! and reload the browser. You don't see the changed text because the page is created from the application cache. When the manifest file is changed (change version v1 to v2), the cache becomes invalid and the new version of the page is loaded with the This page has been changed! text. The Dart script appcache.dart of the page should contain the following minimal code to access the cache: main() { new AppCache(window.applicationCache); }  class AppCache { ApplicationCache appCache;  AppCache(this.appCache) {    appCache.onUpdateReady.listen((e) => updateReady());    appCache.onError.listen(onCacheError); }  void updateReady() {    if (appCache.status == ApplicationCache.UPDATEREADY) {      // The browser downloaded a new app cache. Alert the user:      appCache.swapCache();      window.alert('A new version of this site is available. Please reload.');    } }  void onCacheError(Event e) {      print('Cache error: ${e}');      // Implement more complete error reporting to developers } } How it works... The CACHE section in the manifest file enumerates all the entries that have to be cached. The NETWORK: and * options mean that to use all other resources the user has to be online. FALLBACK specifies that offline.html will be displayed if the user is offline and a resource is inaccessible. A page is cached when either of the following is true: Its HTML tag has a manifest attribute pointing to the manifest file The page is specified in the CACHE section of the manifest file The browser is notified when the manifest file is changed, and the user will be forced to refresh its cached resources. 
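Before looking at cache invalidation in more detail, note that the update flow of the Dart AppCache class above can be mirrored in plain JavaScript. The sketch below replaces the browser's window.applicationCache with an injectable object so the branch logic can be exercised anywhere; the numeric status values follow the HTML5 ApplicationCache interface (UPDATEREADY is 4), and the onUpdate function name is an assumption made for illustration:

```javascript
// Numeric cache states from the HTML5 ApplicationCache interface.
const CacheStatus = {
  UNCACHED: 0, IDLE: 1, CHECKING: 2,
  DOWNLOADING: 3, UPDATEREADY: 4, OBSOLETE: 5,
};

// Mirrors AppCache.updateReady() from the Dart example: when a new cache
// has been downloaded, swap it in and ask the user to reload.
function onUpdate(appCache, alertFn) {
  if (appCache.status === CacheStatus.UPDATEREADY) {
    appCache.swapCache();
    alertFn('A new version of this site is available. Please reload.');
    return true;
  }
  return false;
}

// Exercising the handler with a stub cache object:
const calls = [];
const stubCache = { status: CacheStatus.UPDATEREADY, swapCache: () => calls.push('swap') };
onUpdate(stubCache, (msg) => calls.push(msg));
console.log(calls[0]); // swap
```

In a real page, onUpdate would be registered on the applicationCache's updateready event and alertFn would be window.alert, just as in the Dart listener.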
Adding a timestamp and/or a version number such as # 2014-05-18:v1 works fine. Changing the date or the version invalidates the cache, and the updated pages are again loaded from the server. To access the browser's app cache from your code, use the window.applicationCache object. Make an object of a class AppCache, and alert the user when the application cache has become invalid (the status is UPDATEREADY) by defining an onUpdateReady listener. There's more... The other known states of the application cache are UNCACHED, IDLE, CHECKING, DOWNLOADING, and OBSOLETE. To log all these cache events, you could add the following listeners to the appCache constructor: appCache.onCached.listen(onCacheEvent); appCache.onChecking.listen(onCacheEvent); appCache.onDownloading.listen(onCacheEvent); appCache.onNoUpdate.listen(onCacheEvent); appCache.onObsolete.listen(onCacheEvent); appCache.onProgress.listen(onCacheEvent); Provide an onCacheEvent handler using the following code: void onCacheEvent(Event e) {    print('Cache event: ${e}'); } Preventing an onSubmit event from reloading the page The default action for a submit button on a web page that contains an HTML form is to post all the form data to the server on which the application runs. What if we don't want this to happen? How to do it... Experiment with the submit application by performing the following steps: Our web page submit.html contains the following code: <form id="form1" action="http://www.dartlang.org" method="POST"> <label>Job:<input type="text" name="Job" size="75"></input>    </label>    <input type="submit" value="Job Search">    </form> Comment out all the code in submit.dart. Run the app, enter a job name, and click on the Job Search submit button; the Dart site appears. 
When the following code is added to submit.dart, clicking on the button no longer makes the Dart site appear: import 'dart:html';  void main() { querySelector('#form1').onSubmit.listen(submit); }  submit(Event e) {      e.preventDefault(); // code to be executed when button is clicked  } How it works... In the first step, when the submit button is pressed, the browser sees that the method is POST. This method collects the data and names from the input fields and sends it to the URL specified in action to be executed, which only shows the Dart site in our case. To prevent the form from posting the data, make an event handler for the onSubmit event of the form. In this handler, calling e.preventDefault(); as the first statement cancels the default submit action. However, the rest of the submit event handler (and even the same handler of a parent control, should there be one) is still executed on the client side. Summary In this article, we learned how to handle web applications, sanitize HTML, use a browser's local storage, use an application cache to work offline, and prevent an onSubmit event from reloading a page. Resources for Article: Further resources on this subject: Handling the DOM in Dart [Article] QR Codes, Geolocation, Google Maps API, and HTML5 Video [Article] HTML5 Game Development – A Ball-shooting Machine with Physics Engine [Article]
Cordova Plugins

Packt
17 Oct 2014
20 min read
In this article by Hazem Saleh, author of JavaScript Mobile Application Development, we will continue to deep dive into Apache Cordova. You will learn how to create your own custom Cordova plugin on the three most popular mobile platforms: Android (using the Java programming language), iOS (using the Objective-C programming language), and Windows Phone 8 (using the C# programming language). (For more resources related to this topic, see here.) Developing a custom Cordova plugin Before going into the details of the plugin, it is important to note that developing custom Cordova plugins is not a common scenario if you are developing Apache Cordova apps. This is because the Apache Cordova core and community custom plugins already cover many of the use cases that are needed to access a device's native functions. So, make sure of two things: You are not developing a custom plugin that already exists in Apache Cordova core plugins. You are not developing a custom plugin whose functionality already exists in other good Apache Cordova custom plugin(s) that are developed by the Apache Cordova development community. Building plugins from scratch can consume precious time from your project; otherwise, you can save time by reusing one of the available good custom plugins. Another thing to note is that developing custom Cordova plugins is an advanced topic. It requires you to be aware of the native programming languages of the mobile platforms, so make sure you have an overview of Java, Objective-C, and C# (or at least one of them) before reading this section. This will be helpful in understanding all the plugin development steps (plugin structuring, JavaScript interface definition, and native plugin implementation). Now, let's start developing our custom Cordova plugin. It can be used in order to send SMS messages from one of the three popular mobile platforms (Android, iOS, and Windows Phone 8). Before we start creating our plugin, we need to define its API. 
The following code listing shows you how to call the sms.sendMessage method of our plugin, which will be used in order to send an SMS across platforms: var messageInfo = {    phoneNumber: "xxxxxxxxxx",    textMessage: "This is a test message" }; sms.sendMessage(messageInfo, function(message) {    console.log("success: " + message); }, function(error) {    console.log("code: " + error.code + ", message: " + error.message); }); The sms.sendMessage method has the following parameters: messageInfo: This is a JSON object that contains two main attributes: phoneNumber, which represents the phone number that will receive the SMS message, and textMessage, which represents the text message to be sent. successCallback: This is a callback that will be called if the message is sent successfully. errorCallback: This is a callback that will be called if the message is not sent successfully. This callback receives an error object as a parameter. The error object has code (the error code) and message (the error message) attributes. Using plugman In addition to the Apache Cordova CLI utility, you can use the plugman utility in order to add or remove plugin(s) to/from your Apache Cordova projects. However, it's worth mentioning that plugman is a lower-level tool that you can use if your Apache Cordova application follows platform-centered workflow and not cross-platform workflow. If your application follows cross-platform workflow, then Apache Cordova CLI should be your choice. If you want your application to run on different mobile platforms (which is a common use case if you want to use Apache Cordova), it's recommend that you follow cross-platform workflow. Use platform-centered workflow if you want to develop your Apache Cordova application on a single platform and modify your application using the platform-specific SDK. 
Besides adding and removing plugins to/from platform-centered workflow Cordova projects, plugman can also be used: To create basic scaffolding for your custom Cordova plugin To add and remove a platform to/from your custom Cordova plugin To add user(s) to the Cordova plugin registry (a repository that hosts the different Apache Cordova core and custom plugins) To publish your custom Cordova plugin(s) to the Cordova plugin registry To unpublish your custom plugin(s) from the Cordova plugin registry To search for plugin(s) in the Cordova plugin registry In this section, we will use the plugman utility to create the basic scaffolding of our custom SMS plugin. In order to install plugman, you need to make sure that Node.js is installed on your operating system. Then, to install plugman, execute the following command: > npm install -g plugman After installing plugman, we can start generating our initial custom plugin artifacts using the plugman create command as follows: > plugman create --name sms --plugin_id com.jsmobile.plugins.sms --plugin_version 0.0.1 It is important to note the following parameters: --name: This specifies the plugin name (in our case, sms) --plugin_id: This specifies an ID for the plugin (in our case, com.jsmobile.plugins.sms) --plugin_version: This specifies the plugin version (in our case, 0.0.1) The following are the two parameters that the plugman create command can accept as well: --path: This specifies the directory path of the plugin --variable: This can specify extra variables such as author or description After executing the previous command, we will have initial artifacts for our custom plugin. As we will be supporting multiple platforms, we can use the plugman platform add command.
The following two commands add the Android and iOS platforms to our custom plugin: > plugman platform add --platform_name android > plugman platform add --platform_name ios Note that the plugman platform add command must be run from the plugin directory. Unfortunately, plugman does not add Windows Phone 8 support for us; we will need to add it to our plugin manually later. Now, let's check the initial scaffolding of our custom plugin code. The following screenshot shows the hierarchy of our initial plugin code: Hierarchy of our initial plugin code As shown in the preceding screenshot, there is one file and two parent directories. They are as follows: plugin.xml file: This contains the plugin definition. src directory: This contains the plugin native implementation code for each platform. For now, it contains two subdirectories: android and ios. The android subdirectory contains Sms.java, which represents the initial implementation of the plugin in Android. The ios subdirectory contains Sms.m, which represents the initial implementation of the plugin in iOS. www directory: This mainly contains the JavaScript interface of the plugin. It contains sms.js, which represents the initial implementation of the plugin's JavaScript API. We will need to edit these generated files (and maybe refactor and add new implementation files) in order to implement our custom SMS plugin. Plugin definition First of all, we need to define our plugin structure. In order to do so, we need to define our plugin in the plugin.xml file.
The following code listing shows our plugin.xml code: <?xml version='1.0' encoding='utf-8'?> <plugin id="com.jsmobile.plugins.sms" version="0.0.1">      <name>sms</name>    <description>A plugin for sending sms messages</description>    <license>Apache 2.0</license>    <keywords>cordova,plugins,sms</keywords>    <js-module name="sms" src="www/sms.js">        <clobbers target="window.sms" />    </js-module>    <platform name="android">        <config-file parent="/*" target="res/xml/config.xml">            <feature name="Sms">                <param name="android-package" value="com.jsmobile.plugins.sms.Sms" />            </feature>        </config-file>        <config-file target="AndroidManifest.xml" parent="/manifest">          <uses-permission android:name="android.permission.SEND_SMS" />        </config-file>          <source-file src="src/android/Sms.java"                      target-dir="src/com/jsmobile/plugins/sms" />    </platform>      <platform name="ios">        <config-file parent="/*" target="config.xml">            <feature name="Sms">                <param name="ios-package" value="Sms" />            </feature>        </config-file>          <source-file src="src/ios/Sms.h" />        <source-file src="src/ios/Sms.m" />        <framework src="MessageUI.framework" weak="true" />    </platform>    <platform name="wp8">        <config-file target="config.xml" parent="/*">            <feature name="Sms">                <param name="wp-package" value="Sms" />            </feature>        </config-file>        <source-file src="src/wp8/Sms.cs" />    </platform> </plugin> The plugin.xml file defines the plugin structure and contains a top-level <plugin> element, whose id and version attributes specify the plugin's ID and version. Inside the <js-module> element, the <clobbers target="window.sms" /> tag mainly inserts the smsExport JavaScript object that is defined in the www/sms.js file and exported using module.exports (the smsExport object will be illustrated in the Defining the plugin's JavaScript interface section) into the window object as
window.sms. This means that our plugin users will be able to access our plugin's API using the window.sms object (this will be shown in detail in the Testing our Cordova plugin section). The <plugin> element can contain one or more <platform> element(s). The <platform> element specifies the platform-specific plugin's configuration. It has mainly one attribute name that specifies the platform name (android, ios, wp8, bb10, wp7, and so on). The <platform> element can have the following child elements: <source-file>: This element represents the native platform source code that will be installed and executed in the plugin-client project. The <source-file> element has the following two main attributes: src: This attribute represents the location of the source file relative to plugin.xml. target-dir: This attribute represents the target directory (that is relative to the project root) in which the source file will be placed when the plugin is installed in the client project. This attribute is mainly needed in a Java platform (Android), because a file under the x.y.z package must be placed under x/y/z directories. For iOS and Windows platforms, this parameter should be ignored. <config-file>: This element represents the configuration file that will be modified. This is required for many cases; for example, in Android, in order to send an SMS from your Android application, you need to modify the Android configuration file to have the permission to send an SMS from the device. The <config-file> has two main attributes: target: This attribute represents the file to be modified and the path relative to the project root. parent: This attribute represents an XPath selector that references the parent of the elements to be added to the configuration file. <framework>: This element specifies a platform-specific framework that the plugin depends on. 
It mainly has the src attribute to specify the framework name and weak attribute to indicate whether the specified framework should be weakly linked. Giving this explanation for the <platform> element and getting back to our plugin.xml file, you will notice that we have the following three <platform> elements: Android (<platform name="android">) performs the following operations: It creates a <feature> element for our SMS plugin under the root element of the res/xml/config.xml file to register our plugin in an Android project. In Android, the <feature> element's name attribute represents the service name, and its "android-package" parameter represents the fully qualified name of the Java plugin class: <feature name="Sms">    <param name="android-package" value="com.jsmobile.plugins.sms.Sms" /> </feature> It modifies the AndroidManifest.xml file to add the <uses-permission android_name="android.permission.SEND_SMS" /> element (to have a permission to send an SMS in an Android platform) under the <manifest> element. Finally, it specifies the plugin's implementation source file, "src/android/Sms.java", and its target directory, "src/com/jsmobile/plugins/sms" (we will explore the contents of this file in the Developing the Android code section). iOS (<platform name="ios">) performs the following operations: It creates a <feature> element for our SMS plugin under the root element of the config.xml file to register our plugin in the iOS project. In iOS, the <feature> element's name attribute represents the service name, and its "ios-package" parameter represents the Objective-C plugin class name: <feature name="Sms">    <param name="ios-package" value="Sms" /> </feature> It specifies the plugin implementation source files: Sms.h (the header file) and Sms.m (the methods file). We will explore the contents of these files in the Developing the iOS code section. It adds "MessageUI.framework" as a weakly linked dependency for our iOS plugin. 
Windows Phone 8 (<platform name="wp8">) performs the following operations: It creates a <feature> element for our SMS plugin under the root element of the config.xml file to register our plugin in the Windows Phone 8 project. The <feature> element's name attribute represents the service name, and its "wp-package" parameter represents the C# service class name: <feature name="Sms">        <param name="wp-package" value="Sms" /> </feature> It specifies the plugin implementation source file, "src/wp8/Sms.cs" (we will explore the contents of this file in the Developing Windows Phone 8 code section). This is all we need to know in order to understand the structure of our custom plugin; however, there are many more attributes and elements that are not mentioned here, as we didn't use them in our example. In order to get the complete list of attributes and elements of plugin.xml, you can check out the plugin specification page in the Apache Cordova documentation at http://cordova.apache.org/docs/en/3.4.0/plugin_ref_spec.md.html#Plugin%20Specification. Defining the plugin's JavaScript interface As indicated in the plugin definition file (plugin.xml), our plugin's JavaScript interface is defined in sms.js, which is located under the www directory. The following code snippet shows the sms.js file content: var smsExport = {}; smsExport.sendMessage = function(messageInfo, successCallback, errorCallback) {    if (messageInfo == null || typeof messageInfo !== 'object') {        if (errorCallback) {            errorCallback({                code: "INVALID_INPUT",                message: "Invalid Input"            });        }        return;    }    var phoneNumber = messageInfo.phoneNumber;    var textMessage = messageInfo.textMessage || "Default Text from SMS plugin";    if (! 
phoneNumber) {        console.log("Missing Phone Number");        if (errorCallback) {            errorCallback({                code: "MISSING_PHONE_NUMBER",                message: "Missing Phone number"            });        }        return;    }    cordova.exec(successCallback, errorCallback, "Sms", "sendMessage", [phoneNumber, textMessage]); }; module.exports = smsExport; The smsExport object contains a single method, sendMessage(messageInfo, successCallback, errorCallback). In the sendMessage method, phoneNumber and textMessage are extracted from the messageInfo object. If a phone number is not specified by the user, then errorCallback will be called with a JSON error object, which has a code attribute set to "MISSING_PHONE_NUMBER" and a message attribute set to "Missing Phone number". After passing this validation, a call is performed to the cordova.exec() API in order to call the native code (whether it is Android, iOS, Windows Phone 8, or any other supported platform) from Apache Cordova JavaScript. 
It is important to note that the cordova.exec(successCallback, errorCallback, "service", "action", [args]) API has the following parameters: successCallback: This represents the success callback function that will be called (with any specified parameter(s)) if the Cordova exec call completes successfully errorCallback: This represents the error callback function that will be called (with any specified error parameter(s)) if the Cordova exec call does not complete successfully "service": This represents the native service name that is mapped to a native class using the <feature> element (in sms.js, the native service name is "Sms") "action": This represents the action name to be executed, and an action is mapped to a class method in some platforms (in sms.js, the action name is "sendMessage") [args]: This is an array that represents the action arguments (in sms.js, the action arguments are [phoneNumber, textMessage]) It is very important to note that in cordova.exec(successCallback, errorCallback, "service", "action", [args]), the "service" parameter must match the name of the <feature> element, which we set in our plugin.xml file in order to call the mapped native plugin class correctly. Finally, the smsExport object is exported using module.exports. Do not forget that our JavaScript module is mapped to window.sms using the <clobbers target="window.sms" /> element inside <js-module src="www/sms.js"> element, which we discussed in the plugin.xml file. This means that in order to call the sendMessage method of the smsExport object from our plugin-client application, we use the sms.sendMessage() method. Developing the Android code As specified in our plugin.xml file's platform section for Android, the implementation of our plugin in Android is located at src/android/Sms.java. 
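Because the JavaScript side of the plugin is plain logic right up to the cordova.exec() call, it can be unit-tested with the bridge stubbed out. The following sketch repeats the validation rules of sms.js with an injectable exec function; the injection itself is an assumption made for testability, since the real sms.js calls the global cordova.exec directly:

```javascript
// Validation logic of sms.sendMessage, with the Cordova bridge injected
// as `execFn` so the function can run outside a Cordova container.
function sendMessage(messageInfo, successCallback, errorCallback, execFn) {
  if (messageInfo == null || typeof messageInfo !== 'object') {
    if (errorCallback) errorCallback({ code: 'INVALID_INPUT', message: 'Invalid Input' });
    return;
  }
  const phoneNumber = messageInfo.phoneNumber;
  const textMessage = messageInfo.textMessage || 'Default Text from SMS plugin';
  if (!phoneNumber) {
    if (errorCallback) errorCallback({ code: 'MISSING_PHONE_NUMBER', message: 'Missing Phone number' });
    return;
  }
  // In the real plugin this is cordova.exec(...) with the same arguments.
  execFn(successCallback, errorCallback, 'Sms', 'sendMessage', [phoneNumber, textMessage]);
}

// A stub exec records what would cross the native bridge:
const recorded = [];
sendMessage({ phoneNumber: '555-0100' }, null, null,
  (ok, err, service, action, args) => recorded.push({ service, action, args }));
console.log(recorded[0].args); // [ '555-0100', 'Default Text from SMS plugin' ]
```

The stub confirms the contract that matters: the "Sms" service string must match the <feature> name registered in plugin.xml, or the call will never reach the native class.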
The following code snippet shows the first part of the Sms.java file: package com.jsmobile.plugins.sms;   import org.apache.cordova.CordovaPlugin; import org.apache.cordova.CallbackContext; import org.apache.cordova.PluginResult; import org.apache.cordova.PluginResult.Status; import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import android.app.Activity; import android.app.PendingIntent; import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.content.IntentFilter; import android.content.pm.PackageManager; import android.telephony.SmsManager; public class Sms extends CordovaPlugin {    private static final String SMS_GENERAL_ERROR = "SMS_GENERAL_ERROR";    private static final String NO_SMS_SERVICE_AVAILABLE = "NO_SMS_SERVICE_AVAILABLE";    private static final String SMS_FEATURE_NOT_SUPPORTED = "SMS_FEATURE_NOT_SUPPORTED";    private static final String SENDING_SMS_ID = "SENDING_SMS";    @Override    public boolean execute(String action, JSONArray args, CallbackContext callbackContext) throws JSONException {        if (action.equals("sendMessage")) {            String phoneNumber = args.getString(0);            String message = args.getString(1);            boolean isSupported = getActivity().getPackageManager().hasSystemFeature(PackageManager. FEATURE_TELEPHONY);            if (! isSupported) {                JSONObject errorObject = new JSONObject();                errorObject.put("code", SMS_FEATURE_NOT_SUPPORTED);                errorObject.put("message", "SMS feature is not supported on this device");                callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));                return false;            }            this.sendSMS(phoneNumber, message, callbackContext);            return true;        }        return false;    }    // Code is omitted here for simplicity ...    
private Activity getActivity() {        return this.cordova.getActivity();    } } In order to create our Cordova Android plugin class, our Android plugin class must extend the CordovaPlugin class and must override one of the execute() methods of CordovaPlugin. In our Sms Java class, the execute(String action, JSONArray args, CallbackContext callbackContext) execute method, which has the following parameters, is overridden: String action: This represents the action to be performed, and it matches the specified action parameter in the cordova.exec() JavaScript API JSONArray args: This represents the action arguments, and it matches the [args] parameter in the cordova.exec() JavaScript API CallbackContext callbackContext: This represents the callback context used when calling a function back to JavaScript In the execute() method of our Sms class, phoneNumber and message parameters are retrieved from the args parameter. Using getActivity().getPackageManager().hasSystemFeature(PackageManager.FEATURE_TELEPHONY), we can check if the device has a telephony radio with data communication support. If the device does not have this feature, this API returns false, so we create errorObject of the JSONObject type that contains an error code attribute ("code") and an error message attribute ("message") that inform the plugin user that the SMS feature is not supported on this device. The plugin tells the JavaScript caller that the operation failed by calling callbackContext.sendPluginResult() and specifying a PluginResult object as a parameter (the PluginResult object's status is set to Status.ERROR, and message is set to errorObject). As indicated in our Android implementation, in order to send a plugin result to JavaScript from Android, we use the callbackContext.sendPluginResult() method that specifies the PluginResult status and message. Other platforms (iOS and Windows Phone 8) have much a similar way. 
If an Android device supports sending SMS messages, then a call to the sendSMS() private method is performed. The following code snippet shows the sendSMS() code:

private void sendSMS(String phoneNumber, String message, final CallbackContext callbackContext) throws JSONException {
    PendingIntent sentPI = PendingIntent.getBroadcast(getActivity(), 0, new Intent(SENDING_SMS_ID), 0);
    getActivity().registerReceiver(new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            switch (getResultCode()) {
            case Activity.RESULT_OK:
                callbackContext.sendPluginResult(new PluginResult(Status.OK, "SMS message is sent successfully"));
                break;
            case SmsManager.RESULT_ERROR_NO_SERVICE:
                try {
                    JSONObject errorObject = new JSONObject();
                    errorObject.put("code", NO_SMS_SERVICE_AVAILABLE);
                    errorObject.put("message", "SMS is not sent because no service is available");
                    callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));
                } catch (JSONException exception) {
                    exception.printStackTrace();
                }
                break;
            default:
                try {
                    JSONObject errorObject = new JSONObject();
                    errorObject.put("code", SMS_GENERAL_ERROR);
                    errorObject.put("message", "SMS general error");
                    callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));
                } catch (JSONException exception) {
                    exception.printStackTrace();
                }
                break;
            }
        }
    }, new IntentFilter(SENDING_SMS_ID));

    SmsManager sms = SmsManager.getDefault();
    sms.sendTextMessage(phoneNumber, null, message, sentPI, null);
}

In order to understand the sendSMS() method, let's look into the method's
last two lines:

SmsManager sms = SmsManager.getDefault();
sms.sendTextMessage(phoneNumber, null, message, sentPI, null);

SmsManager is an Android class that provides an API to send text messages. Calling SmsManager.getDefault() returns an SmsManager object. In order to send a text-based message, a call to sms.sendTextMessage() should be performed. The sms.sendTextMessage(String destinationAddress, String scAddress, String text, PendingIntent sentIntent, PendingIntent deliveryIntent) method has the following parameters:

destinationAddress: This represents the address (phone number) to send the message to.
scAddress: This represents the service center address. It can be set to null to use the current default SMS center.
text: This represents the text message to be sent.
sentIntent: This represents the PendingIntent that is broadcast when the message is successfully sent or fails to send. It can be set to null.
deliveryIntent: This represents the PendingIntent that is broadcast when the message is delivered to the recipient. It can be set to null.

As shown in the preceding code snippet, we specified a destination address (phoneNumber), a text message (message), and finally, a pending intent (sentPI) in order to listen to the message-sending status. If you return to the sendSMS() code and look at it from the beginning, you will notice that sentPI is initialized by calling PendingIntent.getBroadcast(), and in order to receive the SMS-sending broadcast, a BroadcastReceiver is registered. When the SMS message is sent successfully or fails, the onReceive() method of BroadcastReceiver will be called, and the result code can be retrieved using getResultCode(). The result code can indicate:

Success when getResultCode() is equal to Activity.RESULT_OK. In this case, a PluginResult object is constructed with status = Status.OK and message = "SMS message is sent successfully", and it is sent to the client using callbackContext.sendPluginResult().
Failure when getResultCode() is not equal to Activity.RESULT_OK. In this case, a PluginResult object is constructed with status = Status.ERROR and message = errorObject (which contains the error code and error message), and it is sent to the client using callbackContext.sendPluginResult().

These are the details of our SMS plugin implementation on the Android platform. Now, let's move on to the iOS implementation of our plugin.

Summary

This article showed you how to design and develop your own custom Apache Cordova plugin using JavaScript and Java for Android, Objective-C for iOS, and finally, C# for Windows Phone 8.

Resources for Article:

Further resources on this subject:

Building Mobile Apps [article]
Digging into the Architecture [article]
So, what is KineticJS? [article]

Routing

Packt
16 Oct 2014
17 min read
In this article by Mitchel Kelonye, author of Mastering Ember.js, we will learn URL-based state management in Ember.js, which constitutes routing. Routing enables us to translate different states in our applications into URLs and vice versa. It is a key concept in Ember.js that enables developers to easily separate application logic. It also enables users to link back to content in the application via the usual HTTP URLs. (For more resources related to this topic, see here.) We all know that in traditional web development, every request is linked by a URL that enables the server to make a decision on the incoming request. Typical actions include sending back a resource file or JSON payload, redirecting the request to a different resource, or sending back an error response, such as in the case of unauthorized access. Ember.js strives to preserve these ideas in the browser environment by enabling association between these URLs and the state of the application. The main component that manages these states is the application router. It is responsible for restoring an application to a state matching the given URL. It also enables the user to navigate through the application's history as expected. The router is automatically created on application initialization and can be referenced as MyApplicationNamespace.Router. Before we proceed, we will be using the bundled sample to better understand this extremely convenient component. The sample is a simple implementation of the Contacts OS X application, as shown in the following screenshot:

It enables users to add new contacts as well as edit and delete existing ones. For simplicity, we won't support avatars, but that could be an implementation exercise for the reader. We already mentioned some of the states into which this application can transition. These states have to be registered in the same way server-side frameworks have URL dispatchers that backend programmers use to map URL patterns to views.
The article sample already illustrates how these possible states are defined:

// app.js
var App = Ember.Application.create();

App.Router.map(function() {
  this.resource('contacts', function(){
    this.route('new');
    this.resource('contact', {path: '/:contact_id'}, function(){
      this.route('edit');
    });
  });
  this.route('about');
});

Notice that the already instantiated router was referenced as App.Router. Calling its map method gives the application an opportunity to register its possible states. In addition, two other methods are used to classify these states into routes and resources.

Mapping URLs to routes

When defining routes and resources, we are essentially mapping URLs to possible states in our application. As shown in the first code snippet, the router's map function takes a function as its only argument. Inside this function, we may define a resource using the corresponding method, which takes the following signature:

this.resource(resourceName, options, function);

The first argument specifies the name of the resource and, coincidentally, the path to match the request URL. The next argument is optional and holds configurations that we may need to specify, as we shall see later. The last one is a function that is used to define the routes of that particular resource. For example, the first defined resource in the sample says: let the contacts resource handle any requests whose URL starts with /contacts. It also specifies one route, new, that is used to handle the creation of new contacts. Routes, on the other hand, accept the same arguments. You must be asking yourself, "So how are routes different from resources?" The two are essentially the same, other than that the former offers a way to categorize states (routes) that perform actions on a specific entity. We can think of an Ember.js application as a tree, composed of a trunk (the router), branches (resources), and leaves (routes). For example, the contact state (a resource) caters for a specific contact.
This resource can be displayed in two modes: read and write; hence, the index and edit routes respectively, as shown:

this.resource('contact', {path: '/:contact_id'}, function(){
  this.route('index'); // auto defined
  this.route('edit');
});

Because Ember.js encourages convention, there are two components of routes and resources that are always autodefined:

A default application resource: This is the master resource into which all other resources are defined. We therefore did not need to define it in the router. It's not mandatory to define resources on every state. For example, our about state is a route because it only needs to display static content to the user. It can, however, be thought of as a route of the already autodefined application resource.
A default index route on every resource: Again, every resource has a default index route. It's autodefined because an application cannot settle on a resource state. The application therefore uses this route if no other route within this same resource was intended to be used.

Nesting resources

Resources can be nested depending on the architecture of the application. In our case, we need to load contacts in the sidebar before displaying any of them to the user. Therefore, we need to define the contact resource inside the contacts resource. On the other hand, in an application such as Twitter, it won't make sense to define a tweet resource embedded inside a tweets resource, because an extra overhead will be incurred when a user just wants to view a single tweet linked from an external application.

Understanding the state transition cycle

A request is handled in the same way water travels from the roots (the application), up the trunk, and is eventually lost off the leaves. This request we are referring to is a change in the browser location that can be triggered in a number of ways. Before we proceed into finer details about routes, let's discuss what happened when the application was first loaded.
On boot, a few things happened, as outlined here:

The application first transitioned into the application state, then the index state.
Next, the application index route redirected the request to the contacts resource.
Our application uses the browser's local storage to store the contacts, and so for demoing purposes, the contacts resource populated this store with fixtures (located at fixtures.js).
The application then transitioned into the corresponding contacts resource index route, contacts.index. Again, here we made a few decisions based on whether our store contained any data in it. Since we indeed have data, we redirected the application into the contact resource, passing the ID of the first contact along.
Just as in the two preceding resources, the application transitioned from this last resource into the corresponding index route, contact.index.

The following figure gives a good view of the preceding state change:

Configuring the router

The router can be customized in the following ways:

Logging state transitions
Specifying the root app URL
Changing the browser location lookup method

During development, it may be necessary to track the states into which the application transitions. Enabling these logs is as simple as:

var App = Ember.Application.create({
  LOG_TRANSITIONS: true
});

As illustrated, we enable the LOG_TRANSITIONS flag when creating the application. If an application is not served at the root of the website domain, then it may be necessary to specify the path name used, as in the following example:

App.Router.reopen({
  rootURL: '/contacts/'
});

One other modification we may need to make revolves around the techniques Ember.js uses to subscribe to the browser's location changes. This makes it possible for the router to do its job of transitioning the app into the matched URL state.
Two of these methods are as follows:

Subscribing to the hashchange event
Using the history.pushState API

The default technique used is provided by the HashLocation class, documented at http://emberjs.com/api/classes/Ember.HashLocation.html. This means that URL paths are usually prefixed with the hash symbol, for example, /#/contacts/1/edit. The other one is provided by the HistoryLocation class, located at http://emberjs.com/api/classes/Ember.HistoryLocation.html. This does not distinguish URLs from the traditional ones and can be enabled as:

App.Router.reopen({
  location: 'history'
});

We can also opt to let Ember.js pick the method best suited for our app with the following code:

App.Router.reopen({
  location: 'auto'
});

If we don't need any of these techniques, we could opt out, especially when performing tests:

App.Router.reopen({
  location: 'none'
});

Specifying a route's path

We now know that when defining a route or resource, the resource name used also serves as the path the router uses to match request URLs. Sometimes, it may be necessary to specify a different path to use to match states. There are two common reasons that may lead us to do this, the first of which is good for delegating route handling to another route. Although we have not yet covered route handlers, we already mentioned that our application transitions from the application index route into the contacts.index state. We may, however, specify that the contacts route handler should manage this path as:

this.resource('contacts', {path: '/'}, function(){});

Therefore, to specify an alternative path for a route, simply pass the desired path in a hash as the second argument during resource definition. This also applies when defining routes. The second reason would be when a resource contains dynamic segments. For example, our contact resource handles contacts, who should obviously have different URLs linking back to them.
Ember.js uses URL pattern matching techniques used by other open source projects such as Ruby on Rails, Sinatra, and Express.js. Therefore, our contact resource should be defined as:

this.resource('contact', {path: '/:contact_id'}, function(){});

In the preceding snippet, /:contact_id is the dynamic segment that will be replaced by the actual contact's ID. One thing to note is that nested resources prefix their paths with those of parent resources. Therefore, the contact resource's full path would be /contacts/:contact_id. It's also worth noting that the name of the dynamic segment is not mandated, and so we could have named the dynamic segment /:id.

Defining route and resource handlers

Now that we have defined all the possible states that our application can transition into, we need to define handlers for these states. From this point onwards, we will use the terms route and resource handlers interchangeably. A route handler performs the following major functions:

Providing data (model) to be used by the current state
Specifying the view and/or template to use to render the provided data to the user
Redirecting the application away into another state

Before we move into discussing these roles, we need to know that a route handler is defined from the Ember.Route class as:

App.RouteHandlerNameRoute = Ember.Route.extend();

This class is used to define handlers for both resources and routes, and therefore, the naming should not be a concern. Just as routes and resources are associated with paths and handlers, they are also associated with controllers, views, and templates using the Ember.js naming conventions.
For example, when the application initializes, it enters into the application state, and therefore, the following objects are sought:

The application route
The application controller
The application view
The application template

In the spirit of doing more with reduced boilerplate code, Ember.js autogenerates these objects unless they are explicitly defined in order to override the default implementations. As another example, if we examine our application, we notice that the contact.edit route has a corresponding App.ContactEditController controller and contact/edit template. We did not need to define its route handler or view. Having seen this example, when referring to routes, we normally separate the resource name from the route name by a period, as in the following:

resourceName.routeName

In the case of templates, we may use a period or a forward slash:

resourceName/routeName

The other objects are usually camelized and suffixed by the class name:

ResourcenameRoutenameClassname

For example, the following table shows all the objects used. As mentioned earlier, some are autogenerated.

Route Name      | Controller               | Route Handler       | View                | Template
application     | ApplicationController    | ApplicationRoute    | ApplicationView     | application
index           | IndexController          | IndexRoute          | IndexView           | index
about           | AboutController          | AboutRoute          | AboutView           | about
contacts        | ContactsController       | ContactsRoute       | ContactsView        | contacts
contacts.index  | ContactsIndexController  | ContactsIndexRoute  | ContactsIndexView   | contacts/index
contacts.new    | ContactsNewController    | ContactsNewRoute    | ContactsNewView     | contacts/new
contact         | ContactController        | ContactRoute        | ContactView         | contact
contact.index   | ContactIndexController   | ContactIndexRoute   | ContactIndexView    | contact/index
contact.edit    | ContactEditController    | ContactEditRoute    | ContactEditView     | contact/edit

One thing to note is that objects associated with the intermediary application state do not need to carry the suffix; hence, just index or about.
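Because the convention is purely mechanical, it can be sketched as a plain function. The helper below is illustrative only — it is not an Ember.js API — and simply derives the conventional object names from a route name:

```javascript
// Illustrative helper (not an Ember.js API): derive the conventional
// controller/route handler/view/template names from a route name
// such as "contact.edit".
function conventionalNames(routeName) {
  // "contact.edit" -> ["contact", "edit"] -> "ContactEdit"
  var base = routeName.split('.').map(function (part) {
    return part.charAt(0).toUpperCase() + part.slice(1);
  }).join('');
  return {
    controller: base + 'Controller',
    routeHandler: base + 'Route',
    view: base + 'View',
    // Templates keep the lowercase name, with the period as a slash.
    template: routeName.replace('.', '/')
  };
}

var names = conventionalNames('contact.edit');
console.log(names.controller);   // "ContactEditController"
console.log(names.routeHandler); // "ContactEditRoute"
console.log(names.template);     // "contact/edit"
```

Running this against any row of the preceding table reproduces that row, which is exactly why Ember.js can autogenerate the missing objects.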
Specifying a route's model

We mentioned that route handlers provide controllers the data needed to be displayed by templates. These handlers have a model hook that can be used to provide this data in the following format:

AppNamespace.RouteHandlerName = Ember.Route.extend({
  model: function(){}
});

For instance, the contacts route handler in the sample loads any saved contacts from local storage as:

model: function(){
  return App.Contact.find();
}

We have abstracted this logic into our App.Contact model. Notice how we reopen the class in order to define this static method. A static method can only be called by the class of that method and not its instances:

App.Contact.reopenClass({
  find: function(id){
    return (!!id) ? App.Contact.findOne(id) : App.Contact.findAll();
  },
  …
});

If no arguments are passed to the method, it goes ahead and calls the findAll method, which uses the local storage helper to retrieve the contacts:

findAll: function(){
  var contacts = store('contacts') || [];
  return contacts.map(function(contact){
    return App.Contact.create(contact);
  });
}

Because we want to deal with contact objects, we iteratively convert the contents of the loaded contact list. If we examine the corresponding template, contacts, we notice that we were able to populate the sidebar as shown in the following code:

<ul class="nav nav-pills nav-stacked">
  {{#each model}}
  <li>
    {{#link-to "contact.index" this}}{{name}}{{/link-to}}
  </li>
  {{/each}}
</ul>

Do not worry about the template syntax at this point if you're new to Ember.js. The important thing to note is that the model was accessed via the model variable. Of course, before that, we check to see if the model has any content:

{{#if model.length}}
  ...
{{else}}
  <h1>Create contact</h1>
{{/if}}

As we shall see later, if the list is empty, the application is forced to transition into the contacts.new state, in order for the user to add the first contact, as shown in the following screenshot:

The contact handler is a different case.
Remember we mentioned that its path has a dynamic segment that would be passed to the handler. This information is passed to the model hook in an options hash as:

App.ContactRoute = Ember.Route.extend({
  model: function(params){
    return App.Contact.find(params.contact_id);
  },
  ...
});

Notice that we are able to access the contact's ID via the contact_id attribute of the hash. This time, the find method calls the findOne static method of the contact's class, which performs a search for the contact matching the provided ID, as shown in the following code:

findOne: function(id){
  var contacts = store('contacts') || [];
  var contact = contacts.find(function(contact){
    return contact.id == id;
  });
  if (!contact) return;
  return App.Contact.create(contact);
}

Serializing resources

We've mentioned that Ember.js supports content to be linked back externally. Internally, Ember.js simplifies creating these links in templates. In our sample application, when the user selects a contact, the application transitions into the contact.index state, passing his/her ID along. This is possible through the use of the link-to handlebars expression:

{{#link-to "contact.index" this}}{{name}}{{/link-to}}

The important thing to note is that this expression enables us to construct a link that points to the said resource by passing the resource name and the affected model. The destination resource or route handler is responsible for yielding this path, which constitutes serialization. To serialize a resource, we need to override the matching serialize hook, as in the contact handler case shown in the following code:

App.ContactRoute = Ember.Route.extend({
  ...
  serialize: function(model, params){
    var data = {};
    data[params[0]] = Ember.get(model, 'id');
    return data;
  }
});

Serialization means that the hook is supposed to return the values of all the specified segments. It receives two arguments, the first of which is the affected resource and the second is an array of all the segments specified during the resource definition.
In our case, we only had one, and so we returned the required hash that resembled the following code:

{contact_id: 1}

If we, for example, defined a resource with multiple segments like the following code:

this.resource('book', {path: '/name/:name/:publish_year'}, function(){});

The serialization hook would need to return something close to:

{
  name: 'jon+doe',
  publish_year: '1990'
}

Asynchronous routing

In actual apps, we often need to load the model data in an asynchronous fashion. There are various approaches that can be used to deliver this kind of data. The most robust way to load asynchronous data is through the use of promises. Promises are objects whose unknown value can be set at a later point in time. It is very easy to create promises in Ember.js. For example, if our contacts were located in a remote resource, we could use jQuery to load them as:

App.ContactsRoute = Ember.Route.extend({
  model: function(params){
    return Ember.$.getJSON('/contacts');
  }
});

jQuery's HTTP utilities also return promises that Ember.js can consume. By the way, jQuery can also be referenced as Ember.$ in an Ember.js application. In the preceding snippet, once data is loaded, Ember.js would set it as the model of the resource. However, one thing is missing. We require that the loaded data be converted to the defined contact model, as shown in the following small modification:

App.ContactsRoute = Ember.Route.extend({
  model: function(params){
    var promise = Ember.Object.createWithMixins(Ember.DeferredMixin);
    Ember.$.getJSON('/contacts').then(resolve, reject);
    function resolve(contacts){
      contacts = contacts.map(function(contact){
        return App.Contact.create(contact);
      });
      promise.resolve(contacts);
    }
    function reject(res){
      var err = new Error(res.responseText);
      promise.reject(err);
    }
    return promise;
  }
});

We first create the promise, kick off the XHR request, and then return the promise while the request is still being processed. Ember.js will resume routing once this promise is rejected or resolved.
The XHR call also creates a promise, so we attach to it the then method, which essentially says: invoke the passed resolve or reject function on a successful or failed load, respectively. The resolve function converts the loaded data and resolves the promise, passing the data along, thereby resuming routing. If the promise is rejected, the transition fails with an error. We will see how to handle this error in a moment. Note that there are two other flavors we can use to create promises in Ember.js, as shown in the following examples:

var promise = Ember.Deferred.create();
Ember.$.getJSON('/contacts').then(success, fail);
function success(contacts){
  contacts = contacts.map(function(contact){
    return App.Contact.create(contact);
  });
  promise.resolve(contacts);
}
function fail(res){
  var err = new Error(res.responseText);
  promise.reject(err);
}
return promise;

The second example is as follows:

return new Ember.RSVP.Promise(function(resolve, reject){
  Ember.$.getJSON('/contacts').then(success, fail);
  function success(contacts){
    contacts = contacts.map(function(contact){
      return App.Contact.create(contact);
    });
    resolve(contacts);
  }
  function fail(res){
    var err = new Error(res.responseText);
    reject(err);
  }
});

Summary

This article detailed how browser location-based state management is accomplished in Ember.js apps. We also covered how to create a router, define resources and routes, specify a route's model, and perform redirects.

Resources for Article:

Further resources on this subject:

AngularJS Project [Article]
Automating performance analysis with YSlow and PhantomJS [Article]
AngularJS [Article]

Planning Desktop Virtualization

Packt
16 Oct 2014
3 min read
This article by Andy Paul, author of the book Citrix XenApp® 7.5 Virtualization Solutions, explains VDI and its building blocks in detail. (For more resources related to this topic, see here.)

The building blocks of VDI

The first step in understanding Virtual Desktop Infrastructure (VDI) is to identify what VDI means to your environment. VDI is an all-encompassing term for most virtual infrastructure projects. For this book, we will use the definitions cited in the following sections for clarity.

Hosted Virtual Desktop (HVD)

Hosted Virtual Desktop is a machine running a single-user operating system, such as Windows 7 or Windows 8, sometimes called a desktop OS, which is hosted on a virtual platform within the data center. Users remotely access a desktop that may or may not be dedicated but runs with isolated resources. This is typically a Citrix XenDesktop virtual desktop, as shown in the following figure:

Hosted Virtual Desktop model; each user has dedicated resources

Hosted Shared Desktop (HSD)

Hosted Shared Desktop is a machine running a multiuser operating system, such as Windows 2008 Server or Windows 2012 Server, sometimes called a server OS, possibly hosted on a virtual platform within the data center. Users remotely access a desktop that may be using shared resources among multiple users. This will historically be a Citrix XenApp published desktop, as demonstrated in the following figure:

Hosted Shared Desktop model; each user shares the desktop server resources

Session-based Computing (SBC)

With Session-based Computing, users remotely access applications or other resources on a server running in the data center. These are typically client/server applications. This server may or may not be virtualized. This is a multiuser environment, but the users do not access the underlying operating system directly.
This will typically be a Citrix XenApp hosted application, as shown in the following figure:

Session-based computing model; each user accesses applications remotely, but shares resources

Application virtualization

In application virtualization, applications are centrally managed and distributed, but they are locally executed. This may be in conjunction with, or separate from, the other options mentioned previously. Application virtualization typically involves application isolation, allowing the applications to operate independently of any other software. Examples include Citrix XenApp offline applications as well as Citrix profiled applications, Microsoft App-V application packages, and VMware ThinApp solutions. Have a look at the following figure:

Application virtualization model; the application packages execute locally

The preceding list is not a definitive list of options, but it serves to highlight the most commonly used elements of VDI. Other options include client-side hypervisors for local execution of a virtual desktop, hosted physical desktops, and cloud-based applications. Depending on the environment, all of these components can be relevant.

Summary

In this article, we learned about VDI and understood its building blocks in detail.

Resources for Article:

Further resources on this subject:

Installation and Deployment of Citrix Systems®' CPSM [article]
Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article]
Introduction to Citrix XenDesktop [article]
Introduction to Custom Template Filters and Tags

Packt
13 Oct 2014
25 min read
This article is written by Aidas Bendoratis, the author of Web Development with Django Cookbook. In this article, we will cover the following recipes:

Following conventions for your own template filters and tags
Creating a template filter to show how many days have passed
Creating a template filter to extract the first media object
Creating a template filter to humanize URLs
Creating a template tag to include a template if it exists
Creating a template tag to load a QuerySet in a template
Creating a template tag to parse content as a template
Creating a template tag to modify request query parameters

As you know, Django has quite an extensive template system, with features such as template inheritance, filters for changing the representation of values, and tags for presentational logic. Moreover, Django allows you to add your own template filters and tags in your apps. Custom filters or tags should be located in a template-tag library file under the templatetags Python package in your app. Your template-tag library can then be loaded in any template with a {% load %} template tag. In this article, we will create several useful filters and tags that give more control to the template editors.

Following conventions for your own template filters and tags

Custom template filters and tags can become a total mess if you don't have persistent guidelines to follow. Template filters and tags should serve template editors as much as possible. They should be both handy and flexible. In this recipe, we will look at some conventions that should be used when enhancing the functionality of the Django template system.

How to do it...

Follow these conventions when extending the Django template system:

Don't create or use custom template filters or tags when the logic for the page fits better in the view, context processors, or in model methods. When your page is context-specific, such as a list of objects or an object-detail view, load the object in the view.
If you need to show some content on every page, create a context processor.
Use custom methods of the model instead of template filters when you need to get some properties of an object not related to the context of the template.
Name the template-tag library with the _tags suffix. When your app is named differently than your template-tag library, you can avoid ambiguous package-importing problems.
In the newly created library, separate filters from tags, for example, by using comments such as the following:

# -*- coding: UTF-8 -*-
from django import template

register = template.Library()

### FILTERS ###
# .. your filters go here ..

### TAGS ###
# .. your tags go here ..

Create template tags that are easy to remember by including the following constructs:

for [app_name.model_name]: Include this construct to use a specific model
using [template_name]: Include this construct to use a template for the output of the template tag
limit [count]: Include this construct to limit the results to a specific amount
as [context_variable]: Include this construct to save the results to a context variable that can be reused many times later

Try to avoid multiple values defined positionally in template tags unless they are self-explanatory. Otherwise, this will likely confuse the template developers. Make as many arguments resolvable as possible. Strings without quotes should be treated as context variables that need to be resolved or as short words that remind you of the structure of the template tag components.
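To see how these constructs fit together, here is an illustrative sketch in plain Python (no Django required) that splits such a tag's token string into the recommended components. The load_objects tag name and the shop.Product model are hypothetical examples, not part of any real app:

```python
# Illustrative sketch: parsing a tag such as
#   {% load_objects for shop.Product using "sidebar.html" limit 3 as featured %}
# into the recommended "for / using / limit / as" components.
def parse_tag_bits(token_string):
    bits = token_string.split()
    parsed = {"tag_name": bits[0]}
    # Map each convention keyword to the component name it introduces.
    keywords = {"for": "model", "using": "template", "limit": "count", "as": "variable"}
    i = 1
    while i < len(bits) - 1:
        if bits[i] in keywords:
            # The value follows its keyword; strip quotes from template names.
            parsed[keywords[bits[i]]] = bits[i + 1].strip('"')
            i += 2
        else:
            i += 1
    return parsed

bits = parse_tag_bits('load_objects for shop.Product using "sidebar.html" limit 3 as featured')
print(bits["model"])     # shop.Product
print(bits["template"])  # sidebar.html
print(bits["count"])     # 3
print(bits["variable"])  # featured
```

In a real tag, Django's token.split_contents() plays the role of the split() call here, but the keyword-then-value pattern is the same, which is what makes such tags easy for template editors to remember.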
See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to extract the first media object recipe
- The Creating a template filter to humanize URLs recipe
- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template filter to show how many days have passed

Not all people keep track of the date, and when talking about creation or modification dates of cutting-edge information, for many of us, it is more convenient to read the time difference, for example, the blog entry was posted three days ago, the news article was published today, and the user last logged in yesterday. In this recipe, we will create a template filter named days_since that converts dates to humanized time differences.

Getting ready

Create the utils app and put it under INSTALLED_APPS in the settings, if you haven't done that yet. Then, create a Python package named templatetags inside this app (Python packages are directories with an empty __init__.py file).

How to do it...

Create a utility_tags.py file with this content:

# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from datetime import datetime
from django import template
from django.utils.translation import ugettext_lazy as _
from django.utils.timezone import now as tz_now

register = template.Library()

### FILTERS ###

@register.filter
def days_since(value):
    """ Returns number of days between today and value. """
    today = tz_now().date()
    if isinstance(value, datetime):
        value = value.date()
    diff = today - value
    if diff.days > 1:
        return _("%s days ago") % diff.days
    elif diff.days == 1:
        return _("yesterday")
    elif diff.days == 0:
        return _("today")
    else:
        # Date is in the future; return formatted date.
        return value.strftime("%B %d, %Y")

How it works...
If you use this filter in a template like the following, it will render something like yesterday or 5 days ago:

{% load utility_tags %}
{{ object.created|days_since }}

You can apply this filter to the values of the date and datetime types. Each template-tag library has a register where filters and tags are collected. Django filters are functions registered by the register.filter decorator. By default, the filter in the template system will be named the same as the function or the other callable object. If you want, you can set a different name for the filter by passing name to the decorator, as follows:

@register.filter(name="humanized_days_since")
def days_since(value):
    ...

The filter itself is quite self-explanatory. At first, the current date is read. If the given value of the filter is of the datetime type, the date is extracted. Then, the difference between today and the extracted value is calculated. Depending on the number of days, different string results are returned.

There's more...

This filter is easy to extend to also show the difference in time, such as just now, 7 minutes ago, or 3 hours ago. Just operate on the datetime values instead of the date values.

See also

- The Creating a template filter to extract the first media object recipe
- The Creating a template filter to humanize URLs recipe

Creating a template filter to extract the first media object

Imagine that you are developing a blog overview page, and for each post, you want to show images, music, or videos in that page taken from the content. In such a case, you need to extract the <img>, <object>, and <embed> tags out of the HTML content of the post. In this recipe, we will see how to do this using regular expressions in the get_first_media filter.

Getting ready

We will start with the utils app that should be set in INSTALLED_APPS in the settings and the templatetags package inside this app.

How to do it...
In the utility_tags.py file, add the following content:

# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import re
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

### FILTERS ###

media_file_regex = re.compile(
    r"<object .+?</object>|"
    r"<(img|embed) [^>]+>"
)

@register.filter
def get_first_media(content):
    """ Returns the first image or flash file from the html content """
    m = media_file_regex.search(content)
    media_tag = ""
    if m:
        media_tag = m.group()
    return mark_safe(media_tag)

How it works...

While the HTML content in the database is valid, when you put the following code in the template, it will retrieve the <object>, <img>, or <embed> tags from the content field of the object, or an empty string if no media is found there:

{% load utility_tags %}
{{ object.content|get_first_media }}

At first, we define the compiled regular expression as media_file_regex; then, in the filter, we perform a search for that regular expression pattern. By default, the result would show the <, >, and & symbols escaped as &lt;, &gt;, and &amp; entities. But we use the mark_safe function that marks the result as safe HTML, ready to be shown in the template without escaping.

There's more...

It is very easy to extend this filter to also extract the <iframe> tags (which are more recently being used by Vimeo and YouTube for embedded videos) or the HTML5 <audio> and <video> tags. Just modify the regular expression like this:

media_file_regex = re.compile(
    r"<iframe .+?</iframe>|"
    r"<audio .+?</audio>|<video .+?</video>|"
    r"<object .+?</object>|<(img|embed) [^>]+>"
)

See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to humanize URLs recipe

Creating a template filter to humanize URLs

Usually, common web users enter URLs into address fields without the protocol and trailing slashes.
In this recipe, we will create a humanize_url filter used to present URLs to the user in a shorter format, truncating very long addresses, just like what Twitter does with the links in tweets.

Getting ready

As in the previous recipes, we will start with the utils app that should be set in INSTALLED_APPS in the settings, and should contain the templatetags package.

How to do it...

In the FILTERS section of the utility_tags.py template library in the utils app, let's add a filter named humanize_url and register it:

# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import re
from django import template

register = template.Library()

### FILTERS ###

@register.filter
def humanize_url(url, letter_count):
    """ Returns a shortened human-readable URL """
    letter_count = int(letter_count)
    re_start = re.compile(r"^https?://")
    re_end = re.compile(r"/$")
    url = re_end.sub("", re_start.sub("", url))
    if len(url) > letter_count:
        url = u"%s…" % url[:letter_count - 1]
    return url

How it works...

We can use the humanize_url filter in any template like this:

{% load utility_tags %}
<a href="{{ object.website }}" target="_blank">
    {{ object.website|humanize_url:30 }}
</a>

The filter uses regular expressions to remove the leading protocol and the trailing slash, and then shortens the URL to the given number of letters, adding an ellipsis to the end if the URL doesn't fit into the specified letter count.

See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to extract the first media object recipe
- The Creating a template tag to include a template if it exists recipe

Creating a template tag to include a template if it exists

Django has the {% include %} template tag that renders and includes another template. However, in some particular situations, there is a problem: an error is raised if the template does not exist.
In this recipe, we will show you how to create a {% try_to_include %} template tag that includes another template, but fails silently if there is no such template.

Getting ready

We will start again with the utils app that should be installed and ready for custom template tags.

How to do it...

Template tags consist of two things: the function parsing the arguments of the template tag, and the node class that is responsible for the logic of the template tag as well as for the output. Perform the following steps:

First, let's create the function parsing the template-tag arguments:

# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django import template
from django.template.loader import get_template

register = template.Library()

### TAGS ###

@register.tag
def try_to_include(parser, token):
    """
    Usage: {% try_to_include "sometemplate.html" %}
    This will fail silently if the template doesn't exist.
    If it does, it will be rendered with the current context.
    """
    try:
        tag_name, template_name = token.split_contents()
    except ValueError:
        raise template.TemplateSyntaxError, \
            "%r tag requires a single argument" % token.contents.split()[0]
    return IncludeNode(template_name)

Then, we need the node class in the same file, as follows:

class IncludeNode(template.Node):
    def __init__(self, template_name):
        self.template_name = template_name

    def render(self, context):
        try:
            # Loading the template and rendering it
            template_name = template.resolve_variable(self.template_name, context)
            included_template = get_template(template_name).render(context)
        except template.TemplateDoesNotExist:
            included_template = ""
        return included_template

How it works...

The {% try_to_include %} template tag expects one argument, that is, template_name. So, in the try_to_include function, we try to assign the split contents of the token to just the tag_name variable (which is "try_to_include") and the template_name variable. If this doesn't work, a template syntax error is raised.
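To make that parsing step concrete outside of Django, here is a hedged plain-Python sketch of the same single-argument check; the TemplateSyntaxError class below is a stand-in for Django's, and str.split() is a simplification of token.split_contents():

```python
class TemplateSyntaxError(Exception):
    """Stand-in for django.template.TemplateSyntaxError."""


def parse_try_to_include(token_contents):
    """Expect exactly 'try_to_include <template_name>'; raise otherwise."""
    bits = token_contents.split()
    if len(bits) != 2:
        raise TemplateSyntaxError(
            "%r tag requires a single argument" % bits[0])
    tag_name, template_name = bits
    return template_name
```

Calling parse_try_to_include('try_to_include "sometemplate.html"') returns the quoted template name, while passing extra arguments raises the syntax error, mirroring the behavior of the real parsing function.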
The function returns the IncludeNode object, which gets the template_name field for later usage. In the render method of IncludeNode, we resolve the template_name variable. If a context variable was passed to the template tag, then its value will be used here for template_name. If a quoted string was passed to the template tag, then the content within quotes will be used for template_name. Lastly, we try to load the template and render it with the current template context. If that doesn't work, an empty string is returned. There are at least two situations where we could use this template tag:

When including a template whose path is defined in a model, as follows:

{% load utility_tags %}
{% try_to_include object.template_path %}

When including a template whose path is defined with the {% with %} template tag somewhere high in the template context variable's scope. This is especially useful when you need to create custom layouts for plugins in the placeholder of a template in Django CMS:

# templates/cms/start_page.html
{% with editorial_content_template_path="cms/plugins/editorial_content/start_page.html" %}
    {% placeholder "main_content" %}
{% endwith %}

# templates/cms/plugins/editorial_content.html
{% load utility_tags %}
{% if editorial_content_template_path %}
    {% try_to_include editorial_content_template_path %}
{% else %}
    <div><!-- Some default presentation of editorial content plugin --></div>
{% endif %}

There's more...

You can use the {% try_to_include %} tag as well as the default {% include %} tag to include templates that extend other templates. This is beneficial for large-scale portals where you have different kinds of lists in which complex items share the same structure as widgets but have a different source of data.
For example, in the artist list template, you can include the artist item template as follows:

{% load utility_tags %}
{% for object in object_list %}
    {% try_to_include "artists/includes/artist_item.html" %}
{% endfor %}

This template will extend from the item base as follows:

{# templates/artists/includes/artist_item.html #}
{% extends "utils/includes/item_base.html" %}

{% block item_title %}
    {{ object.first_name }} {{ object.last_name }}
{% endblock %}

The item base defines the markup for any item and also includes a Like widget, as follows:

{# templates/utils/includes/item_base.html #}
{% load likes_tags %}
<h3>{% block item_title %}{% endblock %}</h3>
{% if request.user.is_authenticated %}
    {% like_widget for object %}
{% endif %}

See also

- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to load a QuerySet in a template

Most often, the content that should be shown in a web page will have to be defined in the view. If this is content to show on every page, it is logical to create a context processor. Another situation is when you need to show additional content such as the latest news or a random quote on some specific pages, for example, the start page or the details page of an object. In this case, you can load the necessary content with the {% get_objects %} template tag, which we will implement in this recipe.

Getting ready

Once again, we will start with the utils app that should be installed and ready for custom template tags.

How to do it...

Template tags consist of a function parsing the arguments passed to the tag and a node class that renders the output of the tag or modifies the template context.
Perform the following steps:

First, let's create the function parsing the template-tag arguments, as follows:

# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django.db import models
from django import template

register = template.Library()

### TAGS ###

@register.tag
def get_objects(parser, token):
    """
    Gets a queryset of objects of the model specified by app and model names

    Usage:
        {% get_objects [<manager>.]<method> from <app_name>.<model_name> [limit <amount>] as <var_name> %}

    Examples:
        {% get_objects latest_published from people.Person limit 3 as people %}
        {% get_objects site_objects.all from news.Article limit 3 as articles %}
        {% get_objects site_objects.all from news.Article as articles %}
    """
    amount = None
    try:
        tag_name, manager_method, str_from, appmodel, str_limit, \
            amount, str_as, var_name = token.split_contents()
    except ValueError:
        try:
            tag_name, manager_method, str_from, appmodel, str_as, \
                var_name = token.split_contents()
        except ValueError:
            raise template.TemplateSyntaxError, \
                "get_objects tag requires a following syntax: " \
                "{% get_objects [<manager>.]<method> from " \
                "<app_name>.<model_name> [limit <amount>] as <var_name> %}"
    try:
        app_name, model_name = appmodel.split(".")
    except ValueError:
        raise template.TemplateSyntaxError, \
            "get_objects tag requires application name and " \
            "model name separated by a dot"
    model = models.get_model(app_name, model_name)
    return ObjectsNode(model, manager_method, amount, var_name)

Then, we create the node class in the same file, as follows:

class ObjectsNode(template.Node):
    def __init__(self, model, manager_method, amount, var_name):
        self.model = model
        self.manager_method = manager_method
        self.amount = amount
        self.var_name = var_name

    def render(self, context):
        if "." in self.manager_method:
            manager, method = self.manager_method.split(".")
        else:
            manager = "_default_manager"
            method = self.manager_method
        qs = getattr(
            getattr(self.model, manager),
            method,
            self.model._default_manager.none,
        )()
        if self.amount:
            amount = template.resolve_variable(self.amount, context)
            context[self.var_name] = qs[:amount]
        else:
            context[self.var_name] = qs
        return ""

How it works...

The {% get_objects %} template tag loads a QuerySet defined by the manager method from a specified app and model, limits the result to the specified amount, and saves the result to a context variable. This is the simplest example of how to use the template tag that we have just created. It will load five news articles in any template using the following snippet:

{% load utility_tags %}
{% get_objects all from news.Article limit 5 as latest_articles %}
{% for article in latest_articles %}
    <a href="{{ article.get_url_path }}">{{ article.title }}</a>
{% endfor %}

This uses the all method of the default objects manager of the Article model, and will sort the articles by the ordering attribute defined in the Meta class. A more advanced example would require creating a custom manager with a custom method to query objects from the database. A manager is an interface that provides database query operations to models. Each model has at least one manager, called objects by default.
As an example, let's create the Artist model, which has a draft or published status, and a new manager, custom_manager, which allows you to select random published artists:

# artists/models.py
# -*- coding: UTF-8 -*-
from django.db import models
from django.utils.translation import ugettext_lazy as _

STATUS_CHOICES = (
    ('draft', _("Draft")),
    ('published', _("Published")),
)

class ArtistManager(models.Manager):
    def random_published(self):
        return self.filter(status="published").order_by('?')

class Artist(models.Model):
    # ...
    status = models.CharField(_("Status"), max_length=20, choices=STATUS_CHOICES)
    custom_manager = ArtistManager()

To load a random published artist, you add the following snippet to any template:

{% load utility_tags %}
{% get_objects custom_manager.random_published from artists.Artist limit 1 as random_artists %}
{% for artist in random_artists %}
    {{ artist.first_name }} {{ artist.last_name }}
{% endfor %}

Let's look at the code of the template tag. In the parsing function, one of two formats is expected: with the limit and without it. The string is parsed, the model is recognized, and then the components of the template tag are passed to the ObjectsNode class. In the render method of the node class, we check the manager's name and its method's name. If the manager is not defined, _default_manager will be used, which is, in most cases, the same as objects. After that, we call the manager method and fall back to an empty QuerySet if the method doesn't exist. If the limit is defined, we resolve its value and limit the QuerySet. Lastly, we save the QuerySet to the context variable.
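The manager-and-method resolution in the render method can be isolated as a small plain-Python helper. This is an illustrative sketch, not part of the recipe's code; split_manager_method is a hypothetical name:

```python
def split_manager_method(manager_method):
    """Resolve '[manager.]method' into a (manager, method) pair,
    falling back to Django's _default_manager when no manager is named."""
    if "." in manager_method:
        manager, method = manager_method.split(".")
    else:
        manager = "_default_manager"
        method = manager_method
    return manager, method
```

For example, split_manager_method("custom_manager.random_published") gives ("custom_manager", "random_published"), while split_manager_method("all") gives ("_default_manager", "all"), which is why the template tag works both with and without an explicit manager.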
See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to parse content as a template

In this recipe, we will create a template tag named {% parse %}, which allows you to put template snippets into the database. This is valuable when you want to provide different content for authenticated and non-authenticated users, when you want to include a personalized salutation, or when you don't want to hardcode media paths in the database.

Getting ready

No surprise, we will start with the utils app that should be installed and ready for custom template tags.

How to do it...

Template tags consist of two things: the function parsing the arguments of the template tag and the node class that is responsible for the logic of the template tag as well as for the output. Perform the following steps:

First, let's create the function parsing the template-tag arguments, as follows:

# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django import template

register = template.Library()

### TAGS ###

@register.tag
def parse(parser, token):
    """
    Parses the value as a template and prints it or saves to a variable

    Usage:
        {% parse <template_value> [as <variable>] %}

    Examples:
        {% parse object.description %}
        {% parse header as header %}
        {% parse "{{ MEDIA_URL }}js/" as js_url %}
    """
    bits = token.split_contents()
    tag_name = bits.pop(0)
    try:
        template_value = bits.pop(0)
        var_name = None
        if len(bits) == 2:
            bits.pop(0)  # remove the word "as"
            var_name = bits.pop(0)
    except ValueError:
        raise template.TemplateSyntaxError, \
            "parse tag requires a following syntax: " \
            "{% parse <template_value> [as <variable>] %}"
    return ParseNode(template_value, var_name)

Then, we create the node class in the same file, as follows:

class ParseNode(template.Node):
    def __init__(self, template_value, var_name):
        self.template_value = template_value
        self.var_name = var_name

    def render(self, context):
        template_value = template.resolve_variable(self.template_value, context)
        t = template.Template(template_value)
        context_vars = {}
        for d in list(context):
            for var, val in d.items():
                context_vars[var] = val
        result = t.render(template.RequestContext(context['request'], context_vars))
        if self.var_name:
            context[self.var_name] = result
            return ""
        return result

How it works...

The {% parse %} template tag allows you to parse a value as a template and to render it immediately or to save it as a context variable. If we have an object with a description field, which can contain template variables or logic, then we can parse it and render it using the following code:

{% load utility_tags %}
{% parse object.description %}

It is also possible to define a value to parse using a quoted string like this:

{% load utility_tags %}
{% parse "{{ STATIC_URL }}site/img/" as img_path %}
<img src="{{ img_path }}someimage.png" alt="" />

Let's have a look at the code of the template tag. The parsing function checks the arguments of the template tag bit by bit. At first, we expect the name parse, then the template value, then optionally the word as, and lastly the context variable name. The template value and the variable name are passed to the ParseNode class. The render method of that class at first resolves the value of the template variable and creates a template object out of it. Then, it renders the template with all the context variables. If the variable name is defined, the result is saved to it; otherwise, the result is shown immediately.

See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to modify request query parameters

Django has a convenient and flexible system to create canonical, clean URLs just by adding regular expression rules in the URL configuration files.
But there is a lack of built-in mechanisms to manage query parameters. Views such as search or filterable object lists need to accept query parameters to drill down through filtered results using another parameter or to go to another page. In this recipe, we will create a template tag named {% append_to_query %}, which lets you add, change, or remove parameters of the current query.

Getting ready

Once again, we start with the utils app that should be set in INSTALLED_APPS and should contain the templatetags package. Also, make sure that you have the request context processor set for the TEMPLATE_CONTEXT_PROCESSORS setting, as follows:

# settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
    "django.contrib.auth.context_processors.auth",
    "django.core.context_processors.debug",
    "django.core.context_processors.i18n",
    "django.core.context_processors.media",
    "django.core.context_processors.static",
    "django.core.context_processors.tz",
    "django.contrib.messages.context_processors.messages",
    "django.core.context_processors.request",
)

How to do it...

For this template tag, we will be using the simple_tag decorator that parses the components and requires you to define just the rendering function, as follows:

# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import urllib
from django import template
from django.utils.encoding import force_str

register = template.Library()

### TAGS ###

@register.simple_tag(takes_context=True)
def append_to_query(context, **kwargs):
    """ Renders a link with modified current query parameters """
    query_params = context['request'].GET.copy()
    for key, value in kwargs.items():
        query_params[key] = value
    query_string = u""
    if len(query_params):
        query_string += u"?%s" % urllib.urlencode([
            (key, force_str(value))
            for (key, value) in query_params.iteritems()
            if value
        ]).replace('&', '&amp;')
    return query_string

How it works...
The {% append_to_query %} template tag reads the current query parameters from the request.GET dictionary-like QueryDict object into a new dictionary named query_params, and loops through the keyword parameters passed to the template tag, updating the values. Then, the new query string is formed, all spaces and special characters are URL-encoded, and the ampersands connecting query parameters are escaped. This new query string is returned to the template. To read more about QueryDict objects, refer to the official Django documentation: https://docs.djangoproject.com/en/1.6/ref/request-response/#querydict-objects

Let's have a look at an example of how the {% append_to_query %} template tag can be used. If the current URL is http://127.0.0.1:8000/artists/?category=fine-art&page=1, we can use the following template tag to render a link that goes to the next page:

{% load utility_tags %}
<a href="{% append_to_query page=2 %}">2</a>

The following is the output rendered, using the preceding template tag:

<a href="?category=fine-art&amp;page=2">2</a>

Or we can use the following template tag to render a link that resets pagination and goes to another category:

{% load utility_tags i18n %}
<a href="{% append_to_query category="sculpture" page="" %}">{% trans "Sculpture" %}</a>

The following is the output rendered, using the preceding template tag:

<a href="?category=sculpture">Sculpture</a>

See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe

Summary

This article showed you how to create and use your own template filters and tags. The default Django template system is quite extensive, but there are always more things to add for different cases.

Resources for Article:

Further resources on this subject:

- Adding a developer with Django forms [Article]
- So, what is Django? [Article]
- Django JavaScript Integration: jQuery In-place Editing Using Ajax [Article]
Packt Editorial Staff
12 Oct 2014
7 min read
Installing NumPy, SciPy, matplotlib, and IPython

This article, written by Ivan Idris, author of the book Python Data Analysis, will guide you to install NumPy, SciPy, matplotlib, and IPython. We can find a mind map describing software that can be used for data analysis at https://www.xmind.net/m/WvfC/. Obviously, we can't install all of this software in this article. We will install NumPy, SciPy, matplotlib, and IPython on different operating systems.

[box type="info" align="" class="" width=""]Packt has the following books that are focused on NumPy:

- NumPy Beginner's Guide Second Edition, Ivan Idris
- NumPy Cookbook, Ivan Idris
- Learning NumPy Array, Ivan Idris [/box]

SciPy is a scientific Python library, which supplements and slightly overlaps NumPy. NumPy and SciPy historically shared their codebase but were later separated. matplotlib is a plotting library based on NumPy. IPython provides an architecture for interactive computing. The most notable part of this project is the IPython shell.

Software used

The software used in this article is based on Python, so it is required to have Python installed. On some operating systems, Python is already installed. You, however, need to check whether the Python version is compatible with the software version you want to install. There are many implementations of Python, including commercial implementations and distributions.

[box type="note" align="" class="" width=""]You can download Python from https://www.python.org/download/. On this website, we can find installers for Windows and Mac OS X, as well as source archives for Linux, Unix, and Mac OS X.[/box]

The software we will install has binary installers for Windows, various Linux distributions, and Mac OS X. There are also source distributions, if you prefer that. You need to have Python 2.4.x or above installed on your system. Python 2.7.x is currently the best Python version to have because most scientific Python libraries support it. Python 2.7 will be supported and maintained until 2020.
After that, we will have to switch to Python 3.

Installing software and setup on Windows

Installing on Windows is, fortunately, a straightforward task that we will cover in detail. You only need to download an installer, and a wizard will guide you through the installation steps. We will give steps to install NumPy here. The steps to install the other libraries are similar. The actions we will take are as follows:

Download installers for Windows from the SourceForge website (refer to the following table). The latest release versions may change, so just choose the one that fits your setup best.

Library      URL                                                 Latest Version
NumPy        http://sourceforge.net/projects/numpy/files/        1.8.1
SciPy        http://sourceforge.net/projects/scipy/files/        0.14.0
matplotlib   http://sourceforge.net/projects/matplotlib/files/   1.3.1
IPython      http://archive.ipython.org/release/                 2.0.0

Choose the appropriate version. In this example, we chose numpy-1.8.1-win32-superpack-python2.7.exe. Open the EXE installer by double-clicking on it. Now, we can see a description of NumPy and its features. Click on the Next button. If you have Python installed, it should automatically be detected. If it is not detected, maybe your path settings are wrong. Click on the Next button if Python is found; otherwise, click on the Cancel button and install Python (NumPy cannot be installed without Python). Click on the Next button. This is the point of no return. Well, kind of, but it is best to make sure that you are installing to the proper directory, and so on. Now the real installation starts. This may take a while.

[box type="note" align="" class="" width=""]The situation around installers is rapidly evolving. Other alternatives exist in various stages of maturity (see https://www.scipy.org/install.html). It might be necessary to put the msvcp71.dll file in your C:\Windows\system32 directory.
You can get it from http://www.dll-files.com/dllindex/dll-files.shtml?msvcp71.[/box]

Installing software and setup on Linux

Installing the recommended software on Linux depends on the distribution you have. We will discuss how you would install NumPy from the command line, although you could probably use graphical installers; it depends on your distribution (distro). The commands to install matplotlib, SciPy, and IPython are the same; only the package names are different. Installing matplotlib, SciPy, and IPython is recommended, but optional. Most Linux distributions have NumPy packages. We will go through the necessary steps for some of the popular Linux distros:

Run the following instruction from the command line for installing NumPy on Red Hat:

$ yum install python-numpy

To install NumPy on Mandriva, run the following command-line instruction:

$ urpmi python-numpy

To install NumPy on Gentoo, run the following command-line instruction:

$ sudo emerge numpy

To install NumPy on Debian or Ubuntu, we need to type the following:

$ sudo apt-get install python-numpy

The following table gives an overview of the Linux distributions and corresponding package names for NumPy, SciPy, matplotlib, and IPython.

Linux distribution   NumPy                              SciPy          matplotlib          IPython
Arch Linux           python-numpy                       python-scipy   python-matplotlib   ipython
Debian               python-numpy                       python-scipy   python-matplotlib   ipython
Fedora               numpy                              python-scipy   python-matplotlib   ipython
Gentoo               dev-python/numpy                   scipy          matplotlib          ipython
OpenSUSE             python-numpy, python-numpy-devel   python-scipy   python-matplotlib   ipython
Slackware            numpy                              scipy          matplotlib          ipython

Installing software and setup on Mac OS X

You can install NumPy, matplotlib, and SciPy on the Mac with a graphical installer or from the command line with a port manager such as MacPorts, depending on your preference. A prerequisite is to install Xcode, as it is not part of OS X releases.
We will install NumPy with a GUI installer using the following steps:

We can get a NumPy installer from the SourceForge website http://sourceforge.net/projects/numpy/files/. Similar files exist for matplotlib and SciPy. Just change numpy in the previous URL to scipy or matplotlib. IPython didn't have a GUI installer at the time of writing. Download the appropriate DMG file; usually the latest one is the best. Another alternative is the SciPy Superpack (https://github.com/fonnesbeck/ScipySuperpack). Whichever option you choose, it is important to make sure that updates which impact the system Python library don't negatively influence already installed software, by not building against the Python library provided by Apple.

Open the DMG file (in this example, numpy-1.8.1-py2.7-python.org-macosx10.6.dmg). Double-click on the icon of the opened box, the one having a subscript that ends with .mpkg. We will be presented with the welcome screen of the installer. Click on the Continue button to go to the Read Me screen, where we will be presented with a short description of NumPy. Click on the Continue button to go to the License screen. Read the license, click on the Continue button, and then on the Accept button when prompted to accept the license. Continue through the next screens and click on the Finish button at the end.

Alternatively, we can install NumPy, SciPy, matplotlib, and IPython through the MacPorts route, with Fink or Homebrew. The following installation steps install all these packages.

[box type="info" align="" class="" width=""]For installing with MacPorts, type the following command:

sudo port install py-numpy py-scipy py-matplotlib py-ipython [/box]

Installing with setuptools

If you have pip, you can install NumPy, SciPy, matplotlib, and IPython with the following commands.
pip install numpy
pip install scipy
pip install matplotlib
pip install ipython

It may be necessary to prepend sudo to these commands if your current user doesn't have sufficient rights on your system.

Summary

In this article, we installed NumPy, SciPy, matplotlib, and IPython on Windows, Mac OS X, and Linux.

Resources for Article:

Further resources on this subject:
Plotting Charts with Images and Maps
Importing Dynamic Data
Python 3: Designing a Tasklist Application

Packt
10 Oct 2014
9 min read

Web API and Client Integration

In this article, written by Geoff Webber-Cross, the author of Learning Microsoft Azure, we'll create an on-premise production management client Windows application, allowing manufacturing staff to view and update order and batch data, and a web service to access data in the production SQL database and send order updates to the Service Bus topic.

(For more resources related to this topic, see here.)

The site's main feature is an ASP.NET Web API 2 HTTP service that allows the clients to read order and batch data. The site will also host a SignalR (http://signalr.net/) hub that allows the client to update order and batch statuses and have the changes broadcast to all the on-premise clients to keep them synchronized in real time. Both the Web API and SignalR hubs will use Azure Active Directory authentication. We'll cover the following topic in this article:

Building a client application

Building a client application

For the client application, we'll create a WPF application to display batches and orders and allow us to change their state. We'll use MVVM Light again, like we did for the message simulator we created in the sales solution, to help us implement a neat MVVM pattern. We'll create a number of data services to get data from the API using Azure AD authentication.

Preparing the WPF project

We'll create a WPF application and install NuGet packages for MVVM Light, JSON.NET, and Azure AD authentication in the following procedure (for the Express version of Visual Studio, you'll need Visual Studio Express for Windows Desktop):

Add a WPF project to the solution called ManagementApplication.
In the NuGet Package Manager Console, enter the following command to install MVVM Light:

install-package mvvmlight

Now, enter the following command to install the Microsoft.IdentityModel.Clients.ActiveDirectory package:

install-package Microsoft.IdentityModel.Clients.ActiveDirectory

Now, enter the following command to install JSON.NET:

install-package newtonsoft.json

Enter the following command to install the SignalR client package (note that this is different from the server package):

install-package Microsoft.AspNet.SignalR.Client

Add a project reference to ProductionModel by right-clicking on the References folder and selecting Add Reference, check ProductionModel on the Solution | Projects tab, and click on OK.

Add project references to System.Configuration and System.Net.Http by right-clicking on the References folder and selecting Add Reference, check System.Configuration and System.Net.Http on the Assemblies | Framework tab, and click on OK.

In the project's Settings.settings file, add a string setting called Token to store the user's auth token.
Add the following appSettings block to App.config; I've put comments to help you understand (and remember) what the settings stand for, and added commented-out settings for the Azure API:

<appSettings>
    <!-- AD Tenant -->
    <add key="ida:Tenant" value="azurebakery.onmicrosoft.com" />

    <!-- The target API AD application APP ID (get it from the config tab in the portal) -->
    <!-- Local -->
    <add key="ida:Audience" value="https://azurebakery.onmicrosoft.com/ManagementWebApi" />
    <!-- Azure -->
    <!-- <add key="ida:Audience" value="https://azurebakery.onmicrosoft.com/WebApp-azurebakeryproduction.azurewebsites.net" /> -->

    <!-- The client ID of THIS application (get it from the config tab in the portal) -->
    <add key="ida:ClientID" value="1a1867d4-9972-45bb-a9b8-486f03ad77e9" />

    <!-- Callback URI for OAuth workflow -->
    <add key="ida:CallbackUri" value="https://azurebakery.com" />

    <!-- The URI of the Web API -->
    <!-- Local -->
    <add key="serviceUri" value="https://localhost:44303/" />
    <!-- Azure -->
    <!-- <add key="serviceUri" value="https://azurebakeryproduction.azurewebsites.net/" /> -->
</appSettings>

Add the MVVM Light ViewModelLocator to Application.Resources in App.xaml:

<Application.Resources>
    <vm:ViewModelLocator x:Key="Locator" d:IsDataSource="True" />
</Application.Resources>

Then, in MainWindow.xaml, bind the window's DataContext to the locator's Main view model:

<Window ...
        DataContext="{Binding Source={StaticResource Locator}, Path=Main}"
        Title="Production Management Application" Height="350" Width="525">

Creating an authentication base class

Since the Web API and SignalR hubs use Azure AD authentication, we'll create services to interact with both, and a common base class to ensure that all requests are authenticated.
This class uses the AuthenticationContext.AcquireToken method to launch a built-in login dialog that handles the OAuth2 workflow and returns an authentication token on successful login:

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using System.Configuration;
using System.Diagnostics;
using System.Net;

namespace AzureBakery.Production.ManagementApplication.Services
{
    public abstract class AzureAdAuthBase
    {
        protected AuthenticationResult Token = null;

        protected readonly string ServiceUri = null;

        protected AzureAdAuthBase()
        {
            this.ServiceUri = ConfigurationManager.AppSettings["serviceUri"];
#if DEBUG
            // This will accept temp SSL certificates
            ServicePointManager.ServerCertificateValidationCallback +=
                (se, cert, chain, sslerror) => true;
#endif
        }

        protected bool Login()
        {
            // Our AD tenant domain name
            var tenantId = ConfigurationManager.AppSettings["ida:Tenant"];

            // Web API resource ID (the resource we want to use)
            var resourceId = ConfigurationManager.AppSettings["ida:Audience"];

            // Client app CLIENT ID (the ID of the AD app for this client application)
            var clientId = ConfigurationManager.AppSettings["ida:ClientID"];

            // Callback URI
            var callback = new Uri(ConfigurationManager.AppSettings["ida:CallbackUri"]);

            var authContext = new AuthenticationContext(
                string.Format("https://login.windows.net/{0}", tenantId));

            if (this.Token == null)
            {
                // See if we have a cached token
                var token = Properties.Settings.Default.Token;
                if (!string.IsNullOrWhiteSpace(token))
                    this.Token = AuthenticationResult.Deserialize(token);
            }

            if (this.Token == null)
            {
                try
                {
                    // Acquire a fresh token - this will get the user to log in
                    this.Token = authContext.AcquireToken(resourceId, clientId, callback);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine(ex.ToString());

                    return false;
                }
            }
            else if (this.Token.ExpiresOn < DateTime.UtcNow)
            {
                // Refresh the existing token - this will not require login
                this.Token = authContext.AcquireTokenByRefreshToken(
                    this.Token.RefreshToken, clientId);
            }

            if (this.Token != null && this.Token.ExpiresOn > DateTime.UtcNow)
            {
                // Store token
                Properties.Settings.Default.Token = this.Token.Serialize(); // This should be encrypted
                Properties.Settings.Default.Save();

                return true;
            }

            // Clear token
            this.Token = null;

            Properties.Settings.Default.Token = null;
            Properties.Settings.Default.Save();

            return false;
        }
    }
}

The token is stored in user settings and refreshed if necessary, so the users don't have to log in to the application every time they use it. The Login method can be called by derived service classes every time a service is called, to check whether the user is logged in and whether there is a valid token to use.

Creating a data service

We'll create a DataService class that derives from the AzureAdAuthBase class we just created and gets data from the Web API service using AD authentication.
First, we'll create a generic helper method that calls an API GET action using the HttpClient class, with the authentication token added to the Authorization header, and deserializes the returned JSON object into a .NET-typed object T:

private async Task<T> GetData<T>(string action)
{
    if (!base.Login())
        return default(T);

    // Call Web API
    var authHeader = this.Token.CreateAuthorizationHeader();
    var client = new HttpClient();
    var uri = string.Format("{0}{1}", this.ServiceUri, string.Format("api/{0}", action));
    var request = new HttpRequestMessage(HttpMethod.Get, uri);
    request.Headers.TryAddWithoutValidation("Authorization", authHeader);

    // Get response
    var response = await client.SendAsync(request);
    var responseString = await response.Content.ReadAsStringAsync();

    // Deserialize JSON
    var data = await Task.Factory.StartNew(() =>
        JsonConvert.DeserializeObject<T>(responseString));

    return data;
}

Once we have this, we can quickly create methods for getting order and batch data like this:

public async Task<IEnumerable<Order>> GetOrders()
{
    return await this.GetData<IEnumerable<Order>>("orders");
}

public async Task<IEnumerable<Batch>> GetBatches()
{
    return await this.GetData<IEnumerable<Batch>>("batches");
}

This service implements an IDataService interface and is registered in the ViewModelLocator class, ready to be injected into our view models like this:

SimpleIoc.Default.Register<IDataService, DataService>();

Creating a SignalR service

We'll create another service derived from the AzureAdAuthBase class, called ManagementService, which sends updated orders to the SignalR hub and receives updates from the hub originating from other clients to keep the UI updated in real time.
First, we'll create a Register method, which creates a hub proxy using our authorization token from the base class, registers for updates from the hub, and starts the connection:

private IHubProxy _proxy = null;

public event EventHandler<Order> OrderUpdated;
public event EventHandler<Batch> BatchUpdated;

public ManagementService()
{
}

public async Task Register()
{
    // Login using AD OAuth
    if (!this.Login())
        return;

    // Get header from auth token
    var authHeader = this.Token.CreateAuthorizationHeader();

    // Create hub proxy and add auth token
    var cnString = string.Format("{0}signalr", base.ServiceUri);
    var hubConnection = new HubConnection(cnString, useDefaultUrl: false);
    this._proxy = hubConnection.CreateHubProxy("managementHub");
    hubConnection.Headers.Add("Authorization", authHeader);

    // Register for order updates
    this._proxy.On<Order>("updateOrder", order =>
    {
        this.OnOrderUpdated(order);
    });

    // Register for batch updates
    this._proxy.On<Batch>("updateBatch", batch =>
    {
        this.OnBatchUpdated(batch);
    });

    // Start hub connection
    await hubConnection.Start();
}

The OnOrderUpdated and OnBatchUpdated methods raise events to notify subscribers about updates.
Now, add two methods that call the hub methods we created in the website using the IHubProxy.Invoke<T> method:

public async Task<bool> UpdateOrder(Order order)
{
    // Invoke the updateOrder method on the hub and report whether it succeeded
    return await this._proxy.Invoke<Order>("updateOrder", order)
        .ContinueWith(task => !task.IsFaulted);
}

public async Task<bool> UpdateBatch(Batch batch)
{
    // Invoke the updateBatch method on the hub and report whether it succeeded
    return await this._proxy.Invoke<Batch>("updateBatch", batch)
        .ContinueWith(task => !task.IsFaulted);
}

This service implements an IManagementService interface and is registered in the ViewModelLocator class, ready to be injected into our view models like this:

SimpleIoc.Default.Register<IManagementService, ManagementService>();

Testing the application

To test the application locally, we need to start the Web API project and the WPF client application at the same time. So, under the Startup Project section of the Solution Properties dialog, check Multiple startup projects, select the two applications, and click on OK:

Once running, we can easily debug both applications simultaneously. To test the application with the service running in the cloud, we need to deploy the service to the cloud and then change the settings in the client's app.config file (remember, we put the local and Azure settings in the config with the Azure settings commented out, so swap them around). To debug the client against the Azure service, make sure that only the client application is running (select Single startup project in the Solution Properties dialog).

Summary

We learned how to use a Web API to enable the production management Windows client application to access data from our production database, and a SignalR hub to handle order and batch changes, keeping all clients updated and messaging the Service Bus topic.
Resources for Article:

Further resources on this subject:
Using the Windows Azure Platform PowerShell Cmdlets [Article]
Windows Azure Mobile Services - Implementing Push Notifications using [Article]
Using Azure BizTalk Features [Article]

Packt
10 Oct 2014
25 min read

Using Sensors

In this article by Leon Anavi, author of the Tizen Cookbook, we will cover the following topics:

Using location-based services to display current location
Getting directions
Geocoding
Reverse geocoding
Calculating distance
Detecting device motion
Detecting device orientation
Using the Vibration API

(For more resources related to this topic, see here.)

The data provided by the hardware sensors of Tizen devices can be useful for many mobile applications. In this article, you will learn how to retrieve the geographic location of Tizen devices using the assisted GPS, how to detect changes in device orientation and motion, and how to integrate map services into Tizen web applications.

Most of the examples related to maps and navigation use Google APIs. Other service providers, such as Nokia HERE, OpenStreetMap, and Yandex, also offer APIs with similar capabilities and can be used as an alternative to Google in Tizen web applications. At the time of writing this book, it was announced that Nokia HERE had joined the Tizen association. Some Tizen devices will be shipped with built-in navigation applications powered by Nokia HERE. The smart watch Gear S is the first Tizen wearable device from Samsung that comes out of the box with an application called Navigator, which is developed with Nokia HERE. Explore the full capabilities of the Nokia HERE JavaScript APIs if you are interested in their integration into your Tizen web application at https://developer.here.com/javascript-apis.

OpenStreetMap also deserves special attention because it is a high-quality platform and a very successful community-driven project. The main advantage of OpenStreetMap is that its usage is completely free. The recipe about reverse geocoding in this article demonstrates address lookup using two different approaches: through the Google API and through the OpenStreetMap API.
Using location-based services to display current location

By following the example provided in this recipe, you will master the HTML5 Geolocation API and learn how to retrieve the coordinates of the current location of a device in a Tizen web application.

Getting ready

Ensure that the positioning capabilities are turned on. On a Tizen device or Emulator, open Settings, select Locations, and turn on both GPS (if it is available) and Network position, as shown in the following screenshot:

Enabling GPS and network position from Tizen Settings

How to do it...

Follow these steps to retrieve the location in a Tizen web application:

1. Implement JavaScript for handling errors:

function showError(err) {
    console.log('Error ' + err.code + ': ' + err.message);
}

2. Implement JavaScript for processing the retrieved location:

function showLocation(location) {
    console.log('latitude: ' + location.coords.latitude +
        ' longitude: ' + location.coords.longitude);
}

3. Implement a JavaScript function that searches for the current position using the HTML5 Geolocation API:

function retrieveLocation() {
    if (navigator.geolocation) {
        navigator.geolocation.getCurrentPosition(showLocation, showError);
    }
}

4. At an appropriate place in the source code of the application, invoke the function created in the previous step:

retrieveLocation();

How it works

The getCurrentPosition() method of the HTML5 Geolocation API is used in the retrieveLocation() function to retrieve the coordinates of the current position of the device. The functions showLocation() and showError() are provided as callbacks, which are invoked on success or failure. An instance of the Position interface is provided as an argument to showLocation().
This interface has two properties:

coords: This specifies an object that defines the retrieved position
timestamp: This specifies the date and time when the position was retrieved

The getCurrentPosition() method accepts an instance of the PositionOptions interface as a third, optional argument. This argument should be used for setting specific options such as enableHighAccuracy, timeout, and maximumAge. Explore the Geolocation API specification if you are interested in more details regarding the attributes of the discussed interface at http://www.w3.org/TR/geolocation-API/#position-options.

There is no need to add any specific permissions explicitly in config.xml. When an application that implements the code from this recipe is launched for the first time, it will ask for permission to access the location, as shown in the following screenshot:

A request to access location in a Tizen web application

If you are developing a location-based application and want to debug it using the Tizen Emulator, use the Event Injector to set the position.

There's more...

A map view provided by the Google Maps JavaScript API v3 can easily be embedded into a Tizen web application. An internet connection is required to use the API, but there is no need to install additional SDKs or tools from Google. Follow these instructions to display a map and a marker:

Make sure that the application can access the Google API. For example, you can enable access to any website by adding the following line to config.xml:

<access origin="*" subdomains="true"></access>

Visit https://code.google.com/apis/console to get the API keys. Click on Services and activate Google Maps API v3. After that, click on API Access and copy Key for browser apps. Its value will be used in the source code of the application.
Implement the following source code to show a map inside a div with the ID map-canvas:

<style type="text/css">
#map-canvas {
    width: 320px;
    height: 425px;
}
</style>
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=<API Key>&sensor=false"></script>

Replace <API Key> in the preceding line with the value of the key obtained in the previous step.

<script type="text/javascript">
function initialize(nLatitude, nLongitude) {
    var mapOptions = {
        center: new google.maps.LatLng(nLatitude, nLongitude),
        zoom: 14
    };
    var map = new google.maps.Map(document.getElementById("map-canvas"), mapOptions);
    var marker = new google.maps.Marker({
        position: new google.maps.LatLng(nLatitude, nLongitude),
        map: map
    });
}
</script>

In the HTML of the application, create the following div element:

<div id="map-canvas"></div>

Provide latitude and longitude to the function and execute it at an appropriate location. For example, these are the coordinates of a location in Westminster, London:

initialize(51.501725, -0.126109);

The following screenshot demonstrates a Tizen web application that has been created by following the preceding guidelines:

Google Map in a Tizen web application

Combine the tutorial from the How to do it section of the recipe with these instructions to display a map with the current location.

See also

The source code of a simple Tizen web application following the tutorial from this recipe is provided alongside the book. Feel free to use it as you wish. More details are available in the W3C specification of the HTML5 Geolocation API at http://www.w3.org/TR/geolocation-API/. To learn more and explore the full capabilities of the Google Maps JavaScript API v3, please visit https://developers.google.com/maps/documentation/javascript/tutorial.

Getting directions

Navigation is another common task for mobile applications.
The Google Directions API allows web and mobile developers to retrieve a route between locations by sending an HTTP request. It is mandatory to specify an origin and a destination, but it is also possible to set waypoints. All locations can be provided either by exact coordinates or by address. An example of getting directions to reach a destination on foot is demonstrated in this recipe.

Getting ready

Before you start with the development, register an application and obtain API keys:

1. Log in to the Google Developers Console at https://code.google.com/apis/console.
2. Click on Services and turn on Directions API.
3. Click on API Access and get the value of Key for server apps, which should be used in all requests from your Tizen web application to the API.

For more information about the API keys for the Directions API, please visit https://developers.google.com/maps/documentation/directions/#api_key.

How to do it...

Use the following source code to retrieve and display step-by-step instructions on how to walk from one location to another using the Google Directions API:

1. Allow the application to access websites by adding the following line to config.xml:

<access origin="*" subdomains="true"></access>

2. Create an HTML unordered list:

<ul id="directions" data-role="listview"></ul>

3. Create JavaScript that will load the retrieved directions:

function showDirections(data) {
    if (!data || !data.routes || (0 == data.routes.length)) {
        console.log('Unable to provide directions.');
        return;
    }
    var directions = data.routes[0].legs[0].steps;
    for (nStep = 0; nStep < directions.length; nStep++) {
        var listItem = $('<li>').append($('<p>').append(directions[nStep].html_instructions));
        $('#directions').append(listItem);
    }
    $('#directions').listview('refresh');
}

4. Create a JavaScript function that sends an asynchronous HTTP (AJAX) request to the Google Maps API to retrieve directions:

function retrieveDirection(sLocationStart, sLocationEnd) {
    $.ajax({
        type: 'GET',
        url: 'https://maps.googleapis.com/maps/api/directions/json?',
        data: {
            origin: sLocationStart,
            destination: sLocationEnd,
            mode: 'walking',
            sensor: 'true',
            key: '<API key>'
        },

Do not forget to replace <API key> with the Key for server apps value provided by Google for the Directions API. Please note that a similar key has to be set in the source code of the subsequent recipes that utilize Google APIs too:

        success: showDirections,
        error: function (request, status, message) {
            console.log('Error');
        }
    });
}

5. Provide start and end locations as arguments and execute the retrieveDirection() function. For example:

retrieveDirection('Times Square, New York, NY, USA',
    'Empire State Building, 350 5th Avenue, New York, NY 10118, USA');

How it works

The first mandatory step is to allow the Tizen web application to access Google servers. After that, an HTML unordered list with the ID directions is constructed. An origin and a destination are provided to the JavaScript function retrieveDirection(). On success, the showDirections() function is invoked as a callback, and it loads step-by-step instructions on how to move from the origin to the destination. The following screenshot displays a Tizen web application with guidance on how to walk from Times Square in New York to the Empire State Building:

The Directions API is quite flexible. The mandatory parameters are origin, destination, and sensor. Numerous other options can be configured in the HTTP request using different parameters. To set the desired means of transport, use the parameter mode, which has the following options:

driving
walking
bicycling
transit (for getting directions using public transport)

By default, if the mode is not specified, its value will be set to driving. The unit system can be configured through the parameter units. The options metric and imperial are available.
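To make the parameter handling concrete, here is a small sketch of a helper that assembles a Directions API request URL from an origin, a destination, and the mode and units options described above. Note that buildDirectionsQuery is our own illustrative name, not part of the Google API, and a real request would also need the key parameter appended in the same way:

```javascript
// Sketch: build a Google Directions API request URL from the optional
// parameters discussed above. buildDirectionsQuery is a hypothetical
// helper; only origin, destination, and sensor are mandatory.
function buildDirectionsQuery(origin, destination, options) {
    options = options || {};
    var params = {
        origin: origin,
        destination: destination,
        sensor: 'true',
        mode: options.mode || 'driving', // driving is the default mode
        units: options.units             // 'metric' or 'imperial'
    };
    var parts = [];
    for (var name in params) {
        // Skip options that were not supplied
        if (params[name]) {
            parts.push(name + '=' + encodeURIComponent(params[name]));
        }
    }
    return 'https://maps.googleapis.com/maps/api/directions/json?' +
        parts.join('&');
}

// Example: a walking route with metric distances
var url = buildDirectionsQuery(
    'Times Square, New York, NY, USA',
    'Empire State Building, 350 5th Avenue, New York, NY 10118, USA',
    { mode: 'walking', units: 'metric' });
```

The resulting URL can then be passed to $.ajax() in place of the separate url and data arguments used in the How to do it section.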
The developer can also define restrictions using the parameter avoid, and the addresses of one or more intermediate points using the waypoints parameter. A pipe (|) is used as a separator if more than one address is provided.

There's more...

An application with similar features for getting directions can also be created using services from Nokia HERE. The REST API can be used in the same way as the Google Maps API. Start by acquiring credentials at http://developer.here.com/get-started. An asynchronous HTTP request should be sent to retrieve directions. Instructions on how to construct the request to the REST API are provided in its documentation at https://developer.here.com/rest-apis/documentation/routing/topics/request-constructing.html.

The Nokia HERE JavaScript API is another excellent solution for routing. Create instances of the Display and Manager classes provided by the API to create a map and a routing manager. After that, create a list of waypoints whose coordinates are defined by instances of the Coordinate class. Refer to the example provided in the user's guide of the API to learn the details at https://developer.here.com/javascript-apis/documentation/maps/topics/routing.html. The full specifications of the Display, Manager, and Coordinate classes are available at the following links:

https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.map.Display.html
https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.routing.Manager.html
https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.geo.Coordinate.html

See also

All details, options, and returned results of the Google Directions API are available at https://developers.google.com/maps/documentation/directions/.

Geocoding

Geocoding is the process of retrieving the geographical coordinates associated with an address. It is often used in mobile applications that use maps and provide navigation.
In this recipe, you will learn how to convert an address to longitude and latitude using JavaScript and AJAX requests to the Google Geocoding API.

Getting ready

You must obtain keys before you can use the Geocoding API in a Tizen web application:

1. Visit the Google Developers Console at https://code.google.com/apis/console.
2. Click on Services and turn on Geocoding API.
3. Click on API Access and get the value of Key for server apps. Use it in all requests from your Tizen web application to the API.

For more details regarding the API keys for the Geocoding API, visit https://developers.google.com/maps/documentation/geocoding/#api_key.

How to do it...

Follow these instructions to retrieve the geographic coordinates of an address in a Tizen web application using the Google Geocoding API:

1. Allow the application to access websites by adding the following line to config.xml:

<access origin="*" subdomains="true"></access>

2. Create a JavaScript function to handle results provided by the API:

function retrieveCoordinates(data) {
    if (!data || !data.results || (0 == data.results.length)) {
        console.log('Unable to retrieve coordinates');
        return;
    }
    var latitude = data.results[0].geometry.location.lat;
    var longitude = data.results[0].geometry.location.lng;
    console.log('latitude: ' + latitude + ' longitude: ' + longitude);
}

3. Create a JavaScript function that sends a request to the API:

function geocoding(address) {
    $.ajax({
        type: 'GET',
        url: 'https://maps.googleapis.com/maps/api/geocode/json?',
        data: {
            address: address,
            sensor: 'true',
            key: '<API key>'
        },

As in the previous recipes, you should again replace <API key> with the Key for server apps value provided by Google for the Geocoding API.

        success: retrieveCoordinates,
        error: function (request, status, message) {
            console.log('Error: ' + message);
        }
    });
}

4. Provide the address as an argument to the geocoding() function and invoke it.
For example:

geocoding('350 5th Avenue, New York, NY 10118, USA');

How it works

The address is passed as an argument to the geocoding() function, which sends a request to the URL of the Google Geocoding API. The URL specifies that the returned result should be serialized as JSON. The parameters of the URL contain information about the address and the API key. Additionally, there is a parameter that indicates whether the device has a sensor. In general, Tizen mobile devices are equipped with GPS, so the parameter sensor is set to true. A successful response from the API is handled by the retrieveCoordinates() function, which is executed as a callback. After processing the data, the code snippet in this recipe prints the retrieved coordinates to the console. For example, if we provide the address of the Empire State Building to the geocoding() function, on success, the following text will be printed: latitude: 40.7481829 longitude: -73.9850635.

See also

Explore the Google Geocoding API documentation to learn the details regarding the usage of the API and all of its parameters at https://developers.google.com/maps/documentation/geocoding/#GeocodingRequests. Nokia HERE provides similar features; refer to the documentation of its Geocoder API to learn how to construct a request to it at https://developer.here.com/rest-apis/documentation/geocoder/topics/request-constructing.html.

Reverse geocoding

Reverse geocoding, also known as address lookup, is the process of retrieving an address that corresponds to a location described with geographic coordinates. The Google Geocoding API provides methods for both geocoding and reverse geocoding. In this recipe, you will learn how to find the address of a location based on its coordinates using the Google API as well as an API provided by OpenStreetMap.

Getting ready

The same keys are required for geocoding and reverse geocoding. If you have already obtained a key for the previous recipe, you can use it directly here again.
Otherwise, you can perform the following steps:

1. Visit the Google Developers Console at https://code.google.com/apis/console.
2. Go to Services and turn on Geocoding API.
3. Select API Access, locate the value of Key for server apps, and use it in all requests from the Tizen web application to the API.

If you need more information about the Geocoding API keys, visit https://developers.google.com/maps/documentation/geocoding/#api_key.

How to do it...

Follow the described algorithm to retrieve an address based on geographic coordinates using the Google Geocoding API:

1. Allow the application to access websites by adding the following line to config.xml:

<access origin="*" subdomains="true"></access>

2. Create a JavaScript function to handle the data provided for a retrieved address:

function retrieveAddress(data) {
    if (!data || !data.results || (0 == data.results.length)) {
        console.log('Unable to retrieve address');
        return;
    }
    var sAddress = data.results[0].formatted_address;
    console.log('Address: ' + sAddress);
}

3. Implement a function that performs a request to Google servers to retrieve an address based on latitude and longitude:

function reverseGeocoding(latitude, longitude) {
    $.ajax({
        type: 'GET',
        url: 'https://maps.googleapis.com/maps/api/geocode/json?',
        data: {
            latlng: latitude + ',' + longitude,
            sensor: 'true',
            key: '<API key>'
        },

Pay attention that <API key> has to be replaced with the Key for server apps value provided by Google for the Geocoding API:

        success: retrieveAddress,
        error: function (request, status, message) {
            console.log('Error: ' + message);
        }
    });
}

4. Provide coordinates as arguments to the function and execute it, for example:

reverseGeocoding('40.748183', '-73.985064');

How it works

If an application developed using the preceding source code invokes the reverseGeocoding() function with latitude 40.748183 and longitude -73.985064, the printed result at the console will be: 350 5th Avenue, New York, NY 10118, USA.
By the way, as in the previous recipe, the address corresponds to the location of the Empire State Building in New York. The reverseGeocoding() function sends an AJAX request to the API. The parameters at the URL specify that the response must be formatted as JSON. The longitude and latitude of the location are separated by a comma and set as the value of the latlng parameter in the URL.

There's more...

OpenStreetMap also provides a reverse geocoding service. For example, the following URL will return a JSON result for a location with the latitude 40.7481829 and longitude -73.9850635:

http://nominatim.openstreetmap.org/reverse?format=json&lat=40.7481829&lon=-73.9850635

The main advantage of OpenStreetMap is that it is an open project with a great community. Its API for reverse geocoding does not require any keys and it can be used for free.

Leaflet is a popular open source JavaScript library based on OpenStreetMap optimized for mobile devices. It is well supported and easy to use, so you may consider integrating it in your Tizen web applications. Explore its features at http://leafletjs.com/features.html.

See also

All details regarding the Google Geocoding API are available at https://developers.google.com/maps/documentation/geocoding/#ReverseGeocoding.

If you prefer to use the API provided by OpenStreetMap, please have a look at http://wiki.openstreetmap.org/wiki/Nominatim#Reverse_Geocoding_.2F_Address_lookup.

Calculating distance

This recipe is dedicated to a method for calculating the distance between two locations. The Google Directions API will be used again. Unlike the Getting directions recipe, this time only the information about the distance will be processed.

Getting ready

Just like the other recipes related to the Google APIs, in this case, the developer must obtain the API keys before the start of the development. Please follow these instructions to register and get an appropriate API key:

Visit Google Developers Console at https://code.google.com/apis/console.
Click on Services and turn on Directions API.
Click on API Access and save the value of Key for server apps. Use it in all requests from your Tizen web application to the API.

If you need more information about the API keys for the Directions API, visit https://developers.google.com/maps/documentation/directions/#api_key.

How to do it...

Follow these steps to calculate the distance between two locations:

1. Allow the application to access websites by adding the following line to config.xml:

   <access origin="*" subdomains="true"></access>

2. Implement a JavaScript function that will process the retrieved data:

   function retrieveDistance(data) {
     if (!data || !data.routes || (0 == data.routes.length)) {
       console.log('Unable to retrieve distance');
       return;
     }
     var sLocationStart = data.routes[0].legs[0].start_address;
     var sLocationEnd = data.routes[0].legs[0].end_address;
     var sDistance = data.routes[0].legs[0].distance.text;
     console.log('The distance between ' + sLocationStart +
       ' and ' + sLocationEnd + ' is: ' + sDistance);
   }

3. Create a JavaScript function that will request directions using the Google Maps API:

   function checkDistance(sStart, sEnd) {
     $.ajax({
       type: 'GET',
       url: 'https://maps.googleapis.com/maps/api/directions/json?',
       data: { origin: sStart,
               destination: sEnd,
               sensor: 'true',
               units: 'metric',
               key: '<API key>' },

   Remember to replace <API key> with the Key for server apps value provided by Google for the Directions API:

       success: retrieveDistance,
       error: function (request, status, message) {
         console.log('Error: ' + message);
       }
     });
   }

4. Execute the checkDistance() function and provide the origin and the destination as arguments, for example:

   checkDistance('Plovdiv', 'Burgas');

Geographical coordinates can also be provided as arguments to the function checkDistance().
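The extraction logic inside retrieveDistance() can likewise be tried out offline. This sketch assumes a response shaped like the Directions API JSON handled in this recipe; the sample object (including the 253000 value) is illustrative, not a captured API response:

```javascript
// Pull the human-readable distance out of a Directions-API-shaped
// response object, mirroring the guards used in retrieveDistance().
function extractDistanceText(data) {
  if (!data || !data.routes || data.routes.length === 0) {
    return null; // corresponds to the 'Unable to retrieve distance' branch
  }
  return data.routes[0].legs[0].distance.text;
}

// Illustrative sample, shaped like a Directions API response.
var sampleRoute = {
  routes: [{
    legs: [{
      start_address: 'Plovdiv, Bulgaria',
      end_address: 'Burgas, Bulgaria',
      distance: { text: '253 km', value: 253000 }
    }]
  }]
};

console.log(extractDistanceText(sampleRoute)); // '253 km'
console.log(extractDistanceText(null)); // null
```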
For example, let's calculate the same distance, but this time by providing the latitude and longitude of locations in the Bulgarian cities Plovdiv and Burgas:

checkDistance('42.135408,24.74529', '42.504793,27.462636');

How it works

The checkDistance() function sends data to the Google Directions API. It sets the origin, the destination, the sensor, the unit system, and the API key as parameters of the URL. The result returned by the API is provided as JSON, which is handled in the retrieveDistance() function. The output in the console of the preceding example, which retrieves the distance between the Bulgarian cities Plovdiv and Burgas, is: The distance between Plovdiv, Bulgaria and Burgas, Bulgaria is: 253 km.

See also

For all details about the Directions API as well as a full description of the returned response, visit https://developers.google.com/maps/documentation/directions/.

Detecting device motion

This recipe offers a tutorial on how to detect and handle device motion in Tizen web applications. No specific Tizen APIs will be used. The source code in this recipe relies on the standard W3C DeviceMotionEvent, which is supported by Tizen web applications as well as any modern web browser.

How to do it...

Please follow these steps to detect device motion and display its acceleration in a Tizen web application:

1. Create HTML components to show device acceleration, for example:

   <p>X: <span id="labelX"></span></p>
   <p>Y: <span id="labelY"></span></p>
   <p>Z: <span id="labelZ"></span></p>

2. Create a JavaScript function to handle errors:

   function showError(err) {
     console.log('Error: ' + err.message);
   }

3. Create a JavaScript function that handles motion events:

   function motionDetected(event) {
     var acc = event.accelerationIncludingGravity;
     var sDeviceX = (acc.x) ? acc.x.toFixed(2) : '?';
     var sDeviceY = (acc.y) ? acc.y.toFixed(2) : '?';
     var sDeviceZ = (acc.z) ? acc.z.toFixed(2) : '?';
     $('#labelX').text(sDeviceX);
     $('#labelY').text(sDeviceY);
     $('#labelZ').text(sDeviceZ);
   }

4. Create a JavaScript function that starts a listener for motion events:

   function deviceMotion() {
     try {
       if (!window.DeviceMotionEvent) {
         throw new Error('device motion not supported.');
       }
       window.addEventListener('devicemotion', motionDetected, false);
     } catch (err) {
       showError(err);
     }
   }

5. Invoke the function at an appropriate location of the source code of the application:

   deviceMotion();

How it works

The deviceMotion() function registers an event listener that invokes the motionDetected() function as a callback when a device motion event is detected. All errors, including an error if DeviceMotionEvent is not supported, are handled in the showError() function. As shown in the following screenshot, the motionDetected() function loads the data of the properties of DeviceMotionEvent into the HTML5 labels that were created in the first step. The results are displayed using the standard unit for acceleration according to the International System of Units (SI), metres per second squared (m/s2). The JavaScript method toFixed() is invoked to convert the result to a string with two decimals:

A Tizen web application that detects device motion

See also

Notice that the device motion event specification is part of the DeviceOrientationEvent specification. Both are still in draft. The latest published version is available at http://www.w3.org/TR/orientation-event/.

The source code of a sample Tizen web application that detects device motion is provided along with the book. You can import the project of the application into the Tizen IDE and explore it.

Detecting device orientation

In this recipe, you will learn how to monitor changes of the device orientation using the HTML5 DeviceOrientation event as well as get the device orientation using the Tizen SystemInfo API.
Both methods for retrieving device orientation have advantages and work in Tizen web applications. It is up to the developer to decide which approach is more suitable for their application.

How to do it...

Perform the following steps to register a listener and handle device orientation events in your Tizen web application:

1. Create a JavaScript function to handle errors:

   function showError(err) {
     console.log('Error: ' + err.message);
   }

2. Create a JavaScript function that handles a change of the orientation:

   function orientationDetected(event) {
     console.log('absolute: ' + event.absolute);
     console.log('alpha: ' + event.alpha);
     console.log('beta: ' + event.beta);
     console.log('gamma: ' + event.gamma);
   }

3. Create a JavaScript function that adds a listener for the device orientation:

   function deviceOrientation() {
     try {
       if (!window.DeviceOrientationEvent) {
         throw new Error('device orientation not supported.');
       }
       window.addEventListener('deviceorientation', orientationDetected, false);
     } catch (err) {
       showError(err);
     }
   }

4. Execute the JavaScript function to start listening for device orientation events:

   deviceOrientation();

How it works

If DeviceOrientationEvent is supported, the deviceOrientation() function binds the event to the orientationDetected() function, which is invoked as a callback only on success. The showError() function will be executed only if a problem occurs. An instance of the DeviceOrientationEvent interface is provided as an argument of the orientationDetected() function. In the preceding code snippet, the values of its four read-only properties are printed in the console: absolute (a Boolean value, true if the device provides orientation data absolutely), alpha (motion around the z axis), beta (motion around the x axis), and gamma (motion around the y axis).

There's more...

There is an easier way to determine whether a Tizen device is in landscape or portrait mode.
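For the SystemInfo approach that follows, the API reports the orientation as a plain status string. Reducing such a string to a coarse landscape/portrait flag can be sketched as follows; isLandscape() is a hypothetical helper written for this example, not part of any Tizen API:

```javascript
// Map a DEVICE_ORIENTATION status string (e.g. 'LANDSCAPE_PRIMARY')
// to a simple boolean: landscape or not.
function isLandscape(status) {
  return typeof status === 'string' && status.indexOf('LANDSCAPE') === 0;
}

console.log(isLandscape('LANDSCAPE_PRIMARY'));   // true
console.log(isLandscape('PORTRAIT_SECONDARY'));  // false
```

A helper like this keeps layout code independent of the exact PRIMARY/SECONDARY variant reported by the device.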
In a Tizen web application, for this case, it is recommended to use the SystemInfo API. The following code snippet retrieves the device orientation:

function onSuccessCallback(orientation) {
  console.log("Device orientation: " + orientation.status);
}

function onErrorCallback(error) {
  console.log("Error: " + error.message);
}

tizen.systeminfo.getPropertyValue("DEVICE_ORIENTATION", onSuccessCallback, onErrorCallback);

The status of the orientation can be one of the following values:

PORTRAIT_PRIMARY
PORTRAIT_SECONDARY
LANDSCAPE_PRIMARY
LANDSCAPE_SECONDARY

See also

The DeviceOrientationEvent specification is still a draft. The latest published version is available at http://www.w3.org/TR/orientation-event/.

For more information on the Tizen SystemInfo API, visit https://developer.tizen.org/dev-guide/2.2.1/org.tizen.web.device.apireference/tizen/systeminfo.html.

Using the Vibration API

Tizen is famous for its excellent support of HTML5 and W3C APIs. The standard Vibration API is also supported and it can be used in Tizen web applications. This recipe offers code snippets on how to activate vibration on a Tizen device.

How to do it...

Use the following code snippet to activate the vibration of the device for three seconds:

if (navigator.vibrate) {
  navigator.vibrate(3000);
}

To cancel an ongoing vibration, just call the vibrate() method again with zero as the value of its argument:

if (navigator.vibrate) {
  navigator.vibrate(0);
}

Alternatively, the vibration can be canceled by passing an empty array to the same method:

navigator.vibrate([]);

How it works

The W3C Vibration API is used through the JavaScript object navigator. Its vibrate() method expects either a single value or an array of values. All values must be specified in milliseconds. The value provided to the vibrate() method in the preceding example is 3000 because 3 seconds is equal to 3000 milliseconds.

There's more...

The W3C Vibration API allows advanced tuning of the device vibration.
A list of alternating vibration and pause durations (with values in milliseconds) can be specified as an argument of the vibrate() method. For example, the following code snippet will make the device vibrate for 100 ms, pause for 3 seconds, and then vibrate again, but this time just for 50 ms:

if (navigator.vibrate) {
  navigator.vibrate([100, 3000, 50]);
}

See also

For more information on the vibration capabilities and the API usage, visit http://www.w3.org/TR/vibration/.

Tizen native applications for the mobile profile have exposure to additional APIs written in C++ for light and proximity sensors. Explore the source code of the sample native application SensorApp, which is provided with the Tizen SDK, to learn how to use these sensors. More information about them is available at https://developer.tizen.org/dev-guide/2.2.1/org.tizen.native.appprogramming/html/guide/uix/light_sensor.htm and https://developer.tizen.org/dev-guide/2.2.1/org.tizen.native.appprogramming/html/guide/uix/proximity_sensor.htm.

Summary

In this article, we learned the details of various hardware sensors such as the GPS, accelerometer, and gyroscope sensor. The main focus of this article was on location-based services, maps, and navigation.

Resources for Article:

Further resources on this subject:

Major SDK components [article]
Getting started with Kinect for Windows SDK Programming [article]
https://www.packtpub.com/books/content/cordova-plugins [article]
Packt
10 Oct 2014
40 min read
Indexing and Performance Tuning

In this article by Hans-Jürgen Schönig, author of the book PostgreSQL Administration Essentials, you will be guided through PostgreSQL indexing, and you will learn how to fix performance issues and find performance bottlenecks. Understanding indexing will be vital to your success as a DBA; you cannot count on software engineers to get this right straightaway. It will be you, the DBA, who will face problems caused by bad indexing in the field. For the sake of your beloved sleep at night, this article is about PostgreSQL indexing.

(For more resources related to this topic, see here.)

Using simple binary trees

In this section, you will learn about simple binary trees and how the PostgreSQL optimizer treats the trees. Once you understand the basic decisions taken by the optimizer, you can move on to more complex index types.

Preparing the data

Indexing does not change user experience too much, unless you have a reasonable amount of data in your database; the more data you have, the more indexing can help to boost things. Therefore, we have to create some simple sets of data to get us started. Here is a simple way to populate a table:

test=# CREATE TABLE t_test (id serial, name text);
CREATE TABLE
test=# INSERT INTO t_test (name) SELECT 'hans' FROM
         generate_series(1, 2000000);
INSERT 0 2000000
test=# INSERT INTO t_test (name) SELECT 'paul' FROM
         generate_series(1, 2000000);
INSERT 0 2000000

In our example, we created a table consisting of two columns. The first column is simply an automatically created integer value. The second column contains the name. Once the table is created, we start to populate it. It's nice and easy to generate a set of numbers using the generate_series function. In our example, we simply generate two million numbers. Note that these numbers will not be put into the table; we merely use generate_series to drive the INSERT statements, creating two million rows featuring hans and two million rows featuring paul:

test=# SELECT * FROM t_test LIMIT 3;
 id | name
----+------
  1 | hans
  2 | hans
  3 | hans
(3 rows)

Once we create a sufficient amount of data, we can run a simple test. The goal is to simply count the rows we have inserted. The main issue here is: how can we find out how long it takes to execute this type of query? The \timing command will do the job for you:

test=# \timing
Timing is on.

As you can see, \timing will add the total runtime to the result. This makes it quite easy for you to see if a query turns out to be a problem or not:

test=# SELECT count(*) FROM t_test;
  count
---------
 4000000
(1 row)

Time: 316.628 ms

As you can see in the preceding code, the time required is approximately 300 milliseconds. This might not sound like a lot, but it actually is. 300 ms means that we can roughly execute three queries per CPU per second. On an 8-core box, this would translate to roughly 25 queries per second. For many applications, this will be enough; but do you really want to buy an 8-core box to handle just 25 concurrent users, and do you want your entire box to work just on this simple query? Probably not!

Understanding the concept of execution plans

It is impossible to understand the use of indexes without understanding the concept of execution plans. Whenever you execute a query in PostgreSQL, it generally goes through four central steps, described as follows:

Parser: PostgreSQL will check the syntax of the statement.
Rewrite system: PostgreSQL will rewrite the query (for example, rules and views are handled by the rewrite system).
Optimizer or planner: PostgreSQL will come up with a smart plan to execute the query as efficiently as possible. At this step, the system will decide whether or not to use indexes.
Executor: Finally, the execution plan is taken by the executor and the result is generated.

Being able to understand and read execution plans is an essential task of every DBA. To extract the plan from the system, all you need to do is use the explain command, shown as follows:

test=# explain SELECT count(*) FROM t_test;
                             QUERY PLAN
------------------------------------------------------
 Aggregate  (cost=71622.00..71622.01 rows=1 width=0)
   ->  Seq Scan on t_test  (cost=0.00..61622.00
                            rows=4000000 width=0)
(2 rows)

Time: 0.370 ms

In our case, it took us less than a millisecond to calculate the execution plan. Once you have the plan, you can read it from right to left. In our case, PostgreSQL will perform a sequential scan and aggregate the data returned by the sequential scan. It is important to mention that each step is assigned a certain number of costs. The total cost for the sequential scan is 61,622 penalty points (more details about penalty points will be outlined a little later). The overall cost of the query is 71,622.01.

What are costs? Well, costs are just an arbitrary number calculated by the system based on some rules. The higher the costs, the slower a query is expected to be. Always keep in mind that these costs are just a way for PostgreSQL to estimate things; they are in no way a reliable number related to anything in the real world (such as time or the amount of I/O needed). In addition to the costs, PostgreSQL estimates that the sequential scan will yield around four million rows. It also expects the aggregation to return just a single row. These two estimates happen to be precise, but it is not always so.

Calculating costs

When in training, people often ask how PostgreSQL does its cost calculations. Consider a simple example like the one we have next. It works in a pretty simple way. Generally, there are two types of costs: I/O costs and CPU costs.
To come up with I/O costs, we have to figure out the size of the table we are dealing with first:

test=# SELECT pg_relation_size('t_test'),
         pg_size_pretty(pg_relation_size('t_test'));
 pg_relation_size | pg_size_pretty
------------------+----------------
        177127424 | 169 MB
(1 row)

The pg_relation_size command is a fast way to see how large a table is. Of course, reading a large number (many digits) is somewhat hard, so it is possible to fetch the size of the table in a much prettier format. In our example, the size is roughly 170 MB. Let's move on now. In PostgreSQL, a table consists of blocks of 8 kB (8,192 bytes) each. If we divide the size of the table by 8,192 bytes, we end up with exactly 21,622 blocks. This is how PostgreSQL estimates the I/O costs of a sequential scan. If a table is read completely, each block will receive exactly one penalty point, or any number defined by seq_page_cost:

test=# SHOW seq_page_cost;
 seq_page_cost
---------------
 1
(1 row)

To count the rows, we have to send four million rows through the CPU (cpu_tuple_cost), and we also have to count these 4 million rows (cpu_operator_cost). So, the calculation looks like this:

For the sequential scan: 21622*1 + 4000000*0.01 (cpu_tuple_cost) = 61622
For the aggregation: 61622 + 4000000*0.0025 (cpu_operator_cost) = 71622

This is exactly the number that we see in the plan.

Drawing important conclusions

Of course, you will never do this by hand. However, there are some important conclusions to be drawn:

The cost model in PostgreSQL is a simplification of the real world.
The costs can hardly be translated to real execution times.
The cost of reading from a slow disk is the same as the cost of reading from a fast disk.
It is hard to take caching into account.

If the optimizer comes up with a bad plan, it is possible to adapt the costs either globally in postgresql.conf, or by changing the session variables, shown as follows:

test=# SET seq_page_cost TO 10;
SET

This statement inflates the costs at will.
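The arithmetic behind the default estimate can be double-checked with a short script. This is only a sketch of the calculation for this particular plan, with the default planner constants hard-coded; it is not how PostgreSQL computes costs internally:

```javascript
// Recompute the planner's estimate for the sequential scan and the
// aggregation, using the default cost constants quoted in the text.
var seqPageCost = 1.0;        // seq_page_cost
var cpuTupleCost = 0.01;      // cpu_tuple_cost
var cpuOperatorCost = 0.0025; // cpu_operator_cost

var blocks = 21622;  // 177127424 bytes / 8192 bytes per block
var rows = 4000000;

var seqScanCost = blocks * seqPageCost + rows * cpuTupleCost;
var aggregateCost = seqScanCost + rows * cpuOperatorCost;

// Round to avoid floating-point noise from the decimal constants.
console.log(Math.round(seqScanCost));   // 61622
console.log(Math.round(aggregateCost)); // 71622
```

The two results match the 61,622 and 71,622 figures shown in the plan above.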
It can be a handy way to fix missed estimates, leading to bad performance and, therefore, to poor execution times. This is what the query plan will look like using the inflated costs:

test=# explain SELECT count(*) FROM t_test;
                       QUERY PLAN
-------------------------------------------------------
 Aggregate  (cost=266220.00..266220.01 rows=1 width=0)
   ->  Seq Scan on t_test  (cost=0.00..256220.00
                            rows=4000000 width=0)
(2 rows)

It is important to understand the PostgreSQL cost model in detail because many people have completely wrong ideas about what is going on inside the PostgreSQL optimizer. Offering a basic explanation will hopefully shed some light on this important topic and allow administrators a deeper understanding of the system.

Creating indexes

After this introduction, we can deploy our first index. As we stated before, runtimes of several hundred milliseconds for simple queries are not acceptable. To fight these unusually high execution times, we can turn to CREATE INDEX, shown as follows:

test=# \h CREATE INDEX
Command:     CREATE INDEX
Description: define a new index
Syntax:
CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ name ]
    ON table_name [ USING method ]
    ( { column_name | ( expression ) }
      [ COLLATE collation ] [ opclass ]
      [ ASC | DESC ] [ NULLS { FIRST | LAST } ]
      [, ...] )
    [ WITH ( storage_parameter = value [, ... ] ) ]
    [ TABLESPACE tablespace_name ]
    [ WHERE predicate ]

In the most simplistic case, we can create a normal B-tree index on the ID column and see what happens:

test=# CREATE INDEX idx_id ON t_test (id);
CREATE INDEX
Time: 3996.909 ms

B-tree indexes are the default index structure in PostgreSQL. Internally, they are also called B+ trees, as described by Lehman and Yao. On this box (AMD, 4 GHz), we can build the B-tree index in around 4 seconds, without any database-side tweaks.
Once the index is in place, the SELECT command will be executed at lightning speed:

test=# SELECT * FROM t_test WHERE id = 423423;
   id   | name
--------+------
 423423 | hans
(1 row)

Time: 0.384 ms

The query executes in less than a millisecond. Keep in mind that this already includes displaying the data, and the query is a lot faster internally.

Analyzing the performance of a query

How do we know that the query is actually a lot faster? In the previous section, you saw EXPLAIN in action already. However, there is a little more to know about this command. You can add some instructions to EXPLAIN to make it a lot more verbose, as shown here:

test=# \h EXPLAIN
Command:     EXPLAIN
Description: show the execution plan of a statement
Syntax:
EXPLAIN [ ( option [, ...] ) ] statement
EXPLAIN [ ANALYZE ] [ VERBOSE ] statement

In the preceding code, the term option can be one of the following:

    ANALYZE [ boolean ]
    VERBOSE [ boolean ]
    COSTS [ boolean ]
    BUFFERS [ boolean ]
    TIMING [ boolean ]
    FORMAT { TEXT | XML | JSON | YAML }

Consider the following example:

test=# EXPLAIN (ANALYZE true, VERBOSE true, COSTS true,
         TIMING true) SELECT * FROM t_test WHERE id = 423423;
                     QUERY PLAN
------------------------------------------------------
 Index Scan using idx_id on public.t_test
     (cost=0.43..8.45 rows=1 width=9)
     (actual time=0.016..0.018 rows=1 loops=1)
   Output: id, name
   Index Cond: (t_test.id = 423423)
 Total runtime: 0.042 ms
(4 rows)

Time: 0.536 ms

The ANALYZE function does a special form of execution. It is a good way to figure out which part of the query burned most of the time. Again, we can read things inside out. In addition to the estimated costs of the query, we can also see the real execution time. In our case, the index scan takes 0.018 milliseconds. Fast, isn't it? Given these timings, you can see that displaying the result actually takes a huge fraction of the time.

The beauty of EXPLAIN ANALYZE is that it shows costs and execution times for every step of the process. It is important for you to familiarize yourself with this kind of output because when a programmer hits your desk complaining about bad performance, it is necessary to dig into this kind of stuff quickly. In many cases, the secret to performance is hidden in the execution plan, revealing a missing index or so. It is recommended to pay special attention to situations where the number of expected rows seriously differs from the number of rows really processed. Keep in mind that the planner is usually right, but not always. Be cautious in case of large differences (especially if this input is fed into a nested loop). Whenever a query feels slow, we always recommend taking a look at the plan first. In many cases, you will find missing indexes.

The internal structure of a B-tree index

Before we dig further into B-tree indexes, we can briefly discuss what an index actually looks like under the hood.

Understanding the B-tree internals

Consider the following image that shows how things work:

In PostgreSQL, we use the so-called Lehman-Yao B-trees (check out http://www.cs.cmu.edu/~dga/15-712/F07/papers/Lehman81.pdf). The main advantage of these B-trees is that they can handle concurrency very nicely. It is possible that hundreds or thousands of concurrent users modify the tree at the same time. Unfortunately, there is not enough room in this book to explain precisely how this works. The two most important properties of this tree are the facts that I/O is done in 8 kB chunks and that the tree is actually a sorted structure. This allows PostgreSQL to apply a ton of optimizations.

Providing a sorted order

As we stated before, a B-tree provides the system with sorted output. This can come in quite handy.
Here is a simple query that makes use of the fact that a B-tree provides the system with sorted output:

test=# explain SELECT * FROM t_test ORDER BY id LIMIT 3;
                     QUERY PLAN
------------------------------------------------------
 Limit  (cost=0.43..0.67 rows=3 width=9)
   ->  Index Scan using idx_id on t_test
         (cost=0.43..320094.43 rows=4000000 width=9)
(2 rows)

In this case, we are looking for the three smallest values. PostgreSQL will read the index from left to right and stop as soon as enough rows have been returned. This is a very common scenario. Many people think that indexes are only about searching, but this is not true. B-trees are also there to help out with sorting. Why do you, the DBA, care about this stuff? Remember that this is a typical use case where a software developer comes to your desk, pounds on the table, and complains. A simple index can fix the problem.

Combined indexes

Combined indexes are one more source of trouble if they are not used properly. A combined index is an index covering more than one column. Let's drop the existing index and create a combined one (make sure your seq_page_cost variable is set back to its default to make the following examples work):

test=# DROP INDEX idx_id;
DROP INDEX
test=# CREATE INDEX idx_combined ON t_test (name, id);
CREATE INDEX

We defined a composite index consisting of two columns. Remember that we put the name before the ID. A simple query will return the following execution plan:

test=# explain analyze SELECT * FROM t_test
         WHERE id = 10;
                     QUERY PLAN
-------------------------------------------------
 Seq Scan on t_test  (cost=0.00..71622.00 rows=1
                      width=9)
     (actual time=181.502..351.439 rows=1 loops=1)
   Filter: (id = 10)
   Rows Removed by Filter: 3999999
 Total runtime: 351.481 ms
(4 rows)

There is no proper index for this, so the system will fall back to a sequential scan. Why is there no proper index? Well, try to look up first names only in a telephone book. This is not going to work because a telephone book is sorted by location, last name, and first name. The same applies to our index. A B-tree works basically on the same principles as an ordinary paper phone book. It is only useful if you look up the first couple of values, or simply all of them. Here is an example:

test=# explain analyze SELECT * FROM t_test
         WHERE id = 10 AND name = 'joe';
                     QUERY PLAN
------------------------------------------------------
 Index Only Scan using idx_combined on t_test
     (cost=0.43..6.20 rows=1 width=9)
     (actual time=0.068..0.068 rows=0 loops=1)
   Index Cond: ((name = 'joe'::text) AND (id = 10))
   Heap Fetches: 0
 Total runtime: 0.108 ms
(4 rows)

In this case, the combined index comes up with a high-speed result of 0.1 ms, which is not bad. After this small example, we can turn to an issue that's a little bit more complex. Let's change the cost of a sequential scan to 100 times the normal value:

test=# SET seq_page_cost TO 100;
SET

Don't let yourself be fooled into believing that an index is always good:

test=# explain analyze SELECT * FROM t_test
         WHERE id = 10;
                     QUERY PLAN
------------------------------------------------------
 Index Only Scan using idx_combined on t_test
     (cost=0.43..91620.44 rows=1 width=9)
     (actual time=0.362..177.952 rows=1 loops=1)
   Index Cond: (id = 10)
   Heap Fetches: 1
 Total runtime: 177.983 ms
(4 rows)

Just look at the execution times. We are almost as slow as a sequential scan here. Why does PostgreSQL use the index at all? Well, let's assume we have a very broad table. In this case, sequentially scanning the table is expensive. Even if we have to read the entire index, it can be cheaper than having to read the entire table, at least if there is enough hope to reduce the amount of data by using the index somehow. So, in case you see an index scan, also take a look at the execution times and the number of rows used. The index might not be perfect, but it's just an attempt by PostgreSQL to avoid the worst. Keep in mind that there is no general rule (for example, more than 25 percent of data will result in a sequential scan) for sequential scans. The plans depend on a couple of internal issues, such as the physical disk layout (correlation) and so on.

Partial indexes

Up to now, an index has covered the entire table. This is not necessarily always the case. There are also partial indexes. When is a partial index useful? Consider the following example:

test=# CREATE TABLE t_invoice (
         id      serial,
         d       date,
         amount  numeric,
         paid    boolean);
CREATE TABLE
test=# CREATE INDEX idx_partial
         ON t_invoice (paid)
         WHERE paid = false;
CREATE INDEX

In our case, we create a table storing invoices. We can safely assume that the majority of the invoices are nicely paid. However, we expect a minority to be pending, so we want to search for them. A partial index will do the job in a highly space-efficient way. Space is important because saving on space has a couple of nice side effects, such as cache efficiency and so on.

Dealing with different types of indexes

Let's move on to an important issue: not everything can be sorted easily and in a useful way. Have you ever tried to sort circles? If the question seems odd, just try to do it. It will not be easy and will be highly controversial, so how do we do it best? Would we sort by size or coordinates? Under any circumstances, using a B-tree to store circles, points, or polygons might not be a good idea at all. A B-tree does not do what you want it to do because a B-tree depends on some kind of sorting order. To provide end users with maximum flexibility and power, PostgreSQL provides more than just one index type. Each index type supports certain algorithms used for different purposes.
The following index types are available in PostgreSQL (as of Version 9.4.1):

    btree: The high-concurrency B-tree
    gist: An index type for geometric searches (GIS data) and for KNN-search
    gin: An index type optimized for Full-Text Search (FTS)
    sp-gist: A space-partitioned gist

As we mentioned before, each type of index serves different purposes. We highly encourage you to dig into this extremely important topic to make sure that you can help software developers whenever necessary. Unfortunately, we don't have enough room in this book to discuss all the index types in greater depth. If you are interested in finding out more, we recommend checking out information on my website at http://www.postgresql-support.de/slides/2013_dublin_indexing.pdf. Alternatively, you can look up the official PostgreSQL documentation, which can be found at http://www.postgresql.org/docs/9.4/static/indexes.html.

Detecting missing indexes

Now that we have covered the basics and some selected advanced topics of indexing, we want to shift our attention to a major and highly important administrative task: hunting down missing indexes. When talking about missing indexes, there is one essential query I have found to be highly valuable:

    test=# \x
    Expanded display is on.
    test=# SELECT relname, seq_scan, seq_tup_read,
                  idx_scan, idx_tup_fetch,
                  seq_tup_read / seq_scan
           FROM pg_stat_user_tables
           WHERE seq_scan > 0
           ORDER BY seq_tup_read DESC;
    -[ RECORD 1 ]-+--------------
    relname       | t_user
    seq_scan      | 824350
    seq_tup_read  | 2970269443530
    idx_scan      | 0
    idx_tup_fetch | 0
    ?column?      | 3603165

The pg_stat_user_tables view contains statistical information about tables and their access patterns. In this example, we found a classic problem: the t_user table has been scanned close to 1 million times, and during these sequential scans, we processed close to 3 trillion rows. Do you think this is unusual?
It's not nearly as unusual as you might think. In the last column, we divided seq_tup_read by seq_scan. Basically, this is a simple way to figure out how many rows a typical sequential scan has to read to finish. In our case, 3.6 million rows had to be read. Do you remember our initial example? We managed to read 4 million rows in a couple of hundred milliseconds, so it is absolutely realistic that nobody noticed the performance bottleneck before. However, just consider burning, say, 300 ms for every query, thousands of times over. This can easily create a heavy load on a totally unnecessary scale. In fact, a missing index is the key factor when it comes to bad performance. Let's take a look at the table description now:

    test=# \d t_user
                     Table "public.t_user"
     Column | Type    | Modifiers
    --------+---------+----------------------------------------------------
     id     | integer | not null default nextval('t_user_id_seq'::regclass)
     email  | text    |
     passwd | text    |
    Indexes:
        "t_user_pkey" PRIMARY KEY, btree (id)

This is really a classic example; it is hard to tell how often I have seen this kind of example in the field. The table was probably called customer or userbase. The basic principle of the problem is always the same: we have an index on the primary key, but the primary key is never checked during the authentication process. When you log in to Facebook, Amazon, Google, and so on, you will not use your internal ID; you will rather use your e-mail address. Therefore, it should be indexed. The rules here are simple: we are searching for queries that need many expensive scans. We don't mind sequential scans as long as they only read a handful of rows or as long as they show up rarely (caused by backups, for example). We do need to keep expensive scans in mind, however ("expensive" in terms of "many rows needed").
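The rule of thumb behind that monitoring query can be wrapped in a small helper. This is an illustrative Python sketch, not a PostgreSQL tool: the threshold and the second sample table are invented, while the t_user numbers come from the record shown above. It computes seq_tup_read / seq_scan per table and flags tables whose typical sequential scan reads many rows without ever using an index.

```python
def avg_rows_per_seq_scan(seq_scan, seq_tup_read):
    """Average number of rows a typical sequential scan had to read."""
    return seq_tup_read // seq_scan

def suspicious_tables(stats, threshold=10000):
    """Flag tables whose average sequential scan is 'expensive'
    (many rows per scan) and that never use an index."""
    result = []
    for relname, seq_scan, seq_tup_read, idx_scan in stats:
        if seq_scan > 0 and idx_scan == 0:
            if avg_rows_per_seq_scan(seq_scan, seq_tup_read) >= threshold:
                result.append(relname)
    return result

# t_user comes from the monitoring query above; t_small is an invented
# tiny table whose scans only read 12 rows on average.
stats = [
    ("t_user",  824350, 2970269443530, 0),
    ("t_small", 500000,       6000000, 0),
]
print(suspicious_tables(stats))  # ['t_user']
```

The threshold value is a judgment call per system; the point is only that the ratio, not the raw scan count, identifies the problem.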
Here is an example snippet that should not bother us at all:

    -[ RECORD 1 ]-+----------
    relname       | t_province
    seq_scan      | 8345345
    seq_tup_read  | 100144140
    idx_scan      | 0
    idx_tup_fetch | 0
    ?column?      | 12

The table has been read 8 million times, but on average, only 12 rows have been returned per scan. Even if we had 1 million indexes defined, PostgreSQL would not use them, because the table is simply too small. It is pretty hard to tell from inside PostgreSQL which columns might need an index. However, taking a look at the tables and thinking about them for a minute will, in most cases, solve the riddle. In many cases, things are pretty obvious anyway, and developers will be able to provide you with a reasonable answer. As you can see, finding missing indexes is not hard, and we strongly recommend checking this system view once in a while to figure out whether your system works nicely. There are a couple of tools out there, such as pgbadger, that can help us monitor systems; it is recommended that you make use of such tools.

Where there is light, there is also some shadow: indexes are not always good. They can also cause considerable overhead during writes. Keep in mind that when you insert, modify, or delete data, you have to touch the indexes as well. The overhead of useless indexes should never be underestimated. Therefore, it makes sense to look not just for missing indexes, but also for spare indexes that don't serve a purpose anymore.

Detecting slow queries

Now that we have seen how to hunt down tables that might need an index, we can move on to the next example and try to figure out the queries that cause most of the load on your system. Sometimes, the slowest query is not the one causing a problem; it is a bunch of small queries that are executed over and over again. In this section, you will learn how to track down such queries. To track down slow operations, we can rely on a module called pg_stat_statements.
This module is available as part of the PostgreSQL contrib section. Installing a module from this section is really easy. Connect to PostgreSQL as a superuser, and execute the following instruction (if the contrib packages have been installed):

    test=# CREATE EXTENSION pg_stat_statements;
    CREATE EXTENSION

This module will install a system view that contains all the relevant information we need to find expensive operations:

    test=# \d pg_stat_statements
              View "public.pg_stat_statements"
           Column        |       Type       | Modifiers
    ---------------------+------------------+-----------
     userid              | oid              |
     dbid                | oid              |
     queryid             | bigint           |
     query               | text             |
     calls               | bigint           |
     total_time          | double precision |
     rows                | bigint           |
     shared_blks_hit     | bigint           |
     shared_blks_read    | bigint           |
     shared_blks_dirtied | bigint           |
     shared_blks_written | bigint           |
     local_blks_hit      | bigint           |
     local_blks_read     | bigint           |
     local_blks_dirtied  | bigint           |
     local_blks_written  | bigint           |
     temp_blks_read      | bigint           |
     temp_blks_written   | bigint           |
     blk_read_time       | double precision |
     blk_write_time      | double precision |

In this view, we can see the queries we are interested in, the total execution time (total_time), the number of calls, and the number of rows returned. Then, we get some information about the I/O behavior of the query (more on caching later) as well as information about temporary data being read and written. Finally, the last two columns tell us how much time we actually spent on I/O. These two fields are populated when track_io_timing has been enabled in postgresql.conf, and they give vital insights into potential reasons for disk waits and disk-related speed problems.
The blk_* columns tell us how much time a certain query has spent reading from and writing to the operating system. Let's see what happens when we query the view:

    test=# SELECT * FROM pg_stat_statements;
    ERROR:  pg_stat_statements must be loaded via shared_preload_libraries

The system tells us that we have to enable this module; otherwise, no data will be collected. All we have to do to make this work is to add the following line to postgresql.conf:

    shared_preload_libraries = 'pg_stat_statements'

Then, we have to restart the server to enable it. We highly recommend adding this module to the configuration straightaway to make sure that a restart can be avoided and that this data is always around. Don't worry too much about the performance overhead of this module: tests have shown that the impact on performance is so low that it is even hard to measure. Therefore, it might be a good idea to have this module activated all the time.

If you have configured things properly, finding the most time-consuming queries should be simple:

    SELECT *
    FROM pg_stat_statements
    ORDER BY total_time DESC;

The important part here is that PostgreSQL can nicely group queries. For instance:

    SELECT * FROM foo WHERE bar = 1;
    SELECT * FROM foo WHERE bar = 2;

PostgreSQL will detect that this is just one type of query and replace the two numbers in the WHERE clause with a placeholder, indicating that a parameter was used here. Of course, you can also sort by any other criterion: highest I/O time, highest number of calls, or whatever. The pg_stat_statements view has it all, and the data is available in a way that makes it very easy and efficient to use.

How to reset statistics

Sometimes, it is necessary to reset the statistics. If you are about to track down a problem, resetting can be very beneficial.
Here is how it works:

    test=# SELECT pg_stat_reset();
     pg_stat_reset
    ---------------

    (1 row)

    test=# SELECT pg_stat_statements_reset();
     pg_stat_statements_reset
    --------------------------

    (1 row)

The pg_stat_reset call resets the entire system statistics (for example, pg_stat_user_tables). The second call wipes out pg_stat_statements.

Adjusting memory parameters

After we find the slow queries, we can do something about them. The first step is always to fix the indexing and make sure that sane requests are sent to the database. If you request stupid things from PostgreSQL, you can expect trouble. Once the basic steps have been performed, we can move on to the PostgreSQL memory parameters, which need some tuning.

Optimizing shared buffers

One of the most essential memory parameters is shared_buffers. What are shared buffers? Let's assume we are about to read a table consisting of 8,000 blocks. PostgreSQL checks whether a block is already in the cache (shared_buffers), and if it is not, it asks the underlying operating system to provide the database with the missing blocks. If we are lucky, the operating system has a cached copy of the block. If we are not so lucky, the operating system has to go to the disk system and fetch the data (the worst case). So, the more data we have in the cache, the more efficient we will be.

Setting shared_buffers to the right value is more art than science. The general guideline is that shared_buffers should consume 25 percent of memory, but not more than 16 GB. Very large shared buffer settings are known to cause suboptimal performance in some cases. It is also not recommended to starve the filesystem cache too much on behalf of the database system. Mentioning this guideline does not make it eternal law; you really have to see it as a guideline you can use to get started. Different settings might be better for your workload. Remember, if there were an eternal law, there would be no setting, but some autotuning magic.
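The 25-percent-capped-at-16-GB guideline is easy to encode. The following Python sketch is only a starting-point calculator based on the guideline above; it is not an official PostgreSQL tool, and your workload may call for different values.

```python
def suggested_shared_buffers_gb(total_ram_gb):
    """Starting point only: 25 percent of RAM, but never more than 16 GB."""
    return min(total_ram_gb * 0.25, 16.0)

# A few sample machine sizes.
for ram in (8, 32, 128):
    print(f"{ram} GB RAM -> shared_buffers = {suggested_shared_buffers_gb(ram):.0f} GB")
```

On the 128 GB machine, the cap kicks in: the guideline would suggest 32 GB, but the function stops at 16 GB, leaving the rest to the filesystem cache.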
However, a contrib module called pg_buffercache can give some insight into what is in the cache at the moment. It can be used as a basis to understand what is going on inside the PostgreSQL shared buffer. Changing shared_buffers is done in postgresql.conf, as follows:

    shared_buffers = 4GB

In our example, shared buffers have been set to 4GB. A database restart is needed to activate the new value.

In PostgreSQL 9.4, some changes were introduced. Traditionally, PostgreSQL used classical System V shared memory to handle the shared buffers. Starting with PostgreSQL 9.3, mapped memory was added, and finally, in PostgreSQL 9.4, a config variable was introduced to configure the memory technique PostgreSQL will use:

    dynamic_shared_memory_type = posix  # the default is the first option
                                        # supported by the operating system:
                                        #   posix
                                        #   sysv
                                        #   windows
                                        #   mmap
                                        # use none to disable dynamic shared memory

The default value on the most common operating systems is basically fine. However, feel free to experiment with the settings and see what happens performance-wise.

Considering huge pages

When a process uses RAM, the CPU marks this memory as used by that process. For efficiency reasons, the CPU usually allocates RAM in chunks of 4 KB. These chunks are called pages. The process address space is virtual, and the CPU and operating system have to remember which process owns which page. The more pages you have, the more time it takes to find where the memory is mapped. When a process uses 1 GB of memory, 262,144 pages have to be looked up. Most modern CPU architectures support bigger pages, which are called huge pages (on Linux). To tell PostgreSQL that this mechanism can be used, the following config variable can be changed in postgresql.conf:

    huge_pages = try   # on, off, or try

Of course, your Linux system has to know about the use of huge pages.
Therefore, you can do some tweaking, as follows:

    grep Hugepagesize /proc/meminfo
    Hugepagesize:     2048 kB

In our case, the size of a huge page is 2 MB. So, if there is 1 GB of memory, 512 huge pages are needed. The number of huge pages can be configured and activated by setting nr_hugepages in the proc filesystem. Consider the following example:

    echo 512 > /proc/sys/vm/nr_hugepages

Alternatively, we can use the sysctl command or change things in /etc/sysctl.conf:

    sysctl -w vm.nr_hugepages=512

Huge pages can have a significant impact on performance.

Tweaking work_mem

There is more to PostgreSQL memory configuration than just shared buffers. The work_mem parameter is widely used for operations such as sorting, aggregating, and so on. Let's illustrate the way work_mem works with a short, easy-to-understand example. Let's assume it is election day and three parties have taken part in the elections. The data is as follows:

    test=# CREATE TABLE t_election (id serial, party text);
    test=# INSERT INTO t_election (party)
           SELECT 'socialists' FROM generate_series(1, 439784);
    test=# INSERT INTO t_election (party)
           SELECT 'conservatives' FROM generate_series(1, 802132);
    test=# INSERT INTO t_election (party)
           SELECT 'liberals' FROM generate_series(1, 654033);

We add some data to the table and try to count how many votes each party has:

    test=# explain analyze SELECT party, count(*)
           FROM t_election
           GROUP BY 1;
                          QUERY PLAN
    ------------------------------------------------------
     HashAggregate (cost=39461.24..39461.26 rows=3 width=11)
       (actual time=609.456..609.456 rows=3 loops=1)
       Group Key: party
       -> Seq Scan on t_election (cost=0.00..29981.49 rows=1895949 width=11)
          (actual time=0.007..192.934 rows=1895949 loops=1)
     Planning time: 0.058 ms
     Execution time: 609.481 ms
    (5 rows)

First of all, the system performs a sequential scan and reads all the data. This data is passed on to a so-called HashAggregate.
For each party, PostgreSQL calculates a hash key and increments counters as the query moves through the table. At the end of the operation, we have a chunk of memory with three values and three counters. Very nice! As you can see, the explain analyze statement does not take more than 600 ms.

Note that the real execution time of the query will be a lot faster. The explain analyze statement has some serious overhead; still, it gives you valuable insights into the inner workings of the query.

Let's repeat the same example, but this time, we want to group by the ID. Here is the execution plan:

    test=# explain analyze SELECT id, count(*)
           FROM t_election
           GROUP BY 1;
                          QUERY PLAN
    ------------------------------------------------------
     GroupAggregate (cost=253601.23..286780.33 rows=1895949 width=4)
       (actual time=1073.769..1811.619 rows=1895949 loops=1)
       Group Key: id
       -> Sort (cost=253601.23..258341.10 rows=1895949 width=4)
          (actual time=1073.763..1288.432 rows=1895949 loops=1)
            Sort Key: id
            Sort Method: external sort  Disk: 25960kB
            -> Seq Scan on t_election (cost=0.00..29981.49 rows=1895949 width=4)
               (actual time=0.013..235.046 rows=1895949 loops=1)
     Planning time: 0.086 ms
     Execution time: 1928.573 ms
    (8 rows)

The execution time rises by almost 2 seconds and, more importantly, the plan changes. In this scenario, there is no way to fit all 1.9 million hash keys into a chunk of memory, because we are limited by work_mem. Therefore, PostgreSQL has to find an alternative plan: it sorts the data and runs GroupAggregate. How does this work? If you have a sorted list of data, you can count all equal values, send them off to the client, and move on to the next value. The main advantage is that we don't have to keep the entire result set in memory at once; with GroupAggregate, we can basically return aggregations of infinite size.
The downside is that large aggregates exceeding memory create temporary files, leading to potential disk I/O. Keep in mind that we are talking about the size of the result set, not the size of the underlying data. Let's try the same thing with more work_mem:

    test=# SET work_mem TO '1 GB';
    SET
    test=# explain analyze SELECT id, count(*)
           FROM t_election
           GROUP BY 1;
                          QUERY PLAN
    ------------------------------------------------------
     HashAggregate (cost=39461.24..58420.73 rows=1895949 width=4)
       (actual time=857.554..1343.375 rows=1895949 loops=1)
       Group Key: id
       -> Seq Scan on t_election (cost=0.00..29981.49 rows=1895949 width=4)
          (actual time=0.010..201.012 rows=1895949 loops=1)
     Planning time: 0.113 ms
     Execution time: 1478.820 ms
    (5 rows)

In this case, we adapted work_mem for the current session. Don't worry: changing work_mem locally does not change the parameter for other database connections. If you want to change things globally, you have to do so in postgresql.conf. Alternatively, 9.4 offers a command called ALTER SYSTEM SET work_mem TO '1 GB'; once SELECT pg_reload_conf() has been called, the config parameter is changed as well. What you see in this example is that the execution time is around half a second lower than before: PostgreSQL switches back to the more efficient plan.
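The difference between the two strategies can be mimicked in a few lines of Python. This is a conceptual sketch, not the PostgreSQL executor: hash aggregation keeps one counter per distinct key in memory, while sort-based aggregation (the GroupAggregate idea) sorts the input first and then only ever holds the current group.

```python
from collections import Counter

def hash_aggregate(rows):
    """One in-memory counter per distinct key: cheap for few groups,
    memory-hungry with millions of them (that memory is what work_mem bounds)."""
    return dict(Counter(rows))

def group_aggregate(rows):
    """Sort first, then stream: only the current group is kept in memory,
    so arbitrarily many groups can be produced."""
    result = {}
    current, count = None, 0
    for key in sorted(rows):
        if key != current:
            if current is not None:
                result[current] = count
            current, count = key, 0
        count += 1
    if current is not None:
        result[current] = count
    return result

# A miniature election: both strategies must agree on the counts.
votes = ["socialists"] * 3 + ["conservatives"] * 5 + ["liberals"] * 2
assert hash_aggregate(votes) == group_aggregate(votes)
print(group_aggregate(votes))
```

In PostgreSQL, the sort step itself spills to disk when it exceeds work_mem, which is exactly the "external sort Disk" line seen in the plan above.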
However, there is more; work_mem is also in charge of efficient sorting:

    test=# explain analyze SELECT * FROM t_election ORDER BY id DESC;
                          QUERY PLAN
    ------------------------------------------------------
     Sort (cost=227676.73..232416.60 rows=1895949 width=15)
       (actual time=695.004..872.698 rows=1895949 loops=1)
       Sort Key: id
       Sort Method: quicksort  Memory: 163092kB
       -> Seq Scan on t_election (cost=0.00..29981.49 rows=1895949 width=15)
          (actual time=0.013..188.876 rows=1895949 loops=1)
     Planning time: 0.042 ms
     Execution time: 995.327 ms
    (6 rows)

In this example, PostgreSQL can sort the entire dataset in memory. Earlier, we had to perform a so-called external sort ("external sort  Disk"), which is much slower, because temporary results have to be written to disk. The work_mem parameter is used for some other operations as well, but sorting and aggregation are the most common use cases. Keep in mind that work_mem should not be abused: a full work_mem allocation can be made for every sorting or grouping operation, so a single query might allocate work_mem more than once.

Improving maintenance_work_mem

To control the memory consumption of administrative tasks, PostgreSQL offers a parameter called maintenance_work_mem. It is used for index creation as well as for VACUUM. Usually, creating an index (B-tree) is mostly about sorting, and the idea of maintenance_work_mem is to speed things up. However, things are not as simple as they might seem. People might assume that increasing the parameter will always speed things up, but this is not necessarily true; in fact, smaller values might even be beneficial. We conducted some research to solve this riddle; the in-depth results can be found at http://www.cybertec.at/adjusting-maintenance_work_mem/. However, index creation is not the only beneficiary: maintenance_work_mem is also there to help VACUUM clean out indexes.
If maintenance_work_mem is too low, you might see VACUUM scanning tables repeatedly, because the dead items cannot all be stored in memory during a single VACUUM pass. This is something that should basically be avoided. Just like the other memory parameters, maintenance_work_mem can be set per session, or it can be set globally in postgresql.conf.

Adjusting effective_cache_size

The shared_buffers assigned to PostgreSQL are not the only cache in the system. The operating system also caches data and does a great job of improving speed. To make sure that the PostgreSQL optimizer knows what to expect from the operating system, effective_cache_size has been introduced. The idea is to tell PostgreSQL how much cache there is going to be around (shared buffers + operating system cache). The optimizer can then adjust its costs and estimates to reflect this knowledge. It is recommended to always set this parameter; otherwise, the planner might come up with suboptimal plans.

Summary

In this article, you learned how to detect basic performance bottlenecks. In addition to this, we covered the very basics of the PostgreSQL optimizer and indexes. At the end of the article, some important memory parameters were presented.

Resources for Article:

Further resources on this subject:
PostgreSQL 9: Reliable Controller and Disk Setup [article]
Running a PostgreSQL Database Server [article]
PostgreSQL: Tips and Tricks [article]
Packt
09 Oct 2014
25 min read

Administering and Monitoring Processes

In this article by Alexandre Borges, the author of Solaris 11 Advanced Administration, we will cover the following topics:

Monitoring and handling process execution
Managing process priorities on Solaris 11
Configuring FSS and applying it to projects

When working with Oracle Solaris 11, many of the executing processes make up applications, and the operating system itself runs many other processes and threads that take care of the smooth working of the environment. So, administrators have the daily task of monitoring the entire system and taking some hard decisions when necessary. Furthermore, not all processes have the same priority and urgency, and there are situations where it is suitable to give one process a higher priority than another (for example, rendering images). Here, we introduce a key concept: scheduling classes. Oracle Solaris 11 has a default process scheduler (svc:/system/scheduler:default) that controls the allocation of the CPU to each process according to its scheduling class. There are six important scheduling classes, as follows:

Time Sharing (TS): By default, all (non-GUI) processes or threads are assigned to this class, where the priority value is dynamic and adjustable according to the system load (-60 to 60). Additionally, the system scheduler switches a process/thread with a lower priority off a processor in favor of a process/thread with a higher priority.

Interactive (IA): This class has the same behavior as the TS class (dynamic, with an adjustable priority value from -60 to 60), but the IA class is suitable for GUI processes/threads that have an associated window. Additionally, when the mouse focus is on a window, the bound process or thread receives an increase of 10 points to its priority; when the mouse focus leaves the window, the bound process loses the same 10 points.
Fixed (FX): This class has the same behavior as TS, except that any process or thread associated with this class has a fixed priority value. The value range is from 0 to 59, and the initial priority of the process or thread is kept from the beginning to the end of the process's life.

System (SYS): This class is used for kernel processes or threads, where the possible priority goes from 60 to 99. Once a kernel process or thread begins processing, it is bound to the CPU until the end of its life (the system scheduler doesn't take it off the processor).

Realtime (RT): Processes and threads of this class have a fixed priority that ranges from 100 to 159. Any process or thread of this class has a higher priority than any other class.

Fair Share Scheduler (FSS): Any process or thread managed by this class is scheduled based on its share value (not on its priority value) and on the processor's utilization. The priority range goes from -60 to 60. Usually, the FSS class is used when the administrator wants to control resource distribution on the system using processor sets or when deploying Oracle zones.

It is possible to change the priority and class of any process or thread (except those in the system class), but it is uncommon (moving processes to FSS is one example). When handling a processor set (a group of processors), the processes bound to this group must belong to only one scheduling class (FSS or FX, but not both). It is recommended that you don't use the RT class unless it is necessary, because RT processes are bound to the processor (or core) up to their conclusion, and they only allow other processes to execute when idle. The FSS class is based on shares; personally, I establish a total of 100 shares and assign these shares to processes, threads, or even Oracle zones. This is a simple method of thinking about resources, such as CPUs, in percentages (for example, 10 shares = 10 percent).
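The class/priority ranges above, and the shares-as-percentages habit, can be summarized in a tiny reference sketch. This is plain Python for illustration; the data structure is mine, not a Solaris API.

```python
# Priority ranges per scheduling class, as described above.
SCHED_CLASSES = {
    "TS":  (-60, 60),   # Time Sharing (dynamic)
    "IA":  (-60, 60),   # Interactive (dynamic, +10 for the focused window)
    "FX":  (0, 59),     # Fixed (priority kept for the life of the process)
    "SYS": (60, 99),    # System (kernel processes/threads)
    "RT":  (100, 159),  # Realtime (beats every other class)
    "FSS": (-60, 60),   # Fair Share Scheduler (share-based, not priority-based)
}

def fss_cpu_percentage(shares, total_shares=100):
    """With a 100-share budget, shares map directly to CPU percentages."""
    return 100.0 * shares / total_shares

print(SCHED_CLASSES["RT"])     # (100, 159)
print(fss_cpu_percentage(10))  # 10.0
```

The 100-share budget is only a convention: with a total of 200 shares, 10 shares would mean 5 percent instead.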
Monitoring and handling process execution

Oracle Solaris 11 offers several methods to monitor and control process execution, and there isn't one best tool for this, because every technique has its advantages.

Getting ready

This recipe requires a virtual machine (VirtualBox or VMware) running Oracle Solaris 11 with at least 2 GB RAM. It is recommended that the system has more than one processor or core.

How to do it…

A common way to monitor processes on Oracle Solaris 11 is by using the good old ps command:

    root@solaris11-1:~# ps -efcl -o s,uid,pid,zone,class,pri,vsz,rss,time,comm | more

According to the output shown in the previous screenshot, we have:

S (status)
UID (user ID)
PID (process ID)
ZONE (zone)
CLS (scheduling class)
PRI (priority)
VSZ (virtual memory size)
RSS (resident set size)
TIME (the time for which the process has run on the CPU)
COMMAND (the command used to start the process)

Additionally, the possible process statuses are as follows:

O (running on a processor)
S (sleeping: waiting for an event to complete)
R (runnable: the process is on a queue)
T (the process is stopped, either because of a job control signal or because it is being traced)
Z (zombie: the process finished and the parent is not waiting)
W (waiting: the process is waiting for the CPU usage to drop to the CPU-caps enforced limit)

Do not confuse the virtual memory size (VSZ) with the resident set size (RSS). The VSZ of a process includes all information in physical memory (RAM) plus all mapped files and devices (swap). On the other hand, the RSS value only includes the information in memory (RAM).

Another important command for monitoring processes on Oracle Solaris 11 is the prstat tool.
For example, it is possible to list the threads of each process by executing the following command:

    root@solaris11-1:~# prstat -L
       PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/LWPID
      2609 root      129M   18M sleep    15    0   0:00:24 1.1% gnome-terminal/1
      1238 root       88M   74M sleep    59    0   0:00:41 0.5% Xorg/1
      2549 root      217M   99M sleep     1    0   0:00:45 0.3% java/22
      2549 root      217M   99M sleep     1    0   0:00:30 0.2% java/21
      2581 root       13M 2160K sleep    59    0   0:00:24 0.2% VBoxClient/3
      1840 root       37M 7660K sleep     1    0   0:00:26 0.2% pkg.depotd/2
    (truncated output)

The LWPID column shows the thread number within each process. Other good options are -J (summary per project), -Z (summary per zone), and -mL (includes information about thread microstates). To collect information about processes and projects, execute the following command:

    root@solaris11-1:~# prstat -J
       PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/NLWP
      2549 root      217M   99M sleep    55    0   0:01:56 0.8% java/25
      1238 root       88M   74M sleep    59    0   0:00:44 0.4% Xorg/3
      1840 root       37M 7660K sleep     1    0   0:00:55 0.4% pkg.depotd/64
    (truncated output)
    PROJID    NPROC  SWAP   RSS MEMORY      TIME  CPU PROJECT
         1       43 2264M  530M    13%   0:03:46 1.9% user.root
         0       79  844M  254M   6.1%   0:03:12 0.9% system
         3        2   11M 5544K   0.1%   0:00:55 0.0% default
    Total: 124 processes, 839 lwps, load averages: 0.23, 0.22, 0.22

Pay attention to the last column (PROJECT) in the second part of the output. It is very interesting to know that Oracle Solaris already works with projects, and some of them are created by default. By the way, it is always worth remembering that the structure of a project is project | tasks | processes.
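The per-project summary that prstat -J prints can be imitated by aggregating per-process samples, which also illustrates the project | tasks | processes hierarchy. This is a rough Python sketch with invented sample data, not a parser for real prstat output.

```python
from collections import defaultdict

# Invented (pid, project, cpu_percent) samples, one row per process.
samples = [
    (2549, "user.root", 0.8),
    (1238, "system",    0.4),
    (1840, "system",    0.4),
    (2581, "user.root", 0.2),
]

def summarize_by_project(rows):
    """Sum CPU usage and count processes per project, like prstat -J does."""
    summary = defaultdict(lambda: {"nproc": 0, "cpu": 0.0})
    for pid, project, cpu in rows:
        summary[project]["nproc"] += 1
        summary[project]["cpu"] += cpu
    return dict(summary)

print(summarize_by_project(samples))
```

The same rollup idea applies one level up (per task) and one level down (per thread), which is exactly what the -J, -Z, and -L switches select between.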
Collecting information about processes and zones is done by executing the following command:

    root@solaris11-1:~# prstat -Z
       PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/NLWP
      3735 root       13M   12M sleep    59    0   0:00:13 4.2% svc.configd/17
      3733 root       17M 8676K sleep    59    0   0:00:05 2.0% svc.startd/15
      2532 root      219M   83M sleep    47    0   0:00:15 0.8% java/25
      1214 root       88M   74M sleep     1    0   0:00:09 0.6% Xorg/3
       746 root        0K    0K sleep    99  -20   0:00:02 0.5% zpool-myzones/138
    (truncated output)
    ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
         1       11   92M   36M   0.9%   0:00:18 6.7% zone1
         0      129 3222M  830M    20%   0:02:09 4.8% global
         2        5   18M 6668K   0.2%   0:00:00 0.2% zone2

According to the output, there is a global zone and two other nonglobal zones (zone1 and zone2) in this system. Finally, to gather information about processes and their respective microstate information, execute the following command:

    root@solaris11-1:~# prstat -mL
       PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
      1925 pkg5srv  0.8 5.9 0.0 0.0 0.0 0.0  91 2.1 286   2  2K   0 htcacheclean/1
      1214 root     1.6 3.4 0.0 0.0 0.0 0.0  92 2.7 279  24  3K   0 Xorg/1
      2592 root     2.2 2.1 0.0 0.0 0.0 0.0  94 1.7 202   9  1K   0 gnome-termin/1
      2532 root     0.9 1.4 0.0 0.0 0.0  97 0.0 1.2 202   4 304   0 java/22
      5809 root     0.1 1.2 0.0 0.0 0.0 0.0  99 0.0  55   1  1K   0 prstat/1
      2532 root     0.6 0.5 0.0 0.0 0.0  98 0.0 1.3 102   6 203   0 java/21
    (truncated output)

The output from prstat -mL (gathering microstate information) is very interesting because it can give us clues about performance problems. For example, the LAT column (latency) indicates the percentage of time spent waiting for a CPU, and a constant value above zero in this column could point to a CPU performance problem.
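Watching the LAT column for sustained non-zero values can be automated. The sketch below is a hypothetical Python filter over pre-parsed prstat -mL samples (the threshold and the pre-parsed tuple layout are assumptions of mine, not a real prstat parser):

```python
def threads_waiting_for_cpu(samples, lat_threshold=2.0):
    """Return thread names whose CPU-latency percentage (LAT) exceeds
    the threshold - a possible sign of CPU contention."""
    return [name for name, lat in samples if lat > lat_threshold]

# LAT values taken from the prstat -mL output shown above.
samples = [
    ("htcacheclean/1", 2.1),
    ("Xorg/1",         2.7),
    ("gnome-termin/1", 1.7),
    ("java/22",        1.2),
    ("prstat/1",       0.0),
]
print(threads_waiting_for_cpu(samples))  # ['htcacheclean/1', 'Xorg/1']
```

In practice, a single sample is not enough: the point made above is that the value must stay above zero across repeated samples before it suggests a real problem.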
Continuing the explanation, a possible memory problem can be spotted using the TFL column (the percentage of time the process has spent processing text page faults) and the DFL column (the percentage of time the process has spent processing data page faults), which show whether, and for what percentage of time, a thread is waiting for memory paging. In a complementary manner, when handling processes, there are several useful commands, as shown in the following table:

    Objective                                    Command
    To show the process stack                    pstack <pid>
    To kill a process                            pkill <process name>
    To get the process ID of a process           pgrep -l <process name>
    To list the files opened by a process        pfiles <pid>
    To get the memory map of a process           pmap -x <pid>
    To list the shared libraries of a process    pldd <pid>
    To show all the arguments of a process       pargs -ea <pid>
    To trace a process                           truss -p <pid>
    To reap a zombie process                     preap <pid>

For example, to find out which shared libraries are used by the top command, execute the following sequence of commands:

    root@solaris11-1:~# top
    root@solaris11-1:~# ps -efcl | grep top
     0 S     root 2672 2649   IA  59        ?   1112        ? 05:32:53 pts/3       0:00 top
     0 S     root 2674 2606   IA  54        ?   2149        ? 05:33:01 pts/2       0:00 grep top
    root@solaris11-1:~# pldd 2672
    2672: top
    /lib/amd64/libc.so.1
    /usr/lib/amd64/libkvm.so.1
    /lib/amd64/libelf.so.1
    /lib/amd64/libkstat.so.1
    /lib/amd64/libm.so.2
    /lib/amd64/libcurses.so.1
    /lib/amd64/libthread.so.1

To find the top-most stack, execute the following command:

    root@solaris11-1:~# pstack 2672
    2672: top
     ffff80ffbf54a66a pollsys (ffff80ffbfffd070, 1, ffff80ffbfffd1f0, 0)
     ffff80ffbf4f1995 pselect () + 181
     ffff80ffbf4f1e14 select () + 68
     000000000041a7d1 do_command () + ed
     000000000041b5b3 main () + ab7
     000000000040930c ????????
()

To verify which files are opened by an application such as the Firefox browser, we have to execute the following commands:

root@solaris11-1:~# firefox &
root@solaris11-1:~# ps -efcl | grep firefox
0 S     root 2600 2599   IA 59       ? 61589       ? 13:50:14 pts/1       0:07 firefox
0 S     root 2616 2601   IA 58       ?  2149       ? 13:51:18 pts/2       0:00 grep firefox
root@solaris11-1:~# pfiles 2600
2600: firefox
Current rlimit: 1024 file descriptors
   0: S_IFCHR mode:0620 dev:563,0 ino:45703982 uid:0 gid:7 rdev:195,1
      O_RDWR
      /dev/pts/1
      offset:997
   1: S_IFCHR mode:0620 dev:563,0 ino:45703982 uid:0 gid:7 rdev:195,1
      O_RDWR
      /dev/pts/1
      offset:997
   2: S_IFCHR mode:0620 dev:563,0 ino:45703982 uid:0 gid:7 rdev:195,1
      O_RDWR
      /dev/pts/1
      offset:997
(truncated output)

Another excellent command from the previous table is pmap, which shows information about the address space of a process. For example, to see the address space of the current shell, execute the following command:

root@solaris11-1:~# pmap -x $$
2675: bash
Address   Kbytes     RSS    Anon  Locked Mode   Mapped File
08050000    1208    1184       -       - r-x--  bash
0818E000      24      24       8       - rw---  bash
08194000     188     188      32       - rw---  [ heap ]
EF470000      56      52       -       - r-x--  methods_unicode.so.3
EF48D000       8       8       -       - rwx--  methods_unicode.so.3
EF490000    6744     248       -       - r-x--  en_US.UTF-8.so.3
EFB36000       4       4       -       - rw---  en_US.UTF-8.so.3
FE550000     184     148       -       - r-x--  libcurses.so.1
FE58E000      16      16       -       - rw---  libcurses.so.1
FE592000       8       8       -       - rw---  libcurses.so.1
FE5A0000       4       4       4       - rw---  [ anon ]
FE5B0000      24      24       -       - r-x--  libgen.so.1
FE5C6000       4       4       -       - rw---  libgen.so.1
FE5D0000      64      16       -       - rwx--  [ anon ]
FE5EC000       4       4       -       -
rwxs-  [ anon ]
FE5F0000       4       4       4       - rw---  [ anon ]
FE600000      24      12       4       - rwx--  [ anon ]
FE610000    1352    1072       -       - r-x--  libc_hwcap1.so.1
FE772000      44      44      16       - rwx--  libc_hwcap1.so.1
FE77D000       4       4       -       - rwx--  libc_hwcap1.so.1
FE780000       4       4       4       - rw---  [ anon ]
FE790000       4       4       4       - rw---  [ anon ]
FE7A0000       4       4       -       - rw---  [ anon ]
FE7A8000       4       4       -       - r--s-  [ anon ]
FE7B4000     220     220       -       - r-x--  ld.so.1
FE7FB000       8       8       4       - rwx--  ld.so.1
FE7FD000       4       4       -       - rwx--  ld.so.1
FEFFB000      16      16       4       - rw---  [ stack ]
-------- ------- ------- ------- -------
total Kb   10232    3332      84       -

The pmap output shows us the following essential information:

Address: The starting virtual address of each mapping
Kbytes: The virtual size of each mapping
RSS: The amount of RAM (in KB) for each mapping, including shared memory
Anon: The number of pages of anonymous memory, which is usually and roughly defined as the sum of heap and stack pages without a counterpart on the disk (excluding the memory shared with other address spaces)
Locked: The number of pages locked in the mapping
Mode: The virtual memory permissions for each mapping. The possible and valid permissions are as follows:
  x: Any instructions inside this mapping can be executed by the process
  w: The mapping can be written by the process
  r: The mapping can be read by the process
  s: The mapping is shared with other processes
  R: There is no swap space reserved for this mapping
Mapped File: The name of each mapping, such as an executable, a library, or anonymous pages (heap and stack)

Finally, there is an excellent framework, DTrace, where you can get information on processes and anything else related to Oracle Solaris 11.

What is DTrace?
It is a clever instrumentation tool used for troubleshooting and, mainly, as a framework for performance analysis. DTrace is composed of thousands of probes (sensors) scattered throughout the Oracle Solaris kernel. Briefly, when a program runs, any probe it touches in memory, CPU, or I/O code paths is triggered and gathers information from the related activity, giving us insight into where the system is spending more time and making it possible to create reports. DTrace is nonintrusive (it does not add a performance burden to the system) and safe (by default, only the root user has enough privileges to use DTrace), and it uses the D script language (similar to AWK). Unlike other tools such as truss, apptrace, sar, prex, tnf, lockstat, and mdb, which only narrow down the problematic area, DTrace pinpoints the exact source of the problem. The fundamental structure of a DTrace probe is as follows:

provider:module:function:name

The components of the probe are explained as follows:

provider: These are libraries that instrument regions of the system, such as syscall (system calls), proc (processes), fbt (function boundary tracing), lockstat, and so on
module: This represents the shared library or kernel module where the probe was created
function: This is the program, process, or thread function that contains the probe
name: This is the probe's name

When using DTrace, it is possible to associate an action with each probe, to be executed if the probe is touched (triggered). By default, all probes are disabled and don't consume CPU processing.
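To make the four-part layout concrete, here is a quick sketch (plain shell and awk, no DTrace required) that splits a probe description into its components; empty fields act as wildcards:

```shell
# Split a DTrace probe description into its four components:
# provider:module:function:name (an empty field matches anything).
probe_fields() {
  echo "$1" | awk -F: '{
    printf "provider=%s module=%s function=%s name=%s\n", $1, $2, $3, $4
  }'
}

probe_fields "syscall::read:entry"
# -> provider=syscall module= function=read name=entry
probe_fields "proc:::exec-success"
# -> provider=proc module= function= name=exec-success
```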
DTrace probes are listed by executing the following command:

root@solaris11-1:~# dtrace -l | more

The output of the previous command is shown in the following screenshot:

The number of available probes on Oracle Solaris 11 is reported by the following command:

root@solaris11-1:~# dtrace -l | wc -l
   75899

After this brief introduction to DTrace, we can use it to list any new processes (including their respective arguments) by running the following command:

root@solaris11-1:~# dtrace -n 'proc:::exec-success { trace(curpsinfo->pr_psargs); }'
dtrace: description 'proc:::exec-success ' matched 1 probe
CPU     ID                   FUNCTION:NAME
  3   7639         exec_common:exec-success   bash
  2   7639         exec_common:exec-success   /usr/bin/firefox
  0   7639         exec_common:exec-success   sh -c ps -e -o 'pid tty time comm'> /var/tmp/aaacLaiDl
  0   7639         exec_common:exec-success   ps -e -o pid tty time comm
  0   7639         exec_common:exec-success   ps -e -o pid tty time comm
  1   7639         exec_common:exec-success   sh -c ps -e -o 'pid tty time comm'> /var/tmp/caaeLaiDl
  2   7639         exec_common:exec-success   sh -c ps -e -o 'pid tty time comm'> /var/tmp/baadLaiDl
  2   7639         exec_common:exec-success   ps -e -o pid tty
(truncated output)

There are very useful one-line tracers, as shown previously, available from Brendan Gregg's website at http://www.brendangregg.com/DTrace/dtrace_oneliners.txt. It is feasible to get any kind of information using DTrace.
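One-liners like the exec-success example above can also be kept as standalone D scripts. The following sketch only generates and displays such a script; actually running it requires a Solaris kernel and root privileges, and the interpreter path assumes dtrace's usual /usr/sbin location:

```shell
# Save the exec-success one-liner as a standalone D script. On Solaris it
# would then be run as: dtrace -s /tmp/execsnoop.d (as root).
cat > /tmp/execsnoop.d <<'EOF'
#!/usr/sbin/dtrace -s

proc:::exec-success
{
        trace(curpsinfo->pr_psargs);
}
EOF
cat /tmp/execsnoop.d
```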
For example, get the system call count per program by executing the following command: root@solaris11-1:~# dtrace -n 'syscall:::entry { @num[pid,execname] = count(); }' dtrace: description 'syscall:::entry ' matched 213 probes ^C        11 svc.startd                                           2        13 svc.configd                                         2        42 netcfgd                                             2 (truncated output)      2610 gnome-terminal                                     1624      2549 java                                               2464      1221 Xorg                                              5246      2613 dtrace                                             5528      2054 htcacheclean                                       9503 To get the total number of read bytes per process, execute the following command: root@solaris11-1:~# dtrace -n 'sysinfo:::readch { @bytes[execname] = sum(arg0); }' dtrace: description 'sysinfo:::readch ' matched 4 probes ^C in.mpathd                                                     1 named                                                         56 sed                                                         100 wnck-applet                                                 157 (truncated output) VBoxService                                               20460 svc.startd                                                40320 Xorg                                                       65294 ps                                                       1096780 thunderbird-bin                                         3191863 To get the number of write bytes by process, run the following command: root@solaris11-1:~# dtrace -n 'sysinfo:::writech { @bytes[execname] = sum(arg0); }' dtrace: description 'sysinfo:::writech ' matched 4 probes ^C dtrace                                                        1 gnome-power-mana                                               8 xscreensaver                                                
 36 gnome-session                                               367 clock-applet                                                404 named                                                       528 gvfsd                                                       748 (truncated output) metacity                                                   24616 ps                                                        59590 wnck-applet                                               65523 gconfd-2                                                   83234 Xorg                                                     184712 firefox                                                   403682 To know the number of pages paged-in by process, execute the following command: root@solaris11-1:~# dtrace -n 'vminfo:::pgpgin { @pg[execname] = sum(arg0); }' dtrace: description 'vminfo:::pgpgin ' matched 1 probe ^C (no output) To list the disk size by process, run the following command: root@solaris11-1:~# dtrace -n 'io:::start { printf("%d %s %d",pid,execname,args[0]->b_bcount); }' dtrace: description 'io:::start ' matched 3 probes CPU     ID                    FUNCTION:NAME 1   6962             bdev_strategy:start 5 zpool-rpool 4096 1   6962             bdev_strategy:start 5 zpool-rpool 4096 2   6962             bdev_strategy:start 5 zpool-rpool 4096 2   6962             bdev_strategy:start 2663 firefox 3584 2   6962             bdev_strategy:start 2663 firefox 3584 2   6962             bdev_strategy:start 2663 firefox 3072 2   6962             bdev_strategy:start 2663 firefox 4096 ^C (truncated output) From Brendan Gregg's website (http://www.brendangregg.com/dtrace.html), there are other good and excellent scripts. 
For example, prustat.d (which we can save in our home directory) is one of them and its output is self-explanatory; it can be obtained using the following commands: root@solaris11-1:~# chmod u+x prustat.d root@solaris11-1:~# ./prustat.d PID   %CPU   %Mem %Disk   %Net COMM 2537   0.91   2.38   0.00   0.00 java 1218   0.70   1.81   0.00   0.00 Xorg 2610   0.51   0.47   0.00   0.00 gnome-terminal 2522   0.00   0.96   0.00   0.00 nautilus 2523   0.01   0.78   0.00   0.00 updatemanagerno 2519   0.00   0.72   0.00   0.00 gnome-panel 1212   0.42   0.20   0.00   0.00 pkg.depotd 819   0.00   0.53   0.00   0.00 named 943   0.17   0.36   0.00  0.00 poold    13   0.01   0.47   0.00   0.00 svc.configd (truncated output) From the DTraceToolkit website (http://www.brendangregg.com/dtracetoolkit.html), we can download and save the topsysproc.d script in our home directory. Then, by executing it, we are able to find which processes execute more system calls, as shown in the following commands: root@solaris11-1:~/DTraceToolkit-0.99/Proc# ./topsysproc 10 2014 May 4 19:25:10, load average: 0.38, 0.30, 0.28   syscalls: 12648    PROCESS                        COUNT    isapython2.6                       20    sendmail                           20    dhcpd                               24    httpd.worker                       30    updatemanagernot                   40    nautilus                            42    xscreensaver                       50    tput                               59    gnome-settings-d                   62    metacity                           75    VBoxService                         81    ksh93                            118    clear                             163    poold                             201    pkg.depotd                         615    VBoxClient                         781    java                             1249    gnome-terminal                   2224    dtrace                           2712    Xorg                             3965 An overview 
of the recipe You learned how to monitor processes using several tools such as prstat, ps, and dtrace. Furthermore, you saw several commands that explain how to control and analyze a process. Managing processes' priority on Solaris 11 Oracle Solaris 11 allows us to change the priority of processes using the priocntl command either during the start of the process or after the process is run. Getting ready This recipe requires a virtual machine (VirtualBox or VMware) running Oracle Solaris 11 with a 2 GB RAM at least. It is recommended that the system have more than one processor or core. How to do it… In the Introduction section, we talked about scheduling classes and this time, we will see more information on this subject. To begin, list the existing and active classes by executing the following command: root@solaris11-1:~# priocntl -l CONFIGURED CLASSES ================== SYS (System Class) TS (Time Sharing) Configured TS User Priority Range: -60 through 60 SDC (System Duty-Cycle Class) FSS (Fair Share) Configured FSS User Priority Range: -60 through 60 FX (Fixed priority) Configured FX User Priority Range: 0 through 60 IA (Interactive) Configured IA User Priority Range: -60 through 60 RT (Real Time) Configured RT User Priority Range: 0 through 59 When handling priorities, which we learned in this article, only the positive part is important and we need to take care because the values shown in the previous output have their own class as the reference. Thus, they are not absolute values. To show a simple example, start a process with a determined class (FX) and priority (55) by executing the following commands: root@solaris11-1:~# priocntl -e -c FX -m 60 -p 55 gcalctool root@solaris11-1:~# ps -efcl | grep gcalctool 0 S     root 2660 2646   FX 55       ? 33241       ? 04:48:52 pts/1       0:01 gcalctool 0 S     root 2664 2661 FSS 22       ?   2149       ? 
04:50:09 pts/2       0:00 grep gcalctool As can be seen previously, the process is using exactly the class and priority that we have chosen. Moreover, it is appropriate to explain some options such as -e (to execute a specified command), -c (to set the class), -p (the chosen priority inside the class), and -m (the maximum limit that the priority of a process can be raised to). The next exercise is to change the process priority after it starts. For example, by executing the following command, the top tool will be executed in the FX class with an assigned priority equal to 40, as shown in the following command: root@solaris11-1:~# priocntl -e -c FX -m 60 -p 40 top root@solaris11-1:~# ps -efcl | grep top 0 S     root 2662 2649   FX 40       ?   1112       ? 05:16:21 pts/3       0:00 top 0 S     root 2664 2606   IA 33       ?   2149       ? 05:16:28 pts/2       0:00 grep top Then, to change the priority that is running, execute the following command: root@solaris11-1:~# priocntl -s -p 50 2662 root@solaris11-1:~# ps -efcl | grep top 0 S     root 2662 2649   FX 50       ?   1112       ? 05:16:21 pts/3       0:00 top 0 S     root 2667 2606   IA 55       ?   2149       ? 05:17:00 pts/2       0:00 grep top This is perfect! The -s option is used to change the priorities' parameters, and the –p option assigns the new priority to the process. If we tried to use the TS class, the results would not have been the same because this test system does not have a serious load (it's almost idle) and in this case, the priority would be raised automatically to around 59. An overview of the recipe You learned how to configure a process class as well as change the process priority at the start and during its execution using the priocntl command. Configuring FSS and applying it to projects The FSS class is the best option to manage resource allocation (for example, CPU) on Oracle Solaris 11. In this section, we are going to learn how to use it. 
Getting ready This recipe requires a virtual machine (VirtualBox or VMware) running Oracle Solaris 11 with a 4 GB RAM at least. It is recommended that the system has only one processor or core. How to do it… In Oracle Solaris 11, the default scheduler class is TS, as shown by the following command: root@solaris11-1:~# dispadmin -d TS (Time Sharing) This default configuration comes from the /etc/dispadmin.conf file: root@solaris11-1:~# more /etc/dispadmin.conf # # /etc/dispadmin.conf # # Do NOT edit this file by hand -- use dispadmin(1m) instead. # DEFAULT_SCHEDULER=TS If we need to verify and change the default scheduler, we can accomplish this task by running the following commands: root@solaris11-1:~# dispadmin -d FSS root@solaris11-1:~# dispadmin -d FSS (Fair Share) root@solaris11-1:~# more /etc/dispadmin.conf # # /etc/dispadmin.conf # # Do NOT edit this file by hand -- use dispadmin(1m) instead. # DEFAULT_SCHEDULER=FSS Unfortunately, this new setting only takes effect for newly created processes that are run after the command, but current processes still are running using the previously configured classes (TS and IA), as shown in the following command: root@solaris11-1:~# ps -efcl -o s,uid,pid,zone,class,pri,comm | more S   UID   PID     ZONE CLS PRI COMMAND T     0     0   global SYS 96 sched S     0     5   global SDC 99 zpool-rpool S   0     6   global SDC 99 kmem_task S     0     1   global   TS 59 /usr/sbin/init S     0     2   global SYS 98 pageout S     0     3   global SYS 60 fsflush S     0     7   global SYS 60 intrd S     0     8   global SYS 60 vmtasks S 60002 1173  global   TS 59 /usr/lib/fm/notify/smtp-notify S     0   11   global   TS 59 /lib/svc/bin/svc.startd S     0   13   global   TS 59 /lib/svc/bin/svc.configd S   16   99   global   TS 59 /lib/inet/ipmgmtd S     0   108   global   TS 59 /lib/inet/in.mpathd S   17   40   global   TS 59 /lib/inet/netcfgd S     0   199   global   TS 59 /usr/sbin/vbiosd S     0   907   global   TS 59 
/usr/lib/fm/fmd/fmd (truncated output) To change the settings from all current processes (the -i option) to using FSS (the -c option) without rebooting the system, execute the following command: root@solaris11-1:~# priocntl -s -c FSS -i all root@solaris11-1:~# ps -efcl -o s,uid,pid,zone,class,pri,comm | more S   UID   PID     ZONE CLS PRI COMMAND T     0     0   global SYS 96 sched S     0     5   global SDC 99 zpool-rpool S     0     6   global SDC 99 kmem_task S     0     1   global   TS 59 /usr/sbin/init S     0     2   global SYS 98 pageout S     0     3   global SYS 60 fsflush S     0     7   global SYS 60 intrd S     0     8   global SYS 60 vmtasks S 60002 1173   global FSS 29 /usr/lib/fm/notify/smtp-notify S     0   11   global FSS 29 /lib/svc/bin/svc.startd S     0   13   global FSS 29 /lib/svc/bin/svc.configd S   16   99   global FSS 29 /lib/inet/ipmgmtd S     0   108   global FSS 29 /lib/inet/in.mpathd S   17   40   global FSS 29 /lib/inet/netcfgd S     0   199   global FSS 29 /usr/sbin/vbiosd S     0   907   global FSS 29 /usr/lib/fm/fmd/fmd S     0 2459   global FSS 29 gnome-session S   15   66   global FSS 29 /usr/sbin/dlmgmtd S     1   88   global FSS 29 /lib/crypto/kcfd S     0   980   global FSS 29 /usr/lib/devchassis/devchassisd S     0   138   global FSS 29 /usr/lib/pfexecd S     0   277   global FSS 29 /usr/lib/zones/zonestatd O     0 2657   global FSS   1 more S   16   638   global FSS 29 /lib/inet/nwamd S   50 1963   global FSS 29 /usr/bin/dbus-launch S     0   291   global FSS 29 /usr/lib/dbus-daemon S     0   665   global FSS 29 /usr/lib/picl/picld (truncated output) It's almost done, but the init process (PID equal to 1) was not changed to the FSS class, unfortunately. 
This change operation is done manually, by executing the following commands: root@solaris11-1:~# priocntl -s -c FSS -i pid 1 root@solaris11-1:~# ps -efcl -o s,uid,pid,zone,class,pri,comm | more S   UID   PID     ZONE CLS PRI COMMAND T     0     0   global SYS 96 sched S     0     5   global SDC 99 zpool-rpool S     0     6   global SDC 99 kmem_task S     0     1   global FSS 29 /usr/sbin/init S     0     2   global SYS 98 pageout S     0     3   global SYS 60 fsflush S     0     7   global SYS 60 intrd S     0     8   global SYS 60 vmtasks S 60002 1173   global FSS 29 /usr/lib/fm/notify/smtp-notify S    0   11   global FSS 29 /lib/svc/bin/svc.startd S     0   13   global FSS 29 /lib/svc/bin/svc.configd S   16   99   global FSS 29 /lib/inet/ipmgmtd S     0   108   global FSS 29 /lib/inet/in.mpathd (truncated output) From here, it would be possible to use projects (a very nice concept from Oracle Solaris), tasks, and FSS to make an attractive example. It follows a quick demonstration. From an initial installation, Oracle Solaris 11 already has some default projects, as shown by the following commands: root@solaris11-1:~# projects user.root default root@solaris11-1:~# projects -l system projid : 0 comment: "" users : (none) groups : (none) attribs: user.root projid : 1 comment: "" users : (none) groups : (none) attribs: (truncated output) root@solaris11-1:~# more /etc/project system:0:::: user.root:1:::: noproject:2:::: default:3:::: group.staff:10:::: In this exercise, we are going to create four new projects: ace_proj_1, ace_proj_2, ace_proj_3, and ace_proj_4. For each project will be associated an amount of shares (40, 30, 20, and 10 respectively). Additionally, it will create some useless, but CPU-consuming tasks by starting a Firefox instance. 
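Before creating the new projects, note the format of the /etc/project entries listed above: each line holds six colon-separated fields (projname:projid:comment:user-list:group-list:attribs). A throwaway awk sketch to decode them:

```shell
# Decode /etc/project entries:
# projname:projid:comment:user-list:group-list:attribs
decode_project() {
  awk -F: '{
    printf "name=%s id=%s users=%s attribs=%s\n",
           $1, $2, ($4 == "" ? "(none)" : $4), ($6 == "" ? "(none)" : $6)
  }'
}

# On a live system: decode_project < /etc/project
# Replaying the default entries shown above:
decode_project <<'EOF'
system:0::::
user.root:1::::
default:3::::
EOF
# -> name=system id=0 users=(none) attribs=(none)
# -> name=user.root id=1 users=(none) attribs=(none)
# -> name=default id=3 users=(none) attribs=(none)
```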
Therefore, execute the following commands to perform the tasks: root@solaris11-1:~# projadd -U root -K "project.cpu-shares=(priv,40,none)" ace_proj_1 root@solaris11-1:~# projadd -U root -K "project.cpu-shares=(priv,30,none)" ace_proj_2 root@solaris11-1:~# projadd -U root -K "project.cpu-shares=(priv,20,none)" ace_proj_3 root@solaris11-1:~# projadd -U root -K "project.cpu-shares=(priv,10,none)" ace_proj_4 root@solaris11-1:~# projects user.root default ace_proj_1 ace_proj_2 ace_proj_3 ace_proj_4 Here is where the trick comes in. The FSS class only starts to act when: The total CPU consumption by all processes is over 100 percent The sum of processes from defined projects is over the current number of CPUs Thus, to be able to see the FSS effect, as explained previously, we have to repeat the next four commands several times (using the Bash history is suitable here), shown as follows: root@solaris11-1:~# newtask -p ace_proj_1 firefox & [1] 3016 root@solaris11-1:~# newtask -p ace_proj_2 firefox & [2] 3032 root@solaris11-1:~# newtask -p ace_proj_3 firefox & [3] 3037 root@solaris11-1:~# newtask -p ace_proj_4 firefox & [4] 3039 As time goes by and the number of tasks increase, each project will be approaching the FSS share limit (40 percent, 30 percent, 20 percent, and 10 percent of processor, respectively). 
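The share limits quoted above follow directly from the arithmetic shares / total shares. A small sketch reproducing them, with the project names and share counts taken from the projadd commands above:

```shell
# FSS entitlement under full contention = project shares / total shares.
fss_entitlements() {
  awk 'BEGIN {
    n = split("ace_proj_1:40 ace_proj_2:30 ace_proj_3:20 ace_proj_4:10", s, " ")
    for (i = 1; i <= n; i++) { split(s[i], f, ":"); total += f[2] }
    for (i = 1; i <= n; i++) {
      split(s[i], f, ":")
      printf "%s: %d shares -> %.0f%% of CPU under contention\n",
             f[1], f[2], 100 * f[2] / total
    }
  }'
}
fss_entitlements
# -> ace_proj_1: 40 shares -> 40% of CPU under contention
# -> ace_proj_2: 30 shares -> 30% of CPU under contention
# -> ace_proj_3: 20 shares -> 20% of CPU under contention
# -> ace_proj_4: 10 shares -> 10% of CPU under contention
```

These are entitlements, not hard caps: as the text notes, an idle project's unused share can be borrowed by the others.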
We can follow this trend by executing the next command:

root@solaris11-1:~# prstat -JR
   PID USERNAME  SIZE   RSS STATE   PRI NICE     TIME  CPU PROCESS/NLWP
  3516 root     8552K 1064K cpu1     49    0  0:01:25  25% dd/1
  3515 root     8552K 1064K run       1    0  0:01:29 7.8% dd/1
  1215 root       89M   29M run      46    0  0:00:56 0.0% Xorg/3
  2661 root       13M  292K sleep    59    0  0:00:28 0.0% VBoxClient/3
   750 root       13M 2296K sleep    55    0  0:00:02 0.0% nscd/32
  3518 root       11M 3636K cpu0     59    0  0:00:00 0.0%
(truncated output)
PROJID    NPROC  SWAP   RSS MEMORY     TIME  CPU PROJECT
   100        4   33M 4212K   0.1%  0:01:49  35% ace_proj_1
   101        4   33M 4392K   0.1%  0:01:14  28% ace_proj_2
   102        4   33M 4204K   0.1%  0:00:53  20% ace_proj_3
   103        4   33M 4396K   0.1%  0:00:30  11% ace_proj_4
     3        2   10M 4608K   0.1%  0:00:06 0.8% default
     1       41 2105M  489M    12%  0:00:09 0.7% user.root
     0       78  780M  241M   5.8%  0:00:20 0.3% system

The prstat command with the -J option shows a summary of the existing projects, and the -R option asks the kernel to run prstat itself in the RT scheduling class. If you have trouble reproducing these results, replace each firefox command with dd if=/dev/zero of=/dev/null & to achieve the same effect. It is important to highlight that while a project is not consuming its full share of the CPU, other projects can borrow the unused shares (percentages). This is why ace_proj_4 has 11 percent: ace_proj_1 has taken only 35 percent of its 40 percent maximum.

An overview of the recipe

In this section, you learned how to change the default scheduler from TS to FSS in a temporary and persistent way. Finally, you saw a complete example using projects, tasks, and FSS.
References

Solaris Performance and Tools: DTrace and MDB Techniques for Solaris 10 and OpenSolaris; Brendan Gregg, Jim Mauro, Richard McDougall; Prentice Hall; ISBN-13: 978-0131568198
DTraceToolkit website at http://www.brendangregg.com/dtracetoolkit.html
Dtrace.org website at http://dtrace.org/blogs/

Summary

In this article, we learned to monitor and handle process execution, manage process priority on Solaris 11, and configure FSS and apply it to projects.
Packt
08 Oct 2014
11 min read

Creating Routers

In this article by James Denton, author of the book Learning OpenStack Networking (Neutron), we will create Neutron routers and attach them to networks. The Neutron L3 agent enables IP routing and NAT support for instances within the cloud by utilizing network namespaces to provide isolated routing instances. By creating networks and attaching them to routers, tenants can expose connected instances and their applications to the Internet. The neutron-l3-agent service was installed on the controller node as part of the overall Neutron installation process.

Configuring the Neutron L3 agent

Before the neutron-l3-agent service can be started, it must be configured. Neutron stores the L3 agent configuration in the /etc/neutron/l3_agent.ini file. The most common configuration options will be covered here.

Defining an interface driver

Like the previously installed agents, the Neutron L3 agent must be configured to use an interface driver that corresponds to the chosen networking plugin. Using crudini, configure the Neutron L3 agent to use one of the following drivers:

For LinuxBridge:

# crudini --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver

For Open vSwitch:

# crudini --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver

Setting the external network

The external network connected to a router not only provides external connectivity to the router and the instances behind it, but also serves as the network from which floating IPs are derived. In Havana, each L3 agent in the cloud can be associated with only one external network. In Icehouse, L3 agents are capable of supporting multiple external networks. To be eligible to serve as an external network, a provider network must have been configured with its router:external attribute set to true.
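As an illustrative sketch, such a network could be created as follows. The network name and provider options are placeholders (only the router:external attribute is what makes it eligible), and echo keeps the command as a dry run so no Neutron endpoint is needed:

```shell
# Dry-run sketch: a provider network becomes eligible as an external
# network via router:external=true. Name and provider options are
# illustrative placeholders; remove "echo" to execute for real.
make_external_net() {
  echo neutron net-create ext-net \
       --provider:network_type flat \
       --provider:physical_network external \
       --router:external=true
}
make_external_net
```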
In Havana, if more than one provider network has the attribute set to true, then the gateway_external_network_id configuration option must be used to associate an external network with the agent. To define a specific external network, configure the gateway_external_network_id option as follows:

gateway_external_network_id = <UUID of eligible provider network>

In Havana, if this option is left empty, the agent will enforce that only a single external network exists. The agent will automatically use the network for which the router:external attribute is set to true. The default configuration contains an empty or unset value and is sufficient for now.

Setting the external bridge

The L3 agent must be aware of how to connect the external interface of a router to the network. The external_network_bridge configuration option defines the bridge on the host to which the external interface will be connected. In earlier releases of Havana, the default value of external_network_bridge was br-ex, a bridge expected to be configured manually outside of OpenStack and intended to be dedicated to the external network. Because that bridge is not fully managed by OpenStack, provider attributes of the network created within Neutron, including the segmentation ID, network type, and the provider bridge itself, are ignored. To fully utilize a provider network and its attributes, the external_network_bridge configuration option should be set to an empty, or blank, value. By doing so, Neutron will adhere to the attributes of the network and place the external interface of routers into a bridge that it creates, along with a physical or virtual VLAN interface used to provide external connectivity. When using Open vSwitch, the external interface of the router is placed in the integration bridge and assigned to the appropriate local VLAN. With the LinuxBridge plugin, the external interface of routers is placed into a Linux bridge that corresponds to the external network.
Using crudini, set the external_network_bridge configuration option to an empty value as follows:

# crudini --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge

Enabling the metadata proxy

When Neutron routers are used as the gateway for instances, requests for metadata are proxied by the router and forwarded to the Nova metadata service. This feature is enabled by default and can be disabled by setting the enable_metadata_proxy value to false in the l3_agent.ini configuration file.

Starting the Neutron L3 agent

To start the neutron-l3-agent service and configure it to start at boot, issue the following commands on the controller node:

# service neutron-l3-agent start
# chkconfig neutron-l3-agent on

Verify the agent is running:

# service neutron-l3-agent status

The service should return an output similar to the following:

[root@controller neutron]# service neutron-l3-agent status
neutron-l3-agent (pid 13501) is running...

If the service remains stopped, troubleshoot any issues that may be found in the /var/log/neutron/l3-agent.log log file.

Router management in the CLI

Neutron offers a number of commands that can be used to create and manage routers. The primary commands associated with router management include:

router-create
router-delete
router-gateway-clear
router-gateway-set
router-interface-add
router-interface-delete
router-list
router-list-on-l3-agent
router-port-list
router-show
router-update

Creating routers in the CLI

Routers in Neutron are associated with tenants and are available for use only by users within the tenant that created them. As an administrator, you can create routers on behalf of tenants during the creation process. To create a router, use the router-create command as follows:

Syntax: router-create [--tenant-id TENANT_ID] [--admin-state-down] NAME

Working with router interfaces in the CLI

Neutron routers have two types of interfaces: gateway and internal.
The gateway interface of a Neutron router is analogous to the WAN interface of a hardware router: it is the interface connected to an upstream device that provides connectivity to external resources. The internal interfaces of Neutron routers are analogous to the LAN interfaces of hardware routers: they are connected to tenant networks and often serve as the gateway for connected instances.

Attaching internal interfaces to routers

To create an interface in the router and attach it to a subnet, use the router-interface-add command as follows:

Syntax: router-interface-add <router-id> <INTERFACE>

In this case, INTERFACE is the ID of the subnet to be attached to the router. In Neutron, a network may contain multiple subnets. It is important to attach the router to each subnet so that it properly serves as the gateway for those subnets. Once the command is executed, Neutron creates a port in the database that is associated with the router interface. The L3 agent is responsible for connecting interfaces within the router namespace to the proper bridge.

Attaching a gateway interface to a router

The external interface of a Neutron router is referred to as the gateway interface. A router is limited to a single gateway interface. To be eligible for use as an external network for gateway interfaces, a provider network must have its router:external attribute set to true.

To attach a gateway interface to a router, use the router-gateway-set command as follows:

Syntax: router-gateway-set <router-id> <external-network-id> [--disable-snat]

The default behavior of a Neutron router is to source NAT all outbound traffic from instances that do not have a corresponding floating IP. To disable this functionality, append --disable-snat to the router-gateway-set command.
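Putting these commands together with router-create, a minimal end-to-end workflow might look like the following sketch. The router name is arbitrary, and the angle-bracket values stand in for IDs returned by `neutron subnet-list` and `neutron net-list`:

```shell
# Create a router in the current tenant
neutron router-create demo-router

# Attach an internal interface for each tenant subnet
neutron router-interface-add demo-router <tenant-subnet-id>

# Attach the gateway interface to an eligible external network
neutron router-gateway-set demo-router <external-network-id>
```
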
Listing interfaces attached to routers

To list the interfaces attached to routers, use the router-port-list command as follows:

Syntax: router-port-list <router-id>

The returned output includes the Neutron port ID, MAC address, IP address, and associated subnet of attached interfaces.

Deleting internal interfaces

To delete an internal interface from a router, use the router-interface-delete command as follows:

Syntax: router-interface-delete <router-id> <INTERFACE>

Here, INTERFACE is the ID of the subnet to be removed from the router. Deleting an interface from a router results in the associated Neutron port being removed from the database.

Clearing the gateway interface

Gateway interfaces cannot be removed from a router using the router-interface-delete command. Instead, the router-gateway-clear command must be used as follows:

Syntax: router-gateway-clear <router-id>

Neutron includes checks that prohibit clearing a gateway interface if floating IPs or other resources from the network are associated with the router.

Listing routers in the CLI

To display a list of existing routers, use the Neutron router-list command as follows:

Syntax: router-list [--tenant-id TENANT_ID]

The returned output includes the router ID, name, external gateway network, and SNAT state. Users will only see routers that exist in their tenant or project. When executed by an administrator, Neutron returns a listing of all routers across all tenants unless the tenant ID is specified.

Displaying router attributes in the CLI

To display the attributes of a router, use the Neutron router-show command as follows:

Syntax: router-show <router id>

Among the output returned are the admin state, the external network, the SNAT state, and the tenant ID associated with the router.
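As a quick reference, the inspection and teardown commands can be exercised against a hypothetical router named demo-router, in the order Neutron expects (internal interfaces first, gateway last):

```shell
# Inspect the router's ports and attributes
neutron router-port-list demo-router
neutron router-show demo-router

# Teardown: detach internal interfaces, then clear the gateway
neutron router-interface-delete demo-router <tenant-subnet-id>
neutron router-gateway-clear demo-router
```
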
Updating router attributes in the CLI

To update the attributes of a router, use the Neutron router-update command as follows:

Syntax: router-update <router id> [--admin-state-up] [--routes destination=<network/cidr>,nexthop=<gateway_ip>]

The admin-state-up attribute is a Boolean that, when set to false, does not allow Neutron to update interfaces within the router. This includes not adding floating IPs or additional internal interfaces to the router. Setting the value to true allows queued changes to be applied.

The routes option allows you to add static routes to the routing table of a Neutron router. To add static routes, use the following syntax:

Syntax: neutron router-update <router id> --routes type=dict list=true destination=<network/cidr>,nexthop=<gateway_ip>

Adding static routes to a router is an undocumented and broken feature in Havana. The command results in the route being added to the database and shown in the router-show output, but not being added to the routing table. To resolve this, add the following line to the [DEFAULT] block of the /etc/neutron/l3_agent.ini configuration file:

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf

Restart the neutron-l3-agent service for the changes to take effect.

Deleting routers in the CLI

To delete a router, use the Neutron router-delete command as follows:

Syntax: router-delete <router id>

Before a router can be deleted, all floating IPs and internal interfaces associated with the router must be unassociated or deleted. This may require deleting instances or detaching connected interfaces from instances.

Network Address Translation

Network Address Translation (NAT) is a networking concept developed in the early 1990s in response to the rapid depletion of IP addresses throughout the world. Prior to NAT, every host connected to the Internet had a unique IP address.
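For illustration, here is what the static route command and the Havana workaround might look like together, against a hypothetical router named demo-router. The destination network and next hop are made-up values; the next hop must be reachable from one of the router's interfaces:

```shell
# Add a static route to the router (broken in Havana without the fix below)
neutron router-update demo-router --routes type=dict list=true \
    destination=172.16.0.0/24,nexthop=10.30.0.254

# Workaround: define root_helper for the L3 agent, then restart it
crudini --set /etc/neutron/l3_agent.ini DEFAULT root_helper \
    "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
service neutron-l3-agent restart
```
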
OpenStack routers support two types of NAT:

one-to-one
many-to-one

A one-to-one NAT is a method in which one IP address is directly mapped to another. Commonly referred to as a static NAT, a one-to-one NAT is often used to map a unique public address to a privately addressed host. Floating IPs utilize one-to-one NAT concepts.

A many-to-one NAT is a method in which multiple addresses are mapped to a single address. A many-to-one NAT employs the use of port address translation (PAT). Neutron uses PAT to provide external access to instances behind the router when floating IPs are not assigned.

For more information on network address translation, please visit Wikipedia at http://en.wikipedia.org/wiki/Network_address_translation.

Floating IP addresses

Tenant networks, when attached to a Neutron router, are meant to utilize the router as their default gateway. By default, when a router receives traffic from an instance and routes it upstream, the router performs a port address translation and modifies the source address of the packet to appear as its own external interface address. This ensures that the packet can be routed upstream and returned to the router, where it will modify the destination address to be that of the instance that initiated the connection. Neutron refers to this type of behavior as Source NAT.

When users require direct inbound access to instances, a floating IP address can be utilized. A floating IP address in OpenStack is a static NAT that maps an external address to an internal address. This method of NAT allows instances to be reachable from external networks, such as the Internet. Floating IP addresses are configured on the external interface of the router that serves as the gateway for the instance, which is then responsible for modifying the source and/or destination address of packets depending on their direction.
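Under the hood, the L3 agent implements both behaviors with iptables rules inside the router's network namespace. The rules below are a simplified sketch of the concept, not the exact rules Neutron generates, and all addresses are illustrative:

```shell
# Many-to-one (PAT / source NAT): all instances on the tenant subnet
# leave through the router's external address
iptables -t nat -A POSTROUTING -s 10.30.0.0/24 -j SNAT --to-source 192.168.100.10

# One-to-one (static NAT, the floating IP case): a dedicated external
# address is translated to and from a single instance address
iptables -t nat -A PREROUTING  -d 192.168.100.11 -j DNAT --to-destination 10.30.0.5
iptables -t nat -A POSTROUTING -s 10.30.0.5 -j SNAT --to-source 192.168.100.11
```
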
Summary

In this article, we learned that Neutron routers, a core component of networking in OpenStack, provide tenants the flexibility to design a network that best suits their application.

Resources for Article:

Further resources on this subject:

Using OpenStack Swift [article]
The OpenFlow Controllers [article]
Troubleshooting [article]
A typical sales cycle and territory management

Packt
08 Oct 2014
6 min read
In this article by Mohith Shrivastava, the author of Salesforce Essentials for Administrators, we will look into the typical sales cycle and the territory management feature of Salesforce. (For more resources related to this topic, see here.)

A typical sales cycle starts with a campaign. An example of a campaign is a conference or a seminar where marketing individuals explain the company's product offering to prospects. Salesforce provides a campaign object to store this data. A campaign may involve different processes, and the campaign management module of Salesforce is simple. A mature campaign management system will have features such as sending e-mails to campaign members in bulk and tracking how many people actually opened and viewed the e-mails and how many of them responded. Some of these processes can be custom built in Salesforce, but out of the box, Salesforce has a campaign member object apart from the campaign, where members are selected by marketing reps. Members can be leads or contacts in Salesforce.

A campaign generates leads. Leads are the prospects that have shown interest in the products and offerings of the company. The lead management module provides a lead object to store all the leads in the system. These prospects are converted into accounts, contacts, and opportunities when they qualify. Salesforce provides a Lead Convert button to convert these leads into accounts, contacts, and opportunities. Features such as Web-to-Lead provided by the platform are ideal for capturing leads in Salesforce.

Accounts can be B2B (business to business) or B2C (business to consumer). B2C in Salesforce is represented as person accounts. This is a special feature that needs to be enabled by a request to Salesforce; it is implemented as a record type whose fields come from contacts. Contacts are people, and they are stored in the contact object.
Contacts have a relationship with accounts (the relationship can be either master-detail or lookup). An opportunity generates revenue when its status is closed won. Salesforce provides an object known as opportunities to store a business opportunity. Sales reps typically work on these opportunities, and their job is to close these deals and generate revenue. Opportunities have a stage field, with stages running from prospecting to closed won or closed lost.

Opportunity management provided by Salesforce consists of objects such as opportunity line items, products, price books, and price book entries. Products in Salesforce are objects that are used as a lookup from junction objects such as opportunity line items. An opportunity line item is a junction between an opportunity and a product. Price books are price listings for products in Salesforce. A product can have a standard or custom price book. Custom price books are helpful when your company offers products at discounts or varied prices for different customers based on market segmentation.

Salesforce also provides a quote management module, consisting of a quote object and quote line items, which sales reps can use to send quotes to customers. The order management module is new to the Salesforce CRM; Salesforce provides an object known as orders that can move an order from the draft state to the active state on accounts and contracts. Most companies use an ERP such as an SAP system to do order management. However, Salesforce has now introduced this feature, so you can create orders from accounts with closed opportunities.

The following screenshot explains the sales process and the sales life cycle from campaign to opportunity management:

To read more, I would recommend that you go through the Salesforce documentation available at http://www.salesforce.com/ap/assets/pdf/cloudforce/SalesCloud-TheSalesCloud.pdf.
Territory management

This feature is very helpful for organizations that run their sales processes by sales territories. Let's say you have an account and your organization has a private sharing model. The account has to be worked on by sales representatives of both the eastern and western regions. Presently, the owner is the sales rep of the eastern region, and because of the private sharing model, the sales rep of the western region will not have access. We could have used sharing rules to provide access, but the challenge is also to forecast the revenue generated from opportunities for both reps, and this is where writing sharing rules simply won't help us.

We need the territory management feature of Salesforce for this, where you can retain opportunities and transfer representatives across territories, draw reports based on territories, and share accounts across territories, extending the private sharing model. The key point about this module is that it works with customizable forecasting only.

Basic configurations

We will explore the basic configuration needed to set up territory management. This feature is not enabled in your instance by default. To enable it, you have to log a case with Salesforce and explain why you need it. The basic navigation path for the territories feature is Setup | Manage Users | Manage Territories.

Under Manage Territories, we have the settings to set the default access level for accounts, contacts, opportunities, and cases. This implies that when a new territory is created, the access level will be based on the default settings configured. There is a checkbox named Forecast managers can manage territories. Once checked, forecast managers can add accounts to territories, manage account assignment rules, and manage users.

Under Manage Territories | Settings, you can see two different buttons, which are as follows:

Enable Territory Management: This button copies the forecast hierarchy and its data to the territory hierarchy.
Each forecast hierarchy role will have a territory automatically created.

Enable Territory Management from Scratch: This is for new organizations. On clicking this button, the forecast data is wiped; please note that this is irreversible.

Based on the role of the user, a territory is automatically assigned to the user. On the Territory Details page, you can use Add Users to assign users to territories.

Account assignment rules

To write account assignment rules, navigate to Manage Territories | Hierarchy. Select a territory and click on Manage Rules in the related list for account assignment rules. Enter the rule name and define the filter criteria based on account fields. You can apply these rules to child territories if you check the Apply to Child Territories checkbox.

There is a lot more to explore on this topic, but that's beyond the scope of this book. To explore more, I would recommend that you read the documentation from Salesforce available at https://na9.salesforce.com/help/pdfs/en/salesforce_territories_implementation_guide.pdf.

Summary

In this article, we looked at how we can use the territory management feature of Salesforce. We also described a typical sales cycle.

Resources for Article:

Further resources on this subject:

Introducing Salesforce Chatter [article]
Salesforce CRM Functions [article]
Configuration in Salesforce CRM [article]
Interfacing React Components with Angular Applications

Patrick Marabeas
26 Sep 2014
10 min read
There's been talk lately of using React as the view within Angular's MVC architecture. Angular, as we all know, uses dirty checking. As I'll touch on later, it accepts a (minor) performance loss to gain the great two-way data binding it has. React, on the other hand, uses a virtual DOM and only renders the difference. This results in very fast performance.

So, how do we leverage React's performance from our Angular application? Can we retain two-way data flow? And just how significant is the performance increase? The nrg module and demo code can be found over on my GitHub.

The application

To demonstrate communication between the two frameworks, let's build a reusable Angular module (nrg [Angular(ng) + React(r) = energy(nrg)!]) which will render (and re-render) a React component when our model changes. The React component will be composed of an input and a p element that will display our model and will also update the model on change. To show this, we'll add an input and a p to our view, bound to the model. In essence, changes to either input should result in all elements being kept in sync. We'll also add a button to our component that will demonstrate component unmounting on scope destruction.

;(function(window, document, angular, undefined) {
    'use strict';

    angular.module('app', ['nrg'])
        .controller('MainController', ['$scope', function($scope) {
            $scope.text = 'This is set in Angular';

            $scope.destroy = function() {
                $scope.$destroy();
            }
        }]);

})(window, document, angular);

data-component specifies the React component we want to mount. data-ctrl (optional) specifies the controller we want to inject into the directive; this will allow specific components to be accessible on scope itself rather than scope.$parent. data-ng-model is the model we are going to pass between our Angular controller and our React view.
<div data-ng-controller="MainController">
    <!-- React component -->
    <div data-component="reactComponent" data-ctrl="" data-ng-model="text">
        <!-- <input /> -->
        <!-- <button></button> -->
        <!-- <p></p> -->
    </div>

    <!-- Angular view -->
    <input type="text" data-ng-model="text" />
    <p>{{text}}</p>
</div>

As you can see, the view has meaning when using Angular to render React components. <div data-component="reactComponent" data-ctrl="" data-ng-model="text"></div> has meaning when compared to <div id="reactComponent"></div>, which requires referencing a script file to see what component (and settings) will be mounted on that element.

The Angular module - nrg.js

The main functions of this reusable Angular module will be to:

Specify the DOM element that the component should be mounted onto.
Render the React component when changes have been made to the model.
Pass the scope and element attributes to the component.
Unmount the React component when the Angular scope is destroyed.

The skeleton of our module looks like this:

;(function(window, document, angular, React, undefined) {
    'use strict';

    angular.module('nrg', [])

To keep our code modular and extensible, we'll create a factory that will house our component functions, which are currently just render and unmount.

    .factory('ComponentFactory', [function() {
        return {
            render: function() {
            },
            unmount: function() {
            }
        }
    }])

This will be injected into our directive.

    .directive('component', ['$controller', 'ComponentFactory', function($controller, ComponentFactory) {
        return {
            restrict: 'EA',

If a controller has been specified on the element via data-ctrl, then inject the $controller service. As mentioned earlier, this will allow scope variables and functions used within the React component to be accessible directly on scope, rather than scope.$parent (the controller also doesn't need to be declared in the view with ng-controller).

            controller: function($scope, $element, $attrs) {
                return ($attrs.ctrl) ?
                    $controller($attrs.ctrl, {$scope: $scope, $element: $element, $attrs: $attrs}) : null;
            },

Here's an isolated scope with two-way binding on data-ng-model.

            scope: {
                ngModel: '='
            },

            link: function(scope, element, attrs) {
                // Calling ComponentFactory.render() & watching ng-model
            }
        }
    }]);

})(window, document, angular, React);

ComponentFactory

Fleshing out the ComponentFactory, we'll need to know how to render and unmount components.

React.renderComponent(
    ReactComponent component,
    DOMElement container,
    [function callback]
)

As such, we'll need to pass the component we wish to mount (component), the container we want to mount it in (element), and any properties (attrs and scope) we wish to pass to the component. This render function will be called every time the model is updated, so the updated scope will be pushed through each time. According to the React documentation, "If the React component was previously rendered into container, this (React.renderComponent) will perform an update on it and only mutate the DOM as necessary to reflect the latest React component."

.factory('ComponentFactory', [function() {
    return {
        render: function(component, element, scope, attrs) {
            // If you have name-spaced your components, you'll want to
            // specify that here - or pass it in via an attribute etc
            React.renderComponent(window[component]({
                scope: scope,
                attrs: attrs
            }), element[0]);
        },
        unmount: function(element) {
            React.unmountComponentAtNode(element[0]);
        }
    }
}])

Component directive

Back in our directive, we can now set up when we are going to call these two functions.
link: function(scope, element, attrs) {
    // Collect the element's attrs in a nice usable object
    var attributes = {};
    angular.forEach(element[0].attributes, function(a) {
        attributes[a.name.replace('data-', '')] = a.value;
    });

    // Render the component when the directive loads
    ComponentFactory.render(attrs.component, element, scope, attributes);

    // Watch the model and re-render the component
    scope.$watch('ngModel', function() {
        ComponentFactory.render(attrs.component, element, scope, attributes);
    }, true);

    // Unmount the component when the scope is destroyed
    scope.$on('$destroy', function() {
        ComponentFactory.unmount(element);
    });
}

This implements dirty checking to see if the model has been updated. I haven't played around too much to see if there's a notable difference in performance between this and using a broadcast/listener. That said, to get a listener working as expected, you will need to wrap the render call in a $timeout to push it to the bottom of the stack to ensure scope is updated.

scope.$on('renderMe', function() {
    $timeout(function() {
        ComponentFactory.render(attrs.component, element, scope, attributes);
    });
});

The React component

We can now build our React component, which will use the model we defined as well as inform Angular of any updates it performs.

/** @jsx React.DOM */
;(function(window, document, React, undefined) {
    'use strict';

    window.reactComponent = React.createClass({

This is the content that will be rendered into the container. The properties that we passed to the component ({ scope: scope, attrs: attrs }) when we called React.renderComponent back in our component directive are now accessible via this.props.
        render: function() {
            return (
                <div>
                    <input type='text' value={this.props.scope.ngModel} onChange={this.handleChange} />
                    <button onClick={this.deleteScope}>Destroy Scope</button>
                    <p>{this.props.scope.ngModel}</p>
                </div>
            )
        },

Via the onChange event, we can call for Angular to run a digest, just as we normally would, but accessing scope via this.props:

        handleChange: function(event) {
            var _this = this;
            this.props.scope.$apply(function() {
                _this.props.scope.ngModel = event.target.value;
            });
        },

Here we deal with the click event via deleteScope. The controller is accessible via scope.$parent. If we had injected a controller into the component directive, its contents would be accessible directly on scope, just as ngModel is.

        deleteScope: function() {
            this.props.scope.$parent.destroy();
        }
    });

})(window, document, React);

The result

Putting this code together (you can view the completed code on GitHub, or see it in action), we end up with:

Two input elements, both of which update the model. Any changes in either our Angular application or our React view will be reflected in both.
A React component button that calls a function in our MainController, destroying the scope and also resulting in the unmounting of the component.

Pretty cool. But where is my perf increase!?

This is obviously too small an application for anything to be gained by throwing your view over to React. To demonstrate just how much faster applications can be (by using React as the view), we'll throw a kitchen sink worth of randomly generated data at it. 5000 bits to be precise.

Now, it should be stated that you probably have a pretty questionable UI if you have this much data binding going on. Misko Hevery has a great response regarding Angular's performance on StackOverflow. In summary, humans are:

Slow: Anything faster than 50ms is imperceptible to humans and thus can be considered as "instant".
Limited: You can't really show more than about 2000 pieces of information to a human on a single page.
Anything more than that is really bad UI, and humans can't process it anyway.

Basically, know Angular's limits and your user's limits! That said, the following performance test was certainly accentuated on mobile devices. Though, on the flip side, UI should be simpler on mobile.

Brute force performance demonstration

;(function(window, document, angular, undefined) {
    'use strict';

    angular.module('app')
        .controller('NumberController', ['$scope', function($scope) {
            $scope.numbers = [];

            ($scope.numGen = function() {
                for (var i = 0; i < 5000; i++) {
                    $scope.numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1;
                }
            })();
        }]);

})(window, document, angular);

Angular ng-repeat

<div data-ng-controller="NumberController">
    <button ng-click="numGen()">Refresh Data</button>
    <table>
        <tr ng-repeat="number in numbers">
            <td>{{number}}</td>
        </tr>
    </table>
</div>

There was definitely lag felt as the numbers were loaded in and refreshed. From start to finish, this took around 1.5 seconds.

React component

<div data-ng-controller="NumberController">
    <button ng-click="numGen()">Refresh Data</button>
    <div data-component="numberComponent" data-ng-model="numbers"></div>
</div>

;(function(window, document, React, undefined) {

    window.numberComponent = React.createClass({
        render: function() {
            var rows = this.props.scope.ngModel.map(function(number) {
                return (
                    <tr>
                        <td>{number}</td>
                    </tr>
                );
            });

            return (
                <table>{rows}</table>
            );
        }
    });

})(window, document, React);

So that just happened. 270 milliseconds start to finish. Around 80% faster!

Conclusion

So, should you go rewrite all those Angular modules as React components? Probably not. It really comes down to the application you are developing and how dependent you are on OSS. It's definitely possible that a handful of complex modules could put your application in the realm of "feeling a tad sluggish", but it should be remembered that perceived performance is all that matters to the user.
Altering the manner in which content is loaded could end up being a better investment of time. Users will definitely feel performance increases sooner on mobile websites, however, which is certainly something to keep in mind.

The nrg module and demo code can be found over on my GitHub. Visit our JavaScript page for more JavaScript content and tutorials!

About the author

A guest post by Patrick Marabeas, a freelance frontend developer who loves learning and working with cutting-edge web technologies. He spends much of his free time developing Angular modules, such as ng-FitText, ng-Slider, ng-YouTubeAPI, and ng-ScrollSpy. You can follow him on Twitter: @patrickmarabeas.