
How-To Tutorials - Languages


Unlocking the JavaScript Core

Packt | 16 Feb 2016 | 19 min read
You may have owned an iPhone for years and regard yourself as an experienced user, yet you keep removing unwanted characters one at a time while typing by pressing delete. Then, one day, you find out that a quick shake allows you to delete the whole message in one tap, and you wonder why on earth you didn't know this earlier. The same thing happens with programming. We can be quite satisfied with our coding until, all of a sudden, we run into a trick or a lesser-known language feature that makes us reconsider the entire work done over the years. It turns out that we could have done it in a cleaner, more readable, more testable, and more maintainable way. So it's presumed that you already have experience with JavaScript; this article equips you with best practices to improve your code. (For more resources related to this topic, see here.)

We will cover the following topics:

- Making your code readable and expressive
- Mastering multiline strings in JavaScript
- Manipulating arrays in the ES5 way
- Traversing an object in an elegant, reliable, safe, and fast way
- The most effective way of declaring objects
- How to use magic methods in JavaScript

Make your code readable and expressive

There are numerous practices and heuristics to make code more readable, expressive, and clean. We will cover this topic later on, but here we will talk about syntactic sugar. The term means an alternative syntax that makes the code more expressive and readable. In fact, we have had some of this in JavaScript from the very beginning, for instance, the increment/decrement and addition/subtraction assignment operators inherited from C: foo++ is syntactic sugar for foo = foo + 1, and foo += bar is a shorter form for foo = foo + bar. Besides, we have a few tricks that serve the same purpose. JavaScript applies so-called short-circuit evaluation to logical expressions. This means that an expression is read left to right, but as soon as the result is determined at an early stage, the tail of the expression is not evaluated. If we have true || false || false, the interpreter knows from the first operand that the result is true regardless of the other operands. So the false || false part is not evaluated, and this opens a way for creativity.

Function argument default value

When we need to specify default values for parameters, we can do it like this:

```js
function stub( foo ) {
  return foo || "Default value";
}
console.log( stub( "My value" ) ); // My value
console.log( stub() ); // Default value
```

What is going on here? When foo is truthy (not undefined, NaN, null, false, 0, or ""), the result of the logical expression is foo; otherwise the expression is evaluated up to "Default value", and that is the final result. Starting with the 6th edition of EcmaScript (the specification of the JavaScript language), we can use a nicer syntax:

```js
function stub( foo = "Default value" ) {
  return foo;
}
```
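One caveat is worth a quick illustration (a small sketch of my own, not from the original text): with the || approach, any falsy value triggers the default, while an ES6 default parameter reacts to undefined only:

```js
function stub( foo ) {
  return foo || "Default value";
}
console.log( stub( 0 ) );  // Default value, because 0 is falsy
console.log( stub( "" ) ); // Default value, because "" is falsy

function stub6( foo = "Default value" ) {
  return foo;
}
console.log( stub6( 0 ) );  // 0
console.log( stub6( "" ) ); // "" (the empty string is kept)
```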
Conditional invocation

While composing our code, we can shorten it with conditions:

```js
var age = 20;
age >= 18 && console.log( "You are allowed to play this game" );
age >= 18 || console.log( "The game is restricted to 18 and over" );
```

In the preceding example, we used the AND (&&) operator to invoke console.log if the left-hand condition is truthy. The OR (||) operator does the opposite: it calls console.log if the condition is falsy.

I think the most common case in practice is the shorthand condition where the function is called only when it is provided:

```js
/**
 * @param {Function} [cb] - callback
 */
function fn( cb ) {
  cb && cb();
}
```

The following is one more example of this:

```js
/**
 * @class AbstractFoo
 */
AbstractFoo = function(){
  // call this.init if the subclass has an init method
  this.init && this.init();
};
```

Syntactic sugar was introduced to its full extent in the JavaScript world only with the advance of CoffeeScript, a language that transcompiles (compiles source-to-source) into JavaScript. CoffeeScript, inspired by Ruby, Python, and Haskell, unlocked arrow functions, spreads, and other syntax for JavaScript developers. In 2011, Brendan Eich (the author of JavaScript) admitted that CoffeeScript influenced his work on EcmaScript Harmony, which was finalized this summer in the ECMA-262 6th edition specification. From a marketing perspective, the specification writers agreed on a new naming convention that calls the 6th edition EcmaScript 2015 and the 7th edition EcmaScript 2016. Yet the community is used to the abbreviations ES6 and ES7, so to avoid confusion further in the book, we will refer to the specifications by these names. Now we can look at how this affects the new JavaScript.

Arrow functions

A traditional function expression may look like this:

```js
function( param1, param2 ){ /* function body */ }
```

When declaring an expression using the arrow function (aka fat arrow function) syntax, we get a less verbose form, as shown in the following:

```js
( param1, param2 ) => { /* function body */ }
```

In my opinion, we don't gain much with this. But if we need, let's say, an array method callback, the traditional form would be as follows:

```js
function( param1, param2 ){ return expression; }
```

Now the equivalent arrow function becomes shorter, as shown here:

```js
( param1, param2 ) => expression
```

We may do filtering in an array this way:

```js
// filter all the array elements greater than 2
var res = [ 1, 2, 3, 4 ].filter(function( v ){
  return v > 2;
});
console.log( res ); // [3,4]
```

Using an arrow function, we can do the filtering in a cleaner form:

```js
var res = [ 1, 2, 3, 4 ].filter( v => v > 2 );
console.log( res ); // [3,4]
```

Besides the shorter declaration syntax, arrow functions bring the so-called lexical this. Instead of creating its own context, an arrow function uses the context of the surrounding object, as shown here:

```js
"use strict";
/**
 * @class View
 */
let View = function(){
  let button = document.querySelector( '[data-bind="btn"]' );
  /**
   * Handle button clicked event
   * @private
   */
  this.onClick = function(){
    console.log( "Button clicked" );
  };
  button.addEventListener( "click", () => {
    // we can safely refer to surrounding object members
    this.onClick();
  }, false );
};
```

In the preceding example, we subscribed a handler function to a DOM event (click). Within the scope of the handler, we still have access to the view context (this), so we don't need to bind the handler to the outer scope or pass it as a variable through the closure:

```js
var that = this;
button.addEventListener( "click", function(){
  // cross-cutting concerns
  that.onClick();
}, false );
```
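For completeness, a third option (my own sketch, not from the original text) is to bind the handler explicitly with Function.prototype.bind, which pre-ES6 code often used instead of the that = this closure:

```js
button.addEventListener( "click", function(){
  // "this" is fixed to the outer context by bind
  this.onClick();
}.bind( this ), false );
```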
Method definitions

As mentioned in the preceding section, arrow functions can be quite handy when declaring small inline callbacks, but always applying them for the sake of shorter syntax is controversial. However, ES6 provides a new alternative method definition syntax besides arrow functions. The old-school method declaration may look as follows:

```js
var foo = {
  bar: function( param1, param2 ) {
  }
};
```

In ES6, we can get rid of the function keyword and the colon, so the preceding code can be put this way:

```js
let foo = {
  bar( param1, param2 ) {
  }
};
```

The rest operator

Another syntax structure borrowed from CoffeeScript came to JavaScript as the rest operator (albeit the approach is called splats in CoffeeScript). When we had a few mandatory function parameters and an unknown number of rest parameters, we used to do something like this:

```js
"use strict";
var cb = function() {
  // all available parameters into an array
  var args = [].slice.call( arguments ),
      // the first array element to foo and shift
      foo = args.shift(),
      // the new first array element to bar and shift
      bar = args.shift();
  console.log( foo, bar, args );
};
cb( "foo", "bar", 1, 2, 3 ); // foo bar [1, 2, 3]
```

Now check out how expressive this code becomes in ES6:

```js
let cb = function( foo, bar, ...args ) {
  console.log( foo, bar, args );
};
cb( "foo", "bar", 1, 2, 3 ); // foo bar [1, 2, 3]
```

Function parameters aren't the only application of the rest operator. For example, we can use it in destructuring as well, as follows:

```js
let [ bar, ...others ] = [ "bar", "foo", "baz", "qux" ];
console.log([ bar, others ]); // ["bar",["foo","baz","qux"]]
```

The spread operator

Similarly, we can spread array elements into arguments:

```js
let args = [ 2015, 6, 17 ],
    relDate = new Date( ...args );
console.log( relDate.toString() ); // Fri Jul 17 2015 00:00:00 GMT+0200 (CEST)
```

Mastering multiline strings in JavaScript

Multiline strings aren't a good part of JavaScript. While they are easy to declare in other languages (for instance, with NOWDOC), you cannot just keep a single-quoted or double-quoted string on multiple lines. This leads to a syntax error, as every line in JavaScript is considered a possible command. You can set backslashes to show your intention:

```js
var str = "Lorem ipsum dolor sit amet, \n\
consectetur adipiscing elit. Nunc ornare, \n\
diam ultricies vehicula aliquam, mauris \n\
ipsum dapibus dolor, quis fringilla leo ligula non neque";
```

This kind of works. However, as soon as you add a trailing space after a backslash, you get a syntax error, which is not easy to spot. While most script agents support this syntax, it is not part of the EcmaScript specification. In the times of EcmaScript for XML (E4X), we could assign pure XML to a string, which opened the way for declarations such as these:

```js
var str = <>Lorem ipsum dolor sit amet,
consectetur adipiscing elit. Nunc ornare </>.toString();
```

Nowadays E4X is deprecated and no longer supported.

Concatenation versus array join

We can also use string concatenation. It may feel clumsy, but it's safe:

```js
var str = "Lorem ipsum dolor sit amet, \n" +
  "consectetur adipiscing elit. Nunc ornare,\n" +
  "diam ultricies vehicula aliquam, mauris \n" +
  "ipsum dapibus dolor, quis fringilla leo ligula non neque";
```

You may be surprised, but concatenation is slower than array joining. So the following technique will work faster:

```js
var str = [ "Lorem ipsum dolor sit amet, \n",
  "consectetur adipiscing elit. Nunc ornare,\n",
  "diam ultricies vehicula aliquam, mauris \n",
  "ipsum dapibus dolor, quis fringilla leo ligula non neque" ].join( "" );
```

Template literal

What about ES6? The latest EcmaScript specification introduces a new sort of string literal, the template literal:

```js
var str = `Lorem ipsum dolor sit amet, \n
consectetur adipiscing elit. Nunc ornare, \n
diam ultricies vehicula aliquam, mauris \n
ipsum dapibus dolor, quis fringilla leo ligula non neque`;
```

Now the syntax looks elegant. But there is more. Template literals really remind us of NOWDOC. You can refer to any variable declared in the scope within the string:

```js
"use strict";
var title = "Some title",
    text = "Some text",
    str = `<div class="message">
<h2>${title}</h2>
<article>${text}</article>
</div>`;
console.log( str );
```

The output is as follows:

```
<div class="message">
<h2>Some title</h2>
<article>Some text</article>
</div>
```

If you wonder when you can safely use this syntax, I have good news for you: this feature is already supported by (almost) all the major script agents (http://kangax.github.io/compat-table/es6/).

Multiline strings via transpilers

With the advance of ReactJS, Facebook's EcmaScript language extension named JSX (https://facebook.github.io/jsx/) is now really gaining momentum. Apparently influenced by the previously mentioned E4X, it proposes a kind of string literal for XML-like content without any escaping at all. This type supports template interpolation similar to ES6 templates:

```js
"use strict";
var Hello = React.createClass({
  render: function() {
    return <div class="message">
      <h2>{this.props.title}</h2>
      <article>{this.props.text}</article>
    </div>;
  }
});
React.render(<Hello title="Some title" text="Some text" />, node);
```

Another way to declare multiline strings is by using the CommonJS Compiler (http://dsheiko.github.io/cjsc/). While resolving the require dependencies, the compiler transforms any content that is not .js/.json into a single-line string.

foo.txt:

```
Lorem ipsum dolor sit amet,
consectetur adipiscing elit. Nunc ornare,
diam ultricies vehicula aliquam, mauris
ipsum dapibus dolor, quis fringilla leo ligula non neque
```

consumer.js:

```js
var str = require( "./foo.txt" );
console.log( str );
```

Manipulating arrays in the ES5 way

Some years ago, when support for ES5 features was poor (EcmaScript 5th edition was finalized in 2009), libraries such as Underscore and Lo-Dash became highly popular, as they provided a comprehensive set of utilities for dealing with arrays/collections. Today, many developers still use third-party libraries (including jQuery/Zepto) for methods such as map, filter, every, some, reduce, and indexOf, although these are available natively in JavaScript. It still depends on how you use such libraries, but it may well be that you don't need them anymore. Let's see what we now have in JavaScript.

Array methods in ES5

Array.prototype.forEach is probably the most used array method. It is the native implementation of _.each or, for example, of the $.each utilities. As parameters, forEach expects an iteratee callback function and, optionally, a context in which to execute the callback. It passes to the callback function an element value, an index, and the entire array. The same parameter syntax is used for most array manipulation methods.
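As a quick sketch of the optional context parameter (my own example, not from the original text), the second argument to forEach becomes this inside the callback:

```js
"use strict";
var logger = {
  prefix: ">>",
  log: function( val, inx ){
    console.log( this.prefix, inx, val );
  }
};
// pass logger as the context, so this.prefix resolves inside log
[ "bar", "foo" ].forEach( logger.log, logger ); // >> 0 bar, then >> 1 foo
```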
Note that jQuery's $.each has the inverted callback parameter order:

```js
"use strict";
var data = [ "bar", "foo", "baz", "qux" ];
data.forEach(function( val, inx ){
  console.log( val, inx );
});
```

Array.prototype.map produces a new array by transforming the elements of a given array:

```js
"use strict";
var data = { bar: "bar bar", foo: "foo foo" },
    // convert key-value object into url-encoded string
    urlEncStr = Object.keys( data ).map(function( key ){
      return key + "=" + window.encodeURIComponent( data[ key ] );
    }).join( "&" );
console.log( urlEncStr ); // bar=bar%20bar&foo=foo%20foo
```

Array.prototype.filter returns an array consisting of the given array's values that meet the callback's condition:

```js
"use strict";
var data = [ "bar", "foo", "", 0 ],
    // remove all falsy elements
    filtered = data.filter(function( item ){
      return !!item;
    });
console.log( filtered ); // ["bar", "foo"]
```

Array.prototype.reduce/Array.prototype.reduceRight derives a single product from the values of an array. The method expects a callback function and, optionally, an initial value as arguments. The callback function receives four parameters: the accumulated value, the current one, the index, and the original array. So we can, for instance, increment the accumulated value by the current one (return acc += cur;) and thus obtain the sum of the array values. Besides calculating with these methods, we can concatenate string values or arrays:

```js
"use strict";
var data = [[ 0, 1 ], [ 2, 3 ], [ 4, 5 ]],
    arr = data.reduce(function( prev, cur ) {
      return prev.concat( cur );
    }),
    arrReverse = data.reduceRight(function( prev, cur ) {
      return prev.concat( cur );
    });
console.log( arr ); // [0, 1, 2, 3, 4, 5]
console.log( arrReverse ); // [4, 5, 2, 3, 0, 1]
```

Array.prototype.some tests whether any (some) values of a given array meet the callback condition:

```js
"use strict";
var bar = [ "bar", "baz", "qux" ],
    foo = [ "foo", "baz", "qux" ],
    /**
     * Check if a given context (this) contains the value
     * @param {*} val
     * @return {Boolean}
     */
    compare = function( val ){
      return this.indexOf( val ) !== -1;
    };
console.log( bar.some( compare, foo ) ); // true
```

In this example, we checked whether any of the bar array values are available in the foo array. For testability, we need to pass a reference to the foo array into the callback; here we inject it as the context. If we needed to pass more references, we would pack them into a key-value object. As you probably noticed, we used Array.prototype.indexOf in this example. The method works the same as String.prototype.indexOf: it returns the index of the match found, or -1.

Array.prototype.every tests whether every value of a given array meets the callback condition:

```js
"use strict";
var bar = [ "bar", "baz" ],
    foo = [ "bar", "baz", "qux" ],
    /**
     * Check if a given context (this) contains the value
     * @param {*} val
     * @return {Boolean}
     */
    compare = function( val ){
      return this.indexOf( val ) !== -1;
    };
console.log( bar.every( compare, foo ) ); // true
```

If you are still concerned about support for these methods in a legacy browser as old as IE6-7, you can simply shim them with https://github.com/es-shims/es5-shim.
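Since all of these methods return values, they chain naturally. Here is a small recap sketch of my own (not from the original text) combining map, filter, and reduce in one pipeline:

```js
"use strict";
var total = [ 1, 2, 3, 4, 5 ]
  .map(function( v ){ return v * v; })     // [1, 4, 9, 16, 25]
  .filter(function( v ){ return v % 2; })  // keep the odd squares: [1, 9, 25]
  .reduce(function( acc, cur ){ return acc + cur; }, 0 );
console.log( total ); // 35
```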
Array methods in ES6

In ES6, we get just a few new methods, which look rather like shortcuts over existing functionality.

Array.prototype.fill populates an array with a given value, as follows:

```js
"use strict";
var data = Array( 5 );
console.log( data.fill( "bar" ) ); // ["bar", "bar", "bar", "bar", "bar"]
```

Array.prototype.includes explicitly checks whether a given value exists in the array. Well, it is the same as arr.indexOf( val ) !== -1, as shown here:

```js
"use strict";
var data = [ "bar", "foo", "baz", "qux" ];
console.log( data.includes( "foo" ) ); // true
```

Array.prototype.find retrieves a single value matching the callback condition. Again, it's close to what we can get with Array.prototype.filter. The difference is that filter returns an array of all the matches, while find returns the first matching element itself (or undefined when nothing matches):

```js
"use strict";
var data = [ "bar", "fo", "baz", "qux" ],
    match = function( val ){
      return val.length < 3;
    };
console.log( data.find( match ) ); // fo
```

Traversing an object in an elegant, reliable, safe, and fast way

It is a common case that we have a key-value object (let's say, options) and need to iterate over it. There is an academic way to do this, as shown in the following code:

```js
"use strict";
var options = {
      bar: "bar",
      foo: "foo"
    },
    key;
for( key in options ) {
  console.log( key, options[ key ] );
}
```

The preceding code outputs the following:

```
bar bar
foo foo
```

Now let's imagine that one of the third-party libraries loaded in the document augments the built-in Object:

```js
Object.prototype.baz = "baz";
```

Now, when we run our example code, we will get an extra, undesired entry:

```
bar bar
foo foo
baz baz
```

The solution to this problem is well known: we have to test the keys with the Object.prototype.hasOwnProperty method:

```js
//…
for( key in options ) {
  if ( options.hasOwnProperty( key ) ) {
    console.log( key, options[ key ] );
  }
}
```

Iterating the key-value object safely and fast

Let's face the truth: this structure is clumsy and requires optimization (we have to perform the hasOwnProperty test on every key). Luckily, JavaScript has the Object.keys method, which retrieves the string-valued keys of all enumerable own (non-inherited) properties. This gives us the desired keys as an array that we can iterate with, for instance, Array.prototype.forEach:

```js
"use strict";
var options = {
  bar: "bar",
  foo: "foo"
};
Object.keys( options ).forEach(function( key ){
  console.log( key, options[ key ] );
});
```

Besides the elegance, we get better performance this way. To see how much we gain, you can run this online test in distinct browsers: http://codepen.io/dsheiko/pen/JdrqXa.
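To double-check that claim (a quick sketch of my own, not from the original text), Object.keys ignores inherited properties even when the prototype has been polluted:

```js
"use strict";
Object.prototype.baz = "baz"; // a third-party library augments Object
var options = { bar: "bar", foo: "foo" };
console.log( Object.keys( options ) ); // ["bar", "foo"], no "baz" entry
```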
Enumerating an array-like object

Objects such as arguments and NodeList (node.querySelectorAll, document.forms) look like arrays, but in fact they are not. Similar to arrays, they have the length property and can be iterated in a for loop. In the form of objects, they can be traversed in the same way that we previously examined. But they have none of the array manipulation methods (forEach, map, filter, some, and so on). The thing is, we can easily convert them into arrays, as shown here:

```js
"use strict";
var nodes = document.querySelectorAll( "div" ),
    arr = Array.prototype.slice.call( nodes );
arr.forEach(function( i ){
  console.log( i );
});
```

The preceding code can be made even shorter:

```js
arr = [].slice.call( nodes );
```

It's a pretty convenient solution, but it looks like a trick. In ES6, we can do the same conversion with a dedicated method:

```js
arr = Array.from( nodes );
```

The collections of ES6

ES6 introduces a new type of object: iterable objects. These are objects whose elements can be retrieved one at a time. They are quite the same as iterators in other languages. Besides arrays, JavaScript received two new iterable data structures, Set and Map. Set is a collection of unique values:

```js
"use strict";
let foo = new Set();
foo.add( 1 );
foo.add( 1 );
foo.add( 2 );
console.log( Array.from( foo ) ); // [ 1, 2 ]
```

```js
let foo = new Set(),
    bar = function(){ return "bar"; };
foo.add( bar );
console.log( foo.has( bar ) ); // true
```

A Map is similar to a key-value object, but may have arbitrary values for the keys. And this makes a difference. Imagine that we need to write an element wrapper that provides a jQuery-like events API. By using the on method, we can pass not only a handler callback function but also a context (this). We bind the given callback to the context with cb.bind( context ). This means addEventListener receives a function reference different from the original callback. How do we unsubscribe the handler then? We can store the new reference in a Map, under a key composed from the event name and the callback function reference:

```js
"use strict";
/**
 * @class
 * @param {Node} el
 */
let El = function( el ){
  this.el = el;
  this.map = new Map();
};
/**
 * Subscribe a handler on event
 * @param {String} event
 * @param {Function} cb
 * @param {Object} context
 */
El.prototype.on = function( event, cb, context ){
  let handler = cb.bind( context || this );
  this.map.set( [ event, cb ], handler );
  this.el.addEventListener( event, handler, false );
};
/**
 * Unsubscribe a handler on event
 * @param {String} event
 * @param {Function} cb
 */
El.prototype.off = function( event, cb ){
  // note: a freshly built array would never match a Map key by reference,
  // so we search the stored [ event, cb ] pairs for the bound handler
  for ( let [ key, handler ] of this.map ) {
    if ( key[ 0 ] === event && key[ 1 ] === cb ) {
      this.el.removeEventListener( event, handler, false );
      this.map.delete( key );
      return;
    }
  }
};
```

Any iterable object has the methods keys, values, and entries, where keys works the same as Object.keys and the others return the values and an array of key-value pairs respectively. Now let's see how we can traverse iterable objects:

```js
"use strict";
let map = new Map()
      .set( "bar", "bar" )
      .set( "foo", "foo" ),
    pair;
for ( pair of map ) {
  console.log( pair );
}
```

```js
// OR
let map = new Map([
  [ "bar", "bar" ],
  [ "foo", "foo" ],
]);
map.forEach(function( value, key ){
  console.log( key, value );
});
```

Iterable objects have manipulation methods such as arrays have, so we can use forEach. Besides, they can be traversed with the for...of loop, which retrieves the values one at a time (whereas the older for...in loop retrieves an array's indexes rather than its values).

Summary

This article gave practices and tricks on how to use the JavaScript core features for maximum effect. It discussed techniques to improve the expressiveness of the code, to master multiline strings and templating, and to manipulate arrays and array-like objects. Further, it introduced the "magic methods" of JavaScript with a practical example of their use. JavaScript was born as a scripting language at the most inappropriate time, the time of the browser wars. It was neglected and misunderstood for a decade and endured six editions. And look at it now! JavaScript has become a mainstream programming language.

You can learn more about JavaScript with the help of the following books:

- https://www.packtpub.com/application-development/mastering-javascript-design-patterns
- https://www.packtpub.com/application-development/javascript-promises-essentials
- https://www.packtpub.com/application-development/learning-javascript-data-structures-and-algorithms

Resources for Article:

Further resources on this subject:
- Using JavaScript with HTML [article]
- Dart with JavaScript [article]
- Creating a basic JavaScript plugin [article]
Cython Won't Bite

Packt | 10 Jan 2016 | 9 min read
In this article by Philip Herron, the author of the book Learning Cython Programming - Second Edition, we see how Cython is much more than just a programming language. Its origin can be traced to Sage, the mathematics software package, where it was used to increase the performance of mathematical computations, such as those involving matrices. More generally, I tend to consider Cython an alternative to SWIG for generating really good Python bindings to native code. Language bindings have been around for years, and SWIG was one of the first and best tools to generate bindings for a multitude of languages. Cython generates bindings for Python code only, and this single-purpose approach means it generates the best Python bindings you can get outside of doing it all manually; attempt the latter only if you're a Python core developer.

For me, taking control of legacy software by generating language bindings is a great way to reuse any software package. Consider a legacy application written in C/C++: adding advanced modern features like a web server for a dashboard or a message bus is not a trivial thing to do. More importantly, Python comes with thousands of packages that have been developed, tested, and used by people for a long time, and that can do exactly that. Wouldn't it be great to take advantage of all of this code? With Cython, we can do exactly this, and I will demonstrate approaches with plenty of example code along the way. This article is dedicated to the core concepts of using Cython, including compilation, and provides a solid reference and introduction to them all.

In this article, we will cover:

- Installing Cython
- Getting started - Hello World
- Using distutils with Cython
- Calling C functions from Python
- Type conversion

(For more resources related to this topic, see here.)

Installing Cython

Since Cython is a programming language, we must install its respective compiler, which just so happens to be aptly named Cython. There are many different ways to install Cython. The preferred one is to use pip:

```
$ pip install Cython
```

This should work on both Linux and Mac. Alternatively, you can use your Linux distribution's package manager to install Cython:

```
$ yum install cython     # will work on Fedora and CentOS
$ apt-get install cython # will work on Debian-based systems
```

On Windows, although there are a plethora of options available, following this wiki is the safest way to stay up to date: http://wiki.cython.org/InstallingOnWindows

Emacs mode

There is an emacs mode available for Cython. Although the syntax is nearly the same as Python's, there are differences that conflict with simply using Python mode. You can grab cython-mode.el from the Cython source code (inside the Tools directory). The preferred way of installing packages in emacs is to use a package repository such as MELPA. To add the package repository to emacs, open your ~/.emacs configuration file and add the following code:

```elisp
(when (>= emacs-major-version 24)
  (require 'package)
  (add-to-list 'package-archives
               '("melpa" . "http://melpa.org/packages/") t)
  (package-initialize))
```

Once you add this and reload your configuration, you can install cython-mode by simply running the following:

```
M-x package-install RET cython-mode
```

Once this is installed, you can activate the mode by adding this to your emacs config file:

```elisp
(require 'cython-mode)
```

You can always activate the mode manually at any time with the following:

```
M-x cython-mode RET
```

Getting the code examples

Throughout this book, I intend to show real examples that are easy to digest, to help you get a feel for the different things you can achieve with Cython. To access and download the code used, please clone the following repository:

```
$ git clone git://github.com/redbrain/cython-book.git
```

Getting started – Hello World

As you will see when running the Hello World program, Cython generates native Python modules. Therefore, while running any Cython code, you will reference it via a module import in Python. Let's build the module:

```
$ cd cython-book/chapter1/helloworld
$ make
```

You should have now created helloworld.so! This is a Cython module with the same name as the Cython source code file. While in the same directory as the shared object module, you can invoke this code by running a respective Python import:

```
$ python
Python 2.7.3 (default, Aug 1 2012, 05:16:07)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import helloworld
Hello World from cython!
```

As you can see from opening helloworld.pyx, it looks just like a normal Python Hello World application; but as previously stated, Cython generates modules. These modules need a name so that they can be correctly imported by the Python runtime. The Cython compiler simply uses the name of the source code file, and it then requires us to compile the output into a shared object of the same name. Overall, Cython source code files have the .pyx, .pxd, and .pxi extensions. For now, all we care about are the .pyx files; the others are for cimports and includes respectively within a .pyx module file. The compilation flow required to produce a callable native Python module runs from the .pyx source, through cython to a .c file, and through the C compiler to the .so module, as the manual steps below show.

I wrote a basic makefile so that you can simply run make to compile these examples. Here's the code to do this manually:

```
$ cython helloworld.pyx
$ gcc/clang -g -O2 -fpic `python-config --cflags` -c helloworld.c -o helloworld.o
$ gcc/clang -shared -o helloworld.so helloworld.o `python-config --libs`
```

Using distutils with Cython

You can also compile this using Python distutils and cythonize. Open setup.py:

```python
from distutils.core import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("helloworld.pyx")
)
```

Using the cythonize function as part of the ext_modules section will build any specified Cython source into an installable Python module. This will compile helloworld.pyx into the same shared library. This provides the Python practice of distributing native modules as part of distutils.
Calling C functions from Python

We should be careful when talking about Python and Cython, for clarity, since the syntax is so similar. Let's wrap a simple AddFunction in C and make it callable from Python. Firstly, open a file called AddFunction.c and write a simple function into it:

```c
#include <stdio.h>

int AddFunction(int a, int b) {
    printf("look we are within your c code!\n");
    return a + b;
}
```

This is the C code we will call, just a simple function to add two integers. Now, let's get Python to call it. Open a file called AddFunction.h, wherein we will declare our prototype:

```c
#ifndef __ADDFUNCTION_H__
#define __ADDFUNCTION_H__

extern int AddFunction (int, int);

#endif //__ADDFUNCTION_H__
```

We need this so that Cython can see the prototype for the function we want to call. In practice, you will already have your headers in your own project, with your prototypes and declarations available. Open a file called PyAddFunction.pyx, and insert the following code into it:

```python
cdef extern from "AddFunction.h":
    cdef int AddFunction(int, int)
```

Here, we have to declare which code we want to call. cdef is a keyword signifying that this is from the C code that will be linked in. Now, we need a Python entry point:

```python
def Add(a, b):
    return AddFunction(a, b)
```

This Add is a Python callable inside the PyAddFunction module. Again, I have provided a handy makefile to produce the module:

```
$ cd cython-book/chapter1/ownmodule
$ make
cython -2 PyAddFunction.pyx
gcc -g -O2 -fpic -c PyAddFunction.c -o PyAddFunction.o `python-config --includes`
gcc -g -O2 -fpic -c AddFunction.c -o AddFunction.o
gcc -g -O2 -shared -o PyAddFunction.so AddFunction.o PyAddFunction.o `python-config --libs`
```

Notice that AddFunction.c is compiled into the same PyAddFunction.so shared object. Now, let's call this AddFunction and check whether C can add numbers correctly:

```
$ python
>>> from PyAddFunction import Add
>>> Add(1,2)
look we are within your c code!
3
```

Notice the print statement inside AddFunction.c::AddFunction, and that the final result is printed correctly. Therefore, we know that control hit the C code and did the calculation in C, not inside the Python runtime. This is a revelation of what is possible. Python can be cited as slow in some circumstances; using this technique, Python code can bypass its own runtime and run in an unsafe context, unrestricted by the Python runtime, which is much faster.

Type conversion

Notice that we had to declare a prototype inside the Cython source code PyAddFunction.pyx:

```python
cdef extern from "AddFunction.h":
    cdef int AddFunction(int, int)
```

It lets the compiler know that there is a function called AddFunction, and that it takes two ints and returns an int. This is all the information the compiler needs to know, besides the host and target operating systems' calling conventions, in order to call this function safely. Then, we created the Python entry point, which is a Python callable that takes two parameters:

```python
def Add(a, b):
    return AddFunction(a, b)
```

Inside this entry point, it simply returns the result of the native AddFunction, passing the two Python objects as parameters. This is what makes Cython so powerful: here, the Cython compiler must inspect the function call and generate code to safely try to convert these Python objects to native C integers. This becomes difficult when precision and potential overflow are taken into account, which just so happens to be a major use case that Cython handles very well. Also, remember that this function returns an integer, so Cython also generates code to convert the integer return value into a valid Python object.

Summary

Overall, we installed the Cython compiler, ran the Hello World example, and took into consideration that we need to compile all code into native shared objects. We also saw how to wrap native C code to be callable from Python, and how type conversion of parameters and return values happens between C code and Python.
Resources for Article:

Further resources on this subject:
- Monte Carlo Simulation and Options [article]
- Understanding Cython [article]
- Scaling your Application Across Nodes with Spring Python's Remoting [article]
Creating a Catalyst Application in Catalyst 5.8

Packt | 30 Jun 2010 | 7 min read
Creating the application skeleton

Catalyst comes with a script called catalyst.pl to make this task as simple as possible. catalyst.pl takes a single argument, the application's name, and creates an application with that specified name. The name can be any valid Perl module name, such as MyApp or MyCompany::HR::Timesheets. Let's get started by creating MyApp, which is the example application for this article:

```
$ catalyst.pl MyApp
created "MyApp"
created "MyApp/script"
created "MyApp/lib"
created "MyApp/root"
created "MyApp/root/static"
created "MyApp/root/static/images"
created "MyApp/t"
created "MyApp/lib/MyApp"
created "MyApp/lib/MyApp/Model"
created "MyApp/lib/MyApp/View"
created "MyApp/lib/MyApp/Controller"
created "MyApp/myapp.conf"
created "MyApp/lib/MyApp.pm"
created "MyApp/lib/MyApp/Controller/Root.pm"
created "MyApp/README"
created "MyApp/Changes"
created "MyApp/t/01app.t"
created "MyApp/t/02pod.t"
created "MyApp/t/03podcoverage.t"
created "MyApp/root/static/images/catalyst_logo.png"
created "MyApp/root/static/images/btn_120x50_built.png"
created "MyApp/root/static/images/btn_120x50_built_shadow.png"
created "MyApp/root/static/images/btn_120x50_powered.png"
created "MyApp/root/static/images/btn_120x50_powered_shadow.png"
created "MyApp/root/static/images/btn_88x31_built.png"
created "MyApp/root/static/images/btn_88x31_built_shadow.png"
created "MyApp/root/static/images/btn_88x31_powered.png"
created "MyApp/root/static/images/btn_88x31_powered_shadow.png"
created "MyApp/root/favicon.ico"
created "MyApp/Makefile.PL"
created "MyApp/script/myapp_cgi.pl"
created "MyApp/script/myapp_fastcgi.pl"
created "MyApp/script/myapp_server.pl"
created "MyApp/script/myapp_test.pl"
created "MyApp/script/myapp_create.pl"
Change to application directory, and run "perl Makefile.PL" to make sure
your installation is complete.
```

At this point, it is a good idea to check whether the installation is complete by switching to the newly created directory (cd MyApp) and running perl Makefile.PL. You should see something like the following:

```
$ perl Makefile.PL
include /Volumes/Home/Users/solar/Projects/CatalystBook/MyApp/inc/Module/Install.pm
include inc/Module/Install/Metadata.pm
include inc/Module/Install/Base.pm
Cannot determine perl version info from lib/MyApp.pm
include inc/Module/Install/Catalyst.pm
*** Module::Install::Catalyst
include inc/Module/Install/Makefile.pm
Please run "make catalyst_par" to create the PAR package!
*** Module::Install::Catalyst finished.
include inc/Module/Install/Scripts.pm
include inc/Module/Install/AutoInstall.pm
include inc/Module/Install/Include.pm
include inc/Module/AutoInstall.pm
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
[Core Features]
- Test::More ...loaded. (0.94 >= 0.88)
- Catalyst::Runtime ...loaded. (5.80021 >= 5.80021)
- Catalyst::Plugin::ConfigLoader ...loaded. (0.23)
- Catalyst::Plugin::Static::Simple ...loaded. (0.29)
- Catalyst::Action::RenderView ...loaded. (0.14)
- Moose ...loaded. (0.99)
- namespace::autoclean ...loaded. (0.09)
- Config::General ...loaded. (2.42)
*** Module::AutoInstall configuration finished.
include inc/Module/Install/WriteAll.pm
include inc/Module/Install/Win32.pm
include inc/Module/Install/Can.pm
include inc/Module/Install/Fetch.pm
Writing Makefile for MyApp
Writing META.yml
```

Note that it mentions that all the required modules are available. If any modules are missing, you may have to install them using cpan. You can alternatively install the missing modules by running make followed by make install.
We will discuss what each of these files does, but for now, let's just change to the newly created MyApp directory (cd MyApp) and run the following command:

```
$ perl script/myapp_server.pl
```

This will start up the development web server. You should see some debugging information appear on the console, shown as follows:

```
[debug] Debug messages enabled
[debug] Loaded plugins:
.----------------------------------------------.
| Catalyst::Plugin::ConfigLoader  0.23         |
| Catalyst::Plugin::Static::Simple  0.21       |
'----------------------------------------------'
[debug] Loaded dispatcher "Catalyst::Dispatcher"
[debug] Loaded engine "Catalyst::Engine::HTTP"
[debug] Found home "/home/jon/projects/book/chapter2/MyApp"
[debug] Loaded Config "/home/jon/projects/book/chapter2/MyApp/myapp.conf"
[debug] Loaded components:
.-------------------------------+----------.
| Class                         | Type     |
+-------------------------------+----------+
| MyApp::Controller::Root       | instance |
'-------------------------------+----------'
[debug] Loaded Private actions:
.----------+--------------------------+---------.
| Private  | Class                    | Method  |
+----------+--------------------------+---------+
| /default | MyApp::Controller::Root  | default |
| /end     | MyApp::Controller::Root  | end     |
| /index   | MyApp::Controller::Root  | index   |
'----------+--------------------------+---------'
[debug] Loaded Path actions:
.-------+----------.
| Path  | Private  |
+-------+----------+
| /     | /default |
| /     | /index   |
'-------+----------'
[info] MyApp powered by Catalyst 5.80004
You can connect to your server at http://localhost:3000
```

This debugging information contains a summary of the plugins, Models, Views, and Controllers that your application uses, in addition to a map of URLs to actions. As we haven't added anything to the application yet, this isn't particularly helpful, but it will become helpful as we add features. To see what your application looks like in a browser, simply browse to http://localhost:3000. You should see the standard Catalyst welcome page.

Let's put the application aside for a moment and look at the purpose of each of the files that were created. Before we modify MyApp, let's take a look at how a Catalyst application is structured on disk. In the root directory of your application, there are some support files. If you're familiar with CPAN modules, you'll feel at home with Catalyst: a Catalyst application is structured in exactly the same way (and can be uploaded to the CPAN unmodified, if desired). This article will refer to MyApp as your application's name, so if you use something else, be sure to substitute properly.

Latest helper scripts

Catalyst 5.8 was ported to Moose, and the helper scripts for Catalyst were upgraded much later. Therefore, it is necessary to check whether you have the latest helper scripts. We will discuss helper scripts later; for now, catalyst.pl is a helper script, and if you're using an updated helper script, then the lib/MyApp.pm file (or lib/whateverappname.pm) will have the following line:

```perl
use Moose;
```

If you don't see this line in your application package in the lib directory, then you will have to update the helper scripts. You can do that by executing the following command:

```
cpan Catalyst::Helper
```

Files in the MyApp directory

The MyApp directory contains the following files:

- Makefile.PL: This script generates a Makefile to build, test, and install your application. It can also contain a list of your application's CPAN dependencies and automatically install them. To run Makefile.PL and generate a Makefile, simply type perl Makefile.PL. After that, you can run make to build the application, make test to test the application (you can try this right now, as some sample tests have already been created), make install to install the application, and so on. For more details, see the Module::Install documentation. It's important that you don't delete this file; Catalyst looks for it to determine where the root of your application is.
- Changes: This is simply a free-form text file where you can document changes to your application. It's not required, but it can be helpful to end users or other developers working on your application, especially if you're writing an open source application.
- README: This is just a text file with information on your application. If you're not going to distribute your application, you don't need to keep it around.
- myapp.conf: This is your application's main configuration file, which is loaded when you start your application. You can specify configuration directly inside your application, but this file makes it easier to tweak settings without worrying about breaking your code. myapp.conf is in Apache-style syntax, but if you rename the file to myapp.pl, you can write it in Perl (or myapp.yml for YAML format; see the Config::Any manual for a complete list). The name of this file is based on your application's name: everything is converted to lowercase, double colons are replaced with underscores, and the .conf extension is appended.

Files in the lib directory

The heart of your application lives in the lib directory. This directory contains a file called MyApp.pm, which defines the namespace and inheritance that make this a Catalyst application. It also contains the list of plugins to load, and application-specific configuration. This configuration can also be defined in the myapp.conf file mentioned previously; however, if the same setting is present in both files, the one here takes precedence. Inside the lib directory, there are three key directories, namely MyApp/Controller, MyApp/Model, and MyApp/View. Catalyst loads the Controllers, Models, and Views from these directories respectively; a sketch of what such a controller looks like follows.
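To make the role of MyApp/Controller concrete, here is a minimal, hypothetical controller of my own (not generated by catalyst.pl) in the style a Catalyst 5.8 application would use:

```perl
package MyApp::Controller::Hello;
use Moose;
use namespace::autoclean;

BEGIN { extends 'Catalyst::Controller'; }

# respond to http://localhost:3000/hello
sub index :Path :Args(0) {
    my ( $self, $c ) = @_;
    $c->response->body('Hello from MyApp!');
}

__PACKAGE__->meta->make_immutable;

1;
```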
Pointers and references

Packt | 03 Jun 2015 | 14 min read
In this article by Ivo Balbaert, author of the book Rust Essentials, we will go through pointers and memory safety. (For more resources related to this topic, see here.)

The stack and the heap

When a program starts, by default a 2 MB chunk of memory called the stack is granted to it. The program uses its stack to store all its local variables and function parameters; for example, an i32 variable takes 4 bytes of the stack. When our program calls a function, a new stack frame is allocated to it. Through this mechanism, the stack knows the order in which the functions are called, so that the functions return correctly to the calling code and possibly return values as well. Dynamically sized types, such as strings or arrays, can't be stored on the stack. For these values, a program can request memory space on its heap, which is a potentially much bigger piece of memory than the stack. When possible, stack allocation is preferred over heap allocation, because accessing the stack is a lot more efficient.

Lifetimes

All variables in Rust code have a lifetime. Suppose we declare an n variable with the let n = 42u32; binding. Such a value is valid from where it is declared to where it is no longer referenced, which is called the lifetime of the variable. This is illustrated in the following code snippet:

```rust
fn main() {
    let n = 42u32;
    let n2 = n; // a copy of the value from n to n2
    life(n);
    println!("{}", m); // error: unresolved name `m`.
    println!("{}", o); // error: unresolved name `o`.
}

fn life(m: u32) -> u32 {
    let o = m;
    o
}
```

The lifetime of n ends when main() ends; in general, the start and end of a lifetime happen in the same scope. The words lifetime and scope are synonymous, but we generally use the word lifetime to refer to the extent of a reference. As in other languages, local variables or parameters declared in a function do not exist anymore after the function has finished executing; in Rust, we say that their lifetime has ended. This is the case for the m and o variables in the preceding code snippet, which are only known in the life function. Likewise, the lifetime of a variable declared in a nested block is restricted to that block, like phi in the following example:

```rust
{
    let phi = 1.618;
}
println!("The value of phi is {}", phi); // is error
```

Trying to use phi when its lifetime is over results in the error: unresolved name 'phi'. The lifetime of a value can be indicated in the code by an annotation such as 'a (read as "lifetime a"), where a is simply an indicator; it could also be written as 'b, 'n, or 'life. It's common to see single letters being used to represent lifetimes. In the preceding example, an explicit lifetime indication was not necessary, since there were no references involved. All values tagged with the same lifetime have the same maximum lifetime. In the following example, we have a transform function that explicitly declares the lifetime of its s parameter to be 'a:

```rust
fn transform<'a>(s: &'a str) { /* ... */ }
```

Note the <'a> indication after the name of the function. In nearly all cases, this explicit indication is not needed, because the compiler is smart enough to deduce the lifetimes, so we can simply write this:

```rust
fn transform_without_lifetime(s: &str) { /* ... */ }
```
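One situation where this deduction does not work is a function returning a reference chosen from several inputs; the compiler then needs an explicit lifetime to tie the output to the inputs. A classic sketch of my own (not from the book text):

```rust
// both inputs and the result share the lifetime 'a, so the returned
// reference is guaranteed not to outlive either argument
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let a = String::from("magic");
    let b = String::from("numbers");
    println!("{}", longest(&a, &b)); // numbers
}
```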
Here is an example where, even when we indicate a lifetime specifier 'a, the compiler does not allow our code. Let's suppose that we define a Magician struct as follows:

```rust
struct Magician {
    name: &'static str,
    power: u32
}
```

We will get an error message if we try to construct the following function:

```rust
fn return_magician<'a>() -> &'a Magician {
    let mag = Magician { name: "Gandalf", power: 4625 };
    &mag
}
```

The error message is error: 'mag' does not live long enough. Why does this happen? The lifetime of the mag value ends when the return_magician function ends, but this function nevertheless tries to return a reference to the Magician value, which no longer exists. Such an invalid reference is known as a dangling pointer. This is a situation that would clearly lead to errors and cannot be allowed. The lifespan of a pointer must always be shorter than or equal to that of the value it points to, thus avoiding dangling (or null) references.

In some situations, the decision to determine whether the lifetime of an object has ended is complicated, but in almost all cases, the borrow checker does this for us automatically by inserting lifetime annotations in the intermediate code, so we don't have to do it. This is known as lifetime elision. For example, when working with structs, we can safely assume that the struct instance and its fields have the same lifetime. Only when the borrow checker is not completely sure do we need to indicate the lifetime explicitly; however, this happens only on rare occasions, mostly when references are returned. One example is when we have a struct with fields that are references. The following code snippet explains this:

```rust
struct MagicNumbers {
    magn1: &u32,
    magn2: &u32
}
```

This won't compile and will give us the following error: missing lifetime specifier [E0106]. Therefore, we have to change the code as follows:

```rust
struct MagicNumbers<'a> {
    magn1: &'a u32,
    magn2: &'a u32
}
```

This specifies that both the struct and the fields have the lifetime 'a.

Perform the following exercise: explain why the following code won't compile:

```rust
fn main() {
    let m: &u32 = {
        let n = &5u32;
        &*n
    };
    let o = *m;
}
```

Answer the same question for this code snippet as well:

```rust
let mut x = &3;
{
    let mut y = 4;
    x = &y;
}
```

Copying values and the Copy trait

In the code that we discussed in the earlier section, the value of n is copied to a new location each time n is assigned via a new let binding or passed as a function argument:

```rust
let n = 42u32;
// no move, only a copy of the value:
let n2 = n;
life(n);

fn life(m: u32) -> u32 {
    let o = m;
    o
}
```

At a certain moment in the program's execution, we would have four memory locations containing the copied value 42. Each value disappears (and its memory location is freed) when the lifetime of its corresponding variable ends, which happens at the end of the function or code block in which it is defined. Nothing much can go wrong with this Copy behavior, in which the value (its bits) is simply copied to another location on the stack. Many built-in types, such as u32 and i64, work like this, and this copy-value behavior is defined in Rust as the Copy trait, which u32 and i64 implement. You can also implement the Copy trait for your own type, provided all of its fields or items implement Copy. For example, the MagicNumber struct, which contains a field of the u64 type, can have the same behavior.
There are two ways to indicate this. One way is to explicitly name the Copy implementation as follows:

```rust
struct MagicNumber {
    value: u64
}

impl Copy for MagicNumber {}
```

Otherwise, we can annotate it with a Copy attribute:

```rust
#[derive(Copy)]
struct MagicNumber {
    value: u64
}
```

This now means that we can create two different copies, mag and mag2, of a MagicNumber by assignment:

```rust
let mag = MagicNumber { value: 42 };
let mag2 = mag;
```

They are copies because they have different memory addresses (the values shown will differ at each execution):

```rust
println!("{:?}", &mag as *const MagicNumber);  // address is 0x23fa88
println!("{:?}", &mag2 as *const MagicNumber); // address is 0x23fa80
```

The *const is a so-called raw pointer. A type that does not implement the Copy trait is called non-copyable. Another way to accomplish this is by letting MagicNumber implement the Clone trait:

```rust
#[derive(Clone)]
struct MagicNumber {
    value: u64
}
```

Then, we can clone mag into a different object called mag3, effectively making a copy:

```rust
let mag3 = mag.clone();
println!("{:?}", &mag3 as *const MagicNumber); // address is 0x23fa78
```

mag3 is a new pointer referencing a new copy of the value of mag.

Pointers

The n variable in the let n = 42i32; binding is stored on the stack. Values on the stack or the heap can be accessed by pointers. A pointer is a variable that contains the memory address of some value. To access the value it points to, dereference the pointer with *. This happens automatically in simple cases, such as in println! or when a pointer is given as a parameter to a method. For example, in the following code, m is a pointer containing the address of n:

```rust
let m = &n;
println!("The address of n is {:p}", m);
println!("The value of n is {}", *m);
println!("The value of n is {}", m);
```

This prints out the following output, which differs for each program run:

```
The address of n is 0x23fb34
The value of n is 42
The value of n is 42
```

So, why do we need pointers? When we work with dynamically allocated values, such as a String, that can change in size, the memory address of the value is not known at compile time, so it needs to be calculated at runtime. To be able to keep track of it, we need a pointer whose value will change when the location of the String in memory changes. The compiler automatically takes care of the memory allocation of pointers and the freeing of memory when their lifetime ends. You don't have to do this yourself as in C/C++, where you could mess up by freeing memory at the wrong moment or multiple times. The incorrect use of pointers in languages such as C++ leads to all kinds of problems; however, Rust enforces a strong set of rules at compile time, called the borrow checker, so we are protected against them. We have already seen them in action, but from here onwards, we'll explain the logic behind their rules. Pointers can also be passed as arguments to functions and returned from functions, but the compiler severely restricts their usage. When passing a pointer value to a function, it is always better to use the reference-dereference &* mechanism, as shown in this example:

```rust
let q = &42;
println!("{}", square(q)); // 1764

fn square(k: &i32) -> i32 {
    *k * *k
}
```

References

In our previous example, m, which had the &n value, is the simplest form of pointer; it is called a reference (or borrowed pointer). m is a reference to the stack-allocated n variable and has the &i32 type, because it points to a value of the i32 type.
In general, when n is a value of type T, then the &n reference is of type &T. Here, n is immutable, so m is also immutable; for example, if you try to change the value of n through m with *m = 7;, you will get a cannot assign to immutable borrowed content '*m' error. Contrary to C, Rust does not let you change an immutable variable via its pointer. Since there is no danger of changing the value of n through a reference, multiple references to an immutable value are allowed; they can only be used to read the value, for example:

```rust
let o = &n;
println!("The address of n is {:p}", o);
println!("The value of n is {}", *o);
```

It prints out as described earlier:

```
The address of n is 0x23fb34
The value of n is 42
```

It is clear that working with pointers like this, or in much more complex situations, necessitates much stricter rules than the Copy behavior. For example, the memory can only be freed when no variables or pointers are associated with it anymore. And when the value is mutable, can it be changed through any of its pointers? Mutable references do exist, and they are declared as let m = &mut n. However, n also has to be a mutable value. When n is immutable, the compiler rejects the m mutable reference binding with the error: cannot borrow immutable local variable 'n' as mutable. This makes sense, since immutable variables cannot be changed even when you know their memory location. To reiterate, in order to change a value through a reference, both the variable and its reference have to be mutable, as shown in the following code snippet:

```rust
let mut u = 3.14f64;
let v = &mut u;
*v = 3.15;
println!("The value of u is now {}", *v);
```

This will print: The value of u is now 3.15. The value at the memory location of u is changed to 3.15. However, note that we now cannot change (or even print) that value anymore through the u variable itself: u = u * 2.0; gives us the compiler error: cannot assign to 'u' because it is borrowed. We say that borrowing a variable (by making a reference to it) freezes that variable; the original u variable is frozen (and no longer usable) until the reference goes out of scope. In addition, we can only have one mutable reference: let w = &mut u; results in the error: cannot borrow 'u' as mutable more than once at a time. The compiler even adds the following note to the previous code line containing let v = &mut u;: note: previous borrow of 'u' occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of 'u' until the borrow ends. This is logical; the compiler is (rightfully) concerned that a change to the value of u through one reference might change its memory location, because u might change in size and then would no longer fit within its previous location, having to be relocated to another address. This would render all other references to u invalid, and even dangerous, because through them we might inadvertently change another variable that has taken up the previous location of u!

A mutable value can also be changed by passing its address as a mutable reference to a function, as shown in this example:

```rust
let mut m = 7;
add_three_to_magic(&mut m);
println!("{}", m); // prints out 10
```

With the function add_three_to_magic declared as follows:

```rust
fn add_three_to_magic(num: &mut i32) {
    *num += 3; // value is changed in place through +=
}
```

To summarize, when n is a mutable value of type T, then only one mutable reference to it (of type &mut T) can exist at any time. Through this reference, the value can be changed.
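To see the freeze end, here is a small sketch of my own (not from the book text) where the mutable borrow is confined to an inner block, after which u becomes usable again:

```rust
fn main() {
    let mut u = 3.14f64;
    {
        let v = &mut u; // u is frozen here
        *v = 3.15;
    }                   // the borrow ends with this block
    u = u * 2.0;        // u is usable again
    println!("{}", u);  // 6.3
}
```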
Using ref in a match

If you want to get a reference to a matched variable inside a match expression, use the ref keyword, as shown in the following example:

fn main() {
    let n = 42;
    match n {
        ref r => println!("Got a reference to {}", r),
    }
    let mut m = 42;
    match m {
        ref mut mr => {
            println!("Got a mutable reference to {}", mr);
            *mr = 43;
        },
    }
    println!("m has changed to {}!", m);
}

This prints out:

Got a reference to 42
Got a mutable reference to 42
m has changed to 43!

The r variable inside the match has the &i32 type. In other words, the ref keyword creates a reference for use in the pattern. If you need a mutable reference, use ref mut. We can also use ref to get a reference to a field of a struct or tuple in a destructuring via a let binding. For example, while reusing the Magician struct, we can extract the name of mag by using ref and then return it from the match:

let mag = Magician { name: "Gandalf", power: 4625};
let name = {
    let Magician { name: ref ref_to_name, power: _ } = mag;
    *ref_to_name
};
println!("The magician's name is {}", name);

This prints: The magician's name is Gandalf. References are the most common pointer type and have the most possibilities; other pointer types should only be applied in very specific use cases.

Summary

In this article, we learned about the intelligence behind the Rust compiler, which is embodied in the principles of ownership, moving values, and borrowing.

Resources for Article:

Further resources on this subject:
Getting Started with NW.js [article]
Creating Random Insults [article]
Creating Man-made Materials in Blender 2.5 [article]
More about Julia

Packt
21 Jul 2015
28 min read
In this article by Malcolm Sherrington, author of the book Mastering Julia, we ask: why write a book on Julia when the language has not yet reached the version v1.0 stage? That was the first question which needed to be addressed when deciding on the contents and philosophy behind the book. (For more resources related to this topic, see here.) Julia at the time was at v0.2; it is now soon to achieve a stable v0.4, and already the blueprint for v0.5 is being touted. There were some common misconceptions which I wished to address:

It is a language designed for geeks
Its main attribute, possibly its only one, is its speed
It is a scientific language, primarily a MATLAB clone
It is not as easy to use as the alternatives, such as Python and R
There is not enough library support to tackle enterprise solutions

In fact, none of these apply to Julia. True, it is a relatively young programming language. The initial design work on the Julia project began at MIT in August 2009, and by February 2012 it became open source. It is largely the work of three developers: Stefan Karpinski, Jeff Bezanson, and Viral Shah. Initially, Julia was envisaged by the designers as a scientific language sufficiently rapid to remove the necessity of modeling in an interactive language and subsequently having to redevelop in a compiled language, such as C or Fortran. To achieve this, Julia code would need to be transformed to the underlying machine code of the computer using the low level virtual machine (LLVM) compilation system, at the time itself a new project. This was a masterstroke. LLVM is now the basis of a variety of systems: the Apple C compiler (clang) uses it, Google's V8 JavaScript engine and Mozilla's Rust language use it, and Python is attempting to achieve significant increases in speed with its numba module. In Julia, LLVM always works; there are no exceptions, because it has to. When launched, the community itself possibly saw Julia as a replacement for MATLAB, but that proved not to be the case. The syntax of Julia is similar to MATLAB, so much so that anyone competent in the latter can easily learn Julia, but it is a much richer language with many significant differences. The task of the book was to focus on these. In particular, my target audience is the data scientist and programmer analyst, but the book has sufficient material for the "jobbing" C++ and Java programmer.

Julia's features

The Julia programming language is free and open source (MIT licensed), and the source is available on GitHub. To the veteran programmer, it has a look and feel similar to MATLAB. Blocks created by for, while, and if statements are all terminated by end rather than by endfor, endwhile, and endif or by using the familiar {} style syntax. However, it is not a MATLAB clone, and sources written for MATLAB will not run on Julia.
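For instance, a short function illustrating this end-terminated block style (a trivial example of our own, not from the book):

function fac(n)
    s = 1
    for k = 1:n
        s *= k
    end
    return s
end

fac(5)   # => 120

Each function and for block is closed by a plain end, exactly as described earlier.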
The following are some of Julia's features:

Designed for parallelism and distributed computation (multicore and cluster)
C functions called directly (no wrappers or special APIs needed)
Powerful shell-like capabilities for managing other processes
Lisp-like macros and other metaprogramming facilities
User-defined types that are as fast and compact as built-ins
An LLVM-based, just-in-time (JIT) compiler that allows Julia to approach and often match the performance of C/C++
An extensive mathematical function library (written in Julia)
Integrated, mature, best-of-breed C and Fortran libraries for linear algebra, random number generation, FFTs, and string processing

Julia's core is implemented in C and C++, its parser in Scheme, and the LLVM compiler framework is used for JIT generation of machine code. The standard library is written in Julia itself, using Node.js's libuv library for efficient, cross-platform I/O. Julia has a rich language of types for constructing and describing objects, which can also optionally be used to make type declarations. The ability to define function behavior across many combinations of argument types via multiple dispatch is a key cornerstone of the language design. Julia can utilize code in other programming languages by directly calling routines written in C or Fortran and stored in shared libraries or DLLs. This is a feature of the language syntax. In addition, it is possible to interact with Python via PyCall, and this is used in the implementation of the IJulia programming environment.

A quick look at some Julia

To get a feel for programming in Julia, let's look at an example which uses random numbers to price an Asian derivative on the options market. A share option is the right to purchase a specific stock at a nominated price sometime in the future. The person granting the option is called the grantor, and the person who has the benefit of the option is the beneficiary. At the time the option matures, the beneficiary may choose to exercise the option if it is in his/her interest; the grantor is then obliged to complete the contract. The following snippet is part of the calculation; it computes a single trial and uses the Winston package to display the trajectory:

using Winston;
S0  = 100;    # Spot price
K   = 102;    # Strike price
r   = 0.05;   # Risk free rate
q   = 0.0;    # Dividend yield
v   = 0.2;    # Volatility
tma = 0.25;   # Time to maturity

T  = 100;     # Number of time steps
dt = tma/T;   # Time increment

S = zeros(Float64,T)
S[1] = S0;
dW = randn(T)*sqrt(dt);
[ S[t] = S[t-1] * (1 + (r - q - 0.5*v*v)*dt + v*dW[t] + 0.5*v*v*dW[t]*dW[t]) for t=2:T ]

x = linspace(1, T, T);
p = FramedPlot(title = "Random Walk, drift 5%, volatility 20%")
add(p, Curve(x,S,color="red"))
display(p)

We plot a single track, so we compute only a vector S of T elements. The stochastic variance dW is computed in a single vectorized statement. The track S is computed using a list comprehension. The array x is created using linspace to define a linear abscissa for the plot. Using the Winston package to produce the display requires only three statements: to define the plot space, add a curve to it, and display the plot, as shown in the following figure:

[Figure: the simulated random-walk price track plotted with Winston.]

Generating Julia sets

Both the Mandelbrot set and the Julia set (for a given constant z0) are the sets of all complex numbers z for which the iteration z → z*z + z0 does not diverge to infinity. The Mandelbrot set consists of those z0 for which the Julia set is connected.
We create a file, jset.jl, whose contents define the function to generate a Julia set:

function juliaset(z, z0, nmax::Int64)
    for n = 1:nmax
        if abs(z) > 2
            return n-1
        end
        z = z^2 + z0
    end
    return nmax
end

Here z and z0 are complex values and nmax is the number of trials to make before returning. If the modulus of the complex number z gets above 2, then it can be shown that it will increase without limit. The function returns the number of iterations until the modulus test succeeds, or else nmax. Also, we will write a second file, pgmfile.jl, to handle displaying the Julia set:

function create_pgmfile(img, outf::String)
    s = open(outf, "w")
    write(s, "P5\n")
    n, m = size(img)
    write(s, "$m $n 255\n")
    for i=1:n, j=1:m
        p = img[i,j]
        write(s, uint8(p))
    end
    close(s)
end

It is quite easy to create a simple disk file using the portable bitmap (netpbm) format. This consists of a "magic" number P1 - P6, followed on the next line by the image height, width, and a maximum color value, which must be greater than 0 and less than 65536; all of these are ASCII values, not binary. Then follow the image values (height x width), which may be ASCII for P1, P2, P3 or binary for P4, P5, P6. There are three different types of portable bitmap: B/W (P1/P4), grayscale (P2/P5), and color (P3/P6). The function create_pgmfile() creates a binary grayscale file (magic number = P5) from an image matrix, where the values are written as Uint8. Notice that the for loop defines the indices i, j in a single statement, with correspondingly only one end statement. The image matrix is output in column order, which matches the way it is stored in Julia. So the main program looks like:

include("jset.jl")
include("pgmfile.jl")
h = 400; w = 800;
M = Array(Int64, h, w);
c0 = -0.8+0.16im;
pgm_name = "juliaset.pgm";

t0 = time();
for y=1:h, x=1:w
    c = complex((x-w/2)/(w/2), (y-h/2)/(w/2))
    M[y,x] = juliaset(c, c0, 256)
end
t1 = time();
create_pgmfile(M, pgm_name);
print("Written $pgm_name\nFinished in $(t1-t0) seconds.\n");

This is how the previous code works: We define an array M of type Int64 to hold the return values from the juliaset function. The constant c0 is arbitrary; different values of c0 will produce different Julia sets. The starting complex number is constructed from the (x,y) coordinates and scaled to the half width. We 'cheat' a little by defining the maximum number of iterations as 256. Because we are writing byte values (Uint8), any value which remains bounded will be 256 and, since overflow values wrap around, will be output as 0 (black). Running the script produces:

Written juliaset.pgm
Finished in 0.458 seconds # => (on my laptop)

Julia type system and multiple dispatch

Julia is not an object-oriented language, so when we speak of objects they are a different sort of data structure to those in traditional O-O languages. Julia does not allow types to have methods, so it is not possible to create subtypes which inherit methods. While this might seem restrictive, it permits methods to use a multiple dispatch call structure rather than the single dispatch system employed in object-oriented languages. Coupled with Julia's system of types, multiple dispatch is extremely powerful. Moreover, it is a more logical approach for data scientists and scientific programmers, and if for no other reason, exposing this to you, the analyst/programmer, is a reason to use Julia.
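As a minimal sketch of what this buys us (an illustration of our own, not from the book), the same function name can carry specialized methods for different argument-type combinations, and Julia picks the right one from all the arguments together:

area(r::Float64) = pi*r^2          # a circle, given its radius
area(w::Int64, h::Int64) = w*h     # a rectangle, given integer sides

area(2.0)    # => 12.566..., dispatched on (Float64,)
area(3, 4)   # => 12, dispatched on (Int64,Int64)

Neither argument is privileged; the tuple of types as a whole selects the method.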
A function is an object that maps a tuple of arguments to a return value. In the case where the arguments are not valid, the function should handle the situation cleanly by catching the error and handling it, or else throw an exception. When a function is applied to its argument tuple, it selects the appropriate method, and this process is called dispatch. In traditional object-oriented languages, the method chosen is based only on the object type, and this paradigm is termed single dispatch. With Julia, the combination of all of a function's arguments determines which method is chosen; this is the basis of multiple dispatch. To the scientific programmer, this all seems very natural. It makes little sense in most circumstances for one argument to be more important than the others. In Julia, all functions and operators (which are also functions) use multiple dispatch, and methods are chosen for any combination of operands. For example, looking at the methods of the power operator (^):

julia> methods(^)
# 43 methods for generic function "^":
^(x::Bool,y::Bool) at bool.jl:41
^(x::BigInt,y::Bool) at gmp.jl:314
^(x::Integer,y::Bool) at bool.jl:42
^(A::Array{T,2},p::Number) at linalg/dense.jl:180
^(::MathConst{:e},x::AbstractArray{T,2}) at constants.jl:87

We can see that there are 43 methods for ^, and the file and line where each method is defined are given too. Because any untyped argument is designated as type Any, it is possible to define a set of function methods such that there is no unique most specific method applicable to some combinations of arguments:

julia> pow(a,b::Int64) = a^b;
julia> pow(a::Float64,b) = a^b;
Warning: New definition pow(Float64,Any) at /Applications/JuliaStudio.app/Contents/Resources/Console/Console.jl:1 is ambiguous with: pow(Any,Int64) at /Applications/JuliaStudio.app/Contents/Resources/Console/Console.jl:1. To fix, define pow(Float64,Int64) before the new definition.

A call of pow(3.5, 2) can be handled by either method. In this case, they would give the same result, but only because of the function bodies, and Julia can't know that.

Working with Python

The ability of Julia to call code written in other languages is one of its main strengths. From its inception, Julia had to play catch-up, and a key feature was that it makes calling code written in C, and by implication Fortran, very easy. The code to be called must be available as a shared library rather than just a standalone object file. There is zero overhead in the call, meaning that it is reduced to a single machine instruction in the LLVM compilation. In addition, Python modules can be accessed via the PyCall package, which provides a @pyimport macro that mimics a Python import statement. This imports a Python module and provides Julia wrappers for all of its functions and constants, including automatic conversion of types between Julia and Python. This work has led to the creation of an IJulia kernel for the IPython IDE, which is now a principal component of the Jupyter project. In PyCall, type conversions are automatically performed for numeric, boolean, string, and IO stream types, plus all tuples, arrays, and dictionaries of these types.

julia> using PyCall
julia> @pyimport scipy.optimize as so
julia> @pyimport scipy.integrate as si
julia> so.ridder(x -> x*cos(x), 1, pi);  # => 1.570796326795
julia> si.quad(x -> x*sin(x), 1, pi)[1]; # => 2.840423974650

In the preceding commands, the Python optimize and integrate modules are imported and functions in those modules are called from the Julia REPL.
One difference imposed on the package is that calls using the Python object notation are not possible from Julia, so attributes are referenced using an array-style notation, po[:atr] rather than po.atr, where po is a PyObject and atr is an attribute. It is also easy to use the Python matplotlib module to display simple (and complex) graphs:

@pyimport matplotlib.pyplot as plt
x = linspace(0,2*pi,1000);
y = sin(3*x + 4*cos(2*x));
plt.plot(x, y, color="red", linewidth=2.0, linestyle="--")
1-element Array{Any,1}: PyObject<matplotlib.lines.Line2D object at 0x0000000027652358>
plt.show()

Notice that keywords can also be passed, such as the color, line width, and line style in the preceding call.

Simple statistics using dataframes

Julia implements various approaches for handling data held on disk. These may be 'normal' files, such as text files, CSV and other delimited files, or SQL and NoSQL databases. There is also an implementation of dataframe support similar to that provided in R and via the pandas module in Python. The following looks at Apple share prices from 2000 to 2002, using a CSV file which provides opening, closing, high, and low prices together with trading volumes for each day:

using DataFrames, StatsBase
aapl = readtable("AAPL-Short.csv");

naapl = size(aapl)[1]
m1 = int(mean((aapl[:Volume]))); # => 6306547

The data is read into a DataFrame, and we can estimate the mean volume (m1); it is possible to cast it as an integer, as this makes more sense. We can also weight more recent years more heavily by creating a weighting vector:

using Match
wts = zeros(naapl);
for i in 1:naapl
    dt = aapl[:Date][i]
    wts[i] = @match dt begin
        r"^2000" => 1.0
        r"^2001" => 2.0
        r"^2002" => 4.0
        _        => 0.0
    end
end;

wv = WeightVec(wts);
m2 = int(mean(aapl[:Volume], wv)); # => 6012863

When computing weighted statistical metrics, it is also possible to 'trim' the outliers off each end of the data. Returning to the closing prices:

mean(aapl[:Close]);          # => 37.1255
mean(aapl[:Close], wv);      # => 26.9944
trimmean(aapl[:Close], 0.1); # => 34.3951

trimmean() is the trimmed mean; here, 5 percent is taken from each end.

std(aapl[:Close]);      # => 34.1186
skewness(aapl[:Close])  # => 1.6911
kurtosis(aapl[:Close])  # => 1.3820

As well as second moments such as the standard deviation, StatsBase provides a generic moments() function and specific instances based on these, such as skewness (third) and kurtosis (fourth). It is also possible to produce some summary statistics:

summarystats(aapl[:Close])

Summary Stats:
Mean:         37.125505
Minimum:      13.590000
1st Quartile: 17.735000
Median:       21.490000
3rd Quartile: 31.615000
Maximum:      144.190000

The first and third quartiles correspond to the 25 percent and 75 percent percentiles. For finer granularity, we can use the percentile() function:

percentile(aapl[:Close],5);  # => 14.4855
percentile(aapl[:Close],95); # => 118.934
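As a small follow-on sketch (our own, not from the book), the same percentile() calls give us an interquartile range and a crude outlier fence:

iqr   = percentile(aapl[:Close],75) - percentile(aapl[:Close],25);
upper = percentile(aapl[:Close],75) + 1.5*iqr;
sum(aapl[:Close] .> upper)   # how many closes lie above the fence

Any close above upper would be flagged as a high outlier under the usual 1.5 x IQR rule.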
MySQL access using PyCall

We have seen previously that Python can be used for plotting via the PyPlot package, which interfaces with matplotlib. In fact, the ability to easily call Python modules is a very powerful feature in Julia, and we can use this as an alternative method for connecting to databases. Any database which can be manipulated by Python is also available to Julia. In particular, since the DBD driver for MySQL is not fully DBI compliant, let's look at this approach to running some queries. Our current MySQL setup already has the Chinook dataset loaded, so we will execute a query to list the Genre table. In Python, we will first need to download the MySQL Connector module. For Anaconda, this needs to be the source (platform-independent) distribution rather than a binary package, with the installation performed using the setup.py file. The query (in Python) to list the Genre table would be:

import mysql.connector as mc
cnx = mc.connect(user="malcolm", password="mypasswd")
csr = cnx.cursor()
qry = "SELECT * FROM Chinook.Genre"
csr.execute(qry)
for vals in csr:
    print(vals)

(1, u'Rock')
(2, u'Jazz')
(3, u'Metal')
(4, u'Alternative & Punk')
(5, u'Rock And Roll')
...

csr.close()
cnx.close()

We can execute the same in Julia by using PyCall with the mysql.connector module, and the form of the coding is remarkably similar:

using PyCall
@pyimport mysql.connector as mc

cnx = mc.connect(user="malcolm", password="mypasswd");
csr = cnx[:cursor]()
query = "SELECT * FROM Chinook.Genre"
csr[:execute](query)

for vals in csr
    id    = vals[1]
    genre = vals[2]
    @printf "ID: %2d, %s\n" id genre
end

ID:  1, Rock
ID:  2, Jazz
ID:  3, Metal
ID:  4, Alternative & Punk
ID:  5, Rock And Roll
...

csr[:close]()
cnx[:close]()

Note that the form of the call is a little different from the corresponding Python method: since Julia is not object-oriented, the methods of a Python object are accessed as an array of symbols. For example, the Python csr.execute(qry) routine is called in Julia as csr[:execute](qry). Also be aware that, although Python arrays are zero-based, this is translated to one-based by PyCall, so the first value is referenced as vals[1].
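As a brief aside (our own sketch, not from the book), mysql.connector also accepts parameterized queries in Python via csr.execute(qry, params), and the same call works through PyCall:

qry = "SELECT * FROM Chinook.Genre WHERE GenreId > %s"
csr[:execute](qry, (3,))
for vals in csr
    println(vals[1], ": ", vals[2])
end

The %s placeholder is filled in by the connector itself, which avoids building query strings by hand.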
Scientific programming with Julia

Julia was originally developed as a replacement for MATLAB, with a focus on scientific programming. There are modules concerned with linear algebra, signal processing, mathematical calculus, optimization problems, and stochastic simulation. The following is a subject dear to my heart: the solution of differential equations. Differential equations are those which involve terms for the rates of change of variates as well as the variates themselves. They arise naturally in a number of fields, notably dynamics; when the changes are with respect to one dependent variable, often time, the systems are called ordinary differential equations. If more than a single dependent variable is involved, then they are termed partial differential equations. Julia supports the solution of ordinary differential equations through a couple of packages, ODE and Sundials. The former (ODE) consists of routines written solely in Julia, whereas Sundials is a wrapper package around a shared library. ODE exports a set of adaptive solvers; adaptive meaning that the 'step' size of the algorithm changes algorithmically to keep the error estimate below a certain threshold. The calls take the form odeXY, where X is the order of the solver and Y the error control:

ode23: Second order adaptive solver with third order error control
ode45: Fourth order adaptive solver with fifth order error control
ode78: Seventh order adaptive solver with eighth order error control

To solve the explicit ODE defined as a vectorized set of equations dy/dt = F(t,y), all routines have the same basic form:

(tout, yout) = odeXY(F, y0, tspan)

As an example, I will look at a linear three-species food chain model, where the lowest-level prey x is preyed upon by a mid-level species y, which, in turn, is preyed upon by a top-level predator z. This is an extension of the Lotka-Volterra system from two to three species. Examples might be three-species ecosystems such as mouse-snake-owl, vegetation-rabbits-foxes, and worm-sparrow-falcon.

x' = a*x - b*x*y
y' = -c*y + d*x*y - e*y*z
z' = -f*z + g*y*z
# for a, b, c, d, e, f, g > 0

Here a, b, c, d are as in the two-species Lotka-Volterra equations, and:

e represents the effect of predation on species y by species z
f represents the natural death rate of the predator z in the absence of prey
g represents the efficiency and propagation rate of the predator z in the presence of prey

This translates to the following set of equations:

x[1] = p[1]*x[1] - p[2]*x[1]*x[2]
x[2] = -p[3]*x[2] + p[4]*x[1]*x[2] - p[5]*x[2]*x[3]
x[3] = -p[6]*x[3] + p[7]*x[2]*x[3]

It is slightly over-specified, since one of the parameters can be removed by rescaling the timescale. We define the function F as follows:

function F(t,x,p)
    d1 = p[1]*x[1] - p[2]*x[1]*x[2]
    d2 = -p[3]*x[2] + p[4]*x[1]*x[2] - p[5]*x[2]*x[3]
    d3 = -p[6]*x[3] + p[7]*x[2]*x[3]
    [d1, d2, d3]
end

This takes the time range, the vector of independent variables, and the vector of coefficients, and returns a vector of the derivative estimates:

p = ones(7);            # Choose all parameters as 1.0
x0 = [0.5, 1.0, 2.0];   # Setup the initial conditions
tspan = [0.0:0.1:10.0]; # and the time range
The plot() call can operate on three different data sources: Dataframes Functions and expressions Arrays and collections Unless otherwise specified the type of graph produced is a scatter diagram. The ability to work directly with data frames is especially useful. To illustrate this let's look at the GCSE result set. Recall that this is available as part of the RDatasets suite of source data. using Gadfly, RDatasets, DataFrames; set_default_plot_size(20cm, 12cm); mlmf = dataset("mlmRev","Gcsemv") df = mlmf[complete_cases(mlmf), :] After extracting the data we need to operate with values with do not have any NA values, so we use the complete_cases() function to create a subset of the original data. names(df) 5-element Array{Symbol,1}: ; # => [ :School, :Student, :Gender, :Written, :Course ] If we wish to view the data values for the exam and course work results and at the same time differentiate between boys and girls, this can be displayed by: plot(df, x="Course", y="Written", color="Gender") The JuliaGPU community group Many Julia modules build on the work of other authors working within the same field of study and these have classified as community groups (http://julialang.org/community). Probably the most prolific is the statistics group: JuliaStats (http://juliastats.github.io). One of the main themes in my professional career has been working with hardware to speed up the computing process. In my work on satellite data I worked with the STAR-100 array processor, and once back in the UK, used Silicon Graphics for 3D rendering of medical data . Currently I am interested in using NVIDIA GPUs in financial scenarios and risk calculations. Much of this work has been coded in C, with domain specific languages to program the ancillary hardware. It is now possible to do much of this in Julia with packages contained in the JuliaGPU group. This has routines for both CUDA and OpenCL, at present covering: Basic runtime: CUDA.jl, CUDArt.jl, OpenCL.jl BLAS integration: CUBLAS.jl, CLBLAS FFT operations: CUFFT.jl, CLFFT.jl The CU*-style routines only applies to NVIDIA cards and requires the CUDA SDK to be installed, whereas CL*-functions can be used with variety of GPU s. The CLFFT and CLBLAS do require some additional libraries to be present but we can use OpenCL as is. The following is output from a Lenovo Z50 laptop with an i7 processor and both Intel and NVIDIA graphics chips. julia> using OpenCL julia> OpenCL.devices() OpenCL.Platform(Intel(R) HDGraphics 4400) OpenCL.Platform(Intel(R) Core(TM) i7-4510U CPU) OpenCL.Platform(GeForce 840M on NVIDIA CUDA) To do some calculations we need to define a kernel to be loaded on the GPU. 
The following multiplies two 1024x1024 matrices of Gaussian random numbers: import OpenCL const cl = OpenCL const kernel_source = """ __kernel void mmul( const int Mdim, const int Ndim, const int Pdim, __global float* A, __global float* B, __global float* C) {    int k;    int i = get_global_id(0);    int j = get_global_id(1);    float tmp;    if ((i < Ndim) && (j < Mdim)) {      tmp = 0.0f;      for (k = 0; k < Pdim; k++)        tmp += A[i*Ndim + k] * B[k*Pdim + j];        C[i*Ndim+j] = tmp;    } } """ The kernel is expressed as a string and the OpenCL DSL has a C-like syntax: const ORDER = 1024; # Order of the square matrices A, B and C const TOL   = 0.001; # Tolerance used in floating point comps const COUNT = 3;     # Number of runs   sizeN = ORDER * ORDER; h_A = float32(randn(ORDER)); # Fill array with random numbers h_B = float32(randn(ORDER)); # --- ditto -- h_C = Array(Float32, ORDER); # Array to hold the results   ctx   = cl.Context(cl.devices()[3]); queue = cl.CmdQueue(ctx, :profile);   d_a = cl.Buffer(Float32, ctx, (:r,:copy), hostbuf = h_A); d_b = cl.Buffer(Float32, ctx, (:r,:copy), hostbuf = h_B); d_c = cl.Buffer(Float32, ctx, :w, length(h_C)); We now create the Open CL context and some data space on the GPU for the three arrays d_A, d_B, and D_C. Then we copy the data in the host arrays h_A and h_B to the device and then load the kernel onto the GPU. prg = cl.Program(ctx, source=kernel_source) |> cl.build! mmul = cl.Kernel(prg, "mmul"); The following loop runs the kernel COUNT times to give an accurate estimate of the elapsed time for the operation. This includes the cl-copy!() operation which copies the results back from the device to the host (Julia) program. for i in 1:COUNT fill!(h_C, 0.0); global_range = (ORDER. ORDER); mmul_ocl = mmul[queue, global_range]; evt = mmul_ocl(int32(ORDER), int32(ORDER), int32(ORDER), d_a, d_b, d_c); run_time = evt[:profile_duration] / 1e9; cl.copy!(queue, h_C, d_c); mflops = 2.0 * Ndims^3 / (1000000.0 * run_time); @printf “%10.8f seconds at %9.5f MFLOPSn” run_time mflops end 0.59426405 seconds at 3613.686 MFLOPS 0.59078856 seconds at 3634.957 MFLOPS 0.57401651 seconds at 3741.153 MFLOPS This compares with the figures for running this natively, without the GPU processor: 7.060888678 seconds at 304.133 MFLOPS That is using the GPU gives a 12-fold increase in the performance of matrix calculation. Summary This article has introduced some of the main features which sets Julia apart from other similar programming languages. I began with a quick look some Julia code by developing a trajectory used in estimating the price of a financial option which was displayed graphically. Continuing with the graphics theme we presented some code to generating a Julia set and to write this to disk as a PGM formatted file. The type system and use of multiple dispatch is discussed next. This a major difference for the user between Julia and object-orientated languages such as R and Python and is central to what gives Julia the power to generate fast machine-level code via LLVM compilation. We then turned to a series of topics from the Julia armory: Working with Python: The ability to call C and Fortran, seamlessly, has been a central feature of Julia since its initial development by the addition of interoperability with Python has opened up a new series of possibilities, leading to the development of the IJulia interface and its integration in the Jupyter project. 
Simple statistics using DataFrames :As an example of working with data highlighted the Julia implementation of data frames by looking at Apple share prices and applying some simple statistics. MySQL Access using PyCall: Returns to another usage of Python interoperability to illustrate an unconventional method to interface to a MySQL database. Scientific programming with Julia: The case of solution of the ordinary differential equations is presented here by looking at the Lotka-Volterras equation but unusually we develop a solution for the three species model. Graphics with Gadfly: Julia has a wide range of options when developing data visualizations. Gadfly is one of the ‘heavyweights’ and a dataset is extracted from the RDataset.jl package, containing UK GCSE results and the comparison between written and course work results is displayed using Gadfly, categorized by gender. Finally we showcased the work of Julia community groups by looking at an example from the JuliaGPU group by utilizing the OpenCL package to check on the set of supported devices. We then selected an NVIDIA GeForce chip, in order to run execute a simple kernel which multiplied a pair of matrices via the GPU. This was timed against conventional evaluation against native Julia coding in order to illustrate the speedup involved in this approach from parallelizing matrix operations. Resources for Article: Further resources on this subject: Pricing the Double-no-touch option [article] Basics of Programming in Julia [article] SQL Server Analysis Services Administering and Monitoring Analysis Services [article]
Visualforce Development with Apex

Packt
06 Feb 2015
12 min read
In this article by Matt Kaufman and Michael Wicherski, authors of the book Learning Apex Programming, we will see how we can use Apex to extend the Salesforce1 Platform. We will also see how to create a customized Force.com page. (For more resources related to this topic, see here.) Apex on its own is a powerful tool to extend the Salesforce1 Platform. It allows you to define your own database logic and fully customize the behavior of the platform. Sometimes, though, controlling what happens behind the scenes isn't enough. You might have a complex process that needs to step users through a wizard, need to present data in a format that isn't native to the Salesforce1 Platform, or maybe even want to make things look like your corporate website. Anytime you need to go beyond custom logic and implement a custom interface, you can turn to Visualforce. Visualforce is the user interface framework for the Salesforce1 Platform. It supports the use of HTML, JavaScript, CSS, and Flash, all of which enable you to build your own custom web pages. These web pages are stored and hosted by the Salesforce1 Platform and can be exposed to just your internal users, your external community users, or publicly to the world. But wait, there's more! Also included with Visualforce is a robust markup language. This markup language (which is also referred to as Visualforce) allows you to bind your web pages to data and actions stored on the platform. It also allows you to leverage Apex for code-based objects and actions. Like the rest of the platform, the markup portion of Visualforce is upgraded three times a year with new tags and features. All of these features mean that Visualforce is very powerful.

s-con-what?

Before the introduction of Visualforce, the Salesforce1 Platform had a feature called s-controls. These were simple files where you could write HTML, CSS, and JavaScript. There was no custom markup language included. In order to make things look like the Force.com GUI, a lot of HTML was required. Even a simple input form for a new Account record called for a surprising amount of HTML code.
The following is just a small, condensed excerpt of what the HTML would look like if you wanted to recreate such a screen from scratch:

<div class="bPageTitle"><div class="ptBody"><div class="content">
<img src="/s.gif" class="pageTitleIcon" title="Account" />
<h1 class="pageType">
    Account Edit<span class="titleSeparatingColon">:</span>
</h1>
<h2 class="pageDescription"> New Account</h2>
<div class="blank">&nbsp;</div>
</div>
<div class="links"></div></div><div class="ptBreadcrumb"></div></div>
<form action="/001/e" method="post" onsubmit="if (window.ffInAlert) { return false; }if (window.sfdcPage &amp;&amp; window.sfdcPage.disableSaveButtons) { return window.sfdcPage.disableSaveButtons(); }">
<div class="bPageBlock brandSecondaryBrd bEditBlock secondaryPalette">
<div class="pbHeader">
    <table border="0" cellpadding="0" cellspacing="0"><tbody>
      <tr>
      <td class="pbTitle">
      <img src="/s.gif" width="12" height="1" class="minWidth" style="margin-right: 0.25em;">
      <h2 class="mainTitle">Account Edit</h2>
      </td>
      <td class="pbButton" id="topButtonRow">
      <input value="Save" class="btn" type="submit">
      <input value="Cancel" class="btn" type="submit">
      </td>
      </tr>
    </tbody></table>
</div>
<div class="pbBody">
    <div class="pbSubheader brandTertiaryBgr first tertiaryPalette">
    <span class="pbSubExtra"><span class="requiredLegend brandTertiaryFgr"><span class="requiredExampleOuter"><span class="requiredExample">&nbsp;</span></span>
      <span class="requiredMark">*</span>
      <span class="requiredText"> = Required Information</span>
      </span></span>
      <h3>Account Information<span class="titleSeparatingColon">:</span></h3>
    </div>
    <div class="pbSubsection">
    <table class="detailList" border="0" cellpadding="0" cellspacing="0"><tbody>
      <tr>
        <td class="labelCol requiredInput">
        <label><span class="requiredMark">*</span>Account Name</label>
      </td>
      <td class="dataCol col02">
        <div class="requiredInput"><div class="requiredBlock"></div>
        <input id="acc2" name="acc2" size="20" type="text">
        </div>
      </td>
      <td class="labelCol">
        <label>Website</label>
      </td>
      <td class="dataCol">
        <span>
        <input id="acc12" name="acc12" size="20" type="text">
        </span>
      </td>
      </tr>
    </tbody></table>
    </div>
</div>
<div class="pbBottomButtons">
    <table border="0" cellpadding="0" cellspacing="0"><tbody>
    <tr>
      <td class="pbTitle"><img src="/s.gif" width="12" height="1" class="minWidth" style="margin-right: 0.25em;">&nbsp;</td>
      <td class="pbButtonb" id="bottomButtonRow">
      <input value=" Save " class="btn" title="Save" type="submit">
      <input value="Cancel" class="btn" type="submit">
      </td>
    </tr>
    </tbody></table>
</div>
<div class="pbFooter secondaryPalette"><div class="bg"></div></div>
</div>
</form>

We did our best to trim down this HTML to as little as possible. Despite all of our efforts, it still took up more space than we wanted. The really sad part is that all of that code only results in the screen shown in the following screenshot:

[Screenshot: the rendered Account Edit form with Save and Cancel buttons.]

Not only was it time consuming to write all this HTML, but odds were that we wouldn't get it exactly right the first time.
Worse still, every time the business requirements changed, we had to go through the exhausting effort of modifying the HTML code. Something had to change in order to provide us relief. That something was the introduction of Visualforce and its markup language.

Your own personal Force.com

The markup tags in Visualforce correspond to various parts of the Force.com GUI. These tags allow you to quickly generate HTML markup without actually writing any HTML. It's really one of the greatest tricks of the Salesforce1 Platform. You can easily create your own custom screens that look just like the built-in ones with less effort than it would take you to create a web page for your corporate website. Take a look at the Visualforce markup that corresponds to the HTML and screenshot we showed you earlier:

<apex:page standardController="Account" >
<apex:sectionHeader title="Account Edit" subtitle="New Account" />
<apex:form>
    <apex:pageBlock title="Account Edit" mode="edit" >
      <apex:pageBlockButtons>
        <apex:commandButton value="Save" action="{!save}" />
        <apex:commandButton value="Cancel" action="{!cancel}" />
      </apex:pageBlockButtons>
      <apex:pageBlockSection title="Account Information" >
        <apex:inputField value="{!account.Name}" />
        <apex:inputField value="{!account.Website}" />
      </apex:pageBlockSection>
    </apex:pageBlock>
</apex:form>
</apex:page>

Impressive! With merely these 15 lines of markup, we can render nearly 100 lines of the earlier HTML. Don't believe us? You can try it out yourself.

Creating a Visualforce page

Just like triggers and classes, Visualforce pages can be created and edited using the Force.com IDE. The Force.com GUI also includes a web-based editor to work with Visualforce pages. To create a new Visualforce page, perform these simple steps:

Right-click on your project and navigate to New | Visualforce Page. The Create New Visualforce Page window appears.
Enter the label and name for your new page in the Label and Name fields, respectively. For this example, use myTestPage.
Select the API version for the page. For this example, keep it at the default value.
Click on Finish.

A progress bar will appear, followed by your new Visualforce page. Remember that you always want to create your code in a Sandbox or Developer Edition org, not directly in Production. It is technically possible to edit Visualforce pages in Production, but you're breaking all sorts of best practices when you do. Similar to other markup languages, every tag in a Visualforce page must be closed. Tags and their corresponding closing tags must also occur in a proper order. The values of tag attributes are enclosed by double quotes; however, single quotes can be used inside the value to denote text values. Every Visualforce page starts with the <apex:page> tag and ends with </apex:page>, as shown:

<apex:page>
<!-- Your content goes here -->
</apex:page>

Within the <apex:page> tags, you can paste your existing HTML as long as it is properly ordered and closed. The result will be a web page hosted by the Salesforce1 Platform.

Not much to see here

If you are a web developer, then there's a lot you can do with Visualforce pages. Using HTML, CSS, and images, you can create really pretty web pages that educate your users. If you have some programming skills, you can also use JavaScript in your pages to allow for interaction. If you have access to web services, you can use JavaScript to call the web services and make a really powerful application.
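Because Visualforce tags can also bind to data, a page can render whole lists of records with almost no markup. Here is a small sketch of our own (not from the book) using the standard list controller; the recordSetVar attribute exposes the queried records to the page:

<apex:page standardController="Account" recordSetVar="accounts">
    <apex:pageBlock title="Accounts">
      <apex:pageBlockTable value="{!accounts}" var="acct">
        <apex:column value="{!acct.Name}" />
        <apex:column value="{!acct.Website}" />
      </apex:pageBlockTable>
    </apex:pageBlock>
</apex:page>

Each <apex:column> renders with the platform's standard table styling, so the result looks native without any CSS on our part.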
As another example of what you can do, check out the following Visualforce page:

<apex:page>
<script type="text/javascript">
function doStuff(){
    var x = document.getElementById("myId");
    console.log(x);
}
</script>
<img src="http://www.thisbook.com/logo.png" />
<h1>This is my title</h1>
<h2>This is my subtitle</h2>
<p>In a world where books are full of code, there was only one that taught you everything you needed to know about Apex!</p>
<ol>
    <li>My first item</li>
    <li>Etc.</li>
</ol>
<span id="myId"></span>
<iframe src="http://www.thisbook.com/mypage.html" />
<form action="http://thisbook.com/submit.html" >
    <input type="text" name="yoursecret" />
</form>
</apex:page>

All of this code is standalone and really has nothing to do with the Salesforce1 Platform other than being hosted by it. However, what really makes Visualforce powerful is its ability to interact with your data, which allows your pages to be more dynamic. Even better, you can write Apex code to control how your pages behave, so instead of relying on client-side JavaScript, your logic can run server side.

Summary

In this article, we learned about a few features of Apex and how we can use it to extend the Salesforce1 Platform. We also created a custom Force.com page. Well, you've made a lot of progress. Not only can you write code to control how the database behaves, but you can create beautiful-looking pages too. You're an Apex rock star and nothing is going to hold you back. It's time to show your skills to the world. If you want to dig deeper, buy the book and read Learning Apex Programming for a simple step-by-step guide to Apex, the language for extension of the Salesforce1 Platform.

Resources for Article:

Further resources on this subject:
Learning to Fly with Force.com [article]
Building, Publishing, and Supporting Your Force.com Application [article]
Adding a Geolocation Trigger to the Salesforce Account Object [article]
.NET Generics 4.0: Container Patterns and Best Practices

Packt
24 Jan 2012
6 min read
(For more resources on .NET, see here.)

Generic container patterns

There are several generic containers, such as List<T>, Dictionary<TKey,TValue>, and so on. Now, let's take a look at some of the patterns involving these generic containers that show up most often in code.

How these are organized

Each pattern discussed in this article has a few sections. First is the title, which is written against the pattern sequence number; for example, the title for Pattern 1 is One-to-one mapping. The Pattern interface section denotes the interface implementation of the pattern, so anything that conforms to that interface is a concrete implementation of that pattern. For example, Dictionary<TKey,TValue> is a concrete implementation of IDictionary<TKey,TValue>. The Example usages section shows some implementations where TKey and TValue are replaced with real data types, such as string or int. The last section, as the name suggests, showcases some situations where the pattern can be used.

Pattern 1: One-to-one mapping

One-to-one mapping maps one element to another.

Pattern interface

The following is an interface implementation of this pattern:

IDictionary<TKey,TValue>

Some concrete implementations

Some concrete implementations of this pattern are as follows:

Dictionary<TKey,TValue>
SortedDictionary<TKey,TValue>
SortedList<TKey,TValue>

Example usages

The following are examples where TKey and TValue are replaced with real data types such as string or int:

Dictionary<string,int>
SortedDictionary<int,string>
SortedList<string,string>
Dictionary<string,IClass>

Some situations where this pattern can be used

One-to-one mapping can be used in the following situations:

Mapping some class objects with a string ID
Converting an enum to a string
General conversion between types
Find-and-replace algorithms, where the find and replace strings become key and value pairs
Implementing a state machine, where each state has a description, which becomes the key, and the concrete implementation of the IState interface becomes the value in a structure such as Dictionary<string,IState>

Pattern 2: One-to-many unique value mapping

One-to-many unique value mapping maps one element to a set of unique values.

Pattern interface

The following is an interface implementation of this pattern:

IDictionary<TKey,ISet<TValue>>

Some concrete implementations

Some concrete implementations of this pattern are as follows:

Dictionary<TKey,HashSet<TValue>>
SortedDictionary<TKey,HashSet<TValue>>
SortedList<TKey,SortedSet<TValue>>
Dictionary<TKey,SortedSet<TValue>>

Example usages

The following are examples where TKey and TValue are replaced with real data types such as string or int:

Dictionary<int,HashSet<string>>
SortedDictionary<string,HashSet<int>>
Dictionary<string,SortedSet<int>>

Some situations where this pattern can be used

One-to-many unique value mapping can be used in the following situations:

Mapping all the anagrams of a given word
Creating a spell checker, where all spelling mistakes can be pre-calculated and stored as unique values

Pattern 3: One-to-many value mapping

One-to-many value mapping maps an element to a list of values, which might contain duplicates.
Pattern interface

The following are the interface implementations of this pattern:

IDictionary<TKey,ICollection<TValue>>
IDictionary<TKey,IList<TValue>>

Some concrete implementations

Some concrete implementations of this pattern are as follows:

Dictionary<TKey,List<TValue>>
SortedDictionary<TKey,Queue<TValue>>
SortedList<TKey,Stack<TValue>>
Dictionary<TKey,LinkedList<TValue>>

Example usages

The following are examples where TKey and TValue are replaced with real data types such as string or int:

Dictionary<string,List<DateTime>>
SortedDictionary<string,Queue<int>>
SortedList<int,Stack<float>>
Dictionary<string,LinkedList<int>>

Some situations where this pattern can be used

One-to-many value mapping can be used in the following situations:

Mapping all the grades obtained by a student. The ID of the student can be the key, and the grades obtained in each subject (which may contain duplicates) can be stored as the values in a list.
Tracking all the followers of a Twitter account. The user ID for the account will be the key, and all follower IDs can be stored as values in a list.
Scheduling all the appointments for a patient, whose user ID will serve as the key.

Pattern 4: Many-to-many mapping

Many-to-many mapping maps many elements of a group to many elements in other groups. Both sides can have duplicate entries.

Pattern interface

The following are the interface implementations of this pattern:

IEnumerable<Tuple<T1,T2,..,ISet<TResult>>>
IEnumerable<Tuple<T1,T2,..,ICollection<TResult>>>

Some concrete implementations

A concrete implementation of this pattern is as follows:

IList<Tuple<T1,T2,T3,HashSet<TResult>>>

Example usages

The following are examples where the type parameters are replaced with real data types such as string or int:

List<Tuple<string,int,int,int>>
List<Tuple<string,int,int,int,HashSet<float>>>

Some situations where this pattern can be used

Many-to-many mapping can be used in the following situations:

If many independent values can be mapped to a set of values, then these patterns should be used. ISet<T> implementations don't allow duplicates, while ICollection<T> implementations, such as IList<T>, do.
Imagine a company wants to give a pay hike to its employees based on certain conditions. In this situation, the parameters for the conditions can be the independent variables of the Tuples, and the IDs of employees eligible for the hike can be stored in an ISet<T> implementation.

For concurrency support, replace non-concurrent implementations with their concurrent cousins. For example, replace Dictionary<TKey,TValue> with ConcurrentDictionary<TKey,TValue>.
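As a short illustration of Pattern 2 (a sketch of our own, not from the article), the anagram idea maps a canonical sorted-letter key to the set of words sharing it:

using System;
using System.Collections.Generic;
using System.Linq;

class AnagramDemo
{
    static void Main()
    {
        // Pattern 2: one key maps to a set of unique words
        var anagrams = new Dictionary<string, HashSet<string>>();
        foreach (var word in new[] { "listen", "silent", "enlist", "google" })
        {
            // "listen" -> "eilnst": all anagrams share the same sorted key
            var key = new string(word.OrderBy(c => c).ToArray());
            if (!anagrams.ContainsKey(key))
                anagrams[key] = new HashSet<string>();
            anagrams[key].Add(word);
        }
        foreach (var pair in anagrams)
            Console.WriteLine("{0}: {1}", pair.Key, string.Join(", ", pair.Value));
    }
}

Looking up all anagrams of a word is then a single dictionary access on its sorted key.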
HTML5 APIs

Packt
03 Nov 2015
6 min read
In this article by Dmitry Sheiko, author of the book JavaScript Unlocked, we will create our first web component. (For more resources related to this topic, see here.)

Creating the first web component

You might be familiar with the HTML5 video element (http://www.w3.org/TR/html5/embedded-content-0.html#the-video-element). By placing a single element in your HTML, you get a widget that runs a video. This element accepts a number of attributes to set up the player. If you want to enhance this, you can use its public API and subscribe listeners to its events (http://www.w3.org/2010/05/video/mediaevents.html). So, we reuse this element whenever we need a player and only customize it for project-relevant look and feel. If only we had enough of these elements to pick from every time we needed a widget on a page. However, it is not feasible for the HTML specification to include every widget we may ever need. What is there, though, is an API to create custom elements such as video ourselves. We can really define an element, package the compounds (JavaScript, HTML, CSS, images, and so on), and then just link it from the consuming HTML. In other words, we can create an independent and reusable web component, which we then use by placing the corresponding custom element (<my-widget />) in our HTML. We can restyle the element, and if needed, we can utilize the element API and events. For example, if you need a date picker, you can take an existing web component, let's say the one available at http://component.kitchen/components/x-tag/datepicker. All that we have to do is download the component sources (for example, using the Bower package manager) and link to the component from our HTML code:

<link rel="import" href="bower_components/x-tag-datepicker/src/datepicker.js">

Then declare the component in the HTML code:

<x-datepicker name="2012-02-02"></x-datepicker>

This is supposed to go smoothly in the latest versions of Chrome, but it probably won't work in other browsers. Running a web component requires a number of new technologies to be unlocked in a client browser, such as Custom Elements, HTML Imports, Shadow DOM, and templates (including JavaScript templates). The Custom Elements API allows us to define new HTML elements, their behavior, and their properties. The Shadow DOM encapsulates the DOM subtree required by a custom element. Support for HTML Imports means that, given a link, the user agent enables a web component by including its HTML on the page. We can use a polyfill (http://webcomponents.org/) to ensure support for all of the required technologies in all the major browsers:

<script src="./bower_components/webcomponentsjs/webcomponents.min.js"></script>

Do you fancy writing your own web components? Let's do it. Our component acts similarly to HTML's details/summary: when one clicks on the summary, the details show up.
So we create x-details.html, where we put the component styles and the JavaScript with the component API:

x-details.html

<style>
.x-details-summary {
    font-weight: bold;
    cursor: pointer;
}
.x-details-details {
    transition: opacity 0.2s ease-in-out, transform 0.2s ease-in-out;
    transform-origin: top left;
}
.x-details-hidden {
    opacity: 0;
    transform: scaleY(0);
}
</style>
<script>
"use strict";
/**
 * Object constructor representing x-details element
 * @param {Node} el
 */
var DetailsView = function( el ){
        this.el = el;
        this.initialize();
    },
    // Creates an object based on the HTML Element prototype
    element = Object.create( HTMLElement.prototype );

/** @lend DetailsView.prototype */
Object.assign( DetailsView.prototype, {
    /**
     * @constructs DetailsView
     */
    initialize: function(){
        this.summary = this.renderSummary();
        this.details = this.renderDetails();
        this.summary.addEventListener( "click", this.onClick.bind( this ), false );
        this.el.textContent = "";
        this.el.appendChild( this.summary );
        this.el.appendChild( this.details );
    },
    /**
     * Render summary element
     */
    renderSummary: function(){
        var div = document.createElement( "a" );
        div.className = "x-details-summary";
        div.textContent = this.el.dataset.summary;
        return div;
    },
    /**
     * Render details element
     */
    renderDetails: function(){
        var div = document.createElement( "div" );
        div.className = "x-details-details x-details-hidden";
        div.textContent = this.el.textContent;
        return div;
    },
    /**
     * Handle summary on click
     * @param {Event} e
     */
    onClick: function( e ){
        e.preventDefault();
        if ( this.details.classList.contains( "x-details-hidden" ) ) {
            return this.open();
        }
        this.close();
    },
    /**
     * Open details
     */
    open: function(){
        this.details.classList.toggle( "x-details-hidden", false );
    },
    /**
     * Close details
     */
    close: function(){
        this.details.classList.toggle( "x-details-hidden", true );
    }
});

// Fires when an instance of the element is created
element.createdCallback = function() {
    this.detailsView = new DetailsView( this );
};
// Expose method open
element.open = function(){
    this.detailsView.open();
};
// Expose method close
element.close = function(){
    this.detailsView.close();
};
// Register the custom element
document.registerElement( "x-details", {
    prototype: element
});
</script>

Further on in the JavaScript code, we create an element based on a generic HTML element (Object.create( HTMLElement.prototype )). Here we could inherit from a complex element (for example, video) if needed. We register an x-details custom element using the one created earlier as the prototype. With element.createdCallback, we subscribe a handler that will be called when an instance of the custom element is created. Here we attach our view to the element to enhance it with the functionality that we intend for it. Now we can use the component in HTML, as follows:

<!DOCTYPE html>
<html>
<head>
<title>X-DETAILS</title>
<!-- Importing Web Component's Polyfill -->
<!-- uncomment for non-Chrome browsers
script src="./bower_components/webcomponentsjs/webcomponents.min.js"></script-->
<!-- Importing Custom Elements -->
<link rel="import" href="./x-details.html">
</head>
<body>
<x-details data-summary="Click me">
Nunc iaculis ac erat eu porttitor. Curabitur facilisis ligula et urna egestas mollis. Aliquam eget consequat tellus. Sed ullamcorper ante est. In tortor lectus, ultrices vel ipsum eget, ultricies facilisis nisl. Suspendisse porttitor blandit arcu et imperdiet.
</x-details>
</body>
</html>
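Since the component exposes open() and close() on the element itself, page code can drive it programmatically. Here is a quick sketch of our own (not from the book), to be placed after the element in the consuming page:

<script>
// Grab the upgraded custom element and exercise its public API
var details = document.querySelector( "x-details" );
details.open();                                    // reveal the details
setTimeout(function(){ details.close(); }, 2000);  // hide them again after 2s
</script>

Note that the element must already be upgraded (that is, the import loaded and registerElement run) before these methods exist on it.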
The assets required to render such elements (HTML, CSS, JavaScript, and images) are bundled as web components. So, we can now literally build the Web from components, similar to how buildings are made from bricks. Resources for Article: Further resources on this subject: An Introduction to Kibana [article] Working On Your Bot [article] Icons [article]

The Essentials of Working with Python Collections

Packt
09 Jul 2015
14 min read
In this article by Steven F. Lott, the author of the book Python Essentials, we'll look at the break and continue statements; these modify a for or while loop to allow skipping items or exiting before the loop has processed all items. This is a fundamental change in the semantics of a collection-processing statement. (For more resources related to this topic, see here.)

Processing collections with the for statement

The for statement is an extremely versatile way to process every item in a collection. We do this by defining a target variable, a source of items, and a suite of statements. The for statement will iterate through the source of items, assigning each item to the target variable and executing the suite of statements. All of the collections in Python provide the necessary methods, which means that we can use anything as the source of items in a for statement. Here's some sample data that we'll work with. This is part of Mike Keith's poem, Near a Raven. We'll remove the punctuation to make the text easier to work with:

>>> text = '''Poe, E.
...     Near a Raven
...
... Midnights so dreary, tired and weary.'''
>>> text = text.replace(",","").replace(".","").lower()

The first statement puts the original text, with its mixed case and punctuation, into the text variable; the second strips the punctuation and folds everything to lowercase. When we use text.split(), we get a sequence of individual words. The for loop can iterate through this sequence of words so that we can process each one. The syntax looks like this:

>>> cadaeic= {}
>>> for word in text.split():
...     cadaeic[word]= len(word)

We've created an empty dictionary, and assigned it to the cadaeic variable. The expression in the for loop, text.split(), will create a sequence of substrings. Each of these substrings will be assigned to the word variable. The for loop body (a single assignment statement) will be executed once for each value assigned to word. The resulting dictionary might look like this (irrespective of ordering):

{'raven': 5, 'midnights': 9, 'dreary': 6, 'e': 1, 'weary': 5, 'near': 4, 'a': 1, 'poe': 3, 'and': 3, 'so': 2, 'tired': 5}

There's no guaranteed order for mappings or sets, so your results may differ slightly. In addition to iterating over a sequence, we can also iterate over the keys in a dictionary:

>>> for word in sorted(cadaeic):
...   print(word, cadaeic[word])

When we use sorted() on a tuple or a list, an interim list is created with the sorted items. When we apply sorted() to a mapping, the sorting applies to the keys of the mapping, creating a sequence of sorted keys. This loop will print the various pilish words used in this poem in alphabetical order. Pilish is a subset of English in which the word lengths are significant: they spell out the digits of pi, which makes such texts useful as mnemonic aids.

A for statement corresponds to the "for all" logical quantifier, ∀: at the end of a simple for loop we can assert that all items in the source collection have been processed. In order to build the "there exists" quantifier, ∃, we can either use the while statement, or the break statement inside the body of a for statement.

Using literal lists in a for statement

We can apply the for statement to a sequence of literal values. One of the most common ways to present literals is as a tuple. It might look like this:

for scheme in 'http', 'https', 'ftp':
   do_something(scheme)

This will assign three different values to the scheme variable. For each of those values, it will evaluate the do_something() function. From this, we can see that, strictly speaking, the () are not required to delimit a tuple object; a short sketch follows.
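Here is a brief runnable sketch of both forms; the handle() function is a hypothetical stand-in for whatever per-item processing is needed:

# handle() is a hypothetical stand-in for real per-item processing.
def handle(scheme):
    print("handling", scheme)

# Parentheses are optional for a one-line tuple of literals:
for scheme in 'http', 'https', 'ftp':
    handle(scheme)

# Adding () makes the tuple explicit and lets it span lines:
for scheme in ('http', 'https', 'ftp',
               'sftp', 'file'):
    handle(scheme)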
Adding the parentheses in this way matters once the sequence of values grows: if we need to span more than one physical line, the () make the tuple literal explicit, as the second form in the preceding sketch shows.

Using the range() and enumerate() functions

The range() object provides a sequence of numbers and is often used in a for loop. The range() object is iterable, but it's not a materialized list: it produces its items lazily, when required. If we use range() outside a for statement, we need to use a function like list(range(x)) or tuple(range(a,b)) to consume all of the generated values and create a new sequence object. The range() object has three commonly-used forms:

range(n) produces ascending numbers including 0 but not including n itself. This is a half-open interval: we could say that range(n) produces numbers, x, such that 0 <= x < n. The expression list(range(5)) returns [0, 1, 2, 3, 4]. This produces n values, from 0 through n - 1.

range(a,b) produces ascending numbers starting from a but not including b, that is, a <= x < b. The expression tuple(range(-1,3)) will return (-1, 0, 1, 2). This produces b - a values, from a through b - 1.

range(x,y,z) produces ascending numbers in the sequence x, x+z, x+2z, and so on, stopping before y. This produces (y-x)//z values.

We can use the range() object like this:

for n in range(1, 21):
   status= str(n)
   if n % 5 == 0: status += " fizz"
   if n % 7 == 0: status += " buzz"
   print(status)

In this example, we've used a range() object to produce values, n, such that 1 <= n < 21. We can also use the range() object to generate the index values for all items in a list:

for n in range(len(some_list)):
   print(n, some_list[n])

We've used the range() function to generate values between 0 and the length of the sequence object named some_list.

The for statement allows multiple target variables. The rules for multiple target variables are the same as for a multiple variable assignment statement: a sequence object will be decomposed and items assigned to each variable. Because of that, we can leverage the enumerate() function to iterate through a sequence and assign the index values at the same time. It looks like this:

for n, v in enumerate(some_list):
     print(n, v)

The enumerate() function iterates through the items in the source sequence and yields a sequence of two-tuple pairs with the index and the item. Since we've provided two variables, each two-tuple is decomposed and assigned to the variables. There are numerous use cases for this multiple-assignment for loop. We often have list-of-tuples data structures that can be handled very neatly with this multiple-assignment feature.

Iterating with the while statement

The while statement is a more general iteration than the for statement. We'll use a while loop in two situations. We'll use it in cases where we don't have a finite collection to impose an upper bound on the loop's iteration; we may suggest an upper bound in the while clause itself. We'll also use it when writing a "search" or "there exists" kind of loop; we aren't processing all items in a collection. A desktop application that accepts input from a user, for example, will often have a while loop. The application runs until the user decides to quit; there's no upper bound on the number of user interactions. For this, we generally use a while True: loop; a deliberately unbounded loop is appropriate here, and a minimal sketch of the pattern follows.
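This sketch assumes a process() function that returns True when the user asks to quit; both process() and the quit command are placeholders:

# A sketch of an unbounded interactive loop. process() is hypothetical;
# here it returns True when the user enters a quit command.
def process(command):
    print("processing:", command)
    return command.lower().startswith("quit")

while True:
    command = input("prompt> ")
    if process(command):
        break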
If we want to write a character-mode user interface, we could do it like this: quit_received= False while not quit_received:    command= input("prompt> ")    quit_received= process(command) This will iterate until the quit_received variable is set to True. This will process indefinitely; there's no upper boundary on the number of iterations. This process() function might use some kind of command processing. This should include a statement like this: if command.lower().startswith("quit"): return True When the user enters "quit", the process() function will return True. This will be assigned to the quit_received variable. The while expression, not quit_received, will become False, and the loop ends. A "there exists" loop will iterate through a collection, stopping at the first item that meets certain criteria. This can look complex because we're forced to make two details of loop processing explicit. Here's an example of searching for the first value that meets a condition. This example assumes that we have a function, condition(), which will eventually be True for some number. Here's how we can use a while statement to locate the minimum for which this function is True: >>> n = 1 >>> while n != 101 and not condition(n): ...     n += 1 >>> assert n == 101 or condition(n) The while statement will terminate when n == 101 or the condition(n) is True. If this expression is False, we can advance the n variable to the next value in the sequence of values. Since we're iterating through the values in order from the smallest to the largest, we know that n will be the smallest value for which the condition() function is true. At the end of the while statement we have included a formal assertion that either n is 101 or the condition() function is True for the given value of n. Writing an assertion like this can help in design as well as debugging because it will often summarize the loop invariant condition. We can also write this kind of loop using the break statement in a for loop, something we'll look at in the next section. The continue and break statements The continue statement is helpful for skipping items without writing deeply-nested if statements. The effect of executing a continue statement is to skip the rest of the loop's suite. In a for loop, this means that the next item will be taken from the source iterable. In a while loop, this must be used carefully to avoid an otherwise infinite iteration. We might see file processing that looks like this: for line in some_file:    clean = line.strip()    if len(clean) == 0:        continue    data, _, _ = clean.partition("#")    data = data.rstrip()    if len(data) == 0:        continue    process(data) In this loop, we're relying on the way files act like sequences of individual lines. For each line in the file, we've stripped whitespace from the input line, and assigned the resulting string to the clean variable. If the length of this string is zero, the line was entirely whitespace, and we'll continue the loop with the next line. The continue statement skips the remaining statements in the body of the loop. We'll partition the line into three pieces: a portion in front of any "#", the "#" (if present), and the portion after any "#". We've assigned the "#" character and any text after the "#" character to the same easily-ignored variable, _, because we don't have any use for these two results of the partition() method. We can then strip any trailing whitespace from the string assigned to the data variable. 
If the resulting string has a length of zero, then the line held nothing in front of the "#": it is entirely a comment. Since there's no useful data, we can continue the loop, ignoring this line of input. If the line passes the two if conditions, we can process the resulting data. By using the continue statement, we have avoided complex-looking, deeply nested if statements. It's important to note that a continue statement must always be part of the suite inside an if statement, inside a for or while loop. The condition on that if statement becomes a filter condition that applies to the collection of data being processed. continue always applies to the innermost loop.

Breaking early from a loop

The break statement is a profound change in the semantics of the loop. An ordinary for statement can be summarized by "for all." We can comfortably say that "for all items in a collection, the suite of statements was processed." When we use a break statement, a loop is no longer summarized by "for all." We need to change our perspective to "there exists." A break statement asserts that at least one item in the collection matches the condition that leads to the execution of the break statement. Here's a simple example of a break statement:

for n in range(1, 100):
   factors = []
   for x in range(1,n):
       if n % x == 0: factors.append(x)
   if sum(factors) == n:
       break

We've written a loop that is bound by 1 <= n < 100. This loop includes a break statement, so it will not process all values of n. Instead, it will determine the smallest value of n for which n is equal to the sum of its factors. Since the loop doesn't examine all values, it shows that at least one such number exists within the given range. We've used a nested loop to determine the factors of the number n. This nested loop creates a sequence, factors, of all values of x in the range 1 <= x < n such that x is a factor of the number n. This inner loop doesn't have a break statement, so we are sure it examines all values in the given range. The least value for which this is true is the number six. It's important to note that a break statement must always be part of the suite inside an if statement inside a for or while loop. If the break isn't in an if suite, the loop will always terminate while processing the first item. The condition on that if statement becomes the "there exists" condition that summarizes the loop as a whole. Clearly, multiple if statements with multiple break statements mean that the overall loop can have a potentially confusing and difficult-to-summarize post-condition.

Using the else clause on a loop

Python's else clause can be used on a for or while statement as well as on an if statement. The else clause executes after the loop body if no break statement was executed. To see this, here's a contrived example:

>>> for item in 1,2,3:
...     print(item)
...     if item == 2:
...         print("Found",item)
...         break
... else:
...     print("Found Nothing")

The for statement here will iterate over a short list of literal values. When a specific target value has been found, a message is printed. Then, the break statement will end the loop, avoiding the else clause. When we run this, we'll see three lines of output, like this:

1
2
Found 2

The value of three isn't shown, nor is the "Found Nothing" message in the else clause. If we change the target value in the if statement from two to a value that won't be seen (for example, zero or four), then the output will change.
If the break statement is not executed, then the else clause will be executed. The idea here is to allow us to write contrasting break and non-break suites of statements. An if statement suite that includes a break statement can do some processing before the break statement ends the loop. An else clause allows some processing at the end of the loop when none of the break-related suites were executed.

Summary

In this article, we've looked at the for statement, which is the primary way we'll process the individual items in a collection. A simple for statement assures us that our processing has been done for all items in the collection. We've also looked at the general-purpose while loop.

Resources for Article: Further resources on this subject: Introspecting Maya, Python, and PyMEL [article] Analyzing a Complex Dataset [article] Geo-Spatial Data in Python: Working with Geometry [article]

Listening Out

Packt
17 Aug 2015
14 min read
In this article by Mat Johns, author of the book Getting Started with Hazelcast - Second Edition, we will learn about the following topics: creating and using collection listeners; instance, lifecycle, and cluster membership listeners; the partition migration listener; and quorum functions and listeners. (For more resources related to this topic, see here.)

Listening to the goings-on

One great feature of Hazelcast is its ability to notify us of the goings-on of our persisted data and the cluster as a whole, allowing us to register an interest in events. The listener concept is borrowed from Java, so you should feel pretty familiar with it. To provide this, there are a number of listener interfaces that we can implement to receive, process, and handle different types of events (one of which we previously encountered). The following are the listener interfaces:

Collection listeners:
EntryListener is used for map-based (IMap and MultiMap) events
ItemListener is used for flat collection-based (IList, ISet, and IQueue) events
MessageListener is used to receive topic events, but as we've seen before, it is used as a part of the standard operation of topics
QuorumListener is used for quorum state change events

Cluster listeners:
DistributedObjectListener is used for collection creation and destruction events
MembershipListener is used for cluster membership events
LifecycleListener is used for local node state events
MigrationListener is used for partition migration state events

The sound of our own data

Being notified about data changes can be rather useful, as we can make an application-level decision regarding whether the change is important or not and react accordingly. The first interface that we are going to look at is EntryListener. This class will notify us when changes are made to the entries stored in a map collection. If we take a look at the interface, we can see four entry event types and two map-wide events that we will be notified about. EntryListener has also been broken up into a number of individual super MapListener interfaces, so should we be interested in only a subset of event types, we can implement the appropriate super interfaces as required. Let's take a look at the following code:

void entryAdded(EntryEvent<K, V> event);
void entryRemoved(EntryEvent<K, V> event);
void entryUpdated(EntryEvent<K, V> event);
void entryEvicted(EntryEvent<K, V> event);
void mapCleared(MapEvent event);
void mapEvicted(MapEvent event);

Hopefully, the first three are pretty self-explanatory. However, the fourth is a little less clear and, in fact, one of the most useful. The entryEvicted method is invoked when an entry is removed from a map non-programmatically (that is, Hazelcast has done it all by itself). This will occur in one of the following two scenarios:

An entry's TTL has been reached and the entry has been expired
The map size, according to the configured policy, has been reached, and the appropriate eviction policy has kicked in to clear out space in the map

The first scenario allows us a capability that is very rarely found in data sources: to have our application be told when a time-bound record has expired, with the ability to trigger some behavior based on it. For example, we can use it to automatically trigger a teardown operation if an entry is not correctly maintained by a user's interactions. This will allow us to generate an event based on the absence of activity, which is rather useful! The short sketch after this paragraph shows how to plant such a time-bound entry.
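A minimal sketch, shown under the assumption of a freshly started node (the map and key names are illustrative): storing an entry with a per-entry TTL means Hazelcast itself will remove it later, firing entryEvicted() on any registered listener:

import java.util.concurrent.TimeUnit;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class TtlEvictionExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> capitals = hz.getMap("capitals");
        // Hazelcast will expire this entry by itself after 10 seconds,
        // triggering entryEvicted() on any registered EntryListener.
        capitals.put("DE", "Berlin", 10, TimeUnit.SECONDS);
    }
}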
Let's create an example MapEntryListener class to illustrate the various events firing off:

public class MapEntryListener implements EntryListener<String, String> {

  @Override
  public void entryAdded(EntryEvent<String, String> event) {
    System.err.println("Added: " + event);
  }

  @Override
  public void entryRemoved(EntryEvent<String, String> event) {
    System.err.println("Removed: " + event);
  }

  @Override
  public void entryUpdated(EntryEvent<String, String> event) {
    System.err.println("Updated: " + event);
  }

  @Override
  public void entryEvicted(EntryEvent<String, String> event) {
    System.err.println("Evicted: " + event);
  }

  @Override
  public void mapCleared(MapEvent event) {
    System.err.println("Map Cleared: " + event);
  }

  @Override
  public void mapEvicted(MapEvent event) {
    System.err.println("Map Evicted: " + event);
  }
}

We shall see the various events firing off as expected, with a short 10-second wait for the Berlin entry to expire, which will trigger the eviction event, as follows:

Added: EntryEvent {c:capitals} key=GB, oldValue=null, value=Winchester, event=ADDED, by Member [127.0.0.1]:5701 this
Updated: EntryEvent {c:capitals} key=GB, oldValue=Winchester, value=London, event=UPDATED, by Member [127.0.0.1]:5701 this
Added: EntryEvent {c:capitals} key=DE, oldValue=null, value=Berlin, event=ADDED, by Member [127.0.0.1]:5701 this
Removed: EntryEvent {c:capitals} key=GB, oldValue=null, value=London, event=REMOVED, by Member [127.0.0.1]:5701 this
Evicted: EntryEvent {c:capitals} key=DE, oldValue=null, value=Berlin, event=EVICTED, by Member [127.0.0.1]:5701 this

We can obviously implement the interface as extensively as possible to service our application, potentially creating no-op stubs should we wish not to handle a particular type of event.

Continuously querying

The previous example focuses on notifying us of all the entry events. However, what if we were only interested in some particular data? We could obviously filter within our listener to handle only the entries that we are interested in. However, it is potentially expensive to have all the events flying around the cluster, especially if our interest lies only in a minority of the potential data. To address this, we can combine capabilities from the map-searching features that we looked at a while back. When we register the entry listener to a collection, we can optionally provide a search Predicate that is used as a filter within Hazelcast itself. We can whittle events down to relevant data before they even reach our listener, as follows:

IMap<String, String> capitals = hz.getMap("capitals");
capitals.addEntryListener(new MapEntryListener(),
    new SqlPredicate("name = 'London'"), true);

Listeners racing into action

One issue with the previous example is that we retrospectively reconfigured the map to feature the listener after it was already in service. To avoid this race condition, we should wire up the listener before the node enters service. We can do this by registering the listener within the map configuration, as follows:

<hazelcast>
  <map name="default">
    <entry-listeners>
      <entry-listener include-value="true">
        com.packtpub.hazelcast.listeners.MapEntryListener
      </entry-listener>
    </entry-listeners>
  </map>
</hazelcast>

However, in both methods of configuration, we have provided a Boolean flag when registering the listener to the map. This include-value flag allows us to configure whether the listener is invoked with just the key of the event entry or with the entry's value as well.
The default behavior (true) is to include the value, but if the use case does not require it, there is a performance benefit in not having to provide it to the listener, so it is beneficial to set this flag to false in that case.

Keyless collections

Though the keyless collections (set, list, and queue) are very similar to map collections, they feature their own interface to define the available events, in this case, ItemListener. It is not as extensive as its map counterpart, featuring just the itemAdded and itemRemoved events, and can be used in the same way, though it only offers visibility of these two event types.

Programmatic configuration ahead of time

So far, most of the extra configurations that we applied have been done either by customizing the hazelcast.xml file, or by retrospectively modifying a collection in the code. However, what if we want to programmatically configure Hazelcast without the race condition that we discovered earlier? Fortunately, there is a way. By creating an instance of the Config class, we can configure the appropriate behavior on it using a hierarchy that is similar to the XML configuration, but in code. The previous example can be reconfigured to pass this configuration object over to the instance creation method, as follows:

public static void main(String[] args) {
  Config conf = new Config();
  conf.addListenerConfig(
      new EntryListenerConfig(new MapEntryListener(), false, true));
  HazelcastInstance hz = Hazelcast.newHazelcastInstance(conf);
}

Events unfolding in the wider world

Now that we can determine what is going on with our data within the cluster, we may wish to have a higher degree of visibility of the state of the cluster itself. We can use this either to trigger application-level responses to cluster instability, or to provide mechanisms that enable graceful scaling. We are provided with a number of interfaces for different types of cluster activity. All of these listeners can be configured retrospectively, as we have seen in the previous examples. However, in production, it is better to configure them in advance, for the same race-condition reasons as with the collection listeners. We can do this either by using the hazelcast.xml configuration, or by using the Config class, as follows:

<hazelcast>
  <listeners>
    <listener>com.packtpub.hazelcast.MyClusterListener</listener>
  </listeners>
</hazelcast>

The first of these, DistributedObjectListener, simply notifies all the nodes in the cluster as to the collection objects that are being created or destroyed. Again, let's create a new example listener, ClusterObjectListener, to receive events, as follows:

public class ClusterObjectListener implements DistributedObjectListener {

  @Override
  public void distributedObjectCreated(DistributedObjectEvent event) {
    System.err.println("Created: " + event);
  }

  @Override
  public void distributedObjectDestroyed(DistributedObjectEvent event) {
    System.err.println("Destroyed: " + event);
  }
}

As these listeners are for cluster-wide events, the example usage of this listener is rather simple.
It mainly creates an instance with the appropriate listener registered, as follows:

public class ClusterListeningExample {
  public static void main(String[] args) {
    Config config = new Config();
    config.addListenerConfig(
        new ListenerConfig(new ClusterObjectListener()));
    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
  }
}

When using the TestApp console, we can create and destroy some collections, as follows:

hazelcast[default] > ns test
namespace: test
hazelcast[test] > m.put foo bar
null
hazelcast[test] > m.destroy
Destroyed!

The preceding code will produce the following, logging on ALL the nodes that feature the listener:

Created: DistributedObjectEvent{eventType=CREATED, serviceName='hz:impl:mapService', distributedObject=IMap{name='test'}}
Destroyed: DistributedObjectEvent{eventType=DESTROYED, serviceName='hz:impl:mapService', distributedObject=IMap{name='test'}}

The next type of cluster listener is MembershipListener, which notifies all the nodes as to the joining or leaving of a node from the cluster. Let's create another example class, this time ClusterMembershipListener, as follows:

public class ClusterMembershipListener implements MembershipListener {

  @Override
  public void memberAdded(MembershipEvent membershipEvent) {
    System.err.println("Added: " + membershipEvent);
  }

  @Override
  public void memberRemoved(MembershipEvent membershipEvent) {
    System.err.println("Removed: " + membershipEvent);
  }

  @Override
  public void memberAttributeChanged(MemberAttributeEvent memberAttributeEvent) {
    System.err.println("Changed: " + memberAttributeEvent);
  }
}

Let's add the following code to the previous example application:

conf.addListenerConfig(new ListenerConfig(new ClusterMembershipListener()));

Lastly, we have LifecycleListener, which is local to an individual node and allows the application built on top of Hazelcast to understand its particular node state by being notified as it changes when starting, pausing, resuming, or even shutting down, as follows:

public class NodeLifecycleListener implements LifecycleListener {

  @Override
  public void stateChanged(LifecycleEvent event) {
    System.err.println(event);
  }
}

Moving data around the place

The final listener is very useful as it lets an application know when Hazelcast is rebalancing the data within the cluster. This gives us an opportunity to prevent or even block the shutdown of a node, as we might be in a period of increased data resilience risk because we may be actively moving data around at the time. The interface used in this case is MigrationListener. It will notify the application when the partitions migrate from one node to another and when they complete:

public class ClusterMigrationListener implements MigrationListener {

  @Override
  public void migrationStarted(MigrationEvent migrationEvent) {
    System.err.println("Started: " + migrationEvent);
  }

  @Override
  public void migrationCompleted(MigrationEvent migrationEvent) {
    System.err.println("Completed: " + migrationEvent);
  }

  @Override
  public void migrationFailed(MigrationEvent migrationEvent) {
    System.err.println("Failed: " + migrationEvent);
  }
}

When you are registering this cluster listener in your example application and creating and destroying various nodes, you will see a deluge of events that show the ongoing migrations. The more astute among you may have previously spotted a repartitioning task logging when spinning up the multiple nodes:

INFO: [127.0.0.1]:5701 [dev] [3.5] Re-partitioning cluster data...
Migration queue size: 271

The previous code indicates that 271 tasks (one migration task for each partition being migrated) have been scheduled to rebalance the cluster. The new listener will now give us significantly more visibility of these events as they occur and, hopefully, complete successfully:

Started: MigrationEvent{partitionId=98, oldOwner=Member [127.0.0.1]:5701, newOwner=Member [127.0.0.1]:5702 this}
Completed: MigrationEvent{partitionId=98, oldOwner=Member [127.0.0.1]:5701, newOwner=Member [127.0.0.1]:5702 this}
Started: MigrationEvent{partitionId=99, oldOwner=Member [127.0.0.1]:5701, newOwner=Member [127.0.0.1]:5702 this}
Completed: MigrationEvent{partitionId=99, oldOwner=Member [127.0.0.1]:5701, newOwner=Member [127.0.0.1]:5702 this}

However, this logging information is overwhelming and actually not all that useful to us. So, let's expand on the listener to try and provide the application with the ability to check whether the cluster is currently migrating data partitions or has recently done so. Let's create a new static class, MigrationStatus, to hold information about cluster migration and help us interrogate its current status:

public abstract class MigrationStatus {

  private static final Map<Integer, Boolean> MIGRATION_STATE =
      new ConcurrentHashMap<Integer, Boolean>();

  private static final AtomicLong LAST_MIGRATION_TIME =
      new AtomicLong(System.currentTimeMillis());

  public static void migrationEvent(int partitionId, boolean state) {
    MIGRATION_STATE.put(partitionId, state);
    if (!state) {
      LAST_MIGRATION_TIME.set(System.currentTimeMillis());
    }
  }

  public static boolean isMigrating() {
    Collection<Boolean> migrationStates = MIGRATION_STATE.values();
    Long lastMigrationTime = LAST_MIGRATION_TIME.get();

    // did we recently (< 10 seconds ago) complete a migration
    if (System.currentTimeMillis() < lastMigrationTime + 10000) {
      return true;
    }

    // are any partitions currently migrating
    for (Boolean partition : migrationStates) {
      if (partition) return true;
    }

    // otherwise we're not migrating
    return false;
  }
}

Then, we will update the listener to pass through the appropriate calls in response to the events coming into it, as follows:

@Override
public void migrationStarted(MigrationEvent migrationEvent) {
  MigrationStatus.migrationEvent(migrationEvent.getPartitionId(), true);
}

@Override
public void migrationCompleted(MigrationEvent migrationEvent) {
  MigrationStatus.migrationEvent(migrationEvent.getPartitionId(), false);
}

@Override
public void migrationFailed(MigrationEvent migrationEvent) {
  System.err.println("Failed: " + migrationEvent);
  MigrationStatus.migrationEvent(migrationEvent.getPartitionId(), false);
}

Finally, let's add a loop to the example application to print out the migration state over time, as follows:

public static void main(String[] args) throws Exception {
  Config conf = new Config();
  conf.addListenerConfig(new ListenerConfig(new ClusterMembershipListener()));
  conf.addListenerConfig(new ListenerConfig(new MigrationStatusListener()));
  HazelcastInstance hz = Hazelcast.newHazelcastInstance(conf);

  while (true) {
    System.err.println("Is Migrating?: " + MigrationStatus.isMigrating());
    Thread.sleep(5000);
  }
}

When starting and stopping various nodes, we should see each node detect the rebalance occurring, though it passes by quite quickly. It is in these small, critical periods of time, when data is being moved around, that resilience is most at risk; although, depending on the configured number of backups, the risk could potentially be quite small.
Added: MembershipEvent {member=Member [127.0.0.1]:5703, type=added}
Is Migrating?: true
Is Migrating?: true
Is Migrating?: false

Extending quorum

We previously saw how we can configure a simple cluster health check to ensure that a given number of nodes are present to support the application. However, should we need more detailed control over the quorum definition beyond a simple node count check, we can create our own quorum function that allows us to programmatically define what it means to be healthy. This can be as simple or as complex as the application requires. In the following example, we source an expected cluster size (probably from a more suitable location than a hard-coded value) and dynamically check whether a majority of the nodes are present:

public class QuorumExample {
  public static void main(String[] args) throws Exception {
    QuorumConfig quorumConf = new QuorumConfig();
    quorumConf.setName("atLeastTwoNodesWithMajority");
    quorumConf.setEnabled(true);
    quorumConf.setType(QuorumType.WRITE);

    final int expectedClusterSize = 5;
    quorumConf.setQuorumFunctionImplementation(new QuorumFunction() {
      @Override
      public boolean apply(Collection<Member> members) {
        return members.size() >= 2
            && members.size() > expectedClusterSize / 2;
      }
    });

    MapConfig mapConf = new MapConfig();
    mapConf.setName("default");
    mapConf.setQuorumName("atLeastTwoNodesWithMajority");

    Config conf = new Config();
    conf.addQuorumConfig(quorumConf);
    conf.addMapConfig(mapConf);

    HazelcastInstance hz = Hazelcast.newHazelcastInstance(conf);
    new ConsoleApp(hz).start(args);
  }
}

We can also create a listener for the quorum health check so that we can be notified when the state of the quorum changes, as follows:

public class ClusterQuorumListener implements QuorumListener {
  @Override
  public void onChange(QuorumEvent quorumEvent) {
    System.err.println("Changed: " + quorumEvent);
  }
}

Let's attach the new listener to the appropriate configuration, as follows:

quorumConf.addListenerConfig(new QuorumListenerConfig(new ClusterQuorumListener()));

Summary

Hazelcast allows us to be a first-hand witness to a lot of internal state information. By registering listeners so that they can be notified as events occur, we can further enhance an application not only in terms of its functionality, but also with respect to its resilience. By allowing the application to know when and what events are unfolding underneath it, we can add defensiveness to it, embracing the dynamic and destroyable nature of ephemeral approaches towards applications and infrastructure.

Resources for Article: Further resources on this subject: What is Hazelcast? [Article] Apache Solr and Big Data – integration with MongoDB [Article] Introduction to Apache ZooKeeper [Article]
Handle Web Applications

Packt
20 Oct 2014
13 min read
In this article by Ivo Balbaert author of Dart Cookbook, we will cover the following recipes: Sanitizing HTML Using a browser's local storage Using an application cache to work offline Preventing an onSubmit event from reloading the page (For more resources related to this topic, see here.) Sanitizing HTML We've all heard of (or perhaps even experienced) cross-site scripting (XSS) attacks, where evil minded attackers try to inject client-side script or SQL statements into web pages. This could be done to gain access to session cookies or database data, or to get elevated access-privileges to sensitive page content. To verify an HTML document and produce a new HTML document that preserves only whatever tags are designated safe is called sanitizing the HTML. How to do it... Look at the web project sanitization. Run the following script and see how the text content and default sanitization works: See how the default sanitization works using the following code: var elem1 = new Element.html('<div class="foo">content</div>'); document.body.children.add(elem1); var elem2 = new Element.html('<script class="foo">evil content</script><p>ok?</p>'); document.body.children.add(elem2); The text content and ok? from elem1 and elem2 are displayed, but the console gives the message Removing disallowed element <SCRIPT>. So a script is removed before it can do harm. Sanitize using HtmlEscape, which is mainly used with user-generated content: import 'dart:convert' show HtmlEscape; In main(), use the following code: var unsafe = '<script class="foo">evil   content</script><p>ok?</p>'; var sanitizer = const HtmlEscape(); print(sanitizer.convert(unsafe)); This prints the following output to the console: &lt;script class=&quot;foo&quot;&gt;evil   content&lt;&#x2F;script&gt;&lt;p&gt;ok?&lt;&#x2F;p&gt; Sanitize using node validation. The following code forbids the use of a <p> tag in node1; only <a> tags are allowed: var html_string = '<p class="note">a note aside</p>'; var node1 = new Element.html(        html_string,        validator: new NodeValidatorBuilder()          ..allowElement('a', attributes: ['href'])      ); The console prints the following output: Removing disallowed element <p> Breaking on exception: Bad state: No elements A NullTreeSanitizer for no validation is used as follows: final allHtml = const NullTreeSanitizer(); class NullTreeSanitizer implements NodeTreeSanitizer {      const NullTreeSanitizer();      void sanitizeTree(Node node) {} } It can also be used as follows: var elem3 = new Element.html('<p>a text</p>'); elem3.setInnerHtml(html_string, treeSanitizer: allHtml); How it works... First, we have very good news: Dart automatically sanitizes all methods through which HTML elements are constructed, such as new Element.html(), Element.innerHtml(), and a few others. With them, you can build HTML hardcoded, but also through string interpolation, which entails more risks. The default sanitization removes all scriptable elements and attributes. If you want to escape all characters in a string so that they are transformed into HTML special characters (such as ;&#x2F for a /), use the class HTMLEscape from dart:convert as shown in the second step. The default behavior is to escape apostrophes, greater than/less than, quotes, and slashes. If your application is using untrusted HTML to put in variables, it is strongly advised to use a validation scheme, which only covers the syntax you expect users to feed into your app. 
This is possible because Element.html() has the following optional arguments: Element.html(String html, {NodeValidator validator, NodeTreeSanitizer treeSanitizer}) In step 3, only <a> was an allowed tag. By adding more allowElement rules in cascade, you can allow more tags. Using allowHtml5() permits all HTML5 tags. If you want to remove all control in some cases (perhaps you are dealing with known safe HTML and need to bypass sanitization for performance reasons), you can add the class NullTreeSanitizer to your code, which has no control at all and defines an object allHtml, as shown in step 4. Then, use setInnerHtml() with an optional named attribute treeSanitizer set to allHtml. Using a browser's local storage Local storage (also called the Web Storage API) is widely supported in modern browsers. It enables the application's data to be persisted locally (on the client side) as a map-like structure: a dictionary of key-value string pairs, in fact using JSON strings to store and retrieve data. It provides our application with an offline mode of functioning when the server is not available to store the data in a database. Local storage does not expire, but every application can only access its own data up to a certain limit depending on the browser. In addition, of course, different browsers can't access each other's stores. How to do it... Look at the following example, the local_storage.dart file: import 'dart:html';  Storage local = window.localStorage;  void main() { var job1 = new Job(1, "Web Developer", 6500, "Dart Unlimited") ; Perform the following steps to use the browser's local storage: Write to a local storage with the key Job:1 using the following code: local["Job:${job1.id}"] = job1.toJson; ButtonElement bel = querySelector('#readls'); bel.onClick.listen(readShowData); } A click on the button checks to see whether the key Job:1 can be found in the local storage, and, if so, reads the data in. This is then shown in the data <div>: readShowData(Event e) {    var key = 'Job:1';    if(local.containsKey(key)) { // read data from local storage:    String job = local[key];    querySelector('#data').appendText(job); } }   class Job { int id; String type; int salary; String company; Job(this.id, this.type, this.salary, this.company); String get toJson => '{ "type": "$type", "salary": "$salary", "company": "$company" } '; } The following screenshot depicts how data is stored in and retrieved from a local storage: How it works... You can store data with a certain key in the local storage from the Window class as follows using window.localStorage[key] = data; (both key and data are Strings). You can retrieve it with var data = window.localStorage[key];. In our code, we used the abbreviation Storage local = window.localStorage; because local is a map. You can check the existence of this piece of data in the local storage with containsKey(key); in Chrome (also in other browsers via Developer Tools). You can verify this by navigating to Extra | Tools | Resources | Local Storage (as shown in the previous screenshot), window.localStorage also has a length property; you can query whether it contains something with isEmpty, and you can loop through all stored values using the following code: for(var key in window.localStorage.keys) { String value = window.localStorage[key]; // more code } There's more... 
Local storage can be disabled (by user action, or via an installed plugin or extension), so we must alert the user when this needs to be enabled; we can do this by catching the exception that occurs in this case: try { window.localStorage[key] = data; } on Exception catch (ex) { window.alert("Data not stored: Local storage is disabled!"); } Local storage is a simple key-value store and does have good cross-browser coverage. However, it can only store strings and is a blocking (synchronous) API; this means that it can temporarily pause your web page from responding while it is doing its job storing or reading large amounts of data such as images. Moreover, it has a space limit of 5 MB (this varies with browsers); you can't detect when you are nearing this limit and you can't ask for more space. When the limit is reached, an error occurs so that the user can be informed. These properties make local storage only useful as a temporary data storage tool; this means that it is better than cookies, but not suited for a reliable, database kind of storage. Web storage also has another way of storing data called sessionStorage used in the same way, but this limits the persistence of the data to only the current browser session. So, data is lost when the browser is closed or another application is started in the same browser window. Using an application cache to work offline When, for some reason, our users don't have web access or the website is down for maintenance (or even broken), our web-based applications should also work offline. The browser cache is not robust enough to be able to do this, so HTML5 has given us the mechanism of ApplicationCache. This cache tells the browser which files should be made available offline. The effect is that the application loads and works correctly, even when the user is offline. The files to be held in the cache are specified in a manifest file, which has a .mf or .appcache extension. How to do it... Look at the appcache application; it has a manifest file called appcache.mf. The manifest file can be specified in every web page that has to be cached. This is done with the manifest attribute of the <html> tag: <html manifest="appcache.mf"> If a page has to be cached and doesn't have the manifest attribute, it must be specified in the CACHE section of the manifest file. The manifest file has the following (minimum) content: CACHE MANIFEST # 2012-09-28:v3  CACHE: Cached1.html appcache.css appcache.dart http://dart.googlecode.com/svn/branches/bleeding_edge/dart/client/dart.js  NETWORK: *  FALLBACK: / offline.html Run cached1.html. This displays the This page is cached, and works offline! text. Change the text to This page has been changed! and reload the browser. You don't see the changed text because the page is created from the application cache. When the manifest file is changed (change version v1 to v2), the cache becomes invalid and the new version of the page is loaded with the This page has been changed! text. The Dart script appcache.dart of the page should contain the following minimal code to access the cache: main() { new AppCache(window.applicationCache); }  class AppCache { ApplicationCache appCache;  AppCache(this.appCache) {    appCache.onUpdateReady.listen((e) => updateReady());    appCache.onError.listen(onCacheError); }  void updateReady() {    if (appCache.status == ApplicationCache.UPDATEREADY) {      // The browser downloaded a new app cache. Alert the user:      appCache.swapCache();      window.alert('A new version of this site is available. 
Please reload.');
    }
  }

  void onCacheError(Event e) {
    print('Cache error: ${e}');
    // Implement more complete error reporting to developers
  }
}

How it works...

The CACHE section in the manifest file enumerates all the entries that have to be cached. The NETWORK: and * options mean that, to use all other resources, the user has to be online. FALLBACK specifies that offline.html will be displayed if the user is offline and a resource is inaccessible. A page is cached when either of the following is true:

Its HTML tag has a manifest attribute pointing to the manifest file
The page is specified in the CACHE section of the manifest file

The browser is notified when the manifest file is changed, and the user will be forced to refresh the cached resources. Adding a timestamp and/or a version number such as # 2014-05-18:v1 works fine. Changing the date or the version invalidates the cache, and the updated pages are again loaded from the server. To access the browser's app cache from your code, use the window.applicationCache object. Make an object of a class AppCache, and alert the user when the application cache has become invalid (the status is UPDATEREADY) by defining an onUpdateReady listener.

There's more...

The other known states of the application cache are UNCACHED, IDLE, CHECKING, DOWNLOADING, and OBSOLETE. To log all these cache events, you could add the following listeners to the appCache constructor:

appCache.onCached.listen(onCacheEvent);
appCache.onChecking.listen(onCacheEvent);
appCache.onDownloading.listen(onCacheEvent);
appCache.onNoUpdate.listen(onCacheEvent);
appCache.onObsolete.listen(onCacheEvent);
appCache.onProgress.listen(onCacheEvent);

Provide an onCacheEvent handler using the following code:

void onCacheEvent(Event e) {
  print('Cache event: ${e}');
}

Preventing an onSubmit event from reloading the page

The default action for a submit button on a web page that contains an HTML form is to post all the form data to the server on which the application runs. What if we don't want this to happen?

How to do it...

Experiment with the submit application by performing the following steps:

Our web page submit.html contains the following code:

<form id="form1" action="http://www.dartlang.org" method="POST">
  <label>Job: <input type="text" name="Job" size="75"></input></label>
  <input type="submit" value="Job Search">
</form>

Comment out all the code in submit.dart. Run the app, enter a job name, and click on the Job Search submit button; the Dart site appears.

When the following code is added to submit.dart, clicking on the button no longer makes the Dart site appear:

import 'dart:html';

void main() {
  querySelector('#form1').onSubmit.listen(submit);
}

submit(Event e) {
  e.preventDefault();
  // code to be executed when button is clicked
}

How it works...

In the first step, when the submit button is pressed, the browser sees that the method is POST. This method collects the data and names from the input fields and sends them to the URL specified in action to be executed, which simply shows the Dart site in our case. To prevent the form from posting the data, make an event handler for the onSubmit event of the form. In this handler code, e.preventDefault(); as the first statement will cancel the default submit action. However, the rest of the submit event handler (and even the same handler of a parent control, should there be one) is still executed on the client side; a sketch of such a handler follows.
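Here is a minimal sketch of a handler that cancels the post and processes the field value client-side instead, assuming the submit.html form shown above; the sendToServer() helper is a hypothetical stand-in for, say, an HttpRequest call:

import 'dart:html';

void main() {
  querySelector('#form1').onSubmit.listen(submit);
}

void submit(Event e) {
  e.preventDefault(); // cancel the default POST to the action URL
  // Read the field ourselves and handle it on the client side;
  // sendToServer() is hypothetical, standing in for an HttpRequest call.
  InputElement input = querySelector('input[name="Job"]');
  sendToServer(input.value);
}

void sendToServer(String job) {
  print('Would send a job search for: $job');
}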
Summary In this article we learned how to handle web applications, sanitize a HTML, use a browser's local storage, use application cache to work offline, and how to prevent an onSubmit event from reloading a page. Resources for Article: Further resources on this subject: Handling the DOM in Dart [Article] QR Codes, Geolocation, Google Maps API, and HTML5 Video [Article] HTML5 Game Development – A Ball-shooting Machine with Physics Engine [Article]

Walking You Through Classes

Packt
02 Sep 2015
15 min read
In this article by Narayan Prusty, author of Learning ECMAScript 6, you will learn how ES6 introduces classes that provide a much simpler and clearer syntax for creating constructors and dealing with inheritance. JavaScript never had the concept of classes, although it's an object-oriented programming language. Programmers coming from other language backgrounds often found it difficult to understand JavaScript's object-oriented model and inheritance due to this lack of classes. In this article, we will learn about object-oriented JavaScript using the ES6 classes: creating objects the classical way; what classes are in ES6; creating objects using classes; inheritance in classes; and the features of classes. (For more resources related to this topic, see here.)

Understanding object-oriented JavaScript

Before we proceed with the ES6 classes, let's refresh our knowledge of JavaScript data types, constructors, and inheritance. While learning classes, we will be comparing the syntax of constructors and prototype-based inheritance with the syntax of classes. Therefore, it is important to have a good grip on these topics.

Creating objects

There are two ways of creating an object in JavaScript: using an object literal, or using a constructor. The object literal is used when we want to create fixed objects, whereas a constructor is used when we want to create objects dynamically at runtime. Let's consider a case where we may need to use constructors instead of object literals. Here is a code example:

var student = {
  name: "Eden",
  printName: function(){
    console.log(this.name);
  }
}

student.printName(); //Output "Eden"

Here, we created a student object using the object literal, that is, the {} notation. This works well when you just want to create a single student object. But the problem arises when you want to create multiple student objects. Obviously, you don't want to write the previous code multiple times to create multiple student objects. This is where constructors come into use. A function acts like a constructor when invoked using the new keyword. A constructor creates and returns an object. The this keyword, inside a function, when invoked as a constructor, points to the new object instance, and once the constructor execution is finished, the new object is automatically returned. Consider this example:

function Student(name) {
  this.name = name;
}

Student.prototype.printName = function(){
  console.log(this.name);
}

var student1 = new Student("Eden");
var student2 = new Student("John");
student1.printName(); //Output "Eden"
student2.printName(); //Output "John"

Here, to create multiple student objects, we invoked the constructor multiple times instead of creating multiple student objects using object literals. To add methods to the instances of the constructor, we didn't use the this keyword; instead, we used the prototype property of the constructor. We will learn more about why we did it this way, and what the prototype property is, in the next section. Actually, every object must belong to a constructor. Every object has an inherited property named constructor, pointing to the object's constructor. When we create objects using the object literal, the constructor property points to the global Object constructor.
Consider this example to understand this behavior:

var student = {}
console.log(student.constructor == Object); //Output "true"

Understanding inheritance

Each JavaScript object has an internal [[prototype]] property pointing to another object called its prototype. This prototype object has a prototype of its own, and so on until an object is reached with null as its prototype. null has no prototype, and it acts as the final link in the prototype chain. When trying to access a property of an object, if the property is not found in the object, then it is searched for in the object's prototype. If it's still not found, then it's searched for in the prototype of the prototype object. This keeps going until null is encountered in the prototype chain. This is how inheritance works in JavaScript. As a JavaScript object can have only one prototype, JavaScript supports only single inheritance. While creating objects using the object literal, we can use the special __proto__ property or the Object.setPrototypeOf() method to assign the prototype of an object. JavaScript also provides an Object.create() method, with which we can create a new object with a specified prototype; it was introduced because __proto__ lacked browser support, and the Object.setPrototypeOf() method seemed a little odd. Here is a code example that demonstrates the different ways to set the prototype of an object while creating it using the object literal:

var object1 = {
  name: "Eden",
  __proto__: {age: 24}
}

var object2 = {name: "Eden"}
Object.setPrototypeOf(object2, {age: 24});

var object3 = Object.create({age: 24}, {name: {value: "Eden"}});

console.log(object1.name + " " + object1.age);
console.log(object2.name + " " + object2.age);
console.log(object3.name + " " + object3.age);

The output is as follows:

Eden 24
Eden 24
Eden 24

Here, the {age:24} object is referred to as the base object, superobject, or parent object, as it's being inherited. And the {name:"Eden"} object is referred to as the derived object, subobject, or child object, as it inherits another object. If you don't assign a prototype to an object while creating it using the object literal, then the prototype points to the Object.prototype property. The prototype of Object.prototype is null, therefore leading to the end of the prototype chain. Here is an example to demonstrate this:

var obj = {
  name: "Eden"
}

console.log(obj.__proto__ == Object.prototype); //Output "true"

While creating objects using a constructor, the prototype of the new objects always points to a property named prototype of the function object. By default, the prototype property is an object with one property named constructor. The constructor property points to the function itself. Consider this example to understand this model:

function Student() {
  this.name = "Eden";
}

var obj = new Student();

console.log(obj.__proto__.constructor == Student); //Output "true"
console.log(obj.__proto__ == Student.prototype); //Output "true"

To add new methods to the instances of a constructor, we should add them to the prototype property of the constructor, as we did earlier. We shouldn't add methods using the this keyword in a constructor body, because every instance of the constructor would then get its own copy of the methods, and this isn't very memory efficient. By attaching methods to the prototype property of a constructor, there is only one copy of each function that all the instances share.
To understand this, consider this example: function Student(name) { this.name = name; } Student.prototype.printName = function(){ console.log(this.name); } var s1 = new Student("Eden"); var s2 = new Student("John"); function School(name) { this.name = name; this.printName = function(){ console.log(this.name); } } var s3 = new School("ABC"); var s4 = new School("XYZ"); console.log(s1.printName == s2.printName); console.log(s3.printName == s4.printName); The output is as follows: true false Here, s1 and s2 share the same printName function that reduces the use of memory, whereas s3 and s4 contain two different functions with the name as printName that makes the program use more memory. This is unnecessary, as both the functions do the same thing. Therefore, we add methods for the instances to the prototype property of the constructor. Implementing the inheritance hierarchy in the constructors is not as straightforward as we did for object literals. Because the child constructor needs to invoke the parent constructor for the parent constructor's initialization logic to take place and we need to add the methods of the prototype property of the parent constructor to the prototype property of the child constructor, so that we can use them with the objects of child constructor. There is no predefined way to do all this. The developers and JavaScript libraries have their own ways of doing this. I will show you the most common way of doing it. Here is an example to demonstrate how to implement the inheritance while creating the objects using the constructors: function School(schoolName) { this.schoolName = schoolName; } School.prototype.printSchoolName = function(){ console.log(this.schoolName); } function Student(studentName, schoolName) { this.studentName = studentName; School.call(this, schoolName); } Student.prototype = new School(); Student.prototype.printStudentName = function(){ console.log(this.studentName); } var s = new Student("Eden", "ABC School"); s.printStudentName(); s.printSchoolName(); The output is as follows: Eden ABC School Here, we invoked the parent constructor using the call method of the function object. To inherit the methods, we created an instance of the parent constructor, and assigned it to the child constructor's prototype property. This is not a foolproof way of implementing inheritance in the constructors, as there are lots of potential problems. For example—in case the parent constructor does something else other than just initializing properties, such as DOM manipulation, then while assigning a new instance of the parent constructor, to the prototype property, of the child constructor, can cause problems. Therefore, the ES6 classes provide a better and easier way to inherit the existing constructors and classes. Using classes We saw that JavaScript's object-oriented model is based on the constructors and prototype-based inheritance. Well, the ES6 classes are just new a syntax for the existing model. Classes do not introduce a new object-oriented model to JavaScript. The ES6 classes aim to provide a much simpler and clearer syntax for dealing with the constructors and inheritance. In fact, classes are functions. Classes are just a new syntax for creating functions that are used as constructors. Creating functions using the classes that aren't used as constructors doesn't make any sense, and offer no benefits. Rather, it makes your code difficult to read, as it becomes confusing. Therefore, use classes only if you want to use it for constructing objects. 
Let's have a look at classes in detail.

Defining a class

Just as there are two ways of defining functions, the function declaration and the function expression, there are two ways to define a class: the class declaration and the class expression.

The class declaration

To define a class using the class declaration, you need the class keyword and a name for the class. Here is a code example to demonstrate how to define a class using the class declaration:

class Student {
  constructor(name) {
    this.name = name;
  }
}
var s1 = new Student("Eden");
console.log(s1.name); //Output "Eden"

Here, we created a class named Student. Then, we defined a constructor method in it. Finally, we created a new instance of the class, that is, an object, and logged the name property of the object.

The body of a class is in the curly brackets, that is, {}. This is where we need to define methods. Methods are defined without the function keyword, and a comma is not used in between the methods.

Classes are treated as functions; internally the class name is treated as the function name, and the body of the constructor method is treated as the body of the function. There can only be one constructor method in a class. Defining more than one constructor will throw the SyntaxError exception. All the code inside a class body is executed in strict mode by default.

The previous code is the same as this code when written using function:

function Student(name) {
  this.name = name;
}
var s1 = new Student("Eden");
console.log(s1.name); //Output "Eden"

To prove that a class is a function, consider this code:

class Student {
  constructor(name) {
    this.name = name;
  }
}
function School(name) {
  this.name = name;
}
console.log(typeof Student);
console.log(typeof School == typeof Student);

The output is as follows:

function
true

Here, we can see that a class is a function. It's just a new syntax for creating a function.

The class expression

A class expression has a similar syntax to a class declaration. However, with class expressions, you are able to omit the class name. The class body and behavior remain the same in both ways. Here is a code example to demonstrate how to define a class using a class expression:

var Student = class {
  constructor(name) {
    this.name = name;
  }
}
var s1 = new Student("Eden");
console.log(s1.name); //Output "Eden"

Here, we stored a reference to the class in a variable, and used it to construct objects. The previous code is the same as this code when written using function:

var Student = function(name) {
  this.name = name;
}
var s1 = new Student("Eden");
console.log(s1.name); //Output "Eden"

The prototype methods

All the methods in the body of the class are added to the prototype property of the class. The prototype property is the prototype of the objects created using the class. Here is an example that shows how methods are added to the prototype property of a class:

class Person {
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }
  printProfile() {
    console.log("Name is: " + this.name + " and Age is: " + this.age);
  }
}
var p = new Person("Eden", 12)
p.printProfile();
console.log("printProfile" in p.__proto__);
console.log("printProfile" in Person.prototype);

The output is as follows:

Name is: Eden and Age is: 12
true
true

Here, we can see that the printProfile method was added to the prototype property of the class.
The previous code is the same as this code when written using function:

function Person(name, age) {
  this.name = name;
  this.age = age;
}
Person.prototype.printProfile = function() {
  console.log("Name is: " + this.name + " and Age is: " + this.age);
}
var p = new Person("Eden", 12)
p.printProfile();
console.log("printProfile" in p.__proto__);
console.log("printProfile" in Person.prototype);

The output is as follows:

Name is: Eden and Age is: 12
true
true

The get and set methods

In ES5, to add accessor properties to objects, we had to use the Object.defineProperty() method. ES6 introduced the get and set prefixes for methods. These methods can be added to object literals and classes for defining the get and set attributes of accessor properties. When the get and set methods are used in a class body, they are added to the prototype property of the class. Here is an example to demonstrate how to define the get and set methods in a class:

class Person {
  constructor(name) {
    this._name_ = name;
  }
  get name(){
    return this._name_;
  }
  set name(name){
    this._name_ = name;
  }
}
var p = new Person("Eden");
console.log(p.name);
p.name = "John";
console.log(p.name);
console.log("name" in p.__proto__);
console.log("name" in Person.prototype);
console.log(Object.getOwnPropertyDescriptor(p.__proto__, "name").set);
console.log(Object.getOwnPropertyDescriptor(Person.prototype, "name").get);
console.log(Object.getOwnPropertyDescriptor(p, "_name_").value);

The output is as follows:

Eden
John
true
true
function name(name) { this._name_ = name; }
function name() { return this._name_; }
John

Here, we created an accessor property to encapsulate the _name_ property. We also logged some other information to prove that name is an accessor property, which is added to the prototype property of the class.

The generator method

To treat a concise method of an object literal as a generator method, or to treat a method of a class as a generator method, we can simply prefix it with the * character. The generator method of a class is added to the prototype property of the class. Here is an example to demonstrate how to define a generator method in a class:

class myClass {
  * generator_function() {
    yield 1;
    yield 2;
    yield 3;
    yield 4;
    yield 5;
  }
}
var obj = new myClass();
let generator = obj.generator_function();
console.log(generator.next().value);
console.log(generator.next().value);
console.log(generator.next().value);
console.log(generator.next().value);
console.log(generator.next().value);
console.log(generator.next().done);
console.log("generator_function" in myClass.prototype);

The output is as follows:

1
2
3
4
5
true
true

Implementing inheritance in classes

Earlier in this article, we saw how difficult it was to implement an inheritance hierarchy with functions. Therefore, ES6 aims to make it easy by introducing the extends clause and the super keyword for classes. By using the extends clause, a class can inherit the static and non-static properties of another constructor (which may or may not be defined using a class).
The super keyword is used in two ways:

- In a class constructor method, it's used to call the parent constructor.
- When used inside methods of a class, it references the static and non-static methods of the parent constructor.

Here is an example to demonstrate how to implement an inheritance hierarchy in constructors using the extends clause and the super keyword:

function A(a) {
  this.a = a;
}
A.prototype.printA = function(){
  console.log(this.a);
}
class B extends A {
  constructor(a, b) {
    super(a);
    this.b = b;
  }
  printB() {
    console.log(this.b);
  }
  static sayHello() {
    console.log("Hello");
  }
}
class C extends B {
  constructor(a, b, c) {
    super(a, b);
    this.c = c;
  }
  printC() {
    console.log(this.c);
  }
  printAll() {
    this.printC();
    super.printB();
    super.printA();
  }
}
var obj = new C(1, 2, 3);
obj.printAll();
C.sayHello();

The output is as follows:

3
2
1
Hello

Here, A is a function constructor; B is a class that inherits A; C is a class that inherits B; and as B inherits A, C also inherits A. As a class can inherit a function constructor, we can use classes to inherit from the prebuilt function constructors, such as String and Array, as well as from custom function constructors, instead of the hacky ways we used to rely on.

The previous example also shows how and where to use the super keyword. Remember that inside the constructor method, you need to call super before using the this keyword; otherwise, an exception is thrown. If a child class doesn't have a constructor method, then the default behavior is to invoke the constructor method of the parent class.

Summary

In this article, we learned the basics of object-oriented programming using ES5. Then, we jumped into ES6 classes, and learned how they make it easier for us to read and write object-oriented JavaScript code. We also learned miscellaneous features, such as the accessor methods.

Resources for Article:

Further resources on this subject:

An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js [article]
Finding Peace in REST [article]
Scaling influencers [article]

Getting Twitter data

Packt
19 Feb 2015
9 min read
In this article by Paulo A Pereira, the author of Elixir Cookbook, we will build an application that queries the Twitter timeline for a given word and displays any new tweet containing that keyword in real time. We will be using extwitter, an Elixir Twitter client, as well as an Erlang application to deal with OAuth. We will wrap it all in a Phoenix web application.

(For more resources related to this topic, see here.)

Getting ready

Before getting started, we need to register a new application with Twitter to get the API keys that will allow the authentication and use of Twitter's API. To do this, we will go to https://apps.twitter.com and click on the Create New App button. After following the steps, we will have access to four items that we need: consumer_key, consumer_secret, access_token, and access_token_secret. These values can be used directly in the application or set up as environment variables in an initialization file for bash or zsh (if using Unix). After getting the keys, we are ready to start building the application.

How to do it…

To begin building the application, we need to follow these steps:

1. Create a new Phoenix application:

> mix phoenix.new phoenix_twitter_stream code/phoenix_twitter_stream

2. Add the dependencies in the mix.exs file:

defp deps do
  [
    {:phoenix, "~> 0.8.0"},
    {:cowboy, "~> 1.0"},
    {:oauth, github: "tim/erlang-oauth"},
    {:extwitter, "~> 0.1"}
  ]
end

3. Get the dependencies and compile them:

> mix deps.get && mix deps.compile

4. Configure the application to use the Twitter API keys by adding a configuration block with the keys we got from Twitter in the Getting ready section of this article. Edit lib/phoenix_twitter_stream.ex so that it looks like this:

defmodule PhoenixTwitterStream do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false
    ExTwitter.configure(
      consumer_key: System.get_env("SMM_TWITTER_CONSUMER_KEY"),
      consumer_secret: System.get_env("SMM_TWITTER_CONSUMER_SECRET"),
      access_token: System.get_env("SMM_TWITTER_ACCESS_TOKEN"),
      access_token_secret: System.get_env("SMM_TWITTER_ACCESS_TOKEN_SECRET")
    )
    children = [
      # Start the endpoint when the application starts
      worker(PhoenixTwitterStream.Endpoint, []),
      # Here you could define other workers and supervisors as children
      # worker(PhoenixTwitterStream.Worker, [arg1, arg2, arg3]),
    ]
    opts = [strategy: :one_for_one, name: PhoenixTwitterStream.Supervisor]
    Supervisor.start_link(children, opts)
  end

  def config_change(changed, _new, removed) do
    PhoenixTwitterStream.Endpoint.config_change(changed, removed)
    :ok
  end
end

In this case, the keys are stored as environment variables, so we use the System.get_env function:

System.get_env("SMM_TWITTER_CONSUMER_KEY")
(…)

If you don't want to set the keys as environment variables, the keys can be declared directly as strings this way:

consumer_key: "this-is-an-example-key"
(…)

5. Define a module that will handle the query for new tweets in the lib/phoenix_twitter_stream/tweet_streamer.ex file, and add the following code:

defmodule PhoenixTwitterStream.TweetStreamer do
  def start(socket, query) do
    stream = ExTwitter.stream_filter(track: query)
    for tweet <- stream do
      Phoenix.Channel.reply(socket, "tweet:stream", tweet)
    end
  end
end

6. Create the channel that will handle the tweets in the web/channels/tweets.ex file:

defmodule PhoenixTwitterStream.Channels.Tweets do
  use Phoenix.Channel
  alias PhoenixTwitterStream.TweetStreamer
join("tweets", %{"track" => query}, socket) do     spawn(fn() -> TweetStreamer.start(socket, query) end)     {:ok, socket}   end  end Edit the application router (/web/router.ex) to register the websocket handler and the tweets channel. The file will look like this: defmodule PhoenixTwitterStream.Router do   use Phoenix.Router   pipeline :browser do     plug :accepts, ~w(html)     plug :fetch_session     plug :fetch_flash     plug :protect_from_forgery   end   pipeline :api do     plug :accepts, ~w(json)   end   socket "/ws" do     channel "tweets", PhoenixTwitterStream.Channels.Tweets   end   scope "/", PhoenixTwitterStream do     pipe_through :browser # Use the default browser stack     get "/", PageController, :index   end end Replace the index template (web/templates/page/index.html.eex) content with this: <div class="row">   <div class="col-lg-12">     <ul id="tweets"></ul>   </div>   <script src="/js/phoenix.js" type="text/javascript"></script>   <script src="https://code.jquery.com/jquery-2.1.1.js" type="text/javascript"></script>   <script type="text/javascript">     var my_track = "programming";     var socket = new Phoenix.Socket("ws://" + location.host + "/ws");     socket.join("tweets", {track: my_track}, function(chan){       chan.on("tweet:stream", function(message){         console.log(message);         $('#tweets').prepend($('<li>').text(message.text));         });     });   </script> </div> Start the application: > mix phoenix.server Go to http://localhost:4000/ and after a few seconds, tweets should start arriving and the page will be updated to display every new tweet at the top. How it works… We start by creating a Phoenix application. We could have created a simple application to output the tweets in the console. However, Phoenix is a great choice for our purposes, displaying a web page with tweets getting updated in real time via websockets! In step 2, we add the dependencies needed to work with the Twitter API. We use parroty's extwitter Elixir application (https://hex.pm/packages/extwitter) and Tim's erlang-oauth application (https://github.com/tim/erlang-oauth/). After getting the dependencies and compiling them, we add the Twitter API keys to our application (step 4). These keys will be used to authenticate against Twitter where we previously registered our application. In step 5, we define a function that, when started, will query Twitter for any tweets containing a specific query. The stream = ExTwitter.stream_filter(track: query) line defines a stream that is returned by the ExTwitter application and is the result of filtering Twitter's timeline, extracting only the entries (tracks) that contain the defined query. The next line, which is for tweet <- stream do Phoenix.Channel.reply(socket, "tweet:stream", tweet), is a stream comprehension. For every new entry in the stream defined previously, send the entry through a Phoenix channel. Step 6 is where we define the channel. This channel is like a websocket handler. Actually, we define a join function:  def join(socket, "stream", %{"track" => query}) do    reply socket, "join", %{status: "connected"}    spawn(fn() -> TweetStreamer.start(socket, query) end)    {:ok, socket}  end It is here, when the websocket connection is performed, that we initialize the module defined in step 5 in the spawn call. This function receives a query string defined in the frontend code as track and passes that string to ExTwitter, which will use it as the filter. 
In step 7, we register and mount the websocket handler in the router using socket "/ws", and we define the channel and its handler module using channel "tweets", PhoenixTwitterStream.Channels.Tweets. The channel definition must occur outside any scope definition! If we tried to define it, say, right before get "/", PageController, :index, the compiler would issue an error message and the application wouldn't even start.

The last code we need to add is related to the frontend. In step 8, we mix HTML and JavaScript in the same file, which is responsible for displaying the root page and establishing the websocket connection with the server. We use the phoenix.js library helper (<script src="/js/phoenix.js" type="text/javascript"></script>), which provides some functions to deal with Phoenix websockets and channels.

We will take a closer look at some of the code in the frontend:

// initializes the query … in this case filter the timeline for
// all tweets containing "programming"
var my_track = "programming";
// initialize the websocket connection. The endpoint is /ws
// (we already registered it with the Phoenix router in step 7)
var socket = new Phoenix.Socket("ws://" + location.host + "/ws");
// here we join the channel 'tweets'
// this code triggers the join function we saw in step 6
// when a new tweet arrives from the server via the websocket
// connection, it is prepended to the existing tweets on the page
socket.join("tweets", {track: my_track}, function(chan){
  chan.on("tweet:stream", function(message){
    $('#tweets').prepend($('<li>').text(message.text));
  });
});

There's more…

If you wish to see the page getting updated really fast, select a more popular word for the query.

Summary

In this article, we looked at how we can use extwitter to query Twitter for relevant tweets.

Resources for Article:

Further resources on this subject:

NMAP Fundamentals [article]
Api With Mongodb And Node.JS [article]
Creating a Restful Api [article]
Configuring and Securing a Virtual Private Cloud

Packt
16 Sep 2015
7 min read
In this article by Aurobindo Sarkar and Sekhar Reddy, authors of the book Amazon EC2 Cookbook, we will cover recipes for:

- Configuring VPC DHCP options
- Configuring networking connections between two VPCs (VPC peering)

(For more resources related to this topic, see here.)

In this article, we will focus on recipes to configure AWS VPC (Virtual Private Cloud) against typical network infrastructure requirements. VPCs help you isolate AWS EC2 resources, and this feature is available in all AWS regions. A VPC can span multiple availability zones in a region. AWS VPC also helps you run hybrid applications on AWS by extending your existing data center into the public cloud. Disaster recovery is another common use case for AWS VPC.

You can create subnets, routing tables, and internet gateways in a VPC. By creating public and private subnets, you can put your web and frontend services in a public subnet, and your application databases and backend services in a private subnet. Using a VPN, you can extend your on-premises data center. Another option to extend your on-premises data center is AWS Direct Connect, which is a private network connection between AWS and your on-premises data center.

In a VPC, EC2 resources get static private IP addresses that persist across reboots, which works in the same way as a DHCP reservation. You can also assign multiple IP addresses and Elastic Network Interfaces. You can have a private ELB accessible only within your VPC. You can use CloudFormation to automate the VPC creation process. Defining appropriate tags can help you manage your VPC resources more efficiently.

Configuring VPC DHCP options

DHCP option sets are associated with your AWS account, so they can be used across all your VPCs. You can assign your own domain name to your instances by specifying a set of DHCP options for your VPC. However, only one DHCP option set can be associated with a VPC. Also, you can't modify a DHCP option set after it is created. In case you want to use a different set of DHCP options, you will need to create a new DHCP option set and associate it with your VPC. There is no need to restart or relaunch the instances in the VPC after associating the new DHCP option set, as they automatically pick up the changes.

How to do it…

In this section, we will create a DHCP option set and then associate it with our VPC.

1. Create a DHCP option set with a specific domain name and domain name servers. In our example, we execute commands to create a DHCP option set and associate it with our VPC. We specify the domain name testdomain.com and the DNS servers (10.2.5.1 and 10.2.5.2) as our DHCP options:

$ aws ec2 create-dhcp-options \
    --dhcp-configuration \
    Key=domain-name,Values=testdomain.com \
    Key=domain-name-servers,Values=10.2.5.1,10.2.5.2

2. Associate the DHCP option set with your VPC (vpc-bb936ede):

$ aws ec2 associate-dhcp-options --dhcp-options-id dopt-dc7d65be --vpc-id vpc-bb936ede

How it works…

DHCP provides a standard for passing configuration information to hosts in a network. The DHCP message contains an options field in which parameters such as the domain name and the domain name servers can be specified. By default, instances in AWS are assigned an unresolvable host name, hence we need to assign our own domain name and use our own DNS servers. The DHCP option sets are associated with the AWS account and can be used across our VPCs.

First, we create a DHCP option set.
In this step, we specify the DHCP configuration parameters as key-value pairs, where commas separate the values and multiple pairs are separated by spaces. In our example, we specify two domain name servers and a domain name. We can use up to four DNS servers.

Next, we associate the DHCP option set with our VPC to ensure that all existing and new instances launched in our VPC will use this DHCP option set. Note that if you want to use a different set of DHCP options, then you will need to create a new set and again associate it with your VPC, as modifications to a set of DHCP options are not allowed. In addition, you can let the instances pick up the changes automatically or explicitly renew the DHCP lease. However, in all cases, only one set of DHCP options can be associated with a VPC at any given time. As a good practice, delete a DHCP option set when none of your VPCs are using it and you don't need it any longer.

Configuring networking connections between two VPCs (VPC peering)

In this recipe, we will configure VPC peering. VPC peering helps you connect instances in two different VPCs using their private IP addresses. VPC peering is limited to within a region. However, you can create a VPC peering connection between VPCs that belong to different AWS accounts. The two VPCs that participate in VPC peering must not have matching or overlapping CIDR addresses. To create a VPC peering connection, the owner of the local VPC has to send the request to the owner of the peer VPC located in the same account or a different account. Once the owner of the peer VPC accepts the request, the VPC peering connection is activated. You will need to update the routes in your route table to send traffic to the peer VPC and vice versa. You will also need to update your instance security groups to allow traffic from and to the peer VPC.

How to do it…

Here, we present the commands for creating a VPC peering connection, accepting a peering request, and adding the appropriate route in your routing table.

1. Create a VPC peering connection between two VPCs with IDs vpc-9c19a3f4 and vpc-0214e967. Record the VpcPeeringConnectionId for further use:

$ aws ec2 create-vpc-peering-connection --vpc-id vpc-9c19a3f4 --peer-vpc-id vpc-0214e967

2. Accept the VPC peering connection. Here, we will accept the VPC peering connection request with the ID pcx-cf6aa4a6:

$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-cf6aa4a6

3. Add a route in the route table for the VPC peering connection. The following command creates a route with the destination CIDR (172.31.16.0/20) and the VPC peering connection ID (pcx-0e6ba567) in the route table rtb-7f1bda1a:

$ aws ec2 create-route --route-table-id rtb-7f1bda1a --destination-cidr-block 172.31.16.0/20 --vpc-peering-connection-id pcx-0e6ba567

How it works…

First, we request a VPC peering connection between two VPCs: a requester VPC that we own (vpc-9c19a3f4) and a peer VPC with which we want to create a connection (vpc-0214e967). Note that the peering connection request expires after 7 days. In order to activate the VPC peering connection, the owner of the peer VPC must accept the request. In our recipe, as the owner of the peer VPC, we accept the VPC peering connection request. However, note that the owner of the peer VPC may be a person other than you. You can use the describe-vpc-peering-connections command to view your outstanding peering connection requests. The VPC peering connection should be in the pending-acceptance state for you to accept the request.
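For instance, a quick way to list only the requests still waiting for acceptance is to filter on the connection status. This is a sketch; status-code is one of the documented filters for this call, but verify the output fields against your CLI version:

$ aws ec2 describe-vpc-peering-connections \
    --filters Name=status-code,Values=pending-acceptance

Each returned connection includes its VpcPeeringConnectionId, which is the value you pass to accept-vpc-peering-connection in the next step.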
After creating the VPC peering connection, we created a route in our local VPC subnet's route table to direct traffic to the peer VPC. You can also create peering connections between two or more VPCs to provide full access to resources, or peer one VPC to access centralized resources. In addition, peering can be implemented between a VPC and specific subnets, or between instances in one VPC and instances in another VPC. Refer to the Amazon VPC documentation to set up the most appropriate peering connections for your specific requirements.

Summary

In this article, you learned how to configure VPC DHCP options, as well as how to configure networking connections between two VPCs. The book Amazon EC2 Cookbook covers recipes that relate to designing, developing, and deploying scalable, highly available, and secure applications on the AWS platform. By following the steps in our recipes, you will be able to effectively and systematically resolve issues related to development, deployment, and infrastructure for enterprise-grade cloud applications or products.

Resources for Article:

Further resources on this subject:

Hands-on Tutorial for Getting Started with Amazon SimpleDB [article]
Amazon SimpleDB versus RDBMS [article]
Amazon DynamoDB - Modelling relationships, Error handling [article]

Slideshow Presentations

Packt
15 Sep 2015
24 min read
In this article by David Mitchell, author of the book Dart By Example, you will be introduced to the basics of how to build a presentation application using Dart.

"It usually takes me more than three weeks to prepare a good impromptu speech." - Mark Twain

Presentations make some people shudder with fear, yet they are an undeniably useful tool for information sharing when used properly. The content has to be great, and some visual flourish can make it stand out from the crowd. Too many slides can make the most receptive audience yawn, so focusing the presenter on the content and automatically taking care of the visuals (saving the creator from fiddling with different animations and font sizes!) can help improve presentations. Compelling content still requires the human touch.

(For more resources related to this topic, see here.)

Building a presentation application

Web browsers are already a type of multimedia presentation application, so it is feasible to write a quality presentation program as we explore more of the Dart language. Hopefully it will help us pitch another Dart application to our next customer.

Building on our first application, we will use a text based editor for creating the presentation content. I was very surprised how much faster a text based editor is for producing a presentation, and more enjoyable. I hope you experience such a productivity boost!

Laying out the application

The application will have two modes, editing and presentation. In the editing mode, the screen will be split into two panes. The top pane will display the slides, and the lower will contain the editor and other interface elements. This article will focus on the core creation side of the presentation. The application will be a single Dart project.

Defining the presentation format

The presentations will be written in a tiny subset of the Markdown format, which is a powerful yet simple to read text based format (much easier to read, type, and understand than HTML). In 2004, John Gruber and the late Aaron Swartz created the Markdown language with the goal of enabling people to write using an easy-to-read, easy-to-write plain text format. It is used on major websites, such as GitHub.com and StackOverflow.com. Being plain text, Markdown files can be kept and compared in version control.

For more detail and background on Markdown, see https://en.wikipedia.org/wiki/Markdown

A simple titled slide with bullet points would be defined as:

#Dart Language
+Created By Google
+Modern language with a familiar syntax
+Structured Web Applications
+It is Awesomely productive!

I am positive you only had to read that once! This will translate into the following HTML:

<h1>Dart Language</h1>
<li>Created By Google</li>
<li>Modern language with a familiar syntax</li>
<li>Structured Web Applications</li>
<li>It is Awesomely productive!</li>

Markdown is very easy and fast to parse, which probably explains its growing popularity on the web. It can be transformed into many other formats.

Parsing the presentation

The content of the TextArea element is split into a list of individual lines and processed in a similar manner to some of the features in the Text Editor application, using forEach to iterate over the list. Any lines that are blank once any whitespace has been removed via the trim method are ignored.

#A New Slide Title
+The first bullet point
+The second bullet point
#The Second Slide Title
+More bullet points
!http://localhost/img/logo.png
#Final Slide
+Any questions?
For each line starting with a # symbol, a new Slide object is created. Each line starting with a + symbol is added to that slide's bullet point list. For each line starting with a ! symbol, the slide's image is set (with a limit of one per slide). This continues until the end of the presentation source is reached.

A sample presentation

To get a new user going quickly, there will be an example presentation which can be used as a demonstration and for testing the various areas of the application. I chose the last topic that came up round the family dinner table: the coconut!

#Coconut
+Member of Arecaceae family.
+A drupe - not a nut.
+Part of daily diets.
#Tree
+Fibrous root system.
+Mostly surface level.
+A few deep roots for stability.
#Yield
+75 fruits on fertile land
+30 typically
+Fibre has traditional uses
#Finally
!coconut.png
#Any Questions?

Presenter project structure

The project is a standard Dart web application with index.html as the entry point. The application is kicked off by main.dart, which is linked to in index.html, and the application functionality is stored in the lib folder.

sampleshows.dart - The text for the slideshow application.
lifecyclemixin.dart - The class for the mixin.
slideshow.dart - Data structures for storing the presentation.
slideshowapp.dart - The application object.

Launching the application

The main function has a very short implementation:

void main() {
  new SlideShowApp();
}

Note that the new class instance does not need to be stored in a variable, and that the object does not disappear after that line is executed. As we will see later, the object will attach itself to events and streams, keeping the object alive for as long as the page is loaded.

Building bullet point slides

The presentation is built up using two classes, Slide and SlideShow. The Slide object creates the DivElement used to display the content, and the SlideShow contains a list of Slide objects. The SlideShow object is updated as the text source is updated. It also keeps track of which slide is currently being displayed in the preview pane.

Once the number of Dart files grows in a project, the Dart Analyzer will recommend naming the library. It is a good habit to name every .dart file in a regular project with its own library name. The slideshow.dart file has the keyword library and a name next to it. In Dart, every file is a library, whether it is explicitly declared or not.

If you are looking at Dart code online, you may stumble across projects with imports that look a bit strange:

#import("dart:html");

This is the old syntax for Dart's import mechanism. If you see this, it is a sign that other aspects of the code may be out of date too.

If you are writing an application in a single project, source files can be arranged in a folder structure appropriate for the project, though keeping the relative paths manageable is advisable. Creating too many folders probably means it is time to create a package!

Accessing private fields

In Dart, as discussed when we covered packages, privacy is at the library level, but it is still possible to have private fields in a class even though Dart does not have the keywords public, protected, and private. A simple return of a private field's value can be performed with a one line function:

String getFirstName() => _name;

To retrieve this value, a function call is required, for example, Person.getFirstName(); however, it may be preferred to have a property syntax such as Person.firstName.
Having private fields and retaining the property syntax in this manner is possible using the get and set keywords.

Using true getters and setters

The syntax of Dart also supports get and set via keywords (here backed by a private _score field, so the getter does not call itself):

int get score => _score + bonus;
set score(int increase) => _score += increase * level;

Using either get/set or simple fields is down to preference. It is perfectly possible to start with simple fields and scale up to getters and setters if more validation or processing is required. The advantage of the get and set keywords in a library is that the intended interface for consumers of the package is very clear. Further, it clarifies which methods may change the state of the object and which merely report current values.

Mixin it up

In object oriented languages, it is useful to build on one class to create a more specialized related class. For example, in the text editor the base dialog class was extended to create alert and confirm pop ups. What if we want to share some functionality but do not want inheritance occurring between the classes?

Aggregation can solve this problem to some extent:

class A {
  ClassB usefulObject;
}

The downside is that this requires a longer reference to use:

new A().usefulObject.handyMethod();

This problem has been solved in Dart (and other languages) by mixins, allowing the sharing of functionality without forced inheritance or clunky aggregation. In Dart, a mixin must meet these requirements:

- No constructors in the class declaration.
- The base class of the mixin must be Object.
- No calls to a super class are made.

Mixins are really just classes that are malleable enough to fit into the class hierarchy at any point. A use case for a mixin may be serialization fields and methods that may be required on several classes in an application that are not part of any inheritance chain:

abstract class Serialisation {
  void save() {
    //Implementation here.
  }
  void load(String filename) {
    //Implementation here.
  }
}

The with keyword is used to declare that a class is using a mixin:

class ImageRecord extends Record with Serialisation

If the class does not have an explicit base class, it is required to specify Object:

class StorageReports extends Object with Serialisation

In Dart, everything is an object; even basic types such as num are objects and not primitive types. The classes int and double are subtypes of num. This is important to know, as other languages have different behaviors. Let's consider a real example of this:

main() {
  int i;
  print("$i");
}

In a language such as Java, the expected output would be 0; however, the output in Dart is null. If a value is expected from a variable, it is always good practice to initialize it!

For the classes Slide and SlideShow, we will use a mixin from the source file lifecyclemixin.dart to record a creation and an editing timestamp:

abstract class LifecycleTracker {
  DateTime _created;
  DateTime _edited;

  recordCreateTimestamp() => _created = new DateTime.now();
  updateEditTimestamp() => _edited = new DateTime.now();

  DateTime get created => _created;
  DateTime get lastEdited => _edited;
}

To use the mixin, the recordCreateTimestamp method can be called from the constructor, and the updateEditTimestamp from the main edit method. For slides, it makes sense just to record the creation. For the SlideShow class, both the creation and update will be tracked.

Defining the core classes

The SlideShow class is largely a container object for a list of Slide objects and uses the mixin LifecycleTracker.
class SlideShow extends Object with LifecycleTracker {
  List<Slide> _slides;
  List<Slide> get slides => _slides;
  ...

The Slide class stores the string for the title and a list of strings for the bullet points. The URL for any image is also stored as a string:

class Slide extends Object with LifecycleTracker {
  String titleText = "";
  List<String> bulletPoints;
  String imageUrl = "";
  ...

A simple constructor takes the titleText as a parameter and initializes the bulletPoints list.

If you want to focus on just the code when in WebStorm, double-click on the filename title of the tab to expand the source code to the entire window. Double-click again to return to the original layout. For even more focus on the code, go to the View menu and click on Enter Distraction Free Mode.

Transforming data into HTML

To add the Slide object instance into an HTML document, the strings need to be converted into instances of HTML elements to be added to the DOM (Document Object Model). The getSlideContents() method constructs and returns the entire slide as a single object:

DivElement getSlideContents() {
  DivElement slide = new DivElement();
  DivElement title = new DivElement();
  DivElement bullets = new DivElement();
  title.appendHtml("<h1>$titleText</h1>");
  slide.append(title);
  if (imageUrl.length > 0) {
    slide.appendHtml("<img src='$imageUrl' /><br/>");
  }
  bulletPoints.forEach((bp) {
    if (bp.trim().length > 0) {
      bullets.appendHtml("<li>$bp</li>");
    }
  });
  slide.append(bullets);
  return slide;
}

The Div elements are constructed as objects (instances of DivElement), while the content is added as literal HTML statements. The method appendHtml is used for this particular task, as it renders HTML tags in the text. The regular method appendText puts the entire literal text string (including the plain unformatted text of the HTML tags) into the element.

So what exactly is the difference? The method appendHtml evaluates the supplied HTML and adds the resultant object node to the nodes of the parent element, which is rendered in the browser as usual. The method appendText is useful, for example, to prevent user supplied content affecting the format of the page and to prevent malicious code being injected into a web page.

Editing the presentation

When the source is updated, the presentation is updated via the onKeyUp event. This was used in the text editor project to trigger a save to local storage. The update is carried out in the build method of the SlideShow class, and follows the pattern we discussed for parsing the presentation:

build(String src) {
  updateEditTimestamp();
  _slides = new List<Slide>();
  Slide nextSlide;
  src.split("\n").forEach((String line) {
    if (line.trim().length > 0) {
      // Title - also marks start of the next slide.
      if (line.startsWith("#")) {
        nextSlide = new Slide(line.substring(1));
        _slides.add(nextSlide);
      }
      if (nextSlide != null) {
        if (line.startsWith("+")) {
          nextSlide.bulletPoints.add(line.substring(1));
        } else if (line.startsWith("!")) {
          nextSlide.imageUrl = line.substring(1);
        }
      }
    }
  });
}

As an alternative to the startsWith method, the square bracket [] operator could be used with line[0] to retrieve the first character. The startsWith method can also take a regular expression or a string to match and a starting index; refer to the dart:core documentation for more information. For the purposes of parsing the presentation, the startsWith method is more readable.

Displaying the current slide

The slide is displayed via the showSlide method in slideShowApp.dart.
To preview the current slide, the current index, stored in the field currentSlideIndex, is used to retrieve the desired Slide object, and the Div rendering method is called:

showSlide(int slideNumber) {
  if (currentSlideShow.slides.length == 0) return;
  slideScreen.style.visibility = "hidden";
  slideScreen
    ..nodes.clear()
    ..nodes.add(currentSlideShow.slides[slideNumber].getSlideContents());
  rangeSlidePos.value = slideNumber.toString();
  slideScreen.style.visibility = "visible";
}

The slideScreen is a DivElement, which is updated off screen by setting the visibility style property to hidden. The existing content of the DivElement is emptied out by calling nodes.clear(), and the slide content is added with nodes.add. The range slider position is set, and finally the DivElement is made visible again.

Navigating the presentation

A button set with the familiar first, previous, next, and last slide actions allows the user to jump around the preview of the presentation. This is carried out by having an index into the list of slides, stored in the field currentSlideIndex in the SlideShowApp class.

Handling the button key presses

The navigation buttons need to be set up in an identical pattern in the constructor of the SlideShowApp object. First, get an object reference using the id attribute of the element, and then attach a handler to the click event. Rather than repeat this code, a simple function can handle the process:

setButton(String id, Function clickHandler) {
  ButtonInputElement btn = querySelector(id);
  btn.onClick.listen(clickHandler);
}

As Function is a type in Dart, functions can easily be passed around as parameters. Let us take a look at the button that takes us to the first slide:

setButton("#btnFirst", startSlideShow);

void startSlideShow(MouseEvent event) {
  showFirstSlide();
}

void showFirstSlide() {
  showSlide(0);
}

The event handlers do not directly change the slide; that is carried out by other methods, which may be triggered by other inputs such as the keyboard.

Using the function type

The SlideShowApp constructor makes use of this feature:

Function qs = querySelector;
var controls = qs("#controls");

I find the querySelector method a little long to type (though it is a good description of what it does). With Function being a type, we can easily create a shorthand version.

The constructor spends much of its time selecting and assigning the HTML elements to member fields of the class. One of the advantages of this approach is that the DOM of the page is queried only once, and the reference is stored and reused. This is good for the performance of the application, as querying the DOM while the application is running may take much longer.

Staying within the bounds

Using the min and max functions from the dart:math package, the index can be kept in range of the current list:

void showLastSlide() {
  currentSlideIndex = max(0, currentSlideShow.slides.length - 1);
  showSlide(currentSlideIndex);
}

void showNextSlide() {
  currentSlideIndex = min(currentSlideShow.slides.length - 1, ++currentSlideIndex);
  showSlide(currentSlideIndex);
}

These convenience functions can save a great deal of if and else if comparisons and help make code a good degree more readable.

Using the slider control

The slider control is another new control in the HTML5 standard. This will allow the user to scroll through the slides in the presentation. This control is a personal favorite of mine, as it is so visual and can be used to give very interactive feedback to the user.
It seemed to be a huge omission from the original form controls in the early generation of web browsers. Even with clear, widely accepted features, HTML specifications can take a long time to clear committees and make it into everyday browsers!

<input type="range" id="rngSlides" value="0"/>

The control has an onChange event, which is given a listener in the SlideShowApp constructor:

rangeSlidePos.onChange.listen(moveToSlide);

The control provides its data via a simple string value, which can be converted to an integer via the int.parse method to be used as an index into the presentation's slide list:

void moveToSlide(Event event) {
  currentSlideIndex = int.parse(rangeSlidePos.value);
  showSlide(currentSlideIndex);
}

The slider control must be kept in synchronization with any other change in slide display, use of navigation, or change in the number of slides. For example, the user may use the slider to reach the general area of the presentation, and then adjust with the previous and next buttons:

void updateRangeControl() {
  rangeSlidePos
    ..min = "0"
    ..max = (currentSlideShow.slides.length - 1).toString();
}

This method is called when the number of slides is changed, and as with most HTML elements, the values to be set need to be converted to strings.

Responding to keyboard events

Using the keyboard, particularly the arrow (cursor) keys, is a natural way to look through the slides in a presentation, even in the preview mode. This is carried out in the SlideShowApp constructor.

In Dart web applications, the dart:html package allows direct access to the global window object from any class or function. The Textarea used to input the presentation source will also respond to the arrow keys, so there will need to be a check to see if it is currently being used. The activeElement property on the document will give a reference to the control with focus. This reference can be compared to the Textarea, which is stored in the presEditor field, so a decision can be taken on whether to act on the keypress or not.

Left Arrow (key code 37) - Go back a slide.
Up Arrow (key code 38) - Go to the first slide.
Right Arrow (key code 39) - Go to the next slide.
Down Arrow (key code 40) - Go to the last slide.

Keyboard events, like other events, can be listened to by using a stream event listener. The listener function is an anonymous function (the definition omits a name) that takes the KeyboardEvent as its only parameter:

window.onKeyUp.listen((KeyboardEvent e) {
  if (presEditor != document.activeElement){
    if (e.keyCode == 39)
      showNextSlide();
    else if (e.keyCode == 37)
      showPrevSlide();
    else if (e.keyCode == 38)
      showFirstSlide();
    else if (e.keyCode == 40)
      showLastSlide();
  }
});

It is a reasonable question to ask how to get the keyboard key codes required to write the switching code. One good tool to help with this is the W3C's Key and Character Codes page at http://www.w3.org/2002/09/tests/keys.html, but it can often be faster to write the handler and print out the event that is passed in!

Showing the key help

Rather than testing the user's memory, there will be a handy reference to the keyboard shortcuts. This is a simple Div element which is shown, and then hidden when the ? key (remember to press Shift too!) is pressed again, by toggling the visibility style from visible to hidden.

Listening twice to event streams

The event system in Dart is implemented as a stream. One of the advantages of this is that an event can easily have more than one listener attached.
This is useful, for example, in a web application where some keyboard presses are valid in one context but not in another. The listen method is an add operation (accumulative), so the key press for help can be implemented separately. This allows a modular approach which helps reuse, as the handlers can be specialized and added as required:

window.onKeyUp.listen((KeyboardEvent e) {
  print(e);
  //Check the editor does not have focus.
  if (presEditor != document.activeElement) {
    DivElement helpBox = qs("#helpKeyboardShortcuts");
    if (e.keyCode == 191) {
      if (helpBox.style.visibility == "visible") {
        helpBox.style.visibility = "hidden";
      } else {
        helpBox.style.visibility = "visible";
      }
    }
  }
});

In a game, for example, a common set of event handling may apply to the title and introduction screens, while the actual in-game screen contains additional event handling as a superset. This could be implemented by adding and removing handlers on the relevant event stream.

Changing the colors

HTML5 provides browsers with a full featured color picker (typically browsers use the native OS's color chooser). This will be used to allow the user to set the background color of the editor application itself. The color picker is added to the index.html page with the following HTML:

<input id="pckBackColor" type="color">

The implementation is straightforward, as the color picker control provides all of the user interface:

InputElement cp = qs("#pckBackColor");
cp.onChange.listen(
  (e) => document.body.style.backgroundColor = cp.value);

As the event and property (onChange and value) are common to the input controls, the basic InputElement class can be used.

Adding a date

Most presentations are usually dated, or at least some of the jokes are! We will add a convenient button for the user to add a date to the presentation using the HTML5 input type date, which provides a graphical date picker. The default value is set in the index.html page as follows:

<input type="date" id="selDate" value="2000-01-01"/>

The valueAsDate property of the DateInputElement class provides the Date object, which can be added to the text area:

void insertDate(Event event) {
  DateInputElement datePicker = querySelector("#selDate");
  if (datePicker.valueAsDate != null)
    presEditor.value = presEditor.value +
        datePicker.valueAsDate.toLocal().toString();
}

In this case, the toLocal method is used to obtain a string formatted to the month, day, year format.

Timing the presentation

The presenter will want to keep to their allotted time slot. We will include a timer in the editor to aid in rehearsal.

Introducing the stopwatch class

The Stopwatch class (from dart:core) provides much of the functionality needed for this feature, as shown in this small command line application:

main() {
  Stopwatch sw = new Stopwatch();
  sw.start();
  print(sw.elapsed);
  sw.stop();
  print(sw.elapsed);
}

The elapsed property can be checked at any time to give the current duration. This is a very useful class; for example, it can be used to compare different functions to see which is the fastest.

Implementing the presentation timer

The clock will be stopped and started with a single button handled by the toggleTimer method. A recurring timer will update the duration text on the screen as follows: if the timer is running, the update Timer and the Stopwatch in the field slidesTime are stopped.
No update to the display is required, as the user will need to see the final time:

void toggleTimer(Event event) {
  if (slidesTime.isRunning) {
    slidesTime.stop();
    updateTimer.cancel();
  } else {
    updateTimer = new Timer.periodic(new Duration(seconds: 1), (timer) {
      String seconds = (slidesTime.elapsed.inSeconds % 60).toString();
      seconds = seconds.padLeft(2, "0");
      timerDisplay.text = "${slidesTime.elapsed.inMinutes}:$seconds";
    });
    slidesTime
      ..reset()
      ..start();
  }
}

The Stopwatch class provides properties for retrieving the elapsed time in minutes and seconds. To format this as minutes and seconds, the seconds portion is determined with the modulo operator % and padded with the string function padLeft. Dart's string interpolation feature is used to build the final string, and as the elapsed and inMinutes properties are being accessed, the {} brackets are required so that the single value is returned.

Overview of slides

This provides the user with a visual overview of the slides. The presentation slides will be recreated in a new full screen Div element. This is styled using the fullScreen class in the CSS stylesheet in the SlideShowApp constructor:

overviewScreen = new DivElement();
overviewScreen.classes.toggle("fullScreen");
overviewScreen.onClick.listen((e) => overviewScreen.remove());

The HTML for the slides will be identical. To shrink the slides, the list of slides is iterated over, the HTML element object is obtained, and the CSS classes for the slide are set:

currentSlideShow.slides.forEach((s) {
  aSlide = s.getSlideContents();
  aSlide.classes.toggle("slideOverview");
  aSlide.classes.toggle("shrink");
  ...

The CSS hover class is set to scale the slide when the mouse enters, so a slide can be focused on for review. The classes are set with the toggle method, which adds a class if it is not present or removes it if it is. The method has an optional parameter:

aSlide.classes.toggle('className', condition);

The second parameter, named shouldAdd, is true if the class is always to be added and false if the class is always to be removed.

Handout notes

There is nothing like a tangible handout to give the attendees of your presentation. This can be achieved with a variation of the overview display. Instead of duplicating the overview code, the function can be parameterized with an optional parameter in the method declaration. This is declared with square brackets [] around the declaration and a default value that is used if no parameter is specified:

void buildOverview([bool addNotes = false])

This is called by the presentation overview display without any parameters:

buildOverview();

This is called by the handouts display with the parameter set to true:

buildOverview(true);

If this parameter is set, an additional Div element is added for the Notes area, and the CSS is adjusted for the benefit of the print layout.

Comparing optional positional and named parameters

The addNotes parameter is declared as an optional positional parameter, so an optional value can be specified without naming the parameter. The first parameter is matched to the supplied value. To give more flexibility, Dart also allows optional parameters to be named. Consider two functions: the first takes named optional parameters and the second takes positional optional parameters.
getRecords1(String query, {int limit: 25, int timeOut: 30}) {
}
getRecords2(String query, [int limit = 80, int timeOut = 99]) {
}

The first function can be called in more ways:

getRecords1("");
getRecords1("", limit:50, timeOut:40);
getRecords1("", timeOut:40, limit:65);
getRecords1("", limit:50);
getRecords1("", timeOut:40);

getRecords2("");
getRecords2("", 90);
getRecords2("", 90, 50);

With named optional parameters, the order in which they are supplied is not important, and they have the advantage that the calling code is clearer about the use that will be made of the parameters being passed. With positional optional parameters, we can omit the later parameters, but they work in a strict left to right order, so to set the timeOut parameter to a non-default value, limit must also be supplied. It is also easier to confuse which parameter is for which particular purpose.

Summary

The presentation editor is looking rather powerful, with a range of advanced HTML controls moving far beyond text boxes to date pickers and color selectors. The preview and overview help the presenter visualize the entire presentation as they work, thanks to the strong class structure built using Dart mixins and data structures using generics.

We have spent time looking at the object basis of Dart, how to pass parameters in different ways and, closer to the end user, how to handle keyboard input. This will assist in the creation of many different types of application, and we have seen how optional parameters and true properties can help document code for ourselves and other developers. Hopefully you learned a little about coconuts too.

The next step for this application is to improve the output with full screen display, animation, and a little sound to capture the audience's attention. The presentation editor could be improved as well; currently it is only in the English language. Dart's internationalization features can help with this.

Resources for Article:

Further resources on this subject:

Practical Dart [article]
Handling the DOM in Dart [article]
Dart with JavaScript [article]