
How-To Tutorials - Front-End Web Development

341 Articles

Marshalling Data Services with Ext.Direct

Packt
14 Oct 2010
9 min read
What is Direct?

Part of the power of any client-side library is its ability to tap nearly any server-side technology. That said, with so many server-side options available, many different implementations were being written for accessing the data. Direct is a means of marshalling those server-side connections, creating a 'one-stop shop' for handling your basic Create, Read, Update, and Delete actions against that remote data. Through some basic configuration, we can now easily create entire server-side APIs that we may programmatically expose to our Ext JS applications. In the process, we end up with one set of consistent, predefined methods for managing that data access.

Building server-side stacks

There are several examples of server-side stacks already available for Ext JS, directly from their site's Direct information. These are examples, showing you how you might use Direct with a particular server-side technology, but Ext provides us with a specification so that we might write our own. Current stack examples are available for:

- PHP
- .NET
- Java
- ColdFusion
- Ruby
- Perl

These examples, written directly by the Ext team, serve as guides to what we can do. Each of us writes applications differently, so our application may require a different way of handling things at the server level. The Direct specification, along with the examples, gives us the guideposts we need for writing our own stacks when necessary. We will deconstruct one such example here to help illustrate this point.

Each server-side stack is made up of three basic components:

- Configuration: denoting which components/classes are available to Ext JS
- API: client-side descriptors of our configuration
- Router: a means to 'route' our requests to their proper API counterparts

To illustrate each of these pieces of the server-side stack, we will deconstruct one of the example stacks provided by the Ext JS team. I have chosen the ColdFusion stack because:

- It is a good example of using a metadata configuration
- DirectCFM (the ColdFusion stack example) was written by Aaron Conran, who is the Senior Software Architect and Ext Services Team Leader for Ext, LLC

Each of the following sections will contain a "Stack Deconstruction" subsection to illustrate how these concepts might be written in a server-side language. These show how the concepts look in practice, but you are welcome to move on if you feel you have a good grasp of the material.

Configuration

Ultimately, the configuration must define the classes/objects being accessed, the functions of those objects that can be called, and the number of arguments each method expects. Different servers allow us to define our configuration in different ways. The method we choose will sometimes depend upon the capabilities or deficiencies of the platform we're coding for. Some platforms provide the ability to introspect components/classes at runtime to build configurations, while others require a far more manual approach. You can also include an optional formHandler attribute in your method definitions, if the method can take form submissions directly. There are four basic ways to write a configuration.

Programmatic

A programmatic configuration may be achieved by creating a simple API object of key/value pairs in the native language. A key/value pair object is known by many different names, depending upon the platform we're writing for: HashMap, Structure, Object, Dictionary, or Associative Array.
For example, in PHP you might write something like this:

$API = array(
    'Authors' => array(
        'methods' => array(
            'GetAll' => array('len' => 0),
            'add'    => array('len' => 1),
            'update' => array('len' => 1)
        )
    )
);

Look familiar? It should, in some way, as it's very similar to a JavaScript object. The same basic structure is true for our next two methods of configuration as well.

JSON and XML

For this configuration, we can pass in a basic JSON configuration of our API:

{
    Authors: {
        methods: {
            GetAll: { len: 0 },
            add:    { len: 1 },
            update: { len: 1 }
        }
    }
}

Or we could return an XML configuration object:

<Authors>
    <methods>
        <method name="GetAll" len="0" />
        <method name="add" len="1" />
        <method name="update" len="1" />
    </methods>
</Authors>

All of these forms give us the same basic outcome, providing a basic definition of the server-side classes/objects to be exposed for use with our Ext applications. But each of these methods requires us to build the configuration essentially by hand. Some server-side options make it a little easier.

Metadata

A few server-side technologies allow us to add metadata to class and function definitions, which we can then introspect at runtime to create our configurations. The following example demonstrates this by adding additional metadata to a ColdFusion component (CFC):

<cfcomponent name="Authors" ExtDirect="true">
    <cffunction name="GetAll" ExtDirect="true">
        <cfreturn true />
    </cffunction>
    <cffunction name="add" ExtDirect="true">
        <cfargument name="author" />
        <cfreturn true />
    </cffunction>
    <cffunction name="update" ExtDirect="true">
        <cfargument name="author" />
        <cfreturn true />
    </cffunction>
</cfcomponent>

This is a very powerful method for creating our configuration, as it means adding a single name/value attribute (ExtDirect="true") to any object and function we want to make available to our Ext application. The ColdFusion server is able to introspect this metadata at runtime, passing the configuration object back to our Ext application for use.

Stack deconstruction: configuration

The example ColdFusion component provided with the DirectCFM stack is pretty basic, so we'll write one slightly more detailed to illustrate the configuration. ColdFusion has a facility for attaching additional metadata to classes and methods, so we'll use the fourth configuration method, Metadata. We'll start off by creating the Authors.cfc class:

<cfcomponent name="Authors" ExtDirect="true">
</cfcomponent>

Next we'll create our GetAll method for returning all the authors in the database:

<cffunction name="GetAll" ExtDirect="true">
    <cfset var q = "" />
    <cfquery name="q" datasource="cfbookclub">
        SELECT AuthorID, FirstName, LastName
        FROM Authors
        ORDER BY LastName
    </cfquery>
    <cfreturn q />
</cffunction>

We're leaving out basic error handling, but these are the essentials. The classes and methods we want to make available will all contain the additional metadata.

Building your API

Now that we've explored how to create a configuration at the server, we need to take the next step by passing that configuration to our Ext application. We do this by writing a server-side template that will output our JavaScript configuration.
Yes, we'll actually dynamically produce a JavaScript include, calling the server-side template directly from within our <script> tag:

<script src="Api.cfm"></script>

How we write our server-side file really depends on the platform, but ultimately we just want it to return a block of JavaScript (just like calling a .js file) containing our API configuration description. The configuration will appear as part of the actions attribute, but we must also pass the url of our Router, the type of connection, and our namespace. That API return might look something like this:

Ext.ns("com.cc");
com.cc.APIDesc = {
    "url": "/remote/Router.cfm",
    "type": "remoting",
    "namespace": "com.cc",
    "actions": {
        "Authors": [{
            "name": "GetAll",
            "len": 0
        }, {
            "name": "add",
            "len": 1
        }, {
            "name": "update",
            "len": 1
        }]
    }
};

This now exposes our server-side configuration to our Ext application.

Stack deconstruction: API

The purpose here is to create a JavaScript document, dynamically, from your configuration. Earlier we defined the configuration via metadata; the DirectCFM API now has to convert that metadata into JavaScript. The first step is including Api.cfm in a <script> tag on the page, but we need to know what's going on "under the hood."

Api.cfm:

<!--- Configure API Namespace and Description variable names --->
<cfset args = StructNew() />
<cfset args['ns'] = "com.cc" />
<cfset args['desc'] = "APIDesc" />
<cfinvoke component="Direct" method="getAPIScript"
          argumentcollection="#args#" returnVariable="apiScript" />
<cfcontent reset="true" />
<cfoutput>#apiScript#</cfoutput>

Here we set a few variables that will then be used in a method call. The getAPIScript method of the Direct.cfc class constructs our API from metadata.

Direct.cfc getAPIScript() method:

<cffunction name="getAPIScript">
    <cfargument name="ns" />
    <cfargument name="desc" />
    <cfset var totalCFCs = '' />
    <cfset var cfcName = '' />
    <cfset var CFCApi = '' />
    <cfset var fnLen = '' />
    <cfset var Fn = '' />
    <cfset var currFn = '' />
    <cfset var newCfComponentMeta = '' />
    <cfset var script = '' />
    <cfset var jsonPacket = StructNew() />
    <cfset jsonPacket['url'] = variables.routerUrl />
    <cfset jsonPacket['type'] = variables.remotingType />
    <cfset jsonPacket['namespace'] = ARGUMENTS.ns />
    <cfset jsonPacket['actions'] = StructNew() />
    <cfdirectory action="list" directory="#expandPath('.')#"
                 name="totalCFCs" filter="*.cfc" recurse="false" />
    <cfloop query="totalCFCs">
        <cfset cfcName = ListFirst(totalCFCs.name, '.') />
        <cfset newCfComponentMeta = GetComponentMetaData(cfcName) />
        <cfif StructKeyExists(newCfComponentMeta, "ExtDirect")>
            <cfset CFCApi = ArrayNew(1) />
            <cfset fnLen = ArrayLen(newCFComponentMeta.Functions) />
            <cfloop from="1" to="#fnLen#" index="i">
                <cfset currFn = newCfComponentMeta.Functions[i] />
                <cfif StructKeyExists(currFn, "ExtDirect")>
                    <cfset Fn = StructNew() />
                    <cfset Fn['name'] = currFn.Name />
                    <cfset Fn['len'] = ArrayLen(currFn.Parameters) />
                    <cfif StructKeyExists(currFn, "ExtFormHandler")>
                        <cfset Fn['formHandler'] = true />
                    </cfif>
                    <cfset ArrayAppend(CFCApi, Fn) />
                </cfif>
            </cfloop>
            <cfset jsonPacket['actions'][cfcName] = CFCApi />
        </cfif>
    </cfloop>
    <cfoutput><cfsavecontent variable="script">Ext.ns('#arguments.ns#');#arguments.ns#.#desc# = #SerializeJson(jsonPacket)#;</cfsavecontent></cfoutput>
    <cfreturn script />
</cffunction>

The getAPIScript method sets a few variables (including the 'actions' struct), pulls a listing of all ColdFusion components from the directory, loops over that listing, and finds any components containing "ExtDirect" in their root metadata. For every component that does contain that metadata, it then loops over each method, finds methods with "ExtDirect" in the function metadata, and creates a structure with the function name and number of arguments, which is then added to an array of methods. When all methods have been introspected, the array of methods is added to the 'actions' struct. Once all ColdFusion components have been introspected, the entire packet is serialized into JSON and returned to Api.cfm for output. One item to note is that the script, when introspecting method metadata, also looks for an "ExtFormHandler" attribute. If it finds the attribute, it includes it in the method struct before placing the struct in the 'actions' collection.
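The article stops at producing the descriptor, but it may help to see roughly how such a descriptor is consumed on the client. The following is a minimal sketch, assuming the com.cc.APIDesc descriptor and Authors action generated above; the callback body is illustrative only. In Ext JS 3, registering the descriptor with Ext.Direct creates a client-side stub for every method listed under "actions":

<script src="Api.cfm"></script>
<script>
Ext.onReady(function() {
    // Register the generated descriptor; Ext.Direct builds a stub
    // method for every entry in the "actions" block.
    Ext.Direct.addProvider(com.cc.APIDesc);

    // Call the remote GetAll method through its stub. The callback
    // receives the server's result plus an event object describing
    // the exchange; e.status is false if the call failed.
    com.cc.Authors.GetAll(function(result, e) {
        if (e.status) {
            // Do something with the returned author data here.
            alert('Authors received');
        }
    });
});
</script>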

Authentication and Authorization in MODx

Packt
20 Oct 2009
1 min read
It is vital to keep the distinction between authentication and authorization in mind to be able to understand the complexities explained in this article. You will also learn how MODx allows grouping of documents, users, and permissions.

Create web users

Let us start by creating a web user. Web users are users who can access restricted document groups in the website frontend; they do not have Manager access. Web users identify themselves at login by using login forms: they are allowed to log in from the user page, but they cannot log in using the Manager interface. To create a web user, perform the following steps:

1. Click on the Web Users menu item in the Security menu.
2. Click on New Web User.
3. Fill in the fields with the following information:

   Username: samira
   Password: samira123
   Email Address: xyz@configurelater.com

Yearly Holiday List Calendar Developed using jQuery, AJAX, XML and CSS3

Packt
01 Feb 2011
5 min read
About the Holiday List Calendar

This widget will help you know the list of holidays in various countries. In this example, I have listed holidays for only two countries, namely India and the US. You can use this widget on your websites or blogs to tell your readers about holidays and, if necessary, their importance.

Adding jQuery to your page

Download the latest version of jQuery from the jQuery site; it can then be added as a reference to your web pages. You can reference a local copy of jQuery, after downloading it, with a <script> tag in the page, or you can directly reference the remote copy from jQuery or the Google Ajax API.

Prerequisite knowledge

In order to understand the code, one should have some knowledge of AJAX concepts and XML structure, basic knowledge of HTML, advanced knowledge of CSS3 and, most importantly, an advanced level of jQuery coding.

Ingredients used

- jQuery [advanced level]
- CSS3
- HTML
- XML
- Photoshop [used for the UI design]

HTML code

<div id="calendar-container">
    <div class="nav-container">
        <span>Year<br/><select id="selectYear"></select></span>
        <span class="last">Month<br />
            <a href="#" id="prevBtn" class="button gray"><</a>
            <select id="selectMonth"></select>
            <a href="#" id="nextBtn" class="button gray">></a></span>
    </div>
    <div class="data-container"></div>
</div>

XML structure

<?xml version="1.0" encoding="ISO-8859-1"?>
<calendar>
    <year whichyear="2010" id="1">
        <month name="January" id="1">
            <country name="India">
                <holidayList date="Jan 01st" day="Friday"><![CDATA[Sample Data]]></holidayList>
                <holidayList date="Jan 14th" day="Friday"><![CDATA[Sample Data]]></holidayList>
                <holidayList date="Jan 26th" day="Wednesday"><![CDATA[Sample Data]]></holidayList>
            </country>
            <country name="US">
                <holidayList date="Jan 01st" day="Saturday"><![CDATA[Sample Data]]></holidayList>
                <holidayList date="Jan 17th" day="Monday"><![CDATA[Sample Data]]></holidayList>
            </country>
        </month>
        <month name="February" id="2">
            ---------------------
        </month>
    </year>
</calendar>

CSS code

body { margin: 0; padding: 0; font-family: "Lucida Grande", "Lucida Sans", sans-serif; font-size: 100%; background: #333333; }
#calendar-container { width: 370px; padding: 5px; border: 1px solid #bcbcbc; margin: 0 auto; background-color: #cccccc; -webkit-border-radius: .5em; -moz-border-radius: .5em; border-radius: .5em; -webkit-box-shadow: 0 1px 4px rgba(0,0,0,.2); -moz-box-shadow: 0 1px 4px rgba(0,0,0,.2); box-shadow: 0 1px 4px rgba(0,0,0,.2); }
.nav-container { padding: 5px; }
.nav-container span { display: inline-block; text-align: left; padding-right: 15px; border-right: 1px solid #828282; margin-right: 12px; text-shadow: 1px 1px 1px #ffffff; font-weight: bold; }
.nav-container span.last { padding-right: 0px; border-right: none; margin-right: 0px; }
.data-container { font-family: Arial, Helvetica, sans-serif; font-size: 14px; }
#selectMonth { width: 120px; }
.data-container ul { margin: 0px; padding: 0px; }
.data-container ul li { list-style: none; padding: 5px; }
.data-container ul li.list-header { border-bottom: 1px solid #bebebe; border-right: 1px solid #bebebe; background-color: #eae9e9; -webkit-border-radius: .2em .2em 0 0; -moz-border-radius: .2em .2em 0 0; border-radius: .3em .3em 0 0; background: -moz-linear-gradient(center top, #eae9e9, #d0d0d0) repeat scroll 0 0 transparent; margin-top: 5px; text-shadow: 1px 1px 1px #ffffff; }
.data-container ul li.padding-left-10px { background-color: #EEEEEE; border-bottom: 1px solid #BEBEBE; border-right: 1px solid #BEBEBE; font-size: 12px; }

/* button */
.button { font-size: 25px; font-weight: 700; display: inline-block; zoom: 1; /* zoom and *display = ie7 hack for display:inline-block */ *display: inline; vertical-align: bottom; margin: 0 2px; outline: none; cursor: pointer; text-align: center; text-decoration: none; text-shadow: 1px 1px 1px #555555; padding: 0px 10px 3px 10px; -webkit-border-radius: .2em; -moz-border-radius: .2em; border-radius: .2em; -webkit-box-shadow: 0 1px 2px rgba(0,0,0,.2); -moz-box-shadow: 0 1px 2px rgba(0,0,0,.2); box-shadow: 0 1px 2px rgba(0,0,0,.2); }
.button:hover { text-decoration: none; }
.button:active { position: relative; top: 1px; }
select { -webkit-border-radius: .2em; -moz-border-radius: .2em; border-radius: .2em; -webkit-box-shadow: 0 1px 2px rgba(0,0,0,.2); -moz-box-shadow: 0 1px 2px rgba(0,0,0,.2); box-shadow: 0 1px 2px rgba(0,0,0,.2); padding: 5px; font-size: 16px; border: 1px solid #4b4b4b; }

/* color styles */
.gray { color: #e9e9e9; border: solid 1px #555; background: #6e6e6e; background: -webkit-gradient(linear, left top, left bottom, from(#888), to(#575757)); background: -moz-linear-gradient(top, #888, #575757); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#888888', endColorstr='#575757'); }
.gray:hover { background: #616161; background: -webkit-gradient(linear, left top, left bottom, from(#757575), to(#4b4b4b)); background: -moz-linear-gradient(top, #757575, #4b4b4b); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#757575', endColorstr='#4b4b4b'); }
.gray:active { color: #afafaf; background: -webkit-gradient(linear, left top, left bottom, from(#575757), to(#888)); background: -moz-linear-gradient(top, #575757, #888); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#575757', endColorstr='#888888'); }
.grayDis { color: #999999; background: -webkit-gradient(linear, left top, left bottom, from(#575757), to(#888)); background: -moz-linear-gradient(top, #575757, #888); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#575757', endColorstr='#888888'); }
h2 { color: #ffffff; text-align: center; margin: 10px 0px; }
#header { text-align: center; font-size: 1em; font-family: "Helvetica Neue", Helvetica, sans-serif; padding: 1px; margin: 10px 0px 80px 0px; background-color: #575757; }
.ad { width: 728px; height: 90px; margin: 50px auto 10px; }
#footer { width: 340px; margin: 0 auto; }
#footer p { color: #ffffff; font-size: .70em; margin: 0; }
#footer a { color: #15ADD1; }
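The article lists jQuery and AJAX among the ingredients but does not reproduce the script itself. As a rough sketch of how the XML above might be fetched and rendered into the .data-container element (the holidays.xml file name and the rendering logic are assumptions, not the author's code):

$(function() {
    $.ajax({
        type: 'GET',
        url: 'holidays.xml', // hypothetical file holding the XML structure above
        dataType: 'xml',
        success: function(xml) {
            var items = [];
            // Walk the year > month > country > holidayList hierarchy
            // for the first month only, as a starting point.
            $(xml).find('month[id="1"]').first().find('country').each(function() {
                items.push('<li class="list-header">' + $(this).attr('name') + '</li>');
                $(this).find('holidayList').each(function() {
                    items.push('<li class="padding-left-10px">' + $(this).attr('date') +
                        ' (' + $(this).attr('day') + '): ' + $(this).text() + '</li>');
                });
            });
            $('.data-container').html('<ul>' + items.join('') + '</ul>');
        }
    });
});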

Making Progress with Menus and Toolbars using Ext JS 3.0: Part 1

Packt
19 Nov 2009
7 min read
Placing buttons in a toolbar

You can embed different types of components in a toolbar. This topic teaches you how to build a toolbar that contains image-only, text-only, and image/text buttons, a toggle button, and a combo box.

How to do it

1. Create the styles for the toolbar items:

#tbar { width: 600px; }
.icon-data { background: url(img/data.png) 0 no-repeat !important; }
.icon-chart { background: url(img/pie-chart.png) 0 no-repeat !important; }
.icon-table { background: url(img/table.png) 0 no-repeat !important; }

2. Define a data store for the combo box:

Ext.onReady(function() {
    Ext.QuickTips.init();
    var makesStore = new Ext.data.ArrayStore({
        fields: ['make'],
        data: makes // from cars.js
    });

3. Create a toolbar and define the buttons and combo box inline:

    var tb = new Ext.Toolbar({
        renderTo: 'tbar',
        items: [{
            iconCls: 'icon-data',
            tooltip: 'Icon only button',
            handler: clickHandler
        }, '-', {
            text: 'Text Button'
        }, '-', {
            text: 'Image/Text Button',
            iconCls: 'icon-chart'
        }, '-', {
            text: 'Toggle Button',
            iconCls: 'icon-table',
            enableToggle: true,
            toggleHandler: toggleHandler,
            pressed: true
        }, '->', 'Make: ', {
            xtype: 'combo',
            store: makesStore,
            displayField: 'make',
            typeAhead: true,
            mode: 'local',
            triggerAction: 'all',
            emptyText: 'Select a make...',
            selectOnFocus: true,
            width: 135
        }]
    });

4. Finally, create handlers for the push button and the toggle button:

    function clickHandler(btn) {
        Ext.Msg.alert('clickHandler', 'button pressed');
    }
    function toggleHandler(item, pressed) {
        Ext.Msg.alert('toggleHandler', 'toggle pressed');
    }

How it works

The buttons and the combo box are declared inline. While the standard button uses a click handler through the handler config option, the toggle button requires the toggleHandler config option. The button icons are set with the iconCls option, using the classes declared in the first step of the topic. As an example, note the use of the Toolbar.Separator instances in this fragment:

}, '-', {
    text: 'Text Button'
}, '-', {
    text: 'Image/Text Button',
    iconCls: 'icon-chart'
}, '-', {

Using '-' to declare a Toolbar.Separator is equivalent to using xtype: 'tbseparator'. Similarly, using '->' to declare a Toolbar.Fill is equivalent to using xtype: 'tbfill'.

See also

The next recipe, Working with the new ButtonGroup component, explains how to use the ButtonGroup class to organize a series of related buttons.

Working with the new ButtonGroup component

A welcome addition to Ext JS is the ability to organize buttons in groups.
Here's how to create a panel with a toolbar that contains two button groups:

How to do it

1. Create the styles for the buttons:

#tbar { width: 600px; }
.icon-data { background: url(img/data.png) 0 no-repeat !important; }
.icon-chart { background: url(img/pie-chart.png) 0 no-repeat !important; }
.icon-table { background: url(img/table.png) 0 no-repeat !important; }
.icon-sort-asc { background: url(img/sort-asc.png) 0 no-repeat !important; }
.icon-sort-desc { background: url(img/sort-desc.png) 0 no-repeat !important; }
.icon-filter { background: url(img/funnel.png) 0 no-repeat !important; }

2. Define a panel that will host the toolbar:

Ext.onReady(function() {
    var pnl = new Ext.Panel({
        title: 'My Application',
        renderTo: 'pnl-div',
        height: 300,
        width: 500,
        bodyStyle: 'padding:10px',
        autoScroll: true,

3. Define a toolbar inline and create two button groups:

        tbar: [{
            xtype: 'buttongroup',
            title: 'Data Connections',
            columns: 1,
            defaults: { scale: 'small' },
            items: [{
                xtype: 'button', text: 'Data Sources', iconCls: 'icon-data'
            }, {
                xtype: 'button', text: 'Tables', iconCls: 'icon-table'
            }, {
                xtype: 'button', text: 'Reports', iconCls: 'icon-chart'
            }]
        }, {
            xtype: 'buttongroup',
            title: 'Sort & Filter',
            columns: 1,
            defaults: { scale: 'small' },
            items: [{
                xtype: 'button', text: 'Sort Ascending', iconCls: 'icon-sort-asc'
            }, {
                xtype: 'button', text: 'Sort Descending', iconCls: 'icon-sort-desc'
            }, {
                xtype: 'button', text: 'Filter', iconCls: 'icon-filter'
            }]
        }]

How it works

Using a button group consists of adding a step to the process of adding buttons, or other items, to a toolbar. Instead of adding the items directly to the toolbar, you first define the group and then add the items to the group:

        tbar: [{
            xtype: 'buttongroup',
            title: 'Data Connections',
            columns: 1,
            defaults: { scale: 'small' },
            items: [{
                xtype: 'button', text: 'Data Sources', iconCls: 'icon-data'
            }, {
                xtype: 'button', text: 'Tables', iconCls: 'icon-table'
            }, {
                xtype: 'button', text: 'Reports', iconCls: 'icon-chart'
            }]
        }

See also

The previous recipe, Placing buttons in a toolbar, illustrates how you can embed different types of components in a toolbar.

Placing menus in a toolbar

In this topic, you will see how simple it is to use menus inside a toolbar.
The toolbar that we will build contains a standard button and a split button, both with menus:

How to do it

1. Create the styles for the buttons:

#tbar { width: 600px; }
.icon-data { background: url(img/data.png) 0 no-repeat !important; }
.icon-chart { background: url(img/pie-chart.png) 0 no-repeat !important; }
.icon-table { background: url(img/table.png) 0 no-repeat !important; }

2. Create a click handler for the menus:

Ext.onReady(function() {
    Ext.QuickTips.init();
    var clickHandler = function(action) {
        alert('Menu clicked: "' + action + '"');
    };

3. Create a window to host the toolbar:

    var wnd = new Ext.Window({
        title: 'Toolbar with menus',
        closable: false,
        height: 300,
        width: 500,
        bodyStyle: 'padding:10px',
        autoScroll: true,

4. Define the window's toolbar inline, and add the buttons and their respective menus:

        tbar: [{
            text: 'Button with menu',
            iconCls: 'icon-table',
            menu: [{
                text: 'Menu 1',
                handler: clickHandler.createCallback('Menu 1'),
                iconCls: 'icon-data'
            }, {
                text: 'Menu 2',
                handler: clickHandler.createCallback('Menu 2'),
                iconCls: 'icon-data'
            }]
        }, '-', {
            xtype: 'splitbutton',
            text: 'Split button with menu',
            iconCls: 'icon-chart',
            handler: clickHandler.createCallback('Split button with menu'),
            menu: [{
                text: 'Menu 3',
                handler: clickHandler.createCallback('Menu 3'),
                iconCls: 'icon-data'
            }, {
                text: 'Menu 4',
                handler: clickHandler.createCallback('Menu 4'),
                iconCls: 'icon-data'
            }]
        }]
    });

5. Finally, show the window:

    wnd.show();

How it works

This is a simple procedure. Note how the split button is declared with the xtype: 'splitbutton' config option. Also, observe how the createCallback() function is used to invoke the clickHandler() function with the correct arguments for each button.

See also

The next recipe, Commonly used menu items, shows the different items that can be used in a menu.

Commonly used menu items

To show you the different items that can be used in a menu, we will build a menu that contains radio items, a checkbox menu, a date menu (the Pick a Date item displays a date picker), and a color menu (the Pick a Color item displays a color picker).

How to do it

1. Create a handler for the checkbox menu:

Ext.onReady(function() {
    Ext.QuickTips.init();
    var onCheckHandler = function(item, checked) {
        Ext.Msg.alert('Menu checked', item.text + ', checked: ' +
            (checked ? 'checked' : 'unchecked'));
    };

2. Define a date menu:

    var dateMenu = new Ext.menu.DateMenu({
        handler: function(dp, date) {
            Ext.Msg.alert('Date picker', date);
        }
    });

3. Define a color menu:

    var colorMenu = new Ext.menu.ColorMenu({
        handler: function(cm, color) {
            Ext.Msg.alert('Color picker', String.format('You picked {0}.', color));
        }
    });

4. Create a main menu.
Now add the date and color menus, as well as a few inline menus:

    var menu = new Ext.menu.Menu({
        id: 'mainMenu',
        items: [{
            text: 'Radio Options',
            menu: {
                items: ['<b>Choose a Theme</b>', {
                    text: 'Aero Glass', checked: true, group: 'theme',
                    checkHandler: onCheckHandler
                }, {
                    text: 'Vista Black', checked: false, group: 'theme',
                    checkHandler: onCheckHandler
                }, {
                    text: 'Gray Theme', checked: false, group: 'theme',
                    checkHandler: onCheckHandler
                }, {
                    text: 'Default Theme', checked: false, group: 'theme',
                    checkHandler: onCheckHandler
                }]
            }
        }, {
            text: 'Pick a Date', iconCls: 'calendar', menu: dateMenu
        }, {
            text: 'Pick a Color', menu: colorMenu
        }, {
            text: 'The last menu', checked: true, checkHandler: onCheckHandler
        }]
    });

5. Create a toolbar and add the main menu:

    var tb = new Ext.Toolbar({
        renderTo: 'tbar',
        items: [{ text: 'Menu Items', menu: menu }]
    });

How it works

After defining the date and color pickers, the main menu is built. This menu contains the pickers, as well as a few more items that are defined inline. To display checked items (see the checked: true config option) with a radio button instead of a checkbox, the menu items need to be defined using the group config option. This is how the theme selector menu is built:

    menu: {
        items: ['<b>Choose a Theme</b>', {
            text: 'Aero Glass', checked: true, group: 'theme',
            checkHandler: onCheckHandler
        }, {
            text: 'Vista Black', checked: false, group: 'theme',
            checkHandler: onCheckHandler

See also

The Placing buttons in a toolbar recipe (covered earlier in this article) illustrates how you can embed different types of components in a toolbar. Continue reading in Making Progress with Menus and Toolbars using Ext JS 3.0: Part 2.
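If createCallback() is new to you: Ext JS 3 extends Function.prototype with it, returning a wrapper that always invokes the original function with the arguments fixed in advance. A rough plain-JavaScript equivalent of the behavior the recipes rely on (the helper name here is mine, not Ext's):

// Roughly what fn.createCallback(a, b) produces: a function that
// ignores its own arguments and calls fn(a, b).
function createCallbackLike(fn) {
    var args = Array.prototype.slice.call(arguments, 1);
    return function() {
        return fn.apply(window, args);
    };
}

var handler = createCallbackLike(clickHandler, 'Menu 1');
handler(); // alerts: Menu clicked: "Menu 1"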

Being Cross-platform with haXe

Packt
26 Jul 2011
10 min read
What is cross-platform in the library

The standard library includes a lot of classes and methods that you will need most of the time in a web application. So, let's have a look at the different features. What we call the standard library is simply the set of objects that is available when you install haXe.

Object storage

The standard library offers several structures in which you can store objects. In haXe, you will see that, compared to many other languages, there are not many structures to store objects. This choice has been made so that developers can get on with what they need to do with their objects, instead of dealing with a lot of structures that they have to cast all the time. The basic structures are array, list, and hash, and each has a different use:

- Objects stored in an array can be directly accessed by using their index
- Objects in a list are linked together in an ordered way
- Objects in a hash are tied to what are called "keys", but instead of being Ints, keys are Strings

There is also the IntHash structure, a hash-table using Int values as keys. These structures can be used seamlessly on all targets. This is also why the hash only supports Strings as keys: some platforms would otherwise require a complex class, which would impair performance, to support any kind of object as a key.

The Std class

The Std class contains methods for basic tasks, such as parsing a Float or an Int from a String, transforming a Float into an Int, or obtaining a randomly generated number. This class can be used on all targets without any problems.

The haxe package

The haxe package (notice the lower-case x) contains a lot of classes specific to haXe, such as the haxe.Serializer class, which serializes any object to the haXe serialization format, and its twin class haxe.Unserializer, which unserializes objects (that is, "reconstructs" them). This is basically a package offering extended cross-platform functionality. The classes in the haxe package can be used on all platforms most of the time.

The haxe.remoting package

The haxe package also contains a remoting package with several classes for using the haXe remoting protocol. This protocol allows programs that support it to communicate easily. Some classes in this package are only available for certain targets because of those targets' limitations: for example, a browser environment won't allow one to open a TCP socket, and Flash won't allow one to create a server. Remoting will be discussed later, as it is a very interesting feature of haXe.

The haxe.rtti package

There's also the rtti package. RTTI means Run Time Type Information: a class can hold information about itself, such as what fields it contains and their declared types. This can be really interesting in some cases, such as if you want to create automatically generated editors for some objects.

The haxe.Http class

The haxe.Http class is one you will certainly use quite often. It allows you to make HTTP requests and retrieve the answer pretty easily, without having to deal with the HTTP protocol yourself. If you don't know what HTTP is, you should just know that it is the protocol used between web browsers and servers. On a side note, the ability to make HTTPS requests depends on the platform.
For example, at the moment, Neko doesn't provide any way to make one, whereas it's not a problem at all on JS because this functionality is provided by the browser. Also, some methods in this class are only available on some platforms. That's why, if you are writing a cross-platform library or program, you should pay attention to which methods you can use on all the platforms you want to target. You should note that on JS, the haxe.Http class uses XMLHttpRequest objects and, as such, suffers from their security restrictions, the most important one being the same-domain policy. This is something that you should keep in mind when thinking about your solution's architecture.

You can make a simple synchronous request by writing the following:

var answer = Http.requestUrl("http://www.benjamindasnois.com");

It is also possible to make asynchronous requests, as follows:

var myRequest = new Http("http://www.benjamindasnois.com");
myRequest.onData = function (d : String) {
    Lib.println(d);
}
myRequest.request(false);

This method also allows you to get more information about the answer, such as the headers and the return code. The following is an example displaying the answer's headers:

import haxe.Http;
#if neko
import neko.Lib;
#elseif php
import php.Lib;
#end

class Main {
    static function main() {
        var myRequest = new Http("http://www.benjamindasnois.com");
        myRequest.onData = function (d : String) {
            for (k in myRequest.responseHeaders.keys()) {
                Lib.println(k + " : " + myRequest.responseHeaders.get(k));
            }
        };
        myRequest.request(false);
    }
}

The following is what it displays:

X-Cache : MISS from rack1.tumblr.com
X-Cache-Lookup : MISS from rack1.tumblr.com:80
Via : 1.0 rack1.tumblr.com:80 (squid/2.6.STABLE6)
P3P : CP="ALL ADM DEV PSAi COM OUR OTRo STP IND ONL"
Set-Cookie : tmgioct=h6NSbuBBgVV2IH3qzPEPPQLg; expires=Thu, 02-Jul-2020 23:30:11 GMT; path=/; httponly
ETag : f85901c583a154f897ba718048d779ef
Link : <http://assets.tumblr.com/images/default_avatar_16.gif>; rel=icon
Vary : Accept-Encoding
Content-Type : text/html; charset=UTF-8
Content-Length : 30091
Server : Apache/2.2.3 (Red Hat)
Date : Mon, 05 Jul 2010 23:31:10 GMT
X-Tumblr-Usec : D=78076
X-Tumblr-User : pignoufou
X-Cache-Auto : hit
Connection : close

Regular expressions and XML handling

haXe offers a cross-platform API for regular expressions and XML that can be used on most targets.

Regular expressions

The regular expression API is implemented as the EReg class. You can use this class on any platform to match a regular expression against a string, split a string according to a regular expression, or do some replacement. This class is available on all targets, but on Flash it is only supported from Flash 9 onwards. The following is an example of a simple function that returns true or false depending on whether a regular expression matches the string given as a parameter:

public static function matchesHello(str : String) : Bool {
    var helloRegExp = ~/.*hello.*/;
    return helloRegExp.match(str);
}

One can also replace what is matched by the regular expression and return the resulting value. The following simply replaces the word "hello" with "bye"; it's a bit of an overkill to use a regular expression for that, and you will find more useful applications of this capability when writing real programs. Now, at least, you will know how to do it:

public static function replaceHello(str : String) : String {
    var helloRegExp = ~/hello/;
    helloRegExp.match(str);
    return helloRegExp.replace(str, "bye");
}

XML handling

The XML class is available on all platforms. It allows you to parse and emit XML the same way on many targets.
Unfortunately, it is implemented using regular expressions on most platforms, and can therefore become quite slow on big files. Such problems have already been raised on the JS target, particularly on some browsers; keep in mind that different browsers perform completely differently. On the Flash platform, this API now uses the internal Flash XML libraries, which results in some incompatibilities. The following is an example of a simple XML document:

<pages>
    <page id="page1"/>
    <page id="page2"/>
</pages>

Now, the haXe code to generate it:

var xmlDoc : Xml;
var xmlRoot : Xml;
xmlDoc = Xml.createDocument();        // Create the document
xmlRoot = Xml.createElement("pages"); // Create the root node
xmlDoc.addChild(xmlRoot);             // Add the root node to the document

var page1 : Xml;
page1 = Xml.createElement("page");    // Create the first page node
page1.set("id", "page1");
xmlRoot.addChild(page1);              // Add it to the root node

var page2 : Xml;
page2 = Xml.createElement("page");
page2.set("id", "page2");
xmlRoot.addChild(page2);

trace(xmlDoc.toString());             // Print the generated XML

Input and output

Input and output are certainly the most important parts of an application; indeed, without them, an application is almost useless. If you think about how the different targets supported by haXe work and how the user may interact with them, you will quickly come to the conclusion that they use different ways of interacting with the user:

- JavaScript in the browser uses the DOM
- Flash has its own API to draw on screen and handle events
- Neko uses the classic input/output streams (stdin, stdout, stderr), and so do PHP and C++

So, we have three main interfaces: the DOM, Flash, and classic streams.

The DOM interface

The implementation of the DOM interface is available in the js package. This interface is implemented through typedefs. Unfortunately, the API doesn't provide any way to abstract the differences between browsers, and you will have to deal with them yourself in most cases. This API simply tells the compiler what objects exist in the DOM environment; so, if you know how to manipulate the DOM in JavaScript, you will be able to manipulate it in haXe. One thing you should know is that the document object can be accessed through js.Lib.document. The js package is accessible only when compiling to JS.

The Flash interface

The Flash interface is implemented in the flash and flash9 packages, in a way similar to how the js package implements the DOM interface. You may wonder why there are two packages; the reason is pretty simple: the Flash APIs before and after Flash 9 are different. You also have to pay attention to the fact that, when compiling to Flash 9, the flash9 package is accessed through the flash path and not through flash9. Also, at the time of writing, the documentation for the flash and flash9 packages on haxe.org is almost non-existent; if you need documentation, you can refer to the official Flash documentation.

The standard input/output interface

The standard input/output interface refers to the three basic streams that exist on most systems:

- stdin (most of the time, the keyboard)
- stdout (the standard output, which is most of the time the console or, when running as a web application, the stream sent to the client)
- stderr (the standard error output, which is most of the time directed to the console or a log file)

Neko, PHP, and C++ all make use of this kind of interface.
Now, there are two pieces of news for you: one good and one bad. The bad news is that the API for each platform is located in a platform-specific package; for example, when targeting Neko, you will have to use the neko package, which is not available in PHP or C++. The good news is that there is a workaround. Well, indeed, there are three. You just have to continue reading through this article and I'll tell you how to handle that.
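The article saves the workarounds for later, but the Http example above already hints at one of them: haXe's conditional compilation. A minimal sketch of writing to standard output across Neko, PHP, and C++ from a single code base (target flags per haXe 2; treat the cpp branch as an assumption if your haXe version predates that target):

class Main {
    static function main() {
        // Only the branch matching the compilation target is compiled in,
        // so each platform-specific package is referenced only when valid.
        #if neko
        neko.Lib.println("Hello from Neko");
        #elseif php
        php.Lib.println("Hello from PHP");
        #elseif cpp
        cpp.Lib.println("Hello from C++");
        #end
    }
}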

Overview of CherryPy - A Web Application Server (Part2)

Packt
27 Oct 2009
5 min read
Library

CherryPy comes with a set of modules covering common tasks when building a web application, such as session management, static resource serving, encoding handling, and basic caching.

The autoreload feature

CherryPy is a long-running Python process, meaning that if we modify a Python module of the application, the change will not be propagated into the existing process. Since stopping and restarting the server manually can be a tedious task, the CherryPy team has included an autoreload module that restarts the process as soon as it detects a modification to a Python module imported by the application. This feature is handled via configuration settings. If you need the autoreload module enabled even in production, you would set it up as below. Note the engine.autoreload_frequency option, which sets the number of seconds the autoreload engine waits before checking for new changes; it defaults to one second if not present.

[global]
server.environment = "production"
engine.autoreload_on = True
engine.autoreload_frequency = 5

Autoreload is not strictly a module, but we mention it here as it is a common feature offered by the library.

The caching module

Caching is an important aspect of any web application, as it reduces the load and stress on the different servers in action: HTTP, application, and database servers. In spite of being highly correlated to the application itself, generic caching tools such as the ones provided by this module can help achieve decent improvements in your application's performance. The CherryPy caching module works at the HTTP server level, in the sense that it caches the generated output to be sent to the user agent, and retrieves a cached resource based on a predefined key, which defaults to the complete URL leading to that resource. The cache is held in the server's memory and is therefore lost when the server stops. Note that you can also pass your own caching class to handle the underlying process differently while keeping the same high-level interface.

The coverage module

When building an application, it is often beneficial to understand the path taken by the application based on the input it processes. This helps to determine potential bottlenecks and to see whether the application runs as expected. The coverage module provided by CherryPy does this, and provides a friendly, browsable output showing the lines of code executed during the run. The module is one of the few that rely on a third-party package to run.

The encoding/decoding module

Publishing over the Web means dealing with the multitude of existing character encodings. At one extreme, you may only publish your own content using US-ASCII without asking for readers' feedback; at the other, you may release an application such as a bulletin board that handles any kind of charset. To help with this task, CherryPy provides an encoding/decoding module that filters the input and output content based on server or user-agent settings.

The HTTP module

This module offers a set of classes and functions to handle HTTP headers and entities.
For example, to parse the HTTP request line and query string:

s = 'GET /note/1 HTTP/1.1'    # no query string
r = http.parse_request_line(s)
# r is now ('GET', '/note/1', '', 'HTTP/1.1')

s = 'GET /note?id=1 HTTP/1.1' # query string is id=1
r = http.parse_request_line(s)
# r is now ('GET', '/note', 'id=1', 'HTTP/1.1')

http.parseQueryString(r[2])   # returns {'id': '1'}

The module also provides a clean interface to HTTP headers. For example, say you have the following Accept header value:

accept_value = "text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"
values = http.header_elements('accept', accept_value)
print values[0].value, values[0].qvalue  # will print text/html 1.0

The httpauth module

This module provides an implementation of the basic and digest authentication algorithms, as defined in RFC 2617.

The profiler module

This module features an interface to conduct a performance check of the application.

The sessions module

The Web is built on top of a stateless protocol, HTTP, which means that requests are independent of each other. In spite of that, a user can navigate an e-commerce website with the impression that the application more or less follows him or her, much as if he or she had called the store to place an order. The session mechanism was therefore brought to the Web to allow servers to keep track of users' information. CherryPy's session module offers a straightforward interface for the application developer to store, retrieve, amend, and delete chunks of data from a session object. CherryPy comes natively with three different back-end storages for session objects:

RAM
- Advantages: efficient; accepts any type of object; no configuration needed
- Drawbacks: information is lost when the server shuts down; memory consumption can grow quickly

File system
- Advantages: persistence of the information; simple setup
- Drawbacks: file system locking can be inefficient; only serializable (via the pickle module) objects can be stored

Relational database (PostgreSQL built-in support)
- Advantages: persistence of the information; robust; scalable; can be load balanced
- Drawbacks: only serializable objects can be stored; setup is less straightforward
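As a rough illustration of how straightforward that interface is, here is a minimal CherryPy 3-style handler (a sketch, not from the article; the counter logic is purely illustrative):

import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        # Store, retrieve, and amend a chunk of data on the session object.
        count = cherrypy.session.get('count', 0) + 1
        cherrypy.session['count'] = count
        return "You have viewed this page %d times" % count

# Sessions are switched on through configuration; this uses the
# default RAM back-end storage described above.
cherrypy.quickstart(Root(), '/', {'/': {'tools.sessions.on': True}})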

Customizing WordPress Settings for SEO

Packt
27 Apr 2011
12 min read
We will begin by setting up the goals for your Internet presence and determining how best to leverage WordPress' flexibility and power for maximum benefit. We'll examine how best to determine and reach out to the specific audience for your goods or services. Different Internet models require different strategies: if your goal is instant e-commerce sales, for example, you strategize differently than if your goal is a broad-based branding campaign. We'll also examine how to determine how competitive the existing search market is, and how to develop a plan to penetrate that market.

It's important to leverage WordPress' strengths. WordPress can effortlessly help you build large, broad-based sites. It can also improve the speed and ease with which you publish new content. It serves up simple, text-based navigation menus that search engines crawl and index easily. WordPress' tagging, pingback, and trackback features help other blogs and websites find and connect with your content. For these reasons, and quite a few more, WordPress is search ready. In this article, we will look at what WordPress already does for your SEO. Of course, WordPress is designed as a blogging platform and a content management platform, not as a platform purely for ranking; we'll look at what WordPress doesn't accomplish innately and how to address that. Finally, we'll look at how WordPress communicates with search engines and blog update services. Following this article, we'll know how to plan out a new site or improve an existing one, how to gauge WordPress' innate strengths and supplant its weaknesses, and how WordPress sites get found by search engines and blog engines.

Setting goals for your business and website and getting inspiration

A dizzying variety of websites run on the WordPress platform, everything from The Wall Street Journal's blog to the photo-sharing comedy site PeopleofWalMart.com. Not every reader will have purely commercial intent in creating his or her web presence; however, all webmasters want more traffic and more visibility for their sites. With that in mind, to increase the reach, visibility, and ranking of your website, you'll want to develop your website plan based on the type of audience you are trying to reach, the type of business you run, and what your business goals are.

Analyzing your audience

You will obviously want to analyze the nature of your audience. Your website's content, its design, its features, and even the pixel width of the viewable area will depend on your audience. Is your audience senior citizens? If so, your design will need to incorporate large fonts, and you will want to keep your design to a pixel width of 800 or less: senior citizens can have difficulty reading small text, and many use older computers with 800-pixel monitors. And you can forget about integrated Twitter and Facebook feeds; most seniors aren't as tuned into those technologies as young people, and you might simply alienate your target users by including features that aren't a fit for your audience. Does your audience include purchasers of web design services? If so, be prepared to dazzle them with up-to-date design and features. Similarly, if you intend to build up your user base with viral content, you will want to incorporate social media sharing into your design.
Give some thought to the type of users you want to reach, and design your site for your audience. This exercise will go a long way in helping you build a successful site.

Analyzing your visitors' screen sizes

Have you ever wondered about the monitor sizes of the viewers of your own site? Google Analytics (www.google.com/analytics), the ubiquitous free analytics tool offered by Google, offers this capability. To use it, log on to your Google Analytics account (or sign up if you don't have one) and select the website whose statistics you wish to examine. On the left menu, select Visitors and expand the Browser Capabilities menu entry, then select Screen Resolutions. Google Analytics will offer up a table and chart of all the monitor resolutions used by your viewers.

Determining the goal of your website

The design, style, and features of your website should be dictated by your goal. If your site is to be a destination site for instant sales or sign-ups, you want a website design more in the style of a single-purpose landing page that focuses principally on conversions. A landing page is a special-purpose, conversion-focused page that appears when a potential customer clicks on an advertisement or enters through a search query. The page should display sales copy that is a logical extension of the advertisement or link, and should employ a very shallow conversion funnel. A conversion funnel tracks the series of steps that a user must take to get from the entry point on a website to the completion of a purchase or other conversion event; shallow conversion funnels have fewer steps, and deeper conversion funnels have more. When designing conversion-focused landing pages, consider whether you want to eliminate navigation choices entirely on the landing page. Logically, if you are trying to get a user to make an immediate purchase, what benefit is served by giving the user easy ways to click through to other pages? Individualized page-by-page navigation can get clunky with WordPress, so make sure your WordPress template can handle these demands.

Netflix's landing page for its DVD rental service is an expert example: navigational choices are almost entirely absent, and it employs other sophisticated conversion tools as well, such as a clear explanation of benefits, a free trial, arrows, and color that guide the reader to the conversion event.

If you sell a technical product or high-end consulting services, you rely heavily on the creation of content, and on the organization and presentation of that content, on your WordPress site. Creating a large amount of content covering broad topics in your niche will establish thought leadership that will help you draw in and retain new customers. In sites with large amounts of educational content, you'll want to make absolutely sure that your content is well organized and easy to navigate. If you will be relying on social media and other forms of viral marketing to build up your user base, you'll want to integrate social media plugins and widgets into your site. Plugins and widgets are third-party software tools that you install on your WordPress site to contribute new functionality. One popular sports site integrates the TweetMeme and Facebook Connect widgets; when users retweet or share an article, it means links, traffic, and sales. Compounded with a large amount of content, the effect can be very powerful.
Following the leaders

Once you have determined the essential framework and niche for your site, look to best-in-class websites for inspiration. Trends in design and features are changing constantly. Aim for up-to-the-minute design and features: enlightened design sells more products and services, and sophisticated features will help you convert and engage your visitors more. Likewise, ease of use will keep visitors on your website longer and keep them coming back.

For design inspiration, you can visit any one of the hundreds of website design gallery sites. These gallery sites showcase great designs in all website niches. The following design galleries feature the latest and greatest trends in web design and features (note that all of these sites run on WordPress):

- Urban Trash (www.urbantrash.net/cssgallery/): This gallery is truly one of the best and should be the first stop when seeking design inspiration.
- CSS Elite (www.csselite.com): Another of the truly high-end CSS galleries. Many fine sites are featured here.
- CSS Drive (www.cssdrive.com): CSS Drive is one of the elite class of directories, and it has many other design-related features as well.

For general inspiration on everything from website design to more specialized discussion of the best design for website elements such as sign-up boxes and footers, head to Smashing Magazine, especially its "inspiration" category (http://smashingmagazine.com/category/inspiration/). Ready-made WordPress designs by leading designers are available for purchase off the shelf at ThemeForest.net. These templates are resold to others, so they won't be exclusive to your site. A heads-up: these top-end themes are full of advanced custom features, and they might require a little effort to get them to display exactly as you want.

For landing pages, get inspiration from retail monoliths in competitive search markets. DishNetwork and Netflix have excellent landing pages, and Sears' home improvement division serves up sophisticated landing pages for services such as vinyl siding and replacement windows. With thousands of hits per day, you can bet these retail giants test and retest their landing pages periodically. You can save yourself the trouble and budget of your early-stage testing by employing the lessons that these giants have already put into practice. For navigation, usability, and site layout cues for large content-based sites, look to the blogging super-sites such as Blogs.wsj.com, POLITICO.com, Huffingtonpost.com, and Wikipedia.com.

Gauging competition in the search market

Ideally, before you launch your site, you will want to gauge the competitive marketplace. On the Web, you have two spheres of competition:

- Traditional business competition: the competition for price, quality, and service in the consumer marketplace
- Search competition: the competition for clicks, page views, conversions, user sign-ups, new visitors, returning visitors, search placement, and all the other metrics that help drive a web-based business

The obvious way to start gauging the search marketplace is to run some sample searches on the terms you believe your future customers might use when seeking out your products or services. The search leaders in your marketplace will be easy to spot; they will occupy the first six positions in a Google search. While aiming for the first six positions, don't think in terms of the first page of a Google search.
Studies show that the first five or six positions in a Google search yield 80 to 90 percent of the click-throughs. The first page of Google is a good milestone, but the highest positions on a search results page will yield significantly higher traffic than the bottom three positions on that page. Once you've identified the five or six websites that place most competitively in searches, you want to analyze what they're doing right and what you'll need to do to compete with them. Here's how to gauge the competition:

- Don't focus on the website in the number one position for a given search. That may be too lofty a goal for the short term. Look at the sites in positions 4, 5, and 6; these positions will be your initial goal, and you'll need to match or outdo these websites to earn them.
- First, determine the Google PageRank of your competitors' sites. PageRank is a generalized, but helpful, indicator of the quality and number of inbound links that your competitors' websites have earned. Install a browser plugin that shows the PageRank of any site to which you browse. For Firefox, try the SearchStatus plugin (available at http://www.quirk.biz/searchstatus/); for Chrome, use SEO Site Tools (available through the Google Chrome Extensions gallery at https://chrome.google.com/extensions). Both of these tools are free, and they display a wide array of important SEO factors for any site you visit.
- How old are the domains of your competitors' websites? Older sites tend to outrank newer sites. If you are launching a new domain, you will most likely need to outpace your older competitors in other ways, such as building more links or employing more advanced on-page optimization. Site age is a factor that can't be overcome with brains or hard work (although you can purchase an older, existing domain on the aftermarket).
- Look at the size and scale of competing websites. You'll need to at least approach the size of the smallest of your competitors to place well.
- Inspect your competitors' inbound links. Where are they getting their links from, and how many links have they accumulated? To obtain a list of backlinks for any website, visit Yahoo! Site Explorer at siteexplorer.search.yahoo.com. This free tool displays up to 1,000 links for any site; if you want to see more than 1,000 links, you'll need to purchase inbound link analysis software such as SEO SpyGlass from link-assistant.com. For most purposes, 1,000 links will give you a clear picture of where a site's links are coming from. Don't worry about high link counts, because low-value links in large numbers are easy to overcome; high-value links, such as .edu links, links from article content, and links from high-PageRank sites, will take more effort to surmount.
- Examine each site's on-page optimization. Are the webmasters utilizing effective title tags and meta tags? Are they using heading tags, and is their on-page text keyword-focused? If they aren't, you may be able to beat your competitors through more effective on-page optimization.
- Don't forget to look at conversion. Is your competitor's site well designed to convert visitors into customers? If not, you might edge out your competition with better conversion techniques.

When you analyze your competition, you are determining the standard you will need to meet or beat to earn competitive search placement. Don't be discouraged by well-placed competitors.
As you learn more, you'll be surprised how many webmasters are not employing effective optimization. Your goal as you develop or improve your website will be to do just a little bit more than your competition. Google and the other search engines will be happy to return your content in search results ahead of others' if you meet a higher standard.
jQuery refresher

Packt
30 Aug 2013
6 min read
If you haven't used jQuery in a while, that's okay; we'll get you up to speed very quickly. The first thing to realize is that the document-ready handler is extremely important when using UI. Although page loading happens incredibly fast, we always want our DOM (the HTML content) to be loaded before our UI code gets applied. Otherwise, we have nothing to apply it to! We want to place our code inside the document-ready handler, and we will be writing it the shorthand way, as we did previously. Please remove the previous UI checking code from your header:

$(function() {
  // Your code here is called only once the DOM is completely loaded
});

Easy enough. Let's brush up on some jQuery selectors. We'll be using these a lot in our examples to manipulate our page. I'll list a few DOM elements next and show how you can select them. I will apply hide() to them so we know what's been selected and hidden. Feel free to place the JavaScript portion in your header script tags and the HTML elements within your <body> tags as follows:

Selecting elements (matching the HTML tag names):

$('p').hide();

<p>This is a paragraph</p>
<p>And here is another</p>
<p>All paragraphs will go hidden!</p>

Selecting classes:

$('.edit').hide();

<p>This is an intro paragraph</p>
<p class="edit">But this will go hidden!</p>
<p>Another paragraph</p>
<p class="edit">This will also go hidden!</p>

Selecting IDs:

$('#box').hide();

<div id="box">Hide the Box</div>
<div id="house">Just a random divider</div>

Those are the three basic selectors. We can get more advanced and use the CSS3 selectors:

$("input[type=submit]").hide();

<form>
  <input type="text" name="name" />
  <input type="submit" />
</form>

Lastly, you can chain your DOM tree to hide elements more specifically:

$("table tr td.hidden").hide();

<table>
  <tbody>
    <tr>
      <td>Data</td>
      <td class="hidden">Hide Me</td>
    </tr>
  </tbody>
</table>

Step 3 – console.log is your best friend

I mentioned that developing with the console open is very helpful. When you need to know details about a JavaScript value, whether its typeof result or its contents, a friend of yours is the console.log() method (notice that it is always lowercase). It allows you to print things to the console rather than somewhere on your page. For example, if I were having trouble figuring out what a value was returning to me, I would simply do the following:

function add(a, b) {
  return a + b;
}

var total = add(5, 20);
console.log(total);

This gives me the result I wanted to know quickly and easily. Older versions of Internet Explorer do not support console logging; they will stop your JavaScript from running once it hits a console.log call. Make sure to comment out or remove all console logs before releasing a live project, or else IE users will have a serious problem.

Step 4 – creating the slider widget

Let's get busy! Open your template file, and let's create a DOM element to attach a slider widget to. To make it more interesting, we are also going to add an additional div to show a text value. Here is what I placed in my <body> tag:

<div id="slider"></div>
<div id="text"></div>

It doesn't have to be a <div> tag, but it's a good generic block-level element to use. Next, to attach a slider element, we place the following in our (empty) <script> tags:

$(function() {
  var my_slider = $("#slider").slider();
});

Refresh your page, and you will have a widget that can slide along a bar.
If you don't see a slider, first check your browser's developer tools console to see whether there are any JavaScript errors. If you don't see any, make sure you don't have a JavaScript blocker on! The reason we assign a variable to the slider is that, later on, we may want to reference its options, which you'll see next. You are not required to do this, but if you want to access the slider outside of its initial setup, you must give it a variable name. Our widget doesn't do much yet, but it feels cool to finally make something, whatever it is!

Let's break down a few things we can customize. There are three categories:

- Options: These are defined in a JavaScript object ({}) and determine how you want your widget to behave when it's loaded. For example, you could set your slider to have minimum and maximum values.
- Events: These are always functions, and they are triggered when a user does something to your item.
- Methods: You can use methods to destroy a widget, get and set values from outside of the widget, and even set different options from what you started with (see the sketch at the end of this article).

To play with these categories, the easiest start is to adjust the options. Let's do that by creating an empty object inside our slider:

var my_slider = $("#slider").slider({});

Then we'll create minimum and maximum values for our slider:

var my_slider = $("#slider").slider({
  min: 1,
  max: 50
});

Now our slider will accept and move along a bar with 50 values. There are many more options in the jQuery UI API documentation at api.jqueryui.com, under slider. You'll find many other options we won't have time to cover, such as a step option to make the slider count every two digits:

var my_slider = $("#slider").slider({
  min: 1,
  max: 50,
  step: 2
});

If we want to attach this to the text field we created in the DOM, a good way to start is by assigning the minimum value to the div; this way, we only have to change it once:

var min = my_slider.slider('option', 'min');
$("#text").html(min);

Next, we want to update the text value every time the slider is moved. Easy enough; this introduces our first event. Let's add it:

var my_slider = $("#slider").slider({
  min: 1,
  max: 50,
  step: 2,
  change: function(event, ui) {
    $("#text").html(ui.value);
  }
});

Summary

This article described the basis for all widgets: creating them and setting their options, events, and methods. That is the very simple pattern that handles everything for us.
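To round off the methods category mentioned above, here is a minimal sketch of driving the widget from outside its setup code. It reuses the my_slider variable from this article's examples, and every call here is standard jQuery UI slider API:

var current = my_slider.slider("value");   // read the current value
my_slider.slider("value", 25);             // move the handle to 25
my_slider.slider("option", "max", 100);    // change an option after setup
// my_slider.slider("destroy");            // would remove the widget entirely

Because options, events, and methods all go through the same slider() interface, this pattern carries over unchanged to every other jQuery UI widget.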
Adapting to User Devices Using Mobile Web Technology

Packt
23 Oct 2009
10 min read
Luigi's Pizza On The Run mobile shop is working well now, and he wants to adapt it to different mobile devices. Let's look at the following:

- Understanding the Lowest Common Denominator method
- Finding and comparing features of different mobile devices
- Deciding whether or not to adapt
- Adapting and progressively enhancing the POTR application using the Wireless Abstraction Library
- Detecting device capabilities
- Evaluating tools that can aid in adaptation
- Moving your blog to the mobile web

By the end of this article, you will have a strong foundation in adapting to different devices.

What is Adaptation?

Adaptation, sometimes called multiserving, means delivering content as per each user device's capabilities. If the visiting device is an old phone supporting only WML, you will show a WML page with Wireless Bitmap (wbmp) images. If it is a newer XHTML MP-compliant device, you will deliver an XHTML MP version, customized according to the screen size of the device. If the user is on iMode in Japan, you will show a Compact HTML (cHTML) version that's more forgiving than XHTML. This way, users get the best experience possible on their device.

Do I Need Adaptation?

I am sure most of you are wondering why you would want to create so many different versions of your mobile site. Isn't following the XHTML MP standard enough? On the Web, you can make sure that you follow XHTML and the site will work in all browsers. The browser-specific quirks are limited and fixes are easy. However, in the mobile world, you have thousands of devices using hundreds of different browsers. You need adaptation precisely for that reason! If you want to serve all users well, you need to worry about adaptation. WML devices will give up if they encounter a <b> tag within an <a> tag. Some XHTML MP browsers will not be able to process a form if it is within a table, but a table within a form will work just fine. If your target audience is limited, and you know that they are going to use a limited range of browsers, you can live without adaptation.

Can't I just Use Common Capabilities and Ignore the Rest?

You can. By finding the Lowest Common Denominator (LCD) of the capabilities of target devices, you can design a site that will work reasonably well on all of them. Devices with better capabilities than the LCD will see a version that may not be very beautiful, but things will work just as well.

How to Determine the LCD?

If you are looking for something more than the W3C DDC guidelines, you may be interested in finding out the capabilities of different devices to decide on your own which features you want to use in your application. There is a nice tool that allows you to search on device capabilities and compare them side by side. Take a look at the following screenshot showing mDevInf (http://mdevinf.sourceforge.net/) in action, displaying the image formats supported on a generic iMode device. You can search for devices and compare them, and then come to a conclusion about the features you want to use. This is all good, but when you want to cater to a wider mobile audience, you must consider adaptation. You don't want to fight with browser quirks and silly compatibility issues; you want to focus on delivering a good solution. Adaptation can help you there.

OK, So How do I Adapt?

You have three options to adapt:
- Design alternative CSS: this will control the display of elements and images. This is the easiest method. You can detect the device and link an appropriate CSS file.
- Create multiple versions of pages: redirect the user to a device-specific version. This is called "alteration". This way, you get the most control over what is shown to each device.
- Automatic adaptation: create content in one format and use a tool to generate device-specific versions. This is the most elegant method.

Let us rebuild the pizza selection page on POTR to learn how we can detect the device and implement automatic adaptation.

Fancy Pizza Selection

Luigi has been asking to put up photographs of his delicious pizzas on the mobile site, but we didn't do that so far to save bandwidth for users. Let us now go ahead and add images to the pizza selection page. We want to show larger images to devices that can support them. Review the code shown below; it's an abridged version of the actual code.

<?php include_once("wall_prepend.php"); ?>
<wall:document><wall:xmlpidtd />
<wall:head>
  <wall:title>Pizza On The Run</wall:title>
  <link href="assets/mobile.css" type="text/css" rel="stylesheet" />
</wall:head>
<wall:body>
<?php
echo '<wall:h2>Customize Your Pizza #'.$currentPizza.':</wall:h2>
<wall:form enable_wml="false" action="index.php" method="POST">
<fieldset>
<wall:input type="hidden" name="action" value="order" />';

// If we did not get the total number of pizzas to order,
// let the user select
if ($_REQUEST["numPizza"] == -1) {
    echo 'Pizzas to Order: <wall:select name="numPizza">';
    for ($i = 1; $i <= 9; $i++) {
        echo '<wall:option value="'.$i.'">'.$i.'</wall:option>';
    }
    echo '</wall:select><wall:br/>';
} else {
    echo '<wall:input type="hidden" name="numPizza" value="'.$_REQUEST["numPizza"].'" />';
}

echo '<wall:h3>Select the pizza</wall:h3>';

// Select the pizza
$checked = 'checked="checked"';
foreach ($products as $product) {
    // Show a product image based on the device size
    echo '<wall:img src="assets/pizza_'.$product["id"].'_120x80.jpg" alt="'.$product["name"].'">
            <wall:alternate_img src="assets/pizza_'.$product["id"].'_300x200.jpg"
                test="'.($wall->getCapa('resolution_width') >= 200).'" />
            <wall:alternate_img nopicture="true" test="'.(!$wall->getCapa('jpg')).'" />
        </wall:img><wall:br />';
    echo '<wall:input type="radio" name="pizza['.$currentPizza.']" value="'.$product["id"].'" '.$checked.'/>';
    echo '<strong>'.$product["name"].' ($'.$product["price"].')</strong> - ';
    echo $product["description"].'<wall:br/>';
    $checked = '';
}

echo '<wall:input type="submit" class="button" name="option" value="Next" />
</fieldset></wall:form>';
?>
<p><wall:a href="?action=home">Home</wall:a> -
<wall:caller tel="+18007687669">+1-800-POTRNOW</wall:caller></p>
</wall:body>
</wall:html>

What are Those <wall:*> Tags?

All those <wall:*> tags are at the heart of adaptation. The Wireless Abstraction Library (WALL) is an open-source tag library that transforms WALL tags into WML, XHTML, or cHTML code. For example, iMode devices use the <br> tag and simply ignore <br />. WALL will ensure that cHTML devices get a <br> tag and XHTML devices get a <br /> tag. You can find a very good tutorial and extensive reference material on WALL at http://wurfl.sourceforge.net/java/wall.php. You can download WALL and many other tools from that site as well. WALL4PHP, a PHP port of WALL, is available from http://wall.laacz.lv/. That's what we are using for POTR.

Let's Make Sense of This Code!

What are the critical elements of this code? Most of it is very similar to standard XHTML MP. The biggest difference is that the tags have a "wall:" prefix. Let us look at some important pieces:

- The wall_prepend.php file at the beginning loads the WALL class, detects the user's browser, and loads its capabilities.
- You can use the $wall object in your code later to check device capabilities, and so on.
- <wall:document> tells the WALL parser to start the document code.
- <wall:xmlpidtd /> will insert the XHTML/WML/cHTML prolog as required. This solves part of the headache in adaptation.
- The next few lines define the page title and meta tags. Code that is not in <wall:*> tags is sent to the browser as-is.
- The heading tag will render as bold text on a WML device. You can use many standard tags with WALL; just prefix them with "wall:".
- We do not want to enable WML support in the form. It requires a few more changes in the document structure, and we don't want it to get complex for this example! If you want to support forms on WML devices, you can enable it in the <wall:form> tag.
- The img and alternate_img tags are a cool feature of WALL. You can specify the default image in the img tag, and then specify alternative images based on any condition you like. One of these images will be picked up at runtime. WALL can even skip displaying the image altogether if the nopicture test evaluates to true. In our code, we show a 120x80 pixel image by default, and show a larger image if the device resolution is more than 200 pixels. As the image is a JPG, we skip showing the image if the device cannot support JPG images. The alternate_img tag also supports showing some icons available natively on the phone. You can refer to the WALL reference for more on this.
- Adapting the phone call link is dead simple. Just use the <wall:caller> tag, specify the number to call in the tel attribute, and you are done. You can also specify what to display if the phone does not support phone links in the alt attribute.

When you load the URL in your browser, WALL will do all the heavy lifting and show a mouth-watering pizza; a larger mouth-watering pizza if you have a large screen!

Can I Use All XHTML Tags?

WALL supports many XHTML tags. It has some additional tags to ease menu display and invoke phone calls. You can use <wall:block> instead of <p> or <div> tags because it will degrade well, and yet allow you to specify a CSS class and id. WALL does not have tags for tables, though it can use tables to generate menus. Here's a list of WALL tags you can use: a, alternate_img, b, block, body, br, caller, cell, cool_menu, cool_menu_css, document, font, form, h1, h2, h3, h4, h5, h6, head, hr, i, img, input, load_capabilities, marquee, menu, menu_css, option, select, title, wurfl_device_id, xmlpidtd. Complete listings of the attributes available with each tag, and their meanings, are available from http://wurfl.sourceforge.net/java/refguide.php.

Will This Work Well for WML?

WALL can generate WML. WML itself has limited capabilities, so you will be restricted in the markup that you can use. You have to enclose content in <wall:block> tags and test rigorously to ensure full WML support. WML handles user input in a different way, and we can't use radio buttons or checkboxes in forms. One workaround is to change radio buttons to a menu and pass values using the GET method; another is to convert them to a select drop-down. We are not building WML capability into POTR yet. WALL is still useful for us, as it can support cHTML devices and will automatically take care of XHTML implementation variations in different browsers. It can even generate some cool menus for us! Take a look at the following screenshot.
Plug-ins and Dynamic Actions with Oracle Application Express and Ext JS

Packt
22 Mar 2011
7 min read
Oracle Application Express 4.0 with Ext JS: deliver rich, desktop-styled Oracle APEX applications using the powerful Ext JS JavaScript library.

Plug-ins and dynamic actions are the two most exciting new features for developers in APEX 4.0, and combining them with Ext JS components is a recipe for success. For the first time, we now have the ability to add custom "widgets" directly into APEX that can be used declaratively, in the same way as native APEX components. Plug-ins and dynamic actions are supported with back-end integration, allowing developers to make use of APEX-provided PL/SQL APIs to simplify component development.

Plug-ins give developers a supported mechanism to enhance the existing built-in functionality by writing custom PL/SQL components for item types, regions, and processes. Dynamic actions provide developers with a way to define client-side behavior declaratively, without needing to know JavaScript. Using a simple wizard, developers can select a page item and a condition, enter a value, and select an action (for example, Show, Hide, Enable, and Show Item Row).

Most APEX developers come from a database development background, so they are much more comfortable coding in PL/SQL than JavaScript. The sooner work is focused on PL/SQL development, the more productive APEX developers become. The ability to create plug-ins that can be used declaratively means developers don't have to write page-specific JavaScript for items on a page, or use messy "hacks" to attach additional JavaScript functionality to standard APEX items. Ext JS provides a rich library of sophisticated JavaScript components just waiting to be integrated into APEX using plug-ins.

A home for your plug-ins and dynamic actions

APEX allows you to create plug-ins for item, region, dynamic action, and process types. Like templates and themes, plug-ins are designed to be shared, so they can be easily exported and imported from one workspace application to another. Plug-ins can also be subscribed to, providing a way to easily share and update common attributes between plug-ins.

Building a better Number Field

APEX 4.0 introduced the Number Field as a new item type, allowing you to configure number-range checks by optionally specifying minimum and maximum value attributes. It also automatically checks that the entered value is a number, and performs NOT NULL validation as well. You can specify a format mask for the number too, presumably to enforce decimal places. This all sounds great, and it does work as described, but only after you have submitted the page for processing on the server.

The following screenshot shows the APEX and Ext versions of the Number Field, both set up with a valid number range of 1 to 10,000. The APEX version allows you to enter any characters you want, including letters. The Ext version automatically filters the keys pressed to accept only numbers, conditionally the decimal separator, and the negative sign. It also highlights invalid values when you go outside the valid number range.

We are going to build a better Number Field using APEX plug-ins, providing better functionality on the client side while maintaining the same level of server-side validation. The Number Field is quite a simple example, letting us see how APEX plug-ins work without getting bogged down in the details.
The process of building a plug-in requires the following:

- Creating a plug-in in your application workspace
- Creating a test page containing your plug-in and the necessary extras to test it
- Running your application to test functionality
- Repeating the build/test cycle, progressively adding more features until you are satisfied with the result

Creating a plug-in item

For our Number Field, we will be creating an item plug-in. So, navigate to the plug-ins page in the Application Builder, found under Application Builder | Application xxx | Shared Components | Plug-ins, and press Create to open the Plug-in Create/Edit page. Start filling in the following fields:

- Name: Ext.form.NumberField. Here, I'm using the Ext naming for the widget, but that's just for convenience.
- Internal Name: Oracle's recommendation here is to use your organization's domain name as a prefix. So, for example, a company domain of mycompany.com prefixing a plug-in named Slider would result in an internal name of COM.MYCOMPANY.SLIDER.
- File Prefix: As we are referencing the Ext JavaScript libraries in our page template, we can skip this field completely. For specialized plug-ins that are only used on a handful of specific pages, you would attach a JavaScript file here.
- Source: For the PL/SQL source code required to implement our plug-in, you can either enter it as an anonymous PL/SQL block that contains functions for rendering, validation, and AJAX callbacks, or refer to code in a PL/SQL package in the database. You get better performance using PL/SQL packages in the database, so that's the smart way to go. Simply include your package function names in the Callbacks section, as shown in the following screenshot, noting that no AJAX function name is specified, as no AJAX functionality is required for this plug-in. The package doesn't need to exist at this time; the form will submit successfully anyway.
- Standard attributes: In addition to being able to create up to ten custom attributes, the APEX team has made the standard attributes available for plug-ins to use. This is really useful, because the standard attributes comprise elements that are useful for most components, such as width and height. They also include items such as List of Values (LOV), which have more complicated validation rules already built in, checking that SQL queries used for the LOV source are valid queries, and so on. For the Number Field, only a few attributes have been checked:
  - Is Visible Widget: Indicating that the element will be displayed
  - Session State Changeable: So that APEX knows to store the value of the item in session state
  - Has Read Only Attribute: So that conditional logic can be used to make the item read-only
  - Has Width Attributes: Allowing the width of the item to be set

Notice that some of the attributes are disabled in the following screenshot. This is because the APEX Builder conditionally enables the checkboxes that depend on another attribute. So, in this screenshot, the List of Values Required, Has LOV Display Null Attributes, and Has Cascading LOV Attributes checkboxes are disabled because they are dependent on the Has List of Values checkbox. Once it is checked, the other checkboxes are enabled.

- Custom attributes: Defining the custom attributes for our Number Field is largely an exercise in reviewing the configuration options available in the documentation for Ext.form.NumberField and deciding which options are most useful to include.
You also need to take into account that some configuration options are already covered when you define an APEX item using your plug-in. For example, the item label is included with an APEX item but is rendered separately from the item, so it doesn't need to be included. Likewise, Value Required is a validation rule and is also separate when rendering the APEX item. The following screenshot shows some Ext.form.NumberField configuration options, overlaid with the custom attributes defined in APEX for the plug-in.

I've chosen not to include the allowBlank config option as a custom attribute, simply because this can be determined by the APEX Value Required property. Other configuration options, such as allowDecimals and allowNegative, are included as custom attributes as Yes/No types. The custom attributes created are listed in the following table.

Custom events: Custom events are used to define JavaScript events that can be exposed to dynamic actions. For this simple plug-in, there is no need to define any custom events.

At this point, we are done defining the Number Field in APEX; it's time to turn our attention to building the database package to execute the callbacks that render and validate the plug-in.
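To make the mapping between the plug-in's custom attributes and the Ext side concrete, here is a minimal sketch of the Ext JS 3 config that a render callback might emit for one item. The item name and attribute values are made up for illustration; only the config option names come from the Ext.form.NumberField documentation discussed above:

new Ext.form.NumberField({
    applyTo: 'P1_QUANTITY',   // hypothetical APEX item's input element
    minValue: 1,              // from the Minimum Value custom attribute
    maxValue: 10000,          // from the Maximum Value custom attribute
    allowDecimals: false,     // Yes/No custom attribute
    allowNegative: false,     // Yes/No custom attribute
    width: 120                // from the standard width attribute
});

In the real plug-in, the PL/SQL render function would print a snippet like this with the attribute values substituted in, which is exactly what makes the widget declarative from the APEX developer's point of view.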
Programming littleBits circuits with JavaScript Part 2

Anna Gerber
14 Dec 2015
5 min read
In this two-part series, we're programming littleBits circuits using the Johnny-Five JavaScript Robotics Framework. Be sure to read over Part 1 before continuing here. Let's create a circuit to play with, using all of the modules from the littleBits Arduino Coding Kit.

Attach a button to the Arduino connector labelled d0. Attach a dimmer to the connector marked a0 and a second dimmer to a1. Turn the dimmers all the way to the right (max) to start with. Attach a power module to the single connector on the left-hand side of the fork module, and the three output connectors of the fork module to all of the input modules. The bargraph should be connected to d5, and the servo to d9, and both set to PWM output mode using the switches on board the Arduino. The servo module has two modes: turn and swing. Swing mode makes the servo sweep between maximum and minimum. Set it to swing mode using the onboard switch.

Reading input values

We'll create an instance of the Johnny-Five Button class to respond to button press events. Our button is connected to the connector labelled d0 (that is, digital "pin" 0) on our Arduino, so we'll need to specify the pin as an argument when we create the button:

var five = require("johnny-five");
var board = new five.Board();

board.on("ready", function() {
  var button = new five.Button(0);
});

Our dimmers are connected to analog pins (A0 and A1), so we'll specify these as strings when we create Sensor objects to read their values. We can also provide options for reading the values; for example, we'll set the frequency to 250 milliseconds, so we'll receive four readings per second from both dimmers:

var dimmer1 = new five.Sensor({
  pin: "A0",
  freq: 250
});

var dimmer2 = new five.Sensor({
  pin: "A1",
  freq: 250
});

We can attach a function that will be run any time the value changes (on "change") or any time we get a reading (on "data"):

dimmer1.on("change", function() {
  // raw value (between 0 and 1023)
  console.log("dimmer 1 is " + this.raw);
});

Run the code and try turning dimmer 1. You should see the value printed to the console whenever the dimmer value changes.
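As an aside, Johnny-Five sensors also carry scaling helpers, so the raw 0-1023 reading can be mapped onto another range without doing the arithmetic by hand. A minimal sketch, assuming the scaleTo method available on Sensor in recent Johnny-Five releases:

dimmer1.on("change", function() {
  // map the 0-1023 analog reading onto 0-255, suitable for an LED
  var level = this.scaleTo(0, 255);
  console.log("dimmer 1 scaled to " + level);
});

In the next section we'll do the equivalent mapping manually with Math.floor, which keeps the examples explicit about the arithmetic involved.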
Triggering behavior

Now we can use code to hook our input components up to our output components. To use, for example, the dimmer to control the brightness of the bargraph, change the code in the event handler:

var led = new five.Led(5);

dimmer1.on("change", function() {
  // set bargraph brightness to one quarter
  // of the raw value from the dimmer
  led.brightness(Math.floor(this.raw / 4));
});

You'll see the bargraph brightness fade as you turn the dimmer. We can use the JavaScript Math library and operators to manipulate the brightness value before we send it to the bargraph. Writing code gives us more control over the mapping between input values and output behaviors than if we'd snapped our littleBits modules directly together without going via the Arduino. We set our d5 output to PWM mode, so all of the LEDs should fade in and out at the same time. If we set the output to analog mode instead, we'd see the behavior change to light up more or fewer LEDs depending on the value of the brightness.

Let's use the button to trigger the servo stop and start functions. Add a button press handler to your code, and a variable to keep track of whether the servo is running or not. We'll toggle this variable between true and false using JavaScript's boolean not operator (!). We can determine whether to stop or start the servo each time the button is pressed via a conditional statement based on the value of this variable:

var servo = new five.Motor(9);
servo.start();

var button = new five.Button(0);
var toggle = false;
var speed = 255;

button.on("press", function(value) {
  toggle = !toggle;
  if (toggle) {
    servo.start(speed);
  } else {
    servo.stop();
  }
});

The other dimmer can be used to change the servo speed:

dimmer2.on("change", function() {
  speed = Math.floor(this.raw / 4);
  if (toggle) {
    servo.start(speed);
  }
});

There are many input and output modules available within the littleBits system for you to experiment with. You can use the Sensor class with input modules, and check out the Johnny-Five API docs to see examples of the types of outputs supported by the API. You can always fall back to using the Pin class to program any littleBits module.

Using the REPL

Johnny-Five includes a Read-Eval-Print-Loop (REPL) so you can interactively write code to control components instantly - no waiting for code to compile and upload! Any of the JavaScript objects from your program that you want to access from the REPL need to be "injected". The following code, for example, injects our servo and led objects:

this.repl.inject({
  led: led,
  servo: servo
});

After running the program using Node.js, you'll see a >> prompt in your terminal. Try some of the following functions (hit Enter after each to see it take effect):

- servo.stop(): stop the servo
- servo.start(50): start the servo moving at a slow speed
- servo.start(255): start the servo moving at max speed
- led.on(): turn the LED on
- led.off(): turn the LED off
- led.pulse(): slowly fade the LED in and out
- led.stop(): stop pulsing the LED
- led.brightness(100): set the brightness of the LED; the parameter should be between 0 and 255

littleBits are fantastic for prototyping, and pairing the littleBits Arduino with JavaScript makes prototyping interactive electronic projects even faster and easier.

About the author

Anna Gerber is a full-stack developer with 15 years of experience in the university sector. Specializing in Digital Humanities, she was a Technical Project Manager at the University of Queensland's eResearch centre, and she has worked at Brisbane's Distributed System Technology Centre as a Research Scientist. Anna is a JavaScript robotics enthusiast who enjoys tinkering with soft circuits and 3D printers.
Understanding the Datastore

Packt
14 Sep 2015
41 min read
In this article by Mohsin Hijazee, the author of the book Mastering Google App Engine, we will get to know the datastore. Learning something new is hard, but unlearning something you already know is even harder. The main reason why learning something is hard is not because it is hard in and of itself, but because most of the time, you have to unlearn a lot in order to learn a little. This is quite true for the datastore. Basically, it is built to scale to the so-called Google scale. That's why, in order to be proficient with it, you will have to unlearn some of the things that you know. Your learning as a computer science student or a programmer has been deeply shaped by the relational model, so much so that it feels natural to you. Anything else may seem quite hard to grasp, and this is the reason why learning the Google datastore is quite hard.

However, if this were the only glitch, things would be way simpler, because you could ask yourself to forget the relational world and consider the new paradigm afresh. Things have been complicated by Google's own official documentation, which presents the datastore in a manner that makes it seem close to something such as Django's ORM, Rails' ActiveRecord, or SQLAlchemy. Then, all of a sudden, it starts to enlist the datastore's limitations, with a very brief mention or, at times, no mention of why those limitations exist. Since you only know the limitations but not why they are there in the first place, you may be unable to work around them or mold your problem space into the new solution space, which is the Google datastore. We will try to fix this. Hence, the following will be our goals in this article:

- To understand BigTable and its data model
- To have a look at the physical data storage in BigTable and the operations that are available on it
- To understand how BigTable scales
- To understand the datastore and the way it models data on top of BigTable

So, there's a lot to learn. Let's get started on our journey of exploring the datastore.

The BigTable

If you decided to fetch every web page hosted on the planet, download and store a copy of it, and later process every page to extract data from it, you'd find out that your own laptop or desktop is not good enough to accomplish this task. It has barely enough storage to store every page. Usually, laptops come with a 1 TB hard disk drive, and this seems quite enough for a person who is not much into video content such as movies.

Assuming that there are 2 billion websites, each with an average of 50 pages and each page weighing around 250 KB, this sums up to around 23,000+ TB (roughly 22 petabytes), which would need 23,000 such laptops to store all the web pages, with a 1 TB hard drive in each. Assuming the same statistics, if you were able to download at a whopping speed of 100 MBps, it would take you about seven years to download the whole content to one such gigantic hard drive, if you had one in your laptop. Let's suppose that you downloaded the content in whatever time it took and stored it. Now you need to analyze and process it, too. If processing takes about 50 milliseconds per page, a single machine would need more than 150 years to process everything you downloaded (100 billion pages at 50 ms each is roughly 158 years). The world would have changed a lot by then, leaving your data and processed results obsolete.

This is the kind of scale for which BigTable is built. Every Google product that you see (Search, Analytics, Finance, Gmail, Docs, Drive, and Google Maps) is built on top of BigTable.
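To make those numbers concrete, here is the back-of-envelope arithmetic from the paragraph above as a runnable sketch (decimal page counts, binary storage units):

// 2 billion sites x 50 pages x 250 KB per page
var pages = 2e9 * 50;                          // 100 billion pages
var bytes = pages * 250 * 1024;                // ~2.56e16 bytes
console.log(bytes / Math.pow(2, 40));          // ~23,283 TB
console.log(bytes / Math.pow(2, 50));          // ~22.7 PB

var downloadYears = bytes / (100 * Math.pow(2, 20)) / (3600 * 24 * 365);
console.log(downloadYears);                    // ~7.7 years at 100 MBps

var processYears = pages * 0.05 / (3600 * 24 * 365);
console.log(processYears);                     // ~158 years at 50 ms per page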
If you want to read more about BigTable, you can go through the academic paper from Google Research, which is available at http://static.googleusercontent.com/media/research.google.com/en//archive/bigtable-osdi06.pdf.

The data model

Let's examine the data model of BigTable at a logical level. BigTable is basically a key-value store. So, everything that you store falls under a unique key, just like PHP's arrays, Ruby's hashes, or Python's dicts:

# PHP
$person['name'] = 'Mohsin';

# Ruby or Python
person['name'] = 'Mohsin'

However, this is a partial picture. We will learn the details gradually, so let's understand this step by step.

A BigTable installation can have multiple tables, just like a MySQL database can have multiple tables. The difference here is that a MySQL installation might have multiple databases, which in turn might have multiple tables; in the case of BigTable, the first major storage unit is a table.

Each table can have hundreds of columns, which can be divided into groups called column families. You define column families at the time of creating a table. They cannot be altered later, but each column family might have hundreds of columns that you can define even after the creation of the table. The notation used to address a column and its column family is family:column, as in job:title, where job is the column family and title is the column. So here, you have a job column family that stores all the information about the job of the user, and title is supposed to store the job title. However, one of the important facts about these columns is that there's no concept of data types in BigTable, as you'd encounter in other relational database systems. Everything is just an uninterpreted sequence of bytes, which means nothing to BigTable. What the bytes really mean is up to you. They might be a very long integer, a string, or JSON-encoded data.

Now, let's turn our attention to the rows. There are two major characteristics of the rows that we are concerned about. First, each row has a key, which must be unique. The contents of the key again consist of an uninterpreted string of bytes that is up to 64 KB in length. A key can be anything that you want it to be. All that's required is that it must be unique within the table; if it is not, you will overwrite the contents of the row that has the same key.

Which key should you use for a row in your table? That's a question that requires some consideration. To answer it, you need to understand how the data is actually stored. Until then, you can assume that each key has to be a unique string of bytes within the scope of a table and should be up to 64 KB in length.

Now that we know about tables, column families, columns, rows, and row keys, let's look at an example of a BigTable table that stores employees' information. Let's pretend that we are creating something similar to LinkedIn here. So, here's the table:

Key (name)   personal:lastname   personal:age   professional:company   professional:designation
Mohsin       Hijazee             29             Sony                   Senior Designer
Peter        Smith               34             Panasonic              General Manager
Kim          Yong                32             Sony                   Director
Ricky        Martin              45             Panasonic              CTO
Paul         Jefferson           39             LG                     Sales Head

So, this is a sample BigTable table. The first column is the name, and we have chosen it as the key. It is of course not a good key, because the first name cannot necessarily be unique, even in small groups, let alone in millions of records. However, for the sake of this example, we will assume that the name is unique.
Another reason behind assuming the name's uniqueness is that we want to build our understanding gradually. So, the key point here is that we picked the first name as the row's key for now, but we will improve on this as we learn more. Next, we have two column families. The personal column family holds all the personal attributes of the employees, and the other column family, named professional, has all the attributes pertaining to the professional aspects. When referring to a column within a family, the notation is family:column. So, personal:age contains the age of the employee.

If you look at professional:designation and personal:age, it seems that the first one's contents are strings while the second one stores integers. That's false. No column stores anything but plain bytes, without any distinction of what they mean. The meaning and interpretation of those bytes is up to the user of the data. From the point of view of BigTable, each column just contains plain old bytes.

Another thing that is drastically different from an RDBMS such as MySQL is that each row need not have the same number of columns. Each row can adopt the layout that it wants. So, the second row's personal column family could have two more columns that store gender and nationality. For this particular example, the data is in no particular order; I wrote it down as it came to my mind. Hence, there's no order of any sort in the data at all.

To summarize: BigTable is key-value storage where keys must be unique and have a length that is less than or equal to 64 KB. The columns are divided into column families, which are fixed at the time of defining the table, but each column family might have hundreds of columns created as and when needed. Also, contents have no data type and comprise just plain old bytes.

There's one minor detail left, which is not important for our purposes. However, for the sake of the completeness of BigTable's data model, I will mention it now. Each column value is stored with a timestamp that is accurate to microseconds, and in this way, multiple versions of a column value are available. The number of last versions that should be kept is configurable at the table level, but since we are not going to deal with BigTable directly, this detail is not important to us.

How is the data stored?

Now that we know about row keys, column families, and columns, we will gradually move towards examining this data model in detail and understanding how the data is actually stored. We will examine the logical storage and then dive into the actual structure as it ends up on the disk. The data that we presented in the earlier table had no order and was listed as it came to my mind. However, while storing, the data is always sorted by the row key. So now, the data will actually be stored like this:

Key (name)   personal:lastname   personal:age   professional:company   professional:designation
Kim          Yong                32             Sony                   Director
Mohsin       Hijazee             29             Sony                   Senior Designer
Paul         Jefferson           39             LG                     Sales Head
Peter        Smith               34             Panasonic              General Manager
Ricky        Martin              45             Panasonic              CTO

OK, so what happened here? The name column is the key of the table, and now the whole table is sorted by the key. That's exactly how it is stored on the disk as well. An important thing about the sorting is that it is lexicographic, not semantic. By lexicographic, we mean that the rows are sorted by byte value and not by textual or semantic order.
This matters because even within the Latin character set, different languages have different sort orders for letters, such as letters in English versus German and French. However, none of that, nor the Unicode collation order, applies here. The data is simply sorted by byte value, with no regard for semantic value. In our instance, since K has a smaller byte value (a lower ASCII/Unicode value) than the letter M, it comes first. Now, suppose that some European language considers and sorts M before K. That's not how the data would be laid out here, because it is a plain, blind, and simple sort by bytes. In fact, to BigTable, this is not even text; it's just a plain string of bytes. Just a hint: this ordering of keys is something that we will exploit when modeling data. How? We'll see later.

The physical storage

Now that we understand the logical data model and how it is organized, it's time to take a closer look at how this data is actually stored on the disk. On the physical disk, the stored data is sorted by the key. So, key 1 is followed by its respective value, key 2 is followed by its respective value, and so on. At the end of the file, there's a sorted list of just the keys and their offsets in the file from the start, something like the block to the right in the diagram. Ignore the block on the left, labeled Index; we will come back to it in a while.

This particular format actually has a name: SSTable (Sorted String Table), because it contains strings (the keys), and they are sorted. It is of course tabular data, hence the name. Whenever your data is sorted, you have certain advantages, the first and foremost being that when you look up an item or a range of items, your dataset is already sorted. We will discuss this in detail later in this article.

Now, if we start from the beginning of the file and read sequentially, noting down every key and its offset in a format such as key:offset, we effectively create an index of the whole file in a single scan. That's where the first block to your left in the preceding diagram comes from. Since the keys are sorted in the file, we simply read it sequentially to the end of the file, effectively creating an index of the data. Furthermore, since this index only contains keys and their offsets in the file, it is much smaller in terms of the space it occupies.

Now, assuming an SSTable that is, say, 500 MB in size, we only need to load the index from the end of the file into memory, and whenever we are asked for a key or a range of keys, we just search the in-memory index (thus not touching the disk at all). If we find the data, only then do we seek the disk at the given offset, because we know the offset of that particular key from the index that we loaded into memory.
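Here is that idea as a minimal JavaScript sketch. An array stands in for the on-disk file; in a real SSTable, the index entries would hold byte offsets rather than array positions:

function buildIndex(sortedPairs) {
  // single sequential scan: record each key and its "offset"
  return sortedPairs.map(function(pair, i) {
    return [pair[0], i];
  });
}

function lookup(index, sortedPairs, wanted) {
  var lo = 0, hi = index.length - 1;
  while (lo <= hi) {                   // binary search on the index only
    var mid = (lo + hi) >> 1;
    var key = index[mid][0];
    if (key === wanted) {
      var offset = index[mid][1];      // "seek" to the recorded offset
      return sortedPairs[offset][1];
    }
    if (key < wanted) { lo = mid + 1; } else { hi = mid - 1; }
  }
  return undefined;                    // key not present
}

var table = [['Kim', 'Sony'], ['Mohsin', 'Sony'], ['Paul', 'LG']];
var index = buildIndex(table);
console.log(lookup(index, table, 'Mohsin')); // 'Sony'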
Some limitations

Pretty smart, neat, and elegant, you would say! Yes, it is. However, there's a catch. If you want to create a new row, its key must fall into the sorted order, and even if you know exactly where the key should be placed in the file so that no re-sorting is needed, you still have to rewrite the whole file, along with its index, in the new sorted order. Hence, a large amount of I/O is required for just a single row insertion. The same goes for deleting a row, because the file must be sorted and rewritten again. Updates are OK as long as the key itself is not altered, because a modified key is, in effect, a new key altogether: it would have a different place in the sorted order, depending on what the key actually is, and hence the whole file would have to be rewritten.

Just as an example, say you have a row with the key all-boys, and you change the key of that row to x-rays-of-zebra. After this modification, the row will end up nearly at the end of the file, whereas previously it was probably at the beginning, because all-boys comes before x-rays-of-zebra when sorted. This seems pretty limiting, and it looks like inserting or removing a key is quite expensive. However, this is not the case, as we will see.

Random writes and deletion

There's one last thing worth a mention before we examine the operations that are available on BigTable: how random writes and the deletion of rows are handled, because these seem quite expensive, as we just saw in the preceding section.

The idea is very simple. Reads, writes, and removals don't go straight to the disk. Instead, an in-memory SSTable is created, along with its index, both of which are empty when created. We'll call it the MemTable from this point onwards, for the sake of simplicity. Every read checks the index of this table; if a record is found there, well and good. If it is not, then the index of the SSTable on the disk is checked, and the desired row is returned. When a new row has to be written, we don't touch the disk at all and simply enter the row into the MemTable, along with a record in its index. To delete a key, we simply mark it deleted in memory, regardless of whether it is in the MemTable or in the on-disk table. The figure referenced here shows how a block is allocated into the MemTable.

Now, when the MemTable grows beyond a certain size, it is written to the disk as a new SSTable. Since this only depends on the size of the MemTable, and happens much less frequently, it is much faster. Each time the MemTable grows beyond a configured size, it is flushed to the disk as a new SSTable. However, the index of each flushed SSTable is still kept in memory, so that we can quickly check incoming read requests and locate any row without touching the disk. Finally, when the number of SSTables reaches a certain count, the SSTables are merged and collapsed into a single SSTable. Since each SSTable is just a sorted set of keys, a merge sort is applied. This merging process is quite fast.
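Here is that read/write/delete flow as a toy JavaScript sketch: a plain object stands in for the on-disk SSTable, and a tombstone sentinel marks deletions until the next flush. This is an illustration of the idea, not anyone's production code:

var TOMBSTONE = {};  // unique sentinel marking deleted rows

function MemTable(diskTable) {
  this.mem = {};          // in-memory rows, newest value wins
  this.disk = diskTable;  // stand-in for the on-disk SSTable
}

MemTable.prototype.set = function(key, value) {
  this.mem[key] = value;              // pure memory write, no disk I/O
};

MemTable.prototype.remove = function(key) {
  this.mem[key] = TOMBSTONE;          // mark deleted, don't rewrite disk
};

MemTable.prototype.get = function(key) {
  if (key in this.mem) {
    return this.mem[key] === TOMBSTONE ? undefined : this.mem[key];
  }
  return this.disk[key];              // fall back to the on-disk index
};

MemTable.prototype.flush = function() {
  // merge memory into disk, dropping tombstoned rows, then start fresh
  for (var key in this.mem) {
    if (this.mem[key] === TOMBSTONE) {
      delete this.disk[key];
    } else {
      this.disk[key] = this.mem[key];
    }
  }
  this.mem = {};
};

var mt = new MemTable({ Kim: 'Sony' });
mt.set('Paul', 'LG');
mt.remove('Kim');
console.log(mt.get('Paul'), mt.get('Kim')); // 'LG' undefined
mt.flush();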
Congratulations! You've just learned about the most atomic storage unit in big data solutions such as BigTable, HBase, Hypertable, Cassandra, and LevelDB. That's how they actually store and process data. Now that we know how a BigTable table is actually stored on the disk and how reads and writes are handled, it's time to take a closer look at the available operations.

Operations on BigTable

Until this point, we know that a BigTable table is a collection of rows whose keys are unique and up to 64 KB in length, and that the data is stored according to the lexicographic sort order of the keys. We also examined how it is laid out on the disk and how reads, writes, and removals are handled. Now the question is: which operations are available on this data? The following are the operations available to us:

- Fetching a row by using its key
- Inserting a new row
- Deleting a row
- Updating a row
- Reading a range of rows from a starting row key to an ending row key

Reading

The first operation is pretty simple: you have a key, and you want the associated row. Since the whole dataset is sorted by key, all we need to do is perform a binary search on it, and we will be able to locate the desired row within a few lookups, even within a set of a million rows. In practice, the index at the end of the SSTable is loaded into memory, and the binary search is actually performed on it. In light of what we know from the previous section, the index of the MemTable is already in memory. In case there are multiple SSTables, because the MemTable was flushed many times to the disk as it grew too large, all the indexes of all the SSTables are present in memory, and a quick binary search is performed on them.

Writing

The second operation available to us is the ability to insert a new row. So, we have a key and the values that we want to insert into the table. According to our new knowledge of physical storage and SSTables, we can understand this well. The write happens directly on the in-memory MemTable, and its index, which is also in memory, is updated. Since no disk access is required to write the row, the whole file doesn't have to be rewritten on the disk, because yet again, all of it is in memory. This operation is very fast, almost instantaneous. However, if the MemTable grows in size, it will be flushed to the disk as a new SSTable along with its index, while a copy of its index is retained in memory. Finally, as we also saw, when the number of SSTables reaches a certain count, they are merged and collapsed to form a new, bigger table.

Deleting

It seems that since all the keys are on the disk in sorted order, and deleting a key would disrupt that order, a rewrite of the whole file would be a big I/O overhead. However, it is not, as it can be handled smartly. Since all the indexes, including that of the MemTable and those of the tables that resulted from flushing a larger MemTable to the disk, are already in memory, deleting a row only requires us to find the required key in the in-memory indexes and mark it as deleted. Now, whenever someone tries to read the row, the in-memory indexes will be checked, and although an entry will be there, it will be marked as deleted and won't be returned. When the MemTable is being flushed to the disk, or multiple tables are being collapsed, this key and the associated row will be excluded from the write process. Hence, they are totally gone from the storage.

Updating

Updating a row is no different, but it has two cases. The first case is one in which not only the values but also the key is modified. In this case, it is like removing the row with the old key and inserting a row with the new key. We have already seen both of these operations in detail, so this should be obvious. The case where only the values are modified is even simpler: we only have to locate the row from the indexes, load it into memory if it is not already there, and modify it. That's all.

Scanning a range

This last operation is quite interesting. You can scan a range of keys from a starting key to an ending key. For instance, you can return all the rows that have a key greater than or equal to key1 and less than or equal to key2, effectively forming a range. Since looking up a single key is a fast operation, we only have to locate the first key of the range.
Then, we start reading the consecutive keys one after the other until we encounter a key that is greater than key2, at which point we stop scanning, and the keys that we scanned so far are our query's result. This is what it looks like:

Name                 Department                 Company
Chris Harris         Research & Development     Google
Christopher Graham   Research & Development     LG
Debra Lee            Accounting                 Sony
Ernest Morrison      Accounting                 Apple
Fred Black           Research & Development     Sony
Janice Young         Research & Development     Google
Jennifer Sims        Research & Development     Panasonic
Joyce Garrett        Human Resources            Apple
Joyce Robinson       Research & Development     Apple
Judy Bishop          Human Resources            Google
Kathryn Crawford     Human Resources            Google
Kelly Bailey         Research & Development     LG
Lori Tucker          Human Resources            Sony
Nancy Campbell       Accounting                 Sony
Nicole Martinez      Research & Development     LG
Norma Miller         Human Resources            Sony
Patrick Ward         Research & Development     Sony
Paula Harvey         Research & Development     LG
Stephanie Chavez     Accounting                 Sony
Stephanie Mccoy      Human Resources            Panasonic

In the preceding table, we said that the starting key will be greater than or equal to Ernest and the ending key will be less than or equal to Kathryn. So, we locate the first key that is greater than or equal to Ernest, which happens to be Ernest Morrison. Then, we keep scanning, picking and returning each row, as long as its key is less than or equal to Kathryn. Judy Bishop is still within the range, but the next key, Kathryn Crawford, sorts after Kathryn (the name continues past the prefix), so the scan stops there and that row is not returned. The rows before it, however, are returned. This is the last operation that is available to us on BigTable.
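The same scan, as a small sketch over a sorted key array: binary-search the first key greater than or equal to startKey, then read forward sequentially until a key exceeds endKey:

function scanRange(sortedKeys, startKey, endKey) {
  var lo = 0, hi = sortedKeys.length;
  while (lo < hi) {                    // first key >= startKey
    var mid = (lo + hi) >> 1;
    if (sortedKeys[mid] < startKey) { lo = mid + 1; } else { hi = mid; }
  }
  var result = [];
  for (var i = lo; i < sortedKeys.length && sortedKeys[i] <= endKey; i++) {
    result.push(sortedKeys[i]);        // sequential read, no further seeks
  }
  return result;
}

console.log(scanRange(
  ['Chris Harris', 'Debra Lee', 'Ernest Morrison', 'Fred Black',
   'Judy Bishop', 'Kathryn Crawford'],
  'Ernest', 'Kathryn'
));
// -> ['Ernest Morrison', 'Fred Black', 'Judy Bishop']
// 'Kathryn Crawford' sorts after 'Kathryn', so it is excluded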
Selecting a key

Now that we have examined the data model and the storage layout, we are in a better position to talk about key selection for a table. Since the stored data is sorted by key, the choice of key does not impact writing, deleting, or updating a row, nor fetching a single one. The operation that is impacted by the key is scanning a range.

Let's think about the previous table again and assume that it is part of some system that processes payrolls for companies, and the companies pay us for the task of processing their payroll. Now, let's suppose that Sony asks us to process their data and generate a payroll for them. Right now, we cannot do anything of the kind efficiently. We can just make our program scan the whole table, and hence all the records (which might be in the millions), and pick only the records where professional:company has the value of Sony. This would be inefficient. Instead, we can put the sorted nature of row keys to our service: select the company name as the key and concatenate the department and name onto it. So, the new table will look like this:

Key                                              Name                 Department                 Company
Apple-Accounting-Ernest Morrison                 Ernest Morrison      Accounting                 Apple
Apple-Human Resources-Joyce Garrett              Joyce Garrett        Human Resources            Apple
Apple-Research & Development-Joyce Robinson      Joyce Robinson       Research & Development     Apple
Google-Human Resources-Judy Bishop               Judy Bishop          Human Resources            Google
Google-Human Resources-Kathryn Crawford          Kathryn Crawford     Human Resources            Google
Google-Research & Development-Chris Harris       Chris Harris         Research & Development     Google
Google-Research & Development-Janice Young       Janice Young         Research & Development     Google
LG-Research & Development-Christopher Graham     Christopher Graham   Research & Development     LG
LG-Research & Development-Kelly Bailey           Kelly Bailey         Research & Development     LG
LG-Research & Development-Nicole Martinez        Nicole Martinez      Research & Development     LG
LG-Research & Development-Paula Harvey           Paula Harvey         Research & Development     LG
Panasonic-Human Resources-Stephanie Mccoy        Stephanie Mccoy      Human Resources            Panasonic
Panasonic-Research & Development-Jennifer Sims   Jennifer Sims        Research & Development     Panasonic
Sony-Accounting-Debra Lee                        Debra Lee            Accounting                 Sony
Sony-Accounting-Nancy Campbell                   Nancy Campbell       Accounting                 Sony
Sony-Accounting-Stephanie Chavez                 Stephanie Chavez     Accounting                 Sony
Sony-Human Resources-Lori Tucker                 Lori Tucker          Human Resources            Sony
Sony-Human Resources-Norma Miller                Norma Miller         Human Resources            Sony
Sony-Research & Development-Fred Black           Fred Black           Research & Development     Sony
Sony-Research & Development-Patrick Ward         Patrick Ward         Research & Development     Sony

So, this is the new format. We just welded the company, department, and name together as the key, and since the table is always sorted by key, that's what it looks like, as shown in the preceding table. Now, suppose that we receive a request from Google to process their data. All we have to do is perform a scan starting from keys greater than or equal to Google and less than L, because that's the next letter. This scan is highlighted in the previous table.

Now, the next request is more specific. Sony asks us to process their data, but only for their accounting department. How do we do that? Quite simple! In this case, our starting key will be greater than or equal to Sony-Accounting, and the ending key can be Sony-Accountinga, where an a is appended to cap the range. The scanned range and the returned rows are highlighted in the previous table.
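The "append a character" trick generalizes to any prefix scan. A small sketch, reusing the scanRange helper from the earlier example (it appends '\uffff', the highest 16-bit code unit, rather than the letter a, so that every key sharing the prefix falls inside the range):

function prefixRange(prefix) {
  // every key that starts with `prefix` satisfies
  // prefix <= key <= prefix + '\uffff'
  return { start: prefix, end: prefix + '\uffff' };
}

var range = prefixRange('Sony-Accounting');
// scanRange(keys, range.start, range.end) now returns exactly the
// three Sony accounting rows from the table above.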
BigTable – a hands-on approach

Okay, enough of the theory. It is now time to take a break and do some hands-on experimentation. By now, we know about 80 percent of BigTable; the other 20 percent of the complexity lies in scaling it to more than one machine. Our discussion so far assumed a single-machine environment, as if the BigTable table were sitting on our laptop. You might really want to experiment with what you have learned. Fortunately, given that you have a recent version of Google Chrome or Mozilla Firefox, that's easy. You have BigTable right there! How? Let me explain. The ideas that we looked at pertaining to stored key-value pairs, the sorted layout, the indexes of the sorted files, and the operations performed on them, including scanning, were extracted by Google into a separate component called LevelDB. Meanwhile, as HTML was evolving towards HTML5, a need was felt to store data locally. Initially, SQLite3 was embedded in browsers, and there was a querying interface for you to play with. So, all in all, you had an SQL database in the browser, which opened up a lot of possibilities. However, in recent years, W3C deprecated this specification and urged browser vendors not to implement it. Instead of web databases based on SQLite3, browsers now ship databases based on LevelDB, which are actually key-value stores where storage is always sorted by key. Hence, besides looking up a key, you can scan across a range of keys. Covering the IndexedDB API here would be beyond the scope of this book, but if you want to see what the theory we discussed looks like in practice, you can try using IndexedDB in your browser by visiting http://code.tutsplus.com/tutorials/working-with-indexeddb--net-34673. The concepts of keys and the scanning of key ranges are exactly like those that we examined here for BigTable, and the concepts about indexes foreshadow those that we will examine in a later section about datastore.

Scaling BigTable to BigData

By now, you have probably understood the data model of BigTable, how it is laid out on disk, and the advantages it offers. To recap once again: a BigTable installation may have many tables, each table may have many column families that are defined at the time of creating the table, and each column family may have many columns, as required. Rows are identified by keys, which have a maximum length of 64 KB, and the stored data is sorted by the key. We can retrieve, update, and delete a single row, and we can also scan a range of rows from a starting key to an ending key. So now the question is, how does this scale? We will give a very high-level overview, neglecting the micro details, to keep things simple and build a mental model that is useful to us as consumers of BigTable; after all, we are not supposed to clone BigTable's implementation. As we saw earlier, the basic storage unit in BigTable is a file format called SSTable that stores key-value pairs sorted by the key and has an index at its end. We also examined how reads, writes, and deletes work on an in-memory copy of the table that is merged periodically with the table present on the disk. Lastly, we mentioned that when the in-memory tables flushed to disk as SSTables reach a certain configurable count, they are merged into a bigger table. The view so far covers the data model, its physical layout, and how operations work on it when the data resides on a single machine, such as your laptop holding a telephone directory of the whole of Europe. However, how does this work at larger scales? Neglecting the minor implementation details and complexities that arise in distributed systems, the overall architecture and working principles are simple. On a single machine, there is only one SSTable file (or a few, in case they have not been merged into one) to take care of, and all the operations are performed on it. However, if this file does not fit on a single machine, we will of course have to add another machine, and half of the SSTable will reside on one machine, while the other half will be on the other machine. This split means that each machine holds a range of keys. For instance, if we have 1 million keys (that look like key1, key2, key3, and so on), then the keys from key1 to key500000 might be on one machine, while the keys from key500001 to key1000000 will be on the second machine.
So, we can say that each machine has a different key range for the same table. Now, although the data resides on two different machines, it is of course a single table that sprawls over the two. These partitions or separate parts are called tablets. Let's look at the key allocation on the two machines. We will keep this system to only two machines and 1 million rows for the sake of discussion, but there may be cases with about 20 billion keys sprawling over some 12,000 machines, with each machine holding a different range of keys. However, let's continue with this small cluster consisting of only two nodes. Now, the problem is this: as an external user who has no knowledge of which machine has which portion of the SSTable (and hence which key ranges are on each machine), how can a key, say, key489087, be located? For this, we will have to add something like a telephone directory, where we look up the table name and our desired key and learn which machine we should contact to get the data associated with the key. So, we are going to add another node, which will be called the master. This master will again contain a simple, plain SSTable, which is familiar to us, but the key-value pairs will be very interesting ones. Since this table contains data about the other BigTable tables, let's call it the METADATA table. In the METADATA table, we will adopt the following format for the keys:

tablename_ending-row-key

Since we have only two machines and each machine holds one tablet, the METADATA table will look like this:

Key | Value
employees_key500000 | 192.168.0.2
employees_key1000000 | 192.168.0.3

The master stores the location of each tablet server under a row key that is the encoding of the table name and the ending row of the tablet; so, to locate a tablet, the METADATA table has to be scanned. The master assigns tablets to different machines when required. Each tablet is about 100 MB to 200 MB in size. So, if we want to fetch a key, all we need to know is the following:

- The location of the master server
- The table in which we are looking for the key
- The key itself

Now, we will concatenate the table name with the key and perform a scan on the METADATA table on the master node. Let's suppose that we are looking for key600000 in the employees table. We would actually be looking for the key employees_key600000 in the table on the master machine. As you are familiar with the scan operation on an SSTable (and METADATA is just an SSTable), we look for a key that is greater than or equal to employees_key600000, which happens to be employees_key1000000. From this lookup, the key that we get is employees_key1000000, against which the IP address 192.168.0.3 is listed. This means that this is the machine we should connect to in order to fetch our data. We used the word keys and not key because it is a range scan operation. This will be clearer with another example. Let's suppose that we want to process rows with keys starting from key400000 up to key800000. If you look at the distribution of data across the machines, you will see that half of the required range is on one machine, while the other half is on the other. In this case, when we consult the METADATA table, two rows will be returned to us, because key400000 is less than key500000 (the ending row key for the data on the first machine) and key800000 is less than key1000000 (the ending row for the data on the second machine). With these two rows returned, we have two locations to fetch our data from. A simplified sketch of this lookup follows.
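The following sketch shows how a client library might route a lookup using such a METADATA table. It is an illustration, not BigTable's real interface: the find_tablets name and the row format are made up, and one caveat is made explicit that the prose glosses over, namely that keys compare as strings, so the numeric parts are zero-padded here (key0500000 rather than key500000) to make lexicographic order agree with numeric order.

# Hypothetical client-side routing over the METADATA table.
METADATA = [
    ('employees_key0500000', '192.168.0.2'),   # tablet ending at key0500000
    ('employees_key1000000', '192.168.0.3'),   # tablet ending at key1000000
]

def find_tablets(table, start_key, end_key):
    """Return the servers holding rows of `table` in [start_key, end_key]."""
    servers = []
    for meta_key, server in METADATA:            # METADATA is itself sorted
        ending_row = meta_key[len(table) + 1:]   # strip 'employees_'
        if ending_row < start_key:
            continue     # this tablet ends before our range begins
        servers.append(server)
        if ending_row >= end_key:
            break        # this tablet already covers the end of our range
    return servers

# A single key lives on exactly one tablet:
print(find_tablets('employees', 'key0600000', 'key0600000'))  # ['192.168.0.3']

# key0400000..key0800000 spans both tablets, so two rows come back:
print(find_tablets('employees', 'key0400000', 'key0800000'))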
This leads to an interesting side-effect: as the data resides on two different machines, it can be read or processed in parallel, which improves system performance. This is one reason why, even with larger datasets, the performance of BigTable won't deteriorate as badly as it would on a single, large machine holding all the data.

The datastore thyself

Until now, everything we talked about concerned BigTable, and we did not mention datastore at all. Now is the time to look at datastore in detail, because we understand BigTable quite well by now. Datastore is effectively a solution built on top of BigTable as a persistent NoSQL layer for Google App Engine. Although a BigTable installation may have many different tables, the data for all the applications is stored in six separate tables, where each table stores a different aspect of, or information about, the data. Don't worry about memorizing details of data modeling and how to use it for now, as this is something that we are going to look into in greater detail later. The fundamental unit of storage in datastore is called a property. You can think of a property as a column, so a property has a name and a type. You can group multiple properties into a Kind, which effectively is a Python class and analogous to a table in the RDBMS world. Here's a pseudo code sample:

# 1. Define our Kind and what it looks like.
class Person(object):
    name = StringProperty()
    age = IntegerProperty()

# 2. Create some entities of kind Person.
ali = Person(name='Ali', age=24)
bob = Person(name='Bob', age=34)
david = Person(name='David', age=44)
zain = Person(name='Zain', age=54)

# 3. Save them.
ali.put()
bob.put()
david.put()
zain.put()

This looks a lot like an ORM such as Django's ORM, SQLAlchemy, or Rails' ActiveRecord. The Person class is called a Kind in App Engine's terminology, and the StringProperty and IntegerProperty property classes are used to indicate the type of data that is supposed to be stored. Each instance of the Person class, such as ali, is called an entity in App Engine's terminology. Each entity, when stored, has a key that is not only unique throughout your application but, combined with your application ID, becomes unique across all the applications hosted on Google App Engine. All entities of all Kinds for all apps are stored in a single BigTable, and they are stored in a way where all the property values are serialized and stored in a single BigTable column. Hence, no separate columns are defined for each property. This is interesting, and required as well: if we were Google App Engine's architects, we would not know what Kinds of data people are going to store or the number and types of properties they would define, so it makes sense to serialize the whole thing as one value and store it in one column. So, this is what it looks like:

Key | Kind | Data
agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBbGkM | Person | {name: 'Ali', age: 24}
agtkZXZ-bWdhZS0wMXIPCxNTVVyc29uIgNBbGkM | Person | {name: 'Bob', age: 34}
agtkZXZ-bWdhZS0wMXIPCxIGUGVyc29uIgNBbBQM | Person | {name: 'David', age: 44}
agtkZXZ-bWdhZS0wMXIPCxIGUGVyc29uIRJ3bGkM | Person | {name: 'Zain', age: 54}

The key appears to be random, but it is not. A key is formed by concatenating your application ID, your Kind name (Person here), and either a unique identifier that is auto-generated by Google App Engine or a string that is supplied by you.
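To see why such a key is decodable, consider the following toy sketch. The app|kind|id layout here is invented purely for illustration; the real datastore packs the same pieces (application ID, Kind name, and ID) in a binary format before base64-encoding them.

import base64

def toy_encode_key(app_id, kind, numeric_id):
    # Made-up layout: the real encoding is binary, but the principle --
    # reversible packing of app ID, Kind, and ID -- is the same.
    raw = '%s|%s|%d' % (app_id, kind, numeric_id)
    return base64.urlsafe_b64encode(raw.encode()).decode()

key = toy_encode_key('myapp', 'Person', 65)
print(key)                                      # bXlhcHB8UGVyc29ufDY1
print(base64.urlsafe_b64decode(key).decode())   # myapp|Person|65 -- readable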
As the sketch above suggests, the key seems cryptic, but it is not safe to pass around in public: it is just base64 encoded and can easily be decoded to reveal the entity's Kind name and ID, which someone might take advantage of. A better way would be to encrypt it using a secret key before passing it around and to decrypt it using the same key on the way back in. A gist that serves this purpose is available on GitHub at https://gist.github.com/mohsinhijazee/07cdfc2826a565b50a68. However, for it to work, you need to edit your app.yaml file so that it includes the following:

libraries:
- name: pycrypto
  version: latest

Then, you can call the encrypt() method on the key while passing it around and decrypt it back using the decrypt() method, as follows:

person = Person(name='peter', age=10)
key = person.put()
url_safe_key = key.urlsafe()
safe_to_pass_around = encrypt(SECRET_KEY, url_safe_key)

Now, when you receive a key from the outside, you should first decrypt it and then use it, as follows:

key_from_outside = request.params.get('key')
url_safe_key = decrypt(SECRET_KEY, key_from_outside)
key = ndb.Key(urlsafe=url_safe_key)
person = key.get()

The key object is now good for use. To summarize: get the URL-safe key by calling the ndb.Key.urlsafe() method and encrypt it so that it can be passed around; on return, just do the reverse. If you really want to see how the encrypt and decrypt operations are implemented, they are reproduced as follows with only light comments, as cryptography is not our main subject:

import base64
from Crypto.Cipher import AES

BLOCK_SIZE = 32
PADDING = '#'

def _pad(data, pad_with=PADDING):
    # Pad data up to a multiple of BLOCK_SIZE with the given character.
    return data + (BLOCK_SIZE - len(data) % BLOCK_SIZE) * pad_with

def encrypt(secret_key, data):
    cipher = AES.new(_pad(secret_key, '@')[:32])
    return base64.b64encode(cipher.encrypt(_pad(data)))

def decrypt(secret_key, encrypted_data):
    cipher = AES.new(_pad(secret_key, '@')[:32])
    return cipher.decrypt(base64.b64decode(encrypted_data)).rstrip(PADDING)

KEY = 'your-key-super-duper-secret-key-here-only-first-32-characters-are-used'
encrypted = encrypt(KEY, 'Hello, world!')
print encrypted
print decrypt(KEY, encrypted)

More explanation on how this works is given at https://gist.github.com/mohsinhijazee/07cdfc2826a565b50a68. Now, let's come back to our main subject, datastore. As you can see, all the data is stored in a single column, and if we want to query something, for instance, people who are older than 25, we have no way to do it. So, how does this work? Let's examine this next.

Supporting queries

Now, what if we want to get all the people who are older than, say, 30? In the current scheme of things, this does not seem doable, because the data is serialized and dumped, as shown in the previous table. Datastore solves this problem by putting the sorted values to be queried upon as keys. Here, we want to query by age, so datastore will create a record in another table, called the index table. This index table is nothing but a plain BigTable where the row keys are actually the property values that you want to query. Hence, a scan and a quick lookup are possible. Here's what it looks like:

Key | Entity key
Myapp-person-age-24 | agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBbGkM
Myapp-person-age-34 | agtkZXZ-bWdhZS0wMXIPCxNTVVyc29uIgNBbGkM
Myapp-person-age-44 | agtkZXZ-bWdhZS0wMXIPCxIGUGVyc29uIgNBbBQM
Myapp-person-age-54 | agtkZXZ-bWdhZS0wMXIPCxIGUGVyc29uIRJ3bGkM
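Given only the scan primitive, a query such as "age greater than 30" can now be answered with one range scan over these index rows, followed by direct fetches of the returned entity keys. Here is a small sketch of that idea; the index row format above is kept, while the entity key values and the two-digit zero-padding of ages (needed for string order to match numeric order) are assumptions of the sketch.

# Index rows as in the preceding table: the key encodes app, kind, property,
# and value; the value is the entity's key in the entities table.
index = sorted([
    ('Myapp-person-age-24', 'key_of_ali'),
    ('Myapp-person-age-34', 'key_of_bob'),
    ('Myapp-person-age-44', 'key_of_david'),
    ('Myapp-person-age-54', 'key_of_zain'),
])

def people_older_than(age):
    # Ages are assumed to be stored with a fixed width (two digits here) so
    # that string comparison agrees with numeric comparison.
    start = 'Myapp-person-age-%02d' % age
    end = 'Myapp-person-age-' + '\xff'
    return [entity_key for key, entity_key in index if start < key < end]

# One range scan over the index; each returned entity key would then be
# fetched from the entities table.
print(people_older_than(30))  # ['key_of_bob', 'key_of_david', 'key_of_zain']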
Implementation details

So, all in all, datastore builds a NoSQL solution on top of BigTable by using the following six tables:

- A table to store entities
- A table to store entities by Kind
- A table to store indexes for the property values in ascending order
- A table to store indexes for the property values in descending order
- A table to store indexes for multiple properties together
- A table to keep track of the next unique ID for each Kind

Let us look at each table in turn. The first table is used to store entities for all the applications; we have already examined it in an example. The second table just stores the Kind names. Nothing fancy here; it's just some metadata that datastore maintains for itself. Think of this: you want to get all the entities that are of the Person Kind. How will you do this? If you look at the entities table alone and the operations that are available on a BigTable table, you will see that there is no way to fetch all the entities of a certain Kind from it. This table does exactly that, and it looks like this:

Key | Entity key
Myapp-Person-agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBbGkM | agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBbGkM
Myapp-Person-agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBb854 | agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBb854
Myapp-Person-agtkZXZ-bWdhZS0wMXIQTXIGUGVy748IgNBbGkM | agtkZXZ-bWdhZS0wMXIQTXIGUGVy748IgNBbGkM

So, as you can see, this is just a simple BigTable table where the keys follow the [app ID]-[Kind name]-[entity key] pattern. Tables 3, 4, and 5 from the preceding list are similar to the index table that we examined in the Supporting queries section. This leaves us with the last table. As you know, while storing entities, it is important to have a unique key for each row, and since all the entities from all the apps are stored in a single table, keys should be unique across the whole table. When datastore generates a key for an entity that has to be stored, it combines your application ID and the Kind name of the entity. This part of the key makes it unique across all the other applications' entities in the table, but not within the set of your own entities. For that, you need a number appended to it. This is exactly how AUTO INCREMENT works in the RDBMS world, where the value of a column is automatically incremented to ensure that it is unique. So, that's exactly what the last table is for. It keeps track of the last ID that was used by each Kind of each application, and it looks like this:

Key | Next ID
Myapp-Person | 65

In this table, the key follows the [application ID]-[Kind name] format, and the value is the next ID, which is 65 in this particular case. When a new entity of Kind Person is created, it will be assigned 65 as its ID, and the row will get a new value of 66. Our application has only one Kind defined, which is Person; therefore, there is only one row in this table, because we are only keeping track of the next ID for this one Kind. If we had another Kind, say, Group, it would have its own row in this table.

Summary

We started this article with the problem of storing huge amounts of data, processing it in bulk, and randomly accessing it.
This arose from the fact that we were ambitious enough to want to store every single web page on earth and process it to extract results from it. We introduced a solution called BigTable and examined its data model. We saw that in BigTable, we can define multiple tables, with each table having multiple column families, which are defined at the time of creating the table. We learned that column families are logical groupings of columns, and new columns can be defined within a column family as needed. We also learned that data stored in BigTable has no meaning on its own; it is stored as plain bytes, and its interpretation and meaning depend on the user of the data. We also learned that each row in BigTable has a unique row key, which has a maximum length of 64 KB. Lastly, we turned our attention to datastore, a NoSQL storage solution built on top of BigTable for Google App Engine. We briefly mentioned some datastore terminology, such as properties (columns), entities (rows), and Kinds (tables). We learned that all data is stored across six different BigTable tables, each capturing a different aspect of the data. Most importantly, we learned that all the entities of all the apps hosted on Google App Engine are stored in a single BigTable, and all properties go into a single BigTable column. We also learned how querying is supported by additional index tables that are keyed by property values and list the corresponding entity keys. This concludes our discussion of Google App Engine's datastore and its underlying technology, workings, and related concepts. Next, we will learn how to model our data on top of datastore. What we learned in this chapter will help us enormously in understanding how to better model our data to take full advantage of the underlying mechanisms.

Resources for Article:

Further resources on this subject:

- Google Guice [article]
- The EventBus Class [article]
- Integrating Google Play Services [article]
So, what is Django?

Packt
26 Mar 2013
7 min read
(For more resources related to this topic, see here.) I would like to introduce you to Django by using a definition straight from its official website: Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. The first part of this definition makes it clear that Django is a software framework written in Python and designed to support the development of web applications by offering a series of solutions to common problems and an abstraction of common design patterns for web development. The second part already gives you a clear idea of the basic concepts on which Django is built, by highlighting its capabilities for rapid development without compromising the quality and the maintainability of the code. To get the job done in a fast and clean way, the Django stack is made up of a series of layers that have nearly no dependencies between them. This brings great benefits, as it drives you to code with almost no knowledge sharing between components, making future changes easy to apply and avoiding side effects on other components. All this identifies Django as a loosely coupled framework, and its structure is a consequence of the approach just described. It can be defined as a Model-Template-View (MTV) framework, a variation of the well-known architectural pattern called Model-View-Controller (MVC). The MTV structure can be explained in the following way:

- Model: The application data
- View: Which data is presented
- Template: How the data is presented

As you can understand from the architectural structure of the framework, one of the most basic and important Django components is the Object Relational Mapper (ORM), which lets you define your data models entirely in Python and offers a complete dynamic API to access your database. The template engine also plays an important role in making the framework so great and easy to use: it is built to be designer-friendly. This means the templates are just HTML and that the template language doesn't allow variable assignments or advanced logic, offering only "programming-esque" functionality such as looping. Another innovative concept in the Django template engine is the introduction of template inheritance. The possibility to extend a base template discourages redundancy and helps you to keep information in one place. The key to the success of a web framework is also to make it possible to easily plug third-party modules into it. Django uses this concept and comes, like Python, with "batteries included". It is built with a system to plug in applications in an easy way, and the framework itself already includes a series of useful applications that you are free to use or not. One of the included applications that makes Django successful is the automatic admin interface, a complete, user-friendly, and production-ready web admin interface for your projects. It's easy to customize and extend, and is a great added value that helps you to speed up most common web projects. In modern web application development, systems are often built for a global audience, and web frameworks have to take into account the need to provide support for internationalization and localization. Django has full support for the translation of text; formatting of dates, times, and numbers; and time zones, and all this makes it possible to create multilingual web projects in a clear and easy way. The short sketch that follows gives a flavor of the model and view layers just described.
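Here is a minimal, hedged illustration of the MTV split and the ORM; the Article model, the latest_articles view, and the articles/latest.html template path are invented for the example rather than taken from any real project.

# models.py -- the Model layer: the schema is plain Python, no SQL required.
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

# views.py -- the View layer decides *which* data is presented.
from django.shortcuts import render

def latest_articles(request):
    # The ORM's dynamic API: filter and order without writing SQL.
    articles = Article.objects.order_by('-published')[:5]
    # The template (articles/latest.html) decides *how* the data is
    # presented, typically extending a base template via inheritance.
    return render(request, 'articles/latest.html', {'articles': articles})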
On top of all these great features, Django ships with a complete cache framework, a must-have in a web framework if we want to guarantee great performance under high load. This component makes caching an easy task, offering support for different types of cache backends, from in-memory caching to the most famous one, memcached. There are several other reasons that make Django a great framework, and most of them can only really be understood by diving into the framework, so do not hesitate; let's jump into Django.

Installation

Installing Django on your system is very easy. As it is just Python, you will only need a small effort to get it up and running on your machine. We will do it in two easy steps:

Step 1 – What do I need?

The only thing you need on your system to get Django running is obviously Python. At the time of writing this book, the latest version available is 1.5c1 (release candidate); it works on all Python versions from 2.6.5 to 2.7, and it also features experimental support for Versions 3.2 and 3.3. Get the right Python package for your system at http://www.python.org. If you are running Linux or Mac OS X, Python is probably already installed in your operating system. If you are using Windows, you will need to add the path of the Python installation folder (C:\Python27) to the environment variables. You can verify that Python is installed by typing python in your shell. The expected result should look similar to the following output:

Python 2.7.2 (default, Jun 20 2012, 16:23:33)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Step 2 – Get and install Django

Now we will see two methods to install Django: through a Python package manager tool called pip, and the manual way. Feel free to use the one that you prefer. At the time of writing this book, Django Version 1.5 is in release candidate status; if this is still the case, jump to the manual installation step and download the 1.5 release candidate package in place of the last stable one.

Installing Django with pip

Install pip; the easiest way is to get the installer from http://www.pip-installer.org. If you are using a Unix OS, execute the following command:

$ sudo pip install Django

If you are using Windows, you will need to start a shell with administrator privileges and run the following command:

$ pip install Django

Installing Django manually

Download the last stable release from the official Django website, https://www.djangoproject.com/download/. Uncompress the downloaded file using the tool that you prefer, and change to the directory just created (cd Django-X.Y). If you are using a Unix OS, execute the following command:

$ sudo python setup.py install

If you are using Windows, you will need to start a shell with administrator privileges and run the command:

$ python setup.py install

Verifying the Django installation

To verify that Django is installed on your system, you just need to open a shell and launch a Python console by typing python. In the Python console, try to import Django:

>>> import django
>>> django.get_version()
'1.5c1'

And that's it! Now that Django is installed on your system, we can start to explore all its potential.

Summary

In this article we learned about what Django actually is, what you can do with it, and why it's so great. We also learned how to download and install Django with minimum fuss and then set it up so that you can use it as soon as possible.
Resources for Article:

Further resources on this subject:

- Creating an Administration Interface in Django [Article]
- Creating an Administration Interface with Django 1.0 [Article]
- Views, URLs, and Generic Views in Django 1.0 [Article]
Show/hide rows and Highlighting cells

Packt
09 Apr 2013
7 min read
(For more resources related to this topic, see here.)

Show/hide rows

Click a link to trigger hiding or displaying of table rows.

Getting ready

Once again, start off with an HTML table. This one is not quite as simple a table as in previous recipes. You'll need to create a few <td> tags that span the entire table, as well as provide some specific classes to certain elements.

How to do it...

Again, give the table an id attribute. Each of the rows that represent a department, specifically the rows that span the entire table, should have a class attribute value of dept.

<table border="1" id="employeeTable">
  <thead>
    <tr>
      <th>Last Name</th>
      <th>First Name</th>
      <th>Phone</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="3" class="dept"> </td>
    </tr>

Each of the department names should be links, where the <a> elements have a class of rowToggler.

<a href="#" class="rowToggler">Accounting</a>

Each table row that contains employee data should have a class attribute value that corresponds to its department. Note that class names cannot contain spaces, so in the case of the Information Technology department, the class name should be InformationTechnology without a space. The issue of the space will be addressed later.

<tr class="Accounting">
  <td>Frang</td>
  <td>Corey</td>
  <td>555-1111</td>
</tr>

The following script makes use of the class names to create a table whose rows can be easily hidden or shown by clicking a link:

<script type="text/javascript">
  $( document ).ready( function() {
    $( "a.rowToggler" ).click( function( e ) {
      e.preventDefault();
      var dept = $( this ).text().replace( /\s/g, "" );
      $( "tr[class=" + dept + "]" ).toggle();
    });
  });
</script>

With the jQuery implemented, departments are "collapsed", and will only reveal their employees when the link is clicked.

How it works...

The jQuery will "listen" for a click event on any <a> element that has a class of rowToggler. In this case, we capture a reference to the event that triggered the action by passing e to the click handler function.

$( "a.rowToggler" ).click( function( e )

Here, e is simply a variable name. It can be any valid variable name, but e is a standard convention. The important thing is that jQuery has a reference to the event. Why? Because in this case, the event was that an <a> was clicked, and the browser's default behavior is to follow a link. This default behavior needs to be prevented. As luck would have it, jQuery has a built-in function called preventDefault(), and the first line of the function makes use of it by way of the following:

e.preventDefault();

Now that you've safely prevented the browser from leaving or reloading the page, set a variable with a value that corresponds to the name of the department that was just clicked.

var dept = $( this ).text().replace( /\s/g, "" );

Most of the preceding line should look familiar. $( this ) is a reference to the element that was clicked, and text() is something you've already used: you're getting the text of the <a> tag that was clicked, which will be the name of the department. But there's one small issue. If the department name contains a space, such as "Information Technology", then this space needs to be removed.

.replace( /\s/g, "" )

replace() is a standard JavaScript function that uses a regular expression to replace spaces with an empty string. This turns "Information Technology" into "InformationTechnology", which is a valid class name. The final step is to either show or hide any table row with a class that matches the department name that was clicked.
Ordinarily, the selector would look similar to the following:

$( "tr.InformationTechnology" )

Because the class name is a variable value, an alternate syntax is necessary. jQuery provides a way to select an element using any attribute name and value, so the selector above can also be represented as follows:

$( "tr[class=InformationTechnology]" )

The entire selector is a literal string, as indicated by the fact that it's enclosed in quotes. But the department name is stored in a variable, so concatenate the literal string with the variable value:

$( "tr[class=" + dept + "]" )

With the desired elements selected, either hide them if they're displayed, or display them if they're hidden. jQuery makes this very easy with its built-in toggle() method.

Highlighting cells

Use built-in jQuery traversal methods and selectors to parse the contents of each cell in a table and apply a particular style (for example, a yellow background or a red border) to all cells that meet a specified set of criteria.

Getting ready

Borrowing some data from Tiobe (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html), create a table of the top five programming languages for 2012. To make it "pop" a bit more, each <td> in the Ratings column that's over 10 percent will be highlighted in yellow, and each <td> in the Delta column that's less than zero will be highlighted in red. Each <td> in the Ratings column should have a class of ratings, and each <td> in the Delta column should have a class of delta. Additionally, set up two CSS classes for the highlights as follows:

.highlight { background-color: #FFFF00; } /* yellow */
.highlight-negative { background-color: #FF0000; } /* red */

Initially, the table should look as follows:

How to do it...

Once again, give the table an id attribute (but by now, you knew that), as shown in the following code snippet:

<table border="1" id="tiobeTable">
  <thead>
    <tr>
      <th>Position<br />Dec 2012</th>
      <th>Position<br />Dec 2011</th>
      <th>Programming Language</th>
      <th>Ratings<br />Dec 2012</th>
      <th>Delta<br />Dec 2011</th>
    </tr>
  </thead>

Apply the appropriate class names to the last two columns in each table row within the <tbody>, as shown in the following code snippet:

<tbody>
  <tr>
    <td>1</td>
    <td>2</td>
    <td>C</td>
    <td class="ratings">18.696%</td>
    <td class="delta">+1.64%</td>
  </tr>

With the table in place and properly marked up with the appropriate class names, write the script to apply the highlights as follows:

<script type="text/javascript">
  $( document ).ready( function() {
    $( "#tiobeTable tbody tr td.ratings" ).each( function( index ) {
      if ( parseFloat( $( this ).text() ) > 10 ) {
        $( this ).addClass( "highlight" );
      }
    });
    $( "#tiobeTable tbody tr td.delta" ).each( function( index ) {
      if ( parseFloat( $( this ).text() ) < 0 ) {
        $( this ).addClass( "highlight-negative" );
      }
    });
  });
</script>

Now, you will see a much more interesting table with multiple visual cues:

How it works...

Select the <td> elements within the tbody tag's table rows that have a class of ratings. For each iteration of the loop, test whether or not the value (text) of the <td> is greater than 10. Because the values in <td> contain non-numeric characters (in this case, % signs), we use JavaScript's parseFloat() to convert the text to actual numbers:

parseFloat( $( this ).text() )

Much of that should be review. $( this ) is a reference to the element in question, text() retrieves the text from the element, and parseFloat() ensures that the value is numeric so that it can be accurately compared to the value 10.
If the condition is met, use addClass() to apply the highlight class to the <td>. Do the same thing for the Delta column; the only difference is in checking whether the text is less than zero. If it is, apply the class highlight-negative. The end result makes it much easier to identify specific data within the table.

Summary

In this article we covered two recipes: Show/hide rows and Highlighting cells.

Resources for Article:

Further resources on this subject:

- Tips and Tricks for Working with jQuery and WordPress [Article]
- Using jQuery Script for Creating Dynamic Table of Contents [Article]
- Getting Started with jQuery [Article]
Our App and Tool Stack

Packt
04 Mar 2015
33 min read
In this article by Zachariah Moreno, author of the book AngularJS Deployment Essentials, you will learn how to do the following:

- Minimize efforts and maximize results using a tool stack optimized for AngularJS development
- Access the krakn app via GitHub
- Scaffold an Angular app with Yeoman, Grunt, and Bower
- Set up a local Node.js development server
- Read through krakn's source code

Before NASA or SpaceX launches a vessel into the cosmos, there is a tremendous amount of planning and preparation involved. The guiding principle when planning for any successful mission is to minimize efforts and resources while retaining the maximum return on the mission. Our principles for development and deployment are no exception to this axiom, and you will gain a firmer working knowledge of how to apply them in this article.

(For more resources related to this topic, see here.)

The right tools for the job

Web applications can be compared to buildings; without tools, neither would be a pleasure to build. This makes tools an indispensable factor in both development and construction. When tools are combined, they form a workflow that can be repeated across any project built with the same stack, facilitating the practices of design, development, and deployment. The argument can be made that it is just as paramount to document workflow as an application's source code or API. Along with grouping tools into categories based on the phases of building applications, it is also useful to group tools based on the opinions of a respective project, in our case Angular, Ionic, and Firebase. I call tools grouped into opinionated workflows tool stacks. For example, the remainder of this article discusses the tool stack used to build the application that we will deploy across environments in this book. In contrast, if you were to build a Ruby on Rails application, the tool stack would be completely different because the project's opinions are different. Our app is called krakn, and it functions as a real-time chat application built on top of the opinions of Angular, the Ionic Framework, and Firebase. You can find all of krakn's source code at https://github.com/zachmoreno/krakn.

Version control with Git and GitHub

Git is a version control tool with a command-line interface (CLI), developed by Linus Torvalds for use on the famed Linux kernel. Git is popular largely due to its distributed architecture, which makes it nearly impossible for corruption to occur: any remote repository has all of the same information as your local repository. It is useful to think of Git as a free insurance policy for your code. You will need to install Git using the instructions provided at www.git-scm.com/ for your development workstation's operating system. GitHub.com has played a notable role in Git's popularization, turning its functionality into a social network focused on open source code contributions. With a pricing model that incentivizes open source contributions and licensing for private repositories, GitHub elevated the use of Git to heights never seen before. If you don't already have an account on GitHub, now is the perfect time to visit github.com to provision a free account. I mentioned earlier that krakn's code is available for forking at github.com/ZachMoreno/krakn. This means that any person with a GitHub account has the ability to view my version of krakn and clone a copy of their own for further modifications or contributions.
In GitHub's web application, forking manifests itself as a button located to the right of the repository's title, which in this case is ZachMoreno/krakn. When you click on the button, you will see an animation that simulates the hardcore forking action. This results in a cloned repository under your account with a title to the tune of YourName/krakn.

Node.js

Node.js, commonly known as Node, is a community-driven server environment built on Google Chrome's V8 JavaScript runtime that is entirely event driven and facilitates a nonblocking I/O model. According to www.nodejs.org, it is best suited for:

"Data-intensive real-time applications that run across distributed devices."

So what does all this boil down to? Node empowers web developers to write JavaScript both on the client and the server, with bidirectional real-time I/O. The advent of Node has empowered developers to take their skills from the client to the server, evolving from frontend to full stack (like a caterpillar evolving into a butterfly). Not only do these skills facilitate a pay increase, they also advance the Web towards the same functionality as the traditional desktop or native application. For our purposes, we use Node as a tool: a tool to build real-time applications in the fewest number of keystrokes, videos watched, and words read as possible. Node is, in fact, a modular tool through its extensible package interface, called Node Package Manager (NPM). You will use NPM as a means to install the remainder of our tool stack.

NPM

NPM is a means to install Node packages on your local or remote server, and it is how we will install the majority of the tools and software used in this book. This is achieved by running the $ npm install -g [PackageName] command in your command line or terminal. To search the full list of Node packages, visit www.npmjs.org or run $ npm search [Search Term] in your command line or terminal, as shown in the following screenshot:

Yeoman's workflow

Yeoman is a CLI that is the glue holding your tools into your opinionated workflow. Although the term opinionated might sound off-putting, you must first consider the wisdom and experience of the developers and community who maintain Yeoman. In this context, opinionated means little more than a collection of best practices, all aimed at improving your developer experience of building static websites, single-page applications, and everything in between. Opinionated does not mean that you are locked into what someone else feels is best for you, nor does it mean that you must strictly adhere to the opinions or best practices included. Yeoman is general enough to help you build nearly anything for the Web, and it improves your workflow while you develop it. The tools that make up Yeoman's workflow are Yo, Grunt.js, Bower, and a few others that are more or less optional, but are probably worth your time.

Yo

Apart from having one of the hippest namespaces, Yo is a powerful code generator that is intelligent enough to scaffold most sites and applications. By default, instantiating a yo command assumes that you mean to scaffold something at a project level, but yo can also be scoped more granularly by means of sub-generators. For example, the command for instantiating a new vanilla Angular project is as follows:

$ yo angular radicalApp

Yo will not finish your request until you provide some further information about your desired Angular project.
This is achieved by asking you a series of relevant questions, and based on your answers, yo will scaffold a familiar application folder/file structure, along with all the boilerplate code. Note that if you have worked with the angular-seed project, the Angular application that yo generates will look very familiar to you. Once you have an Angular app scaffolded, you can begin using sub-generator commands. The following command scaffolds a new route, radicalRoute, within radicalApp:

$ yo angular:route radicalRoute

The :route sub-generator is a very powerful command, as it automates all of the following key tasks:

- It creates a new file, radicalApp/scripts/controllers/radicalRoute.js, that contains the controller logic for the radicalRoute view
- It creates another new file, radicalApp/views/radicalRoute.html, that contains the associated view markup and directives
- Lastly, it adds an additional route within radicalApp/scripts/app.js that connects the view to the controller

Additionally, the sub-generators for yo angular include the following:

- :controller
- :directive
- :filter
- :service
- :provider
- :factory
- :value
- :constant
- :decorator
- :view

All the sub-generators allow you to execute finer-detailed commands for scaffolding smaller components, compared to :route, which executes a combination of sub-generators.

Installing Yo

Within your workstation's terminal or command-line application, type the following command, followed by a return:

$ npm install -g yo

If you are a Linux or Mac user, you might want to prefix the command with sudo, as follows:

$ sudo npm install -g yo

Grunt

Grunt.js is a task runner that enhances your existing and/or Yeoman's workflow by automating repetitive tasks. Each time you generate a new project with yo, it creates a /Gruntfile.js file that wires up all of the curated tasks. You might have noticed that installing Yo also installs all of Yo's dependencies. Reading through /Gruntfile.js should incite a fair amount of awe, as it gives you a snapshot of what is going on under the hood of Yeoman's curated Grunt tasks and their dependencies. Generating a vanilla Angular app produces a /Gruntfile.js file that is responsible for performing the following tasks:

- It defines where Yo places Bower packages, which is covered in the next section
- It defines the path where the grunt build command places the production-ready code
- It initializes the watch task to run:
  - JSHint when JavaScript files are saved
  - Karma's test runner when JavaScript files are saved
  - Compass when SCSS or SASS files are saved
  - The saved /Gruntfile.js file
- It initializes LiveReload when any HTML or CSS files are saved
- It configures the grunt server command to run a Node.js server on localhost:9000, or to show test results on localhost:9001
- It autoprefixes CSS rules on LiveReload and grunt build
- It renames files for optimizing browser caching
- It configures the grunt build command to minify images, SVG, HTML, and CSS files, and to safely minify Angular files

Let us pause for a moment to reflect on the amount of time it would take to find, learn, and implement each dependency into our existing workflow for each project we undertake. OK, we should now have a greater appreciation for Yeoman and its community. For the vast majority of the time, you will likely only use a few Grunt commands, which include the following:

$ grunt server
$ grunt test
$ grunt build

Bower

If Yo scaffolds our application's structure and files, and Grunt automates repetitive tasks for us, then what does Bower bring to the party?
Bower is web development's missing package manager. Its functionality parallels that of Ruby Gems for the Ruby on Rails MVC framework, but it is not limited to any single framework or technology stack. The explicit use of Bower is not required by the Yeoman workflow, but as I mentioned previously, the use of Bower is configured automatically for you in your project's /Gruntfile.js file. How does managing packages improve our development workflow? With all of the time we've been spending in our command lines and terminals, it is handy to have the ability to automate the management of third-party dependencies within our application. This ability manifests itself in a few simple commands, the most ubiquitous being the following:

$ bower install [PackageName] --save

With this command, Bower will automate the following steps:

- First, search its packages for the specified package name
- Download the latest stable version of the package, if found
- Move the package to the location defined in your project's /Gruntfile.js file, typically a folder named /bower_components
- Insert references to the package's files within your project's /index.html file: <link> elements for CSS files in the document's <head> element, and <script> elements for JavaScript files right above the document's closing </body> tag

This process is one that web developers are more than familiar with, because adding a JavaScript library or new dependency happens multiple times within every project. Bower speeds up our existing manual process through automation and improves it by providing the latest stable version of a package and then notifying us of an update if one is available. This last part, "notifying us of an update if … available", is important because, as a web developer advances from one project to the next, it is easy to overlook keeping dependencies as up to date as possible. This is achieved by running the following command:

$ bower update

This command returns all the available updates, if any, and will go through the same process of inserting new references where applicable. Bower.io includes all of the documentation on how to use Bower to its fullest potential, along with the ability to search through all of the available Bower packages. Searching for available Bower packages can also be achieved by running the following command:

$ bower search [SearchTerm]

If you cannot find the specific dependency you are searching for, and the project is on GitHub, consider contributing a bower.json file to the project's root and inviting the owner to register it by running the following command:

$ bower register [ThePackageName] [GitEndpoint]

Registration allows you to install your dependency by running the next command:

$ bower install [ThePackageName]

The Ionic framework

The Ionic framework is a truly remarkable advancement in bridging the gap between web applications and native mobile applications. In some ways, Ionic parallels Yeoman in that it assembles tools that were already available to developers into a neat package and structures a workflow around them, inherently improving our experience as developers. If Ionic is analogous to Yeoman, then what are the tools that make up Ionic's workflow? The tools that, when combined, make Ionic noteworthy are Apache Cordova, Angular, Ionic's suite of Angular directives, and Ionic's mobile UI framework.

Batarang

An invaluable piece of our Angular tool stack is the Google Chrome Developer Tools extension, Batarang, by Brian Ford.
Batarang adds a third-party panel (on the right-hand side of Console) to DevTools that facilitates Angular-specific inspection when debugging. We can view data in the scopes of each model, analyze each expression's performance, and view a beautiful visualization of service dependencies, all from within Batarang. Because Angular augments the DOM with ng- attributes, Batarang also provides a Properties pane within the Elements panel to inspect the models attached to a given element's scope. The extension is easy to install from either the Chrome Web Store or the project's GitHub repository, and inspection can be enabled by performing the following steps:

1. Open the Chrome Developer Tools.
2. Navigate to the AngularJS panel.
3. Select the Enable checkbox on the far right tab.

Your active Chrome tab will then be reloaded automatically, and the AngularJS panel will begin populating the inspection data. In addition, you can leverage the Angular pane within the Elements panel to view Angular-specific properties at an elemental level, and observe the $scope variable from within the Console panel.

Sublime Text and editor integration

While developing any Angular app, it is helpful to augment our workflow further with Angular-specific syntax completion, snippets, go to definition, and quick panel search in the form of a Sublime Text package. Perform the following steps:

1. If you haven't installed Sublime Text already, you need to first install Package Control. Otherwise, continue with the next step.
2. Once installed, press command + Shift + P in Sublime.
3. Select the Package Control: Install Package option.
4. Finally, type angularjs and press Enter on your keyboard.

In addition to support within Sublime, Angular enhancements exist for lots of popular editors, including WebStorm, Coda, and TextMate.

Krakn

As a quick refresher, krakn was constructed using all of the tools covered in this article. These include Git, GitHub, Node.js, NPM, Yeoman's workflow, Yo, Grunt, Bower, Batarang, and Sublime Text. The application builds on Angular, Firebase, the Ionic Framework, and a few other minor dependencies. The workflow I used to develop krakn went something like the following; follow these steps to achieve the same thing. Note that you can skip the remainder of this section if you'd like to get straight to the deployment action, and feel free to rename things where necessary.

Setting up Git and GitHub

The workflow I followed while developing krakn begins with initializing our local Git repository and connecting it to our remote master repository on GitHub. In order to install and set up both, perform the following steps:

1. Install all the tool stack dependencies, and create a folder called krakn.
2. Run $ git init, and create a README.md file.
3. Run $ git add README.md and commit README.md to the local master branch.
4. Create a new remote repository on GitHub called ZachMoreno/krakn.
5. Run the following command: $ git remote add origin git@github.com:[YourGitHubUserName]/krakn.git
6. Conclude the setup by running $ git push -u origin master.

Scaffolding the app with Yo

Scaffolding our app couldn't be easier with the yo ionic generator. To do this, perform the following steps:

1. Install Yo by running $ npm install -g yo.
2. Install generator-ionicjs by running $ npm install -g generator-ionicjs.
3. To conclude the scaffolding of your application, run the yo ionic command.
Development

After scaffolding the folder structure and boilerplate code, our workflow advances to the development phase, which is encompassed in the following steps:

1. To begin, run grunt server.
2. You are now in a position to make changes, be they deletions or additions.
3. Once these are saved, LiveReload will automatically reload your browser.
4. You can then review the changes in the browser.
5. Repeat steps 2-4 until you are ready to advance to the predeployment phase.

Views, controllers, and routes

Being a simple chat application, krakn has only a handful of views/routes: login, chat, account, menu, and about. The menu view is present in all the other views in the form of an off-canvas menu.

The login view

The default view/route/controller is named login. The login view utilizes Firebase's Simple Login feature to authenticate users before proceeding to the rest of the application. Apart from logging into krakn, users can register a new account by entering their desired credentials. An interesting part of the login view is the use of the ng-show directive to toggle the second password field if the user selects the register button. However, the ng-model directive is the first step here, as it is used to pass the input text from the view to the controller and, ultimately, to the Firebase Simple Login. Other than the Angular magic, this view uses the ion-view directive, grid, and buttons that are all core to Ionic. Each view within an Ionic app is wrapped within an ion-view directive that contains a title attribute, as follows:

<ion-view title="Login">

The login view uses standard input elements that contain an ng-model attribute to bind the input's value back to the controller's $scope, as follows:

<input type="text" placeholder="you@email.com" ng-model="data.email" />
<input type="password" placeholder="embody strength" ng-model="data.pass" />
<input type="password" placeholder="embody strength" ng-model="data.confirm" />

The Log In and Register buttons call their respective functions using the ng-click attribute, with the value set to the function's name, as follows:

<button class="button button-block button-positive" ng-click="login()" ng-hide="createMode">Log In</button>

The Register and Cancel buttons set the value of $scope.createMode to true or false to show or hide the correct buttons for either action:

<button class="button button-block button-calm" ng-click="createMode = true" ng-hide="createMode">Register</button>
<button class="button button-block button-calm" ng-show="createMode" ng-click="createAccount()">Create Account</button>
<button class="button button-block button-assertive" ng-show="createMode" ng-click="createMode = false">Cancel</button>

$scope.err is displayed only when you want to show feedback to the user:

<p ng-show="err" class="assertive text-center">{{err}}</p>
</ion-view>

The login controller depends on Firebase's loginService module and Angular's core $location module:

controller('LoginCtrl', ['$scope', 'loginService', '$location',
  function($scope, loginService, $location) {

Ionic's directives tend to create isolated scopes, so it was useful here to wrap our controller's variables within a $scope.data object to avoid issues within the isolated scope, as follows:

$scope.data = {
  "email"      : null,
  "pass"       : null,
  "confirm"    : null,
  "createMode" : false
}

The login() function checks the credentials before authentication and sends feedback to the user if
needed:

$scope.login = function(cb) {
  $scope.err = null;
  if( !$scope.data.email ) {
    $scope.err = 'Please enter an email address';
  }
  else if( !$scope.data.pass ) {
    $scope.err = 'Please enter a password';
  }

If the credentials are sound, we send them to Firebase for authentication, and when we receive a success callback, we route the user to the chat view using $location.path(), as follows:

  else {
    loginService.login($scope.data.email, $scope.data.pass, function(err, user) {
      $scope.err = err ? err + '' : null;
      if( !err ) {
        cb && cb(user);
        $location.path('krakn/chat');
      }
    });
  }
};

The createAccount() function works in much the same way as login(), except that it ensures that the user doesn't already exist before adding them to your Firebase and logging them in:

$scope.createAccount = function() {
  $scope.err = null;
  if( assertValidLoginAttempt() ) {
    loginService.createAccount($scope.data.email, $scope.data.pass,
      function(err, user) {
        if( err ) {
          $scope.err = err ? err + '' : null;
        }
        else {
          // must be logged in before I can write to my profile
          $scope.login(function() {
            loginService.createProfile(user.uid, user.email);
            $location.path('krakn/account');
          });
        }
      });
  }
};

The assertValidLoginAttempt() function is used to ensure that no errors slip through the account creation and authentication flows:

function assertValidLoginAttempt() {
  if( !$scope.data.email ) {
    $scope.err = 'Please enter an email address';
  }
  else if( !$scope.data.pass ) {
    $scope.err = 'Please enter a password';
  }
  else if( $scope.data.pass !== $scope.data.confirm ) {
    $scope.err = 'Passwords do not match';
  }
  return !$scope.err;
}
}])

The chat view

Keeping vegan practices aside, the meat and potatoes of krakn's functionality lives within the chat view/controller/route. The design is similar to most SMS clients, with the input in the footer of the view and messages listed chronologically in the main content area. The ng-repeat directive is used to display a message every time a message is added to the messages collection in Firebase. If you submit a message successfully, unsuccessfully, or without any text, feedback is provided via the placeholder attribute of the message input. There are two filters being utilized within the chat view: orderByPriority and timeago. The orderByPriority filter is defined within the firebase module and uses the Firebase object IDs to ensure that objects are always chronological. The timeago filter is an open source Angular module that I found; you can access it at JS Fiddle. The ion-view directive is used once again to contain our chat view:

<ion-view title="Chat">

Our list of messages is composed using the ion-list and ion-item directives, in addition to a couple of key attributes. The ion-list directive gives us some nice interactive controls using the option-buttons and can-swipe attributes.
The chat view

Keeping vegan practices aside, the meat and potatoes of krakn's functionality lives within the chat view/controller/route. The design is similar to most SMS clients, with the input in the footer of the view and messages listed chronologically in the main content area. The ng-repeat directive is used to display a message every time one is added to the messages collection in Firebase. Whether you submit a message successfully, unsuccessfully, or without any text, feedback is provided via the placeholder attribute of the message input.

There are two filters utilized within the chat view: orderByPriority and timeAgo. The orderByPriority filter is defined within the firebase module and uses the Firebase object IDs to ensure that objects are always in chronological order. The timeAgo filter is an open source Angular module that I found; you can access it at JS Fiddle. (A simplified sketch of such a filter appears at the end of this section.)

The ion-view directive is used once again to contain our chat view:

    <ion-view title="Chat">

Our list of messages is composed using the ion-list and ion-item directives, in addition to a couple of key attributes. The ion-list directive gives us some nice interactive controls using the option-buttons and can-swipe attributes. This results in each list item being swipeable to the left, revealing our option-buttons, as follows:

    <ion-list option-buttons="itemButtons" can-swipe="true" ng-show="messages">

Our workhorse in the chat view is the trusty ng-repeat directive, responsible for persisting our data from Firebase, to our service, to our controller, into our view, and back again:

    <ion-item ng-repeat="message in messages | orderByPriority" item="item" can-swipe="true">

Then, we bind our data into vanilla HTML elements that have some custom styles applied to them:

    <h2 class="user">{{ message.user }}</h2>

The third-party timeago filter converts the timestamp into something such as "5 min ago", similar to Instagram or Facebook:

    <small class="time">{{ message.receivedTime | timeago }}</small>
    <p class="message">{{ message.text }}</p>
    </ion-item>
    </ion-list>

A vanilla input element is used to accept chat messages from our users. The input data is bound to $scope.data.newMessage for sending data to Firebase, and $scope.feedback is used to keep our users informed:

    <input type="text" class="{{ feeling }}" placeholder="{{ feedback }}" ng-model="data.newMessage" />

When you click on the send/submit button, the addMessage() function sends the message to your Firebase and adds it to the list of chat messages in real time:

    <button type="submit" id="chat-send" class="button button-small button-clear" ng-click="addMessage()"><span class="ion-android-send"></span></button>
    </ion-view>

The ChatCtrl controller depends on a few more modules than our LoginCtrl, including syncData, $ionicScrollDelegate, $ionicLoading, and $rootScope:

    controller('ChatCtrl', ['$scope', 'syncData', '$ionicScrollDelegate', '$ionicLoading', '$rootScope',
      function($scope, syncData, $ionicScrollDelegate, $ionicLoading, $rootScope) {

The userName variable is derived from the authenticated user's e-mail address (saved within the application's $rootScope) by splitting the e-mail and using everything before the @ symbol:

    var userEmail = $rootScope.auth.user.email,
        userName  = userEmail.split('@');

We avoid the isolated scope issue in the same fashion as we did in LoginCtrl:

    $scope.data = {
      newMessage : null,
      user       : userName[0]
    }

Our view will only contain the latest 20 messages synced from Firebase:

    $scope.messages = syncData('messages', 20);

When a new message is saved/synced, it is added to the bottom of the ng-repeated list, so we use the $ionicScrollDelegate service to automatically scroll the new message into view, as follows:

    $ionicScrollDelegate.scrollBottom(true);

Our default chat input placeholder text is something on your mind?:

    $scope.feedback = 'something on your mind?';
    // displays as class on chat input placeholder
    $scope.feeling = 'stable';

If we have a new message and a valid username, we can call the $add() function, which syncs the new message to Firebase and into our view, as follows:

    $scope.addMessage = function() {
      if( $scope.data.newMessage
        && $scope.data.user ) {
        // new data elements cannot be synced without adding them to FB Security Rules
        $scope.messages.$add({
          text         : $scope.data.newMessage,
          user         : $scope.data.user,
          receivedTime : Number(new Date())
        });
        // clean up
        $scope.data.newMessage = null;

On a successful sync, the feedback updates to say Done! What's next?, as shown in the following code snippet:

        $scope.feedback = 'Done! What\'s next?';
        $scope.feeling = 'stable';
      }
      else {
        $scope.feedback = 'Please write a message before sending';
        $scope.feeling = 'assertive';
      }
    };

    $ionicScrollDelegate.scrollBottom(true);
    }])
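For reference, a minimal version of such a timeago filter could be written as follows. This is a simplified sketch for illustration, not the actual open source module used by krakn:

    angular.module('krakn')
      .filter('timeago', function () {
        // Convert a millisecond timestamp into a rough "time ago" label.
        return function (timestamp) {
          var seconds = Math.floor((Date.now() - timestamp) / 1000);
          if (seconds < 60) { return 'just now'; }
          var minutes = Math.floor(seconds / 60);
          if (minutes < 60) { return minutes + ' min ago'; }
          var hours = Math.floor(minutes / 60);
          if (hours < 24) { return hours + ' hr ago'; }
          return Math.floor(hours / 24) + ' days ago';
        };
      });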
The account view

The account view allows logged-in users to view their current name and e-mail address, and provides them with the ability to update their password and e-mail address. The input fields interact with Firebase in the same way as the chat view does, using the syncData method defined in the firebase module:

    <ion-view title="'Account'" left-buttons="leftButtons">

The $scope.user object contains our logged-in user's account credentials, and we bind them into our view as follows:

    <p>{{ user.name }}</p>
    …
    <p>{{ user.email }}</p>

Basic account management functionality is provided within this view, so users can update their e-mail address and/or password if they choose to, using the following code snippet:

    <input type="password" ng-keypress="reset()" ng-model="oldpass"/>
    …
    <input type="password" ng-keypress="reset()" ng-model="newpass"/>
    …
    <input type="password" ng-keypress="reset()" ng-model="confirm"/>

Both the updatePassword() and updateEmail() functions work in much the same fashion as our createAccount() function within the LoginCtrl controller. They check whether the new e-mail or password differs from the old one, and if all is well, they sync the changes to Firebase and back again:

    <button class="button button-block button-calm" ng-click="updatePassword()">update password</button>
    …
    <p class="error" ng-show="err">{{err}}</p>
    <p class="good" ng-show="msg">{{msg}}</p>
    …
    <input type="text" ng-keypress="reset()" ng-model="newemail"/>
    …
    <input type="password" ng-keypress="reset()" ng-model="pass"/>
    …
    <button class="button button-block button-calm" ng-click="updateEmail()">update email</button>
    …
    <p class="error" ng-show="emailerr">{{emailerr}}</p>
    <p class="good" ng-show="emailmsg">{{emailmsg}}</p>
    …
    </ion-view>

The menu view

Within krakn/app/scripts/app.js, the menu route is defined as the only abstract state. Because of its abstract state, it can be presented in the app along with the other views by the ion-side-menus directive provided by Ionic.

You might have noticed that only two menu options are available before signing into the application and that the rest appear only after authenticating. This is achieved using the ng-show-auth directive on the chat, account, and log out menu items (a sketch of one possible implementation follows). The majority of the options for Ionic's directives are available through attributes, making them simple to use. For example, take a look at the animation="slide-left-right" attribute. Ionic's use of custom attributes within its directives is one of the ways the Ionic Framework sets itself apart from other options in this space.
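ng-show-auth is not a stock Angular directive, and its implementation is not listed here. As a rough sketch under stated assumptions, it could evaluate its attribute to one or more expected auth states and toggle the element whenever the current state changes; the $rootScope.authState property below is an assumption for illustration:

    angular.module('krakn')
      .directive('ngShowAuth', ['$rootScope', function ($rootScope) {
        return {
          restrict: 'A',
          link: function (scope, element, attrs) {
            // "'login'" or "['logout','error']": the states to show for
            var expected = scope.$eval(attrs.ngShowAuth);
            if (!angular.isArray(expected)) { expected = [expected]; }

            function update(state) {
              // hide the element unless the current auth state matches
              element.toggleClass('ng-hide', expected.indexOf(state) === -1);
            }

            // assumed: the login service keeps the latest state on $rootScope
            update($rootScope.authState);
            $rootScope.$watch('authState', update);
          }
        };
      }]);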
The ion-side-menus directive contains our menu list, similar to the ion-view directive we previously covered:

    <ion-side-menus>
     <ion-pane ion-side-menu-content>
      <ion-nav-bar class="bar-positive">

Our back button is displayed by including the ion-nav-back-button directive within the ion-nav-bar directive:

       <ion-nav-back-button class="button-clear"><i class="icon ion-chevron-left"></i> Back</ion-nav-back-button>
      </ion-nav-bar>

Animations within Ionic are exposed through the animation attribute, which is built atop the ngAnimate module. In this case, we are using a simple animation that replicates the experience of a native mobile app:

      <ion-nav-view name="menuContent" animation="slide-left-right"></ion-nav-view>
     </ion-pane>

     <ion-side-menu side="left">
      <header class="bar bar-header bar-positive">
       <h1 class="title">Menu</h1>
      </header>
      <ion-content class="has-header">

A simple ion-list directive/element is used to display our navigation items in a vertical list. The ng-show-auth attribute handles the display of menu items before and after a user has authenticated. Before a user logs in, they can access the navigation, but only the About and Log In views are available until after successful authentication.

       <ion-list>
        <ion-item nav-clear menu-close href="#/app/chat" ng-show-auth="'login'">
         Chat
        </ion-item>

        <ion-item nav-clear menu-close href="#/app/about">
         About
        </ion-item>

        <ion-item nav-clear menu-close href="#/app/login" ng-show-auth="['logout','error']">
         Log In
        </ion-item>

The Log Out navigation item is only displayed once logged in, and upon a click, it calls the logout() function in addition to navigating to the login view:

        <ion-item nav-clear menu-close href="#/app/login" ng-click="logout()" ng-show-auth="'login'">
         Log Out
        </ion-item>
       </ion-list>
      </ion-content>
     </ion-side-menu>
    </ion-side-menus>

The MenuCtrl controller is the simplest controller in this application; all it contains is the toggleMenu() and logout() functions:

    controller("MenuCtrl", ['$scope', 'loginService', '$location', '$ionicScrollDelegate',
      function($scope, loginService, $location, $ionicScrollDelegate) {
        $scope.toggleMenu = function() {
          $scope.sideMenuController.toggleLeft();
        };

        $scope.logout = function() {
          loginService.logout();
          $scope.toggleMenu();
        };
    }])

The about view

The about view is 100 percent static; its only real purpose is to present the credits for all the open source projects used in the application.

Global controller constants

All of krakn's controllers share only two dependencies: ionic and ngAnimate. Because Firebase's modules are defined within /app/scripts/app.js, they are available for consumption by all the controllers without the need to define them as dependencies. Therefore, the firebase service's syncData and loginService are available to ChatCtrl and LoginCtrl for use.

The syncData service is how krakn utilizes the three-way data binding provided by Firebase. For example, within the ChatCtrl controller, we use syncData('messages', 20) to bind the latest twenty messages within the messages collection to $scope for consumption by the chat view. Conversely, when the user clicks the submit button, we write the data to the messages collection by using the syncData.$add() method inside the $scope.addMessage() function (a sketch of syncData itself follows this snippet):

    $scope.addMessage = function() {
      if(...) {
        $scope.messages.$add({ ... });
      }
    };
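Since syncData is defined by the app's own firebase service (covered in the next section) rather than listed here, the following is only a sketch of what it might look like, assuming angularFire's $firebase wrapper, the old Firebase limit() query API, and a hypothetical Firebase URL:

    angular.module('krakn')
      .factory('syncData', ['$firebase', function ($firebase) {
        // Returns a synchronized collection bound to the given path;
        // the optional limit caps how many records are kept in sync.
        return function (path, limit) {
          var ref = new Firebase('https://krakn.firebaseio.com/' + path); // hypothetical URL
          if (limit) { ref = ref.limit(limit); }
          return $firebase(ref);
        };
      }]);

With something like this in place, $scope.messages = syncData('messages', 20) keeps $scope.messages and the messages collection synchronized in both directions, and $add() pushes new records up to Firebase.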
Models and services

The model for krakn is www.krakn.firebaseio.com. The services that consume krakn's Firebase API are as follows:

- The firebase service in krakn/app/scripts/service.firebase.js
- The login service in krakn/app/scripts/service.login.js
- The changeEmail service in krakn/app/scripts/changeEmail.firebase.js

The firebase service defines the syncData service, which is responsible for routing data bidirectionally between krakn/app/bower_components/angularfire.js and our controllers. Please note that the reason I have not mentioned angularfire.js until this point is that it is essentially an abstract data-translation layer between firebaseio.com and Angular applications that intend to consume data as a service.

Predeployment

Once the majority of an application's development phase has been completed, at least for the initial launch, it is important to run all of the code through a build process that optimizes file sizes by compressing images and minifying text files. This piece of the workflow was not overlooked by Yeoman and is available through the grunt build command. As mentioned in the section on Grunt, the /Gruntfile.js file defines where built code is placed once it is optimized for deployment. Yeoman's default location for built code is the /dist folder, which might or might not exist depending on whether you have run the grunt build command before.

Summary

In this article, we discussed the tool stack and workflow used to build the app. Together, Git and Yeoman formed a solid foundation for building krakn. Git and GitHub provided us with distributed version control and a platform for sharing the application's source code with you and the world. Yeoman facilitated the remainder of the workflow: scaffolding with Yo, automation with Grunt, and package management with Bower. With our app fully scaffolded, we were able to build our interface with the directives provided by the Ionic Framework and wire up real-time data synchronization through our Firebase instance. With a few key tools, we were able to minimize our development time while maximizing our return.

Resources for Article:

Further resources on this subject:
- Role of AngularJS? [article]
- AngularJS Project [article]
- Creating Our First Animation AngularJS [article]