How-To Tutorials - Web Development

1802 Articles

Tips and Tricks

Packt
09 May 2013
7 min read
(For more resources related to this topic, see here.)

Adding more template files to your theme

Let's say our site needed to display posts from a specific category differently from the rest of the site, or we needed the home page to work differently, or maybe we wanted to have more control over how search results or 404 pages were displayed. With template files, we can do all that.

A search.php file for search results

WordPress handles search results pretty well already. Let's see what's displayed in our theme if we try to search for "example post" (note that we've now added a Search widget to the right-hand side footer widget area to make this possible). As you can see, it's using our index.php template file, so the heading reads This Month:. We'd rather make it more obvious that these are search results.

Now let's see what happens if we search for something that can't be found. Again, the heading isn't great. Our theme gives the user a message telling them what's happened (which is coded into index.php, as we'll see), but we could beef that up a bit, for example by adding a list of the most recent posts.

Time for action – creating a search.php template file

Let's create our search.php file and add some code to get it working in the way we'd like it to:

1. In your theme folder, make a copy of index.php and call it search.php.
2. Find the following code near the top of the file:
   <h2 class="thisMonth embossed" style="color:#fff;">This Month:</h2>
3. Edit the contents of the h2 element so the line of code now reads:
   <h2 class="thisMonth embossed" style="color:#fff;">Search results:</h2>
4. Find the loop. This will begin with:
   <?php if (have_posts()) : ?><?php while (have_posts()) : the_post(); ?>
5. The first section of the loop displays any posts found by the search; leave this as it is.
6. The second section of the loop specifies what happens if no search results are found. It's in the following lines of code:
   <?php else : ?><h2 class="center">Not Found</h2><p class="center">Sorry, but you are looking for something that isn't here.</p><?php get_search_form(); ?><?php endif; ?>
7. Underneath the line that reads <?php get_search_form(); ?> and before <?php endif; ?>, add the following lines of code:
   <h3>Latest articles:</h3><?php $query = new WP_Query( array ( 'post_type' => 'post', 'post_count' => '5' ) ); while ( $query->have_posts() ) : $query->the_post(); ?><ul><li><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></li></ul><?php endwhile; ?>
8. Save your search.php file and try searching for something which isn't included in the site.

What just happened?

We created a new template file called search.php, which will be used to display the results of a site search. We then edited the heading to make it clearer, and added some code to display the latest posts if the search had no results. We actually did something pretty advanced: we added a second loop inside our original loop. Let's have a look at the code we added after the search form.

The call $query = new WP_Query() runs a new query on the database, based on the WordPress WP_Query class, which is what you should use when running a loop inside the main loop. We gave WP_Query the following parameters:

'post_type' => 'post' – this ensures that the query will only look for posts, not for any other kind of content.
'post_count' => '5' – this tells WordPress how many posts to show. Five, in this case.
We then output the title of each post with the the_title() template tag, which we've already used higher up in the loop to display post titles. We wrapped this in a link inside a list item. The link uses the_permalink() to link to the blog post whose title is displayed. This is very similar to the main loop. Finally, we added endwhile to stop this loop. This doesn't replace the endwhile at the end of our main loop, which is higher up in the file. For more on WP_Query and how to use it to create multiple loops, see http://codex.wordpress.org/Class_Reference/WP_Query.

Let's have a look at what our users will see when they do a search now. First, a successful search. Next, an unsuccessful search. So that's how we set up a template file for search results. Our search page is only displaying two posts because that's all we have on our site. If there were more than five, it would just display the five most recent. Now let's set one up to display some pages differently.

Creating a custom page template

In many themes, all pages will need the same basic layout and content, with the same sidebars and footer and the same styling. But sometimes you may need some pages to look different. For example, you might want to use different sidebars on different pages, or you might want a different layout. Here we'll look at the second of those two options.

Time for action – creating a custom page template

Imagine that you have some pages containing a lot of content which you want to display across the full width of the page, without the sidebar getting in the way. The way to handle this is to create a page template which doesn't include the sidebar, and then select that page template when you're creating or editing those pages. Let's try it out.

1. In the same folder as your other theme files, make a copy of your page.php file and call it page-no-sidebar.php.
2. At the very top of the file, above the line reading <?php get_header(); ?>, insert the following code:
   <?php /* Template Name: Full width page without sidebar */ ?>
3. Find the following line of code:
   <div class="content left two-thirds">
4. Edit it so it reads:
   <div class="content left full">
5. Now find the line that reads <?php get_sidebar(); ?> and delete it.
6. Save your file.

What just happened?

We created a new template file called page-no-sidebar.php, and edited it to display content differently from our default page template:

We edited the classes for our .content div, using the object-oriented approach to styling used by the layout-core.css file. This applies the styling for the .full class to this div, so that it displays at full width instead of two-thirds of its containing element.

We removed the line calling the get_sidebar include, so that the sidebar won't be displayed on any pages using this template.

The lines we added at the top are essential for WordPress to pick up on our page template and make it available in the WordPress admin. Page editors will see a drop-down list of page templates available, and the name we defined in the template is what they'll see in that list, as shown in the following screenshot. As you can see, in the Page Attributes box on the right-hand side, a new select box has appeared called Template. Our new page template is listed in that select box, along with Default Template, which is page.php. Now we can try it out by assigning this template to a page and seeing how it looks.
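The snippet in step 7 works, but two details are worth tightening if you adapt it: as printed it opens a new <ul> for every post, and WP_Query's documented parameter for limiting results is posts_per_page rather than post_count. Below is a minimal, hedged rearrangement of the same fallback block; the $latest variable name and the wp_reset_postdata() call are additions of this sketch, not part of the chapter's code.

```php
<?php else : ?>
  <h2 class="center">Not Found</h2>
  <p class="center">Sorry, but you are looking for something that isn't here.</p>
  <?php get_search_form(); ?>

  <h3>Latest articles:</h3>
  <ul>
  <?php
  // Secondary query: fetch the five most recent posts.
  // 'posts_per_page' is WP_Query's documented parameter for limiting results.
  $latest = new WP_Query( array(
      'post_type'      => 'post',
      'posts_per_page' => 5,
  ) );
  while ( $latest->have_posts() ) : $latest->the_post(); ?>
      <li><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></li>
  <?php endwhile;
  // Restore the main loop's global post data after the secondary query.
  wp_reset_postdata(); ?>
  </ul>
<?php endif; ?>
```

Calling wp_reset_postdata() after a secondary WP_Query restores the main loop's global post object, which matters if anything else in the template runs after this block.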

Converting tables into graphs (Advanced)

Packt
06 May 2013
7 min read
(For more resources related to this topic, see here.)

Getting ready

We maintained the same structure for our table; however, this time we do not load it via AJAX. So the markup looks as follows:

<table id="dynamicTable" class="table"> <thead> <tr> <th>Reviews</th> <th>Top</th> <th>Rolling Stones</th> <th>Rock Hard</th> <th>Kerrang</th> </tr> </thead> <tbody> <tr> <th>Ac/Dc</th> <td>10</td> <td>9</td> <td>8</td> <td>9</td> </tr> <tr> <th>Queen</th> <td>9</td> <td>6</td> <td>8</td> <td>5</td> </tr> <tr> <th>Whitesnake</th> <td>8</td> <td>9</td> <td>8</td> <td>6</td> </tr> <tr> <th>Deep Purple</th> <td>10</td> <td>6</td> <td>9</td> <td>8</td> </tr> <tr> <th>Black Sabbath</th> <td>10</td> <td>5</td> <td>7</td> <td>8</td> </tr> </tbody> </table>

How to do it...

Let's see what we need to do:

1. Add a div right on top of our table with an ID of graph:
   <div id="graph"></div>
2. We will use a jQuery plugin called Highcharts, which can be downloaded for free from http://www.highcharts.com/products/highcharts. Add the following script to the bottom of our document:
   <script src = "highcharts.js"></script>
3. Add a simple script to initialize the graph as follows:
   var chart; Highcharts.visualize = function(table, options) { // the data series options.series = []; var l = options.series.length; options.series[l] = { name: $('thead th:eq('+(l+1)+')', table).text(), data: [] }; $('tbody tr', table).each( function(i) { var tr = this; var th = $('th', tr).text(); var td = parseFloat($('td', tr).text()); options.series[0].data.push({name:th,y:td}); }); chart = new Highcharts.Chart(options); } // On document ready, call visualize on the datatable. $(document).ready(function() { var table = document.getElementById('dynamicTable'), options = { chart: { renderTo: 'graph', defaultSeriesType: 'pie' }, title: { text: 'Review Notes from Metal Mags' }, plotOptions: { pie: { allowPointSelect: true, cursor: 'pointer', dataLabels: { enabled: false }, showInLegend: true } }, tooltip: { pointFormat: 'Total: <b>{point.percentage}%</b>', percentageDecimals: 1 } }; Highcharts.visualize(table, options); });

Many people choose to hide the div containing the table on smaller devices and show only the graph. Since we have already optimized our table, and depending on the amount of data, there is no problem with doing that; the choice is yours. Now when we look at the browser, we can view both the table and the graph as shown in the following screenshot: Browser screenshot at 320px. The Highcharts plugin renders with excellent quality in all browsers and works with SVG; it is compatible with iPad, iPhone, and IE 6.

How it works...

The plugin can generate the graph from a single data array alone, but here, following its documented usage step by step, we have created the preceding code to generate the graph from a previously created table. We locate the table by its ID, dynamicTable, and read its contents through the following function:

$('tbody tr', table).each( function(i) { var tr = this; var th = $('th', tr).text(); var td = parseFloat($('td', tr).text()); options.series[0].data.push({name:th,y:td}); });

In the plugin settings, we set the graph div to receive the graph after it is rendered by the script. We also add a pie type and a title for our graph:

options = { chart: { renderTo: 'graph', defaultSeriesType: 'pie' }, title: { text: 'Review Notes from Metal Mags' },

There's more...

We can hide the table using a media query so that only the graph appears.
Remember that it just hides the fact and does not prevent it from being loaded by the browser; however we still need it to build the graph. For this, just apply display none to the table inside the breakpoint: @media only screen and (max-width: 768px) { .table { display: none; } } Browser screenshot at 320px, without the table Merging data – numbers and text (Advanced) We introduce an alternative based on CSS3 for dealing with tables containing text and numbers. Getting ready Tables are used for different purposes, we will see an example where our data is not a data table. (Code Example: Chapter07_Codes_1 ) Browser screenshot at 1024px Although our table did not have many columns, showing it on a small screen is not easy. Hence we will progressively show the change in the table by subtracting the width of the screen. How to do it... Note that we have removed the .table class so this time apply the style directly in the table tags, see the following steps: Let's use a simple table structure as we saw before. Add some CSS3 to make some magic with our selectors. Set our breakpoints to two sizes. <table> <thead> <tr> <th>CONTACT</th> <th scope="col">Manager</th> <th scope="col">Label</th> <th scope="col">Phone</th> </tr> </thead> <tbody> <tr> <th scope="row">Black Sabbath</th> <td>Richard Both</td> <td>Atlantic</td> <td>+1 (408) 257-1500 </td> </tr> <tr> <th scope="row">Ac/DC</th> <td>Paolla Matazo</td> <td>Sony</td> <td>+1 (302) 236-0800</td> </tr> <tr> <th scope="row">Queen</th> <td>Suzane Yeld</td> <td>Warner</td> <td>+1 (103) 222-6754</td> </tr> <tr> <th scope="row">Whitesnake</th> <td>Paul S. Senne</td> <td>Vertigo</td> <td>+1 (456) 233-1243</td> </tr> <tr> <th scope="row">Deep Purple</th> <td>Ian Friedman</td> <td>CosaNostra</td> <td>+1 (200) 255-0066</td> </tr> </tbody> </table> Applying the style as follows: table { width: 100%; background-color: transparent; border-collapse: collapse; border-spacing: 0; background-color: #fff } th { text-align: left; } td:last-child, th:last-child { text-align:right; } td, th { padding: 6px 12px; } tr:nth-child(odd), tr:nth-child(odd) { background: #f3f3f3; } tr:nth-child(even) { background: #ebebeb; } thead tr:first-child, thead tr:first-child { background: #000; color:white; } table td:empty { background:white; } We use CSS3 pseudo-selectors here again to help in the transformation of the table. And the most important part, the Media Queries breakpoints: @media (max-width: 768px) { tr :nth-child(3) {display: none;} } @media (max-width: 480px) { thead tr:first-child {display: none;} th {display: block} td {display: inline!important;} } When the resolution is set to 768px, we note that the penultimate column is removed. This way we keep the most relevant information on the screen. We have hidden the information less relevant to the subject. And when we decrease further, we have the data distributed as a block. Summary In this article, we saw an alternative solution combining the previous recipes with another plugin for rendering graphics. Resources for Article : Further resources on this subject: MySQL 5.1 Plugin: HTML Storage Engine—Reads and Writes [Article] Creating Accessible Tables in Joomla! [Article] HTML5: Generic Containers [Article]
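As a closing, hedged aside to the graph recipe: the chart is sized once when it is rendered into #graph, so rotating a phone or resizing the window can leave it too large or too small for its container. A small resize handler along these lines keeps it in step with its container; the 200 ms debounce and the 400 px height are arbitrary choices of this sketch, while setSize() is part of the Highcharts chart API and jQuery is assumed to be loaded, as in the recipe.

```js
// Keep the rendered chart sized to its container when the viewport changes.
// Assumes the global `chart` variable created by Highcharts.visualize above.
var resizeTimer;
$(window).on('resize', function () {
  clearTimeout(resizeTimer);
  // Debounce: only resize once the user has stopped resizing/rotating.
  resizeTimer = setTimeout(function () {
    if (chart) {
      // setSize(width, height, animate) redraws the chart at the new size.
      chart.setSize($('#graph').width(), 400, false);
    }
  }, 200);
});
```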

Getting started with Modernizr using PHP IDE

Packt
30 Apr 2013
5 min read
(For more resources related to this topic, see here.)

From the Modernizr website:

Modernizr is a small JavaScript library that detects the availability of native implementations for next-generation web technologies, i.e. features that stem from the HTML5 and CSS3 specifications. Many of these features are already implemented in at least one major browser (most of them in two or more), and what Modernizr does is, very simply, tell you whether the current browser has this feature natively implemented or not.

Basically, with this library we can see if the user's browser supports certain features you wish to use on your site. This is important to do, as unfortunately not every browser is created the same. Each one has its own implementation of the HTML5 standard, so some features may be available on Google Chrome but not on Internet Explorer. Using Modernizr is a better alternative to the standard, but unreliable, user agent (UA) string checking. Let's begin.

Getting ready

Go ahead and create a new Web Project in Aptana Studio. Once it is set up, add a new folder to the project named js. The next thing we need to do is download the development version of Modernizr from the Modernizr download page (http://modernizr.com/download/). You will see options to build your own package. The development version will do until you are ready for production use. As of this writing, the latest version is 2.6.2 and that is the version we will use. Place the downloaded file into the js folder.

How to do it...

Follow these steps:

1. For this exercise, we will simply do a browser test to see if your browser currently supports the HTML5 Canvas element. Create a JavaScript file named canvas.js and add the following code:
   if (Modernizr.canvas) { var c=document.getElementById("canvastest"); var ctx=c.getContext("2d"); // Create gradient var grd=ctx.createRadialGradient(75,50,5,90,60,100); grd.addColorStop(0,"black"); grd.addColorStop(1,"white"); // Fill with gradient ctx.fillStyle=grd; ctx.fillRect(10,10,150,80); alert("We can use the Canvas element!"); } else { alert("Canvas Element Not Supported"); }
2. Now add the following to index.html:
   <!DOCTYPE html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Canvas Support Test</title> <script src = "js/modernizr-latest.js" type="text/javascript"></script> </head> <body> <canvas id="canvastest" width="200" height="100" style="border:1px solid #000000">Your browser does not support the HTML5 canvas tag.</canvas> <script src = "js/canvas.js"> </script> </body> </html>
3. Let's preview the code and see what we got. The following screenshot is what you should see:

How it works...

What did we just do? Well, let's break it down:

<script src = "js/modernizr-latest.js" type="text/javascript"></script>

Here, we are calling in the Modernizr library that we downloaded previously. Once you do that, Modernizr does some things to your page.
It will redo your opening <html> tag to something like the following (from Google Chrome): <html class=" js flexbox flexboxlegacy canvas canvastext webgl notouch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths"> This is all the features your browser supports that Modernizr was able to detect. Next up we have our <canvas> element: <canvas id="canvastest" width="200" height="100" style="border:1px solid #000000">Your browser does not support the HTML5 canvas tag.</ canvas> Here, we are just forming a basic canvas that is 200 x 100 with a black border going around it. Now for the good stuff in our canvas.js file, follow this code snippet: <script> if (Modernizr.canvas) { alert("We can use the Canvas element!"); var c=document.getElementById("canvastest"); var ctx=c.getContext("2d"); // Create gradient var grd=ctx.createRadialGradient(75,50,5,90,60,100); grd.addColorStop(0,"black"); grd.addColorStop(1,"white"); // Fill with gradient ctx.fillStyle=grd; ctx.fillRect(10,10,150,80); } else { alert("Canvas Element Not Supported"); } </script> In the first part of this snippet, we used an if statement to see if the browser supports the Canvas element. If it does support canvas, then we are displaying a JavaScript alert and then filling our canvas element with a black gradient. After that, we have our else statement that will alert the user that canvas is not supported on their browser. They will also see the Your browser does not support the HTML5 canvas tag message. That wasn't so bad, was it? There's more... I highly recommend reading over the documentation on the Modernizr website so that you can see all the feature tests you can do with this library. We will do a few more practice examples with Modernizr, and of course, it will be a big component of our RESS project later on in the book. Keeping it efficient For a production environment, I highly recommend taking the build-a-package approach and only downloading a script that contains the tests you will actually use. This way your script is as small as possible. As of right now, the file we used has every test in it; some you may never use. So, to be as efficient as possible (and we want all the efficiency we can get in mobile development), build your file with the tests you'll use or may use. Summary This article provided guidelines on creating a new Web Project in Aptana Studio, creating new folder to the project named js, downloading the Development Version of Mondernizr from the Modernizr download page, and placing the downloaded file into the js folder. Resources for Article : Further resources on this subject: Let's Chat [Article] Blocking versus Non blocking scripts [Article] Building Applications with Spring Data Redis [Article]
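The same detect-then-branch pattern used in canvas.js applies to any other property Modernizr exposes. As a rough illustration (not taken from the book), here is how a geolocation check might look; geolocation is one of the classes visible in the rewritten <html> tag shown above, and the fallback behaviour here is an assumption of this sketch.

```js
// Same pattern as canvas.js: detect a feature with Modernizr,
// then branch to native code or a fallback.
if (Modernizr.geolocation) {
  // Native geolocation is available, so ask the browser for a position.
  navigator.geolocation.getCurrentPosition(function (position) {
    console.log('Latitude: ' + position.coords.latitude +
                ', Longitude: ' + position.coords.longitude);
  });
} else {
  // No native support: fall back to something simpler,
  // such as a server-side IP lookup or a manual input field.
  alert('Geolocation is not supported by your browser.');
}
```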

Creating a website with Artisteer

Packt
30 Apr 2013
5 min read
(For more resources related to this topic, see here.) Layout The first thing that we should set up while designing a new website is its width. If you are interested in creating web pages, you probably have a monitor with a large widescreen and good resolution. But we have to remember that not all of your visitors will have such good hardware. All the templates generated by Artisteer are centered, and almost all modern browsers enable you to freely zoom the page. It's far better to let some of your visitors enlarge the site than to make the rest of them use the horizontal scroll bar while reading. The resolution you choose will depend on the target audience of your site. Usually, private computers have better parameters than the typical PCs used for just office work in companies. So if you design a site that you know will be viewed mostly by private individuals, you can choose a slightly wider layout than you might for a typical business site. But you cannot forget that many nonbusiness websites, such as community sites, are often accessed from offices. So what is the answer? In my opinion, a layout with a width of 1,000 pixels is still a good choice for most of the cases. Such width ensures that the site will be displayed correctly on a pretty old, but still commonly used, nonwide 17'' monitor. (The typical resolution for this hardware is 1,024 x 768 and such a layout will fill the whole screen.) As more and more users have now started using computers that are equipped with a far better screen, you can consider increasing the resolution slightly, to, for example, 1,150 pixels. Remember that not every user will visit your site using a desktop. Many laptops, and especially netbooks and tablets, don't have wide screens either. Remember that the width of the page must be a little lower than the total resolution of the screen. You should reserve some space for the vertical scrollbar. We are going to set up the width of our project traditionally to 1,000 pixels. To do this, click on the Layout tab on the ribbon, and next to the Sheet Width button. Choose 1000 pixels from the available options on the list. The Sheet Options window is divided into two areas: on the left you can choose from the values expressed in pixels, while on the right, as a percentage. The percentage value means that the page doesn't have a fixed width, but it will change according to the parameters of the screen it is displayed on (according to the chosen percentage value). Designing layouts with the width defined in percentage might seem to be a great idea; and indeed, this technique, when properly used, can lead to great results. But you have to remember, that in such a case, all page elements have to be similarly prepared in order, to be able to adapt to the dynamically changing width of the site. It is far simpler to achieve good results for the layout with fixed values (expressed in pixels). It is a common rule while working with Artisteer that after clicking on a button on the ribbon, you get the list containing the most commonly used standard values. If you need a custom value, however, you can click on the button located at the bottom of the list to go to a window where you can freely set up and choose the required value. For example, while choosing the width of a layout, clicking on the More Sheet Widths... button (located just under the list) will lead you to a window where you can set up the required width with an accuracy up to 1 pixel. 
We can set the required value in three ways: We can click on the up and down arrows that are located on the right side of the field. We can move the mouse cursor on the field and use the slider that appears. We can click on the field. The text cursor will appear. Then we can type the required value using the keyboard. For me, this is the most comfortable way, especially since the slider's minimal progress is more than 1. Panel mode versus windows mode If you look carefully at the displayed windows, on the bottom-right corner you will see a panel mode button. This button switches Artisteer's interface between panel mode and windows mode. In the windows mode, the advanced settings are displayed in windows. In the panel mode, the advanced settings are displayed on the side panel located on the right side of Artisteer's window. If you are using a wide screen, you may find the panel mode to be more comfortable. Its advantage is that the side panel doesn't cover anything on your project, so you have a better view to observe the changes. Such a change is persistent and if you switch to the panel mode, all the advanced settings will be displayed in the right panel, as long as you decide to go back into the windows mode. To reverse, find and click on the icon located in the top-right corner of the side panel (just next to the x button that closes the panel). Summary This article has covered some features exclusive to Artisteer. It has also explained a brief process of how to create stunning templates for websites using Artisteer. Resources for Article : Further resources on this subject: Creating and Using Templates with Cacti 0.8 [Article] Using Templates to Display Channel Content in ExpressionEngine [Article] Working with Templates in Apache Roller 4.0 [Article]
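Artisteer writes all of the underlying CSS for you, but if you are curious what the fixed-width versus percentage-width choice boils down to in hand-written CSS, a rough equivalent might look like the following. The class names are invented for illustration and are not what Artisteer actually generates.

```css
/* Illustrative only - Artisteer generates its own markup and class names. */

/* Fixed-width sheet: always 1000px wide, centered in the viewport. */
.sheet-fixed {
  width: 1000px;
  margin: 0 auto; /* centers the layout, as Artisteer templates are centered */
}

/* Percentage-based sheet: adapts to the screen, so every inner element
   must also be prepared to cope with a changing width. */
.sheet-fluid {
  width: 90%;
  max-width: 1150px;
  margin: 0 auto;
}
```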

Basic use of Local Storage

Packt
26 Apr 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

For this article, all you need is your browser and favorite text editor.

How to do it...

Perform the following steps:

1. Let's begin by creating a blank document in your text editor. Then, add the following code and save it as localdemo.html:
   <!DOCTYPE html> <html> <head> <script src = "http://code.jquery.com/jquery-1.8.1.min.js"></script> <script type="text/javascript"> </script> </head> <body> </body> </html>
2. In between the <script> tags, add the following function. This handles storing the information within the browser:
   <script type="text/javascript"> function storeItem() { var item = $('#item').val(); var items = localStorage.getItem('myItems'); if (items != null) { items = JSON.parse(items); } else { items = new Array(); } items.push(item); localStorage.setItem('myItems', JSON.stringify(items)); refresh(); } </script>
3. We need to add another function to retrieve information and refresh the content displayed on screen. So go ahead and update the script as highlighted:
   <script type="text/javascript"> function storeItem() { var item = $('#item').val(); var items = localStorage.getItem('myItems'); if (items != null) { items = JSON.parse(items); } else { items = new Array(); } items.push(item); localStorage.setItem('myItems', JSON.stringify(items)); refresh(); } function refresh() { var items = localStorage.getItem('myItems'); var ul = $('ul'); ul.html(''); if (items != null) { items = JSON.parse(items); $(items).each(function (index, data) { ul.append('<li>' + data + '</li>'); }); } } $(function () { refresh(); }); </script>
4. We finish by adding a basic form—while the purists amongst you will notice that it doesn't have all of the proper form tags, it is enough to illustrate how this demo works. Add the following code snippet just above the closing </body> tag:
   Enter item: <input type="text" id="item" /> <input type="button" value="store" onclick="storeItem()" /> <br /> <ul></ul>
5. Crack open your browser and preview the results. Here's a screenshot of what you should see, with some example values already entered:

How it works...

Now that we've seen Local Storage in action, let's take a look at how it works in detail. HTML5 Local Storage works on the principle of named key/value pairs, where you store information using a named key and retrieve it by calling that named key. Everything is stored locally on the user's PC; this cuts down the need to retrieve information from the server, thereby acting as a form of caching.

You may have noticed that we've used jQuery in this article—basic use of Local Storage (and Session Storage) doesn't necessarily need jQuery; you could use pure JavaScript if you prefer. It all depends on your requirements; if you are already using jQuery in your pages, for example, you may prefer to use it over plain JavaScript. (You will see I have used a mix of both throughout this book, to show you how you can use either jQuery or JavaScript.)

In this article, we've used jQuery to work with Local Storage; if you take a look at the code, you will see two lines of particular importance:

var items = localStorage.getItem('myItems'); localStorage.setItem('myItems', JSON.stringify(items));

These two handle the retrieval and setting of values respectively. In this demo, we begin by either fetching the contents of any existing stored information and inserting it into an array, or creating a new array if nothing exists within the store. We then use JSON.stringify() to convert information from the form into a string, push this into the storage, and then refresh the page so that you can see the updated list. To get the information back, we simply repeat the same steps, but in reverse. The beauty of using JSON as part of storing information in this way is that you are not entirely limited to just plain text; you can store some other things in the Local Storage area, as we will see later in this book.

There's more...

By now, you will start to see that using Local Storage works very much in the same way that cookies do—indeed some people refer to Local Storage as "cookies on steroids". This said, there are still some limitations that you need to be aware of when using Local Storage, such as the following:

Local Storage will only support text as a format and is set to a suggested arbitrary limit of 5 MB, although this is inconsistent across browsers. If you exceed this, the QUOTA_EXCEEDED_ERR error is thrown. At the time of writing this book, there is no support built in for requesting more space. Some browsers such as Opera will allow the user to control each site's quota, but this is a purely user-based action.

Web Storage is no more secure than cookies; although use of the HTTPS protocol can resolve a lot of security issues, it is still up to you as a developer to ensure that sensitive information (such as passwords) is not sent to or stored locally on the client using Web Storage.

With careful use, we can take advantage of the ability to store relevant information on a user's PC, and avoid the need to push it back to the server. Once the information has been stored, there will be occasions when you will need to view the raw information from within your browser—this is easy enough to do, although the method varies from browser to browser, which we will see as part of the next article.

Summary

In this article we discussed basic use of HTML5 Local Storage.

Resources for Article :

Further resources on this subject: Blocking versus Non blocking scripts [Article] Building HTML5 Pages from Scratch [Article] HTML5 Games Development: Using Local Storage to Store Game Data [Article]
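Since the recipe above notes that exceeding the roughly 5 MB quota throws QUOTA_EXCEEDED_ERR, one practical refinement is to wrap writes in a try/catch. The helper below is a sketch only; safeSetItem is an invented name, and the exact exception type varies between browsers.

```js
// Hedged sketch: guard against a full Local Storage area.
// Treat any exception from setItem as "the store is full or unavailable".
function safeSetItem(key, value) {
  try {
    localStorage.setItem(key, value);
    return true;
  } catch (e) {
    // Older WebKit builds throw QUOTA_EXCEEDED_ERR; other browsers throw
    // a generic DOMException when the suggested ~5 MB limit is exceeded.
    console.warn('Could not write to Local Storage:', e);
    return false;
  }
}

// Usage in the demo: store the items array only if there is room for it.
// `items` is the array built inside storeItem() in the recipe above.
if (!safeSetItem('myItems', JSON.stringify(items))) {
  alert('Local Storage is full - the item list was not saved.');
}
```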

So, what is EaselJS?

Packt
18 Apr 2013
7 min read
(For more resources related to this topic, see here.) EaselJS is part of the CreateJS suite, a JavaScript library for building rich and interactive experiences, such as web applications and web-based games that run on desktop and mobile web browsers. The standard HTML5 canvas' syntax can be very hard for beginners, especially if you need to animate and draw many objects. EaselJS greatly simplifies application development in HTML5 canvas using a syntax and an architecture very similar to the ActionScript 3.0 language. As a result, Flash/Flex developers will immediately feel at home, but it's very easy to learn even if you've never opened Flash in your life. CreateJS is currently supported by Adobe, AOL, and Microsoft, and it's developed by Grant Skinner, an internationally recognized leader in the field of rich Internet application development. Thanks to EaselJS, you can easily manage many types of graphic elements (vector shapes, bitmap, spritesheets, texts, and HTML elements) and it also supports touch events, animations, and many other interesting features in order to quickly develop cross-platform HTML5 games and applications, providing a look and feel as well as a behavior very similar to native applications for iOS and Android. Following are the five reasons to choose EaselJS and HTML5 canvas to build your applications: Cross-platform — Using this technology will help you create HTML5 canvas applications that will be supported from: Desktop browsers such as Chrome, Safari, Firefox, Opera, and IE9+ iPhone, iPad, and iPod 4+ (iOS 3.2+) Android smartphones and tablets (OS 2.1+) BlackBerry browser (7.0 and 10.0+) Every HTML5 browser (go to http://caniuse.com/canvas for more information) The following screenshot shows how the same application can run on different devices and resolutions: Easy Integration — EaselJS applications run on browsers and finally can be seen by almost every desktop and mobile user without any plugin installed. The HTML5 canvas element behaves just like any other HTML element. It can overlap other elements or become part of an existing HTML page. So, your canvas application can fill the entire browser area or just a small part of an existing HTML page. You can create amazing image galleries for your sites, product configurators, microsites, games, and interactive banners, and replicate a lot of features that used to be created with Adobe Flash or Apache Flex. One source code — A single codebase can be used to create a responsive application that works on almost all devices and resolutions. If you've ever created a liquid or fluid layout using HTML, Flash, or Flex then you already know this concept. As shown in the previous screenshot, you can also adapt UI and change behaviors according to the size of the device being used. No creativity limits — As in Flash, you can now forget HTML DOM compatibility issues. When you display a graphic element using EaselJS, you can be sure it will be placed at the same position in every browser, desktop and mobile (except for texts because every browser uses a different font renderer, and there may be some minor differences between them and of course Internet Explorer 8 and lower versions that do not support HTML5 syntax). 
Furthermore the CreateJS suite includes a lot of additional tools helping developers and designers to create amazing stuff: TweenJS: An useful tween engine to create runtime animations PreloadJS: To load assets and create nice preloaders Zoë: To convert SWF (Adobe Flash native web format) into spritesheets and JSON for EaselJS SoundJS: A library to play sounds (this topic is not covered in this book) CreateJS Toolkit for Flash CS6: To export Flash timeline animations in an EaselJS-compatible format Freedom — Developers can now create and publish games and applications skipping the App Store submission process. Of course, the performance of HTML5 applications are not comparable to those achieved by the native applications but can still be an alternative solution to many needs. From a business perspective, it's a great opportunity because it is now possible to avoid following the Apple guidelines that usually don't allow publishing applications that are primarily marketing material or advertisements, duplicated applications or applications that are not very useful, or simply websites bundled as applications. Users can now have a cool touch experience directly while navigating through a website, avoiding having to download, install, and open a native application. Furthermore, developers can also use PhoneGap (http://www.phonegap.com) and many other technologies to convert their HTML applications in native applications for iOS, Android, Windows Phones, BlackBerry, Bada, or WebOS. After the previous introduction you will be guided through the process of downloading, installing and configuring EaselJS in your local machine (this part of the book is not copied in this article). The book continues with the traditional "Hello World" example, as shown in the next paragraph: Quick start — creating your first canvas application Now we'll see how to create our first HTML5 canvas application with EaselJS. Step 1 — creating the HTML template Take a look at the following code that represents the boilerplate we'll use: <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title>EaselJS Starter: Template Page</title> <script src = "lib/easeljs-0.6.0.min.js"></script> <script> // Your code here function init() { } </script> </head> <body onload="init();" style="background-color:# ccc "> <h1> EaselJS Starter: Template page </h1> <canvas id="mycanvas" width="960" height="450" style="background-color:#fff"></canvas> </body> </html> The following are the most important steps of the previous code: Define an HTML5 <canvas> object with a width of 960 pixels and a height of 450 pixels. This represents the drawing area of your EaselJS application. When the page is completely loaded, the onload event is fired and the init() function is called. The <script> block is the place where you have to add the code but you should always wait for the onload events before you do anything. Set the <body> and <canvas> background CSS styles. The result is a white container inside an HTML page, as shown in the following screenshot: Step 2 – creating a "Hello World" example Now replace the init() function with the following code: function init() { var canvas = document.getElementById("mycanvas"); var stage = new createjs.Stage(canvas); var text = new createjs.Text("Hello World!", "36px Arial", "#777"); stage.addChild(text); text.x = 360; text.y = 200; stage.update(); } Congrats! You have created your first canvas application! 
The following screenshot shows the output of the previous code, with the text at the center of the canvas:

The following are the most important steps of the previous code:

Use the getElementById method to get a canvas reference.
In order to use EaselJS, create a Stage object, passing the canvas reference as a parameter.
Create a new Text object and add it to the stage.
Assign values for the x and y coordinates in order to see the text at the center of the stage.
Call the update() method on the stage to render it to the canvas.

The Stage object represents the root level of the display list, which is the main container for all the other graphic elements. For now you only need to know that every graphic element must be added to the Stage, and that every time you need to update your content you have to refresh the stage by calling the update() method.

Summary

After the previous "Hello World" example, the book will help you learn the most important EaselJS topics with practical examples, technical information, and a lot of tips and tricks, building a small interactive advertising web application. By the end of the book you will be able to draw graphic primitives and text, load and preload images, handle mouse events, add animations and spritesheets, use TweenJS, PreloadJS, and Zoë, and optimize your code for desktop and mobile devices.

This article explained what EaselJS actually is, what you can do with it, and why it's so great. It also showed how to create your first HTML5 canvas application, "Hello World".

Resources for Article :

Further resources on this subject: HTML5: Developing Rich Media Applications using Canvas [Article] HTML5 Games Development: Using Local Storage to Store Game Data [Article] HTML5: Getting Started with Paths and Text [Article]
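To go one small step beyond "Hello World", the sketch below replaces the init() function from the template page, adds a vector shape, and animates it with the Ticker, which the book covers in more depth later. Shape, Graphics, and Ticker are part of EaselJS 0.6.0, but the sizes, colors, and speed used here are arbitrary illustration values rather than code from the book.

```js
function init() {
  var canvas = document.getElementById("mycanvas");
  var stage = new createjs.Stage(canvas);

  // A vector circle drawn with the Graphics API (calls are chainable).
  var circle = new createjs.Shape();
  circle.graphics.beginFill("#777").drawCircle(0, 0, 30);
  circle.x = 0;
  circle.y = 225;
  stage.addChild(circle);

  // The Ticker fires a "tick" event many times per second;
  // updating the stage on every tick produces the animation.
  createjs.Ticker.addEventListener("tick", function () {
    circle.x = (circle.x + 4) % canvas.width; // wrap around the right edge
    stage.update();
  });
}
```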

The NGINX HTTP Server

Packt
18 Apr 2013
28 min read
(For more resources related to this topic, see here.) NGINX's architecture NGINX consists of a single master process and multiple worker processes. Each of these is single-threaded and designed to handle thousands of connections simultaneously. The worker process is where most of the action takes place, as this is the component that handles client requests. NGINX makes use of the operating system's event mechanism to respond quickly to these requests. The NGINX master process is responsible for reading the configuration, handling sockets, spawning workers, opening log files, and compiling embedded Perl scripts. The master process is the one that responds to administrative requests via signals. The NGINX worker process runs in a tight event loop to handle incoming connections. Each NGINX module is built into the worker, so that any request processing, filtering, handling of proxy connections, and much more is done within the worker process. Due to this worker model, the operating system can handle each process separately and schedule the processes to run optimally on each processor core. If there are any processes that would block a worker, such as disk I/O, more workers than cores can be configured to handle the load. There are also a small number of helper processes that the NGINX master process spawns to handle dedicated tasks. Among these are the cache loader and cache manager processes. The cache loader is responsible for preparing the metadata for worker processes to use the cache. The cache manager process is responsible for checking cache items and expiring invalid ones. NGINX is built in a modular fashion. The master process provides the foundation upon which each module may perform its function. Each protocol and handler is implemented as its own module. The individual modules are chained together into a pipeline to handle connections and process requests. After a request is handled, it is then passed on to a series of filters, in which the response is processed. One of these filters is responsible for processing subrequests, one of NGINX's most powerful features. Subrequests are how NGINX can return the results of a request that differs from the URI that the client sent. Depending on the configuration, they may be multiply nested and call other subrequests. Filters can collect the responses from multiple subrequests and combine them into one response to the client. The response is then finalized and sent to the client. Along the way, multiple modules come into play. See http://www.aosabook.org/en/nginx.html for a detailed explanation of NGINX internals. We will be exploring the http module and a few helper modules in the remainder of this article. The HTTP core module The http module is NGINX's central module, which handles all interactions with clients over HTTP. We will have a look at the directives in the rest of this section, again divided by type. The server The server directive starts a new context. We have already seen examples of its usage throughout the book so far. One aspect that has not yet been examined in-depth is the concept of a default server. A default server in NGINX means that it is the first server defined in a particular configuration with the same listen IP address and port as another server. A default server may also be denoted by the default_server parameter to the listen directive. 
The default server is useful to define a set of common directives that will then be reused for subsequent servers listening on the same IP address and port: server { listen 127.0.0.1:80; server_name default.example.com; server_name_in_redirect on; } server { listen 127.0.0.1:80; server_name www.example.com; } In this example, the www.example.com server will have the server_name_in_redirect directive set to on as well as the default.example.com server. Note that this would also work if both servers had no listen directive, since they would still both match the same IP address and port number (that of the default value for listen, which is *:80). Inheritance, though, is not guaranteed. There are only a few directives that are inherited, and which ones are changes over time. A better use for the default server is to handle any request that comes in on that IP address and port, and does not have a Host header. If you do not want the default server to handle requests without a Host header, it is possible to define an empty server_name directive. This server will then match those requests. server { server_name ""; } The following table summarizes the directives relating to server: Table: HTTP server directives Directive Explanation port_in_redirect Determines whether or not the port will be specified in a redirect issued by NGINX. server Creates a new configuration context, defining a virtual host. The listen directive specifies the IP address(es) and port(s); the server_name directive lists the Host header values that this context matches. server_name Configures the names that a virtual host may respond to. server_name_in_redirect Activates using the first value of the server_name directive in any redirect issued by NGINX within this context. server_tokens Disables sending the NGINX version string in error messages and the Server response header (default value is on). Logging NGINX has a very flexible logging model . Each level of configuration may have an access log. In addition, more than one access log may be specified per level, each with a different log_format. The log_format directive allows you to specify exactly what will be logged, and needs to be defined within the http section. The path to the log file itself may contain variables, so that you can build a dynamic configuration. The following example describes how this can be put into practice: http { log_format vhost '$host $remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; log_format downloads '$time_iso8601 $host $remote_addr ' '"$request" $status $body_bytes_sent $request_ time'; open_log_file_cache max=1000 inactive=60s; access_log logs/access.log; server { server_name ~^(www.)?(.+)$; access_log logs/combined.log vhost; access_log logs/$2/access.log; location /downloads { access_log logs/downloads.log downloads; } } } The following table describes the directives used in the preceding code: Table: HTTP logging directives Directive Explanation access_log Describes where and how access logs are to be written. The first parameter is a path to the file where the logs are to be stored. Variables may be used in constructing the path. The special value off disables the access log. An optional second parameter indicates log_format that will be used to write the logs. If no second parameter is configured, the predefined combined format is used. An optional third parameter indicates the size of the buffer if write buffering should be used to record the logs. 
If write buffering is used, this size cannot exceed the size of the atomic disk write for that filesystem. If this third parameter is gzip, then the buffered logs will be compressed on-the-fly, provided that the nginx binary was built with the zlib library. A final flush parameter indicates the maximum length of time buffered log data may remain in memory before being flushed to disk. log_format Specifies which fields should appear in the log file and what format they should take. See the next table for a description of the log-specific variables. log_not_found Disables reporting of 404 errors in the error log (default value is on). log_subrequest Enables logging of subrequests in the access log (default value is off ). open_log_file_cache Stores a cache of open file descriptors used in access_logs with a variable in the path. The parameters used are: max: The maximum number of file descriptors present in the cache inactive: NGINX will wait this amount of time for something to be written to this log before its file descriptor is closed min_uses: The file descriptor has to be used this amount of times within the inactive period in order to remain open valid: NGINX will check this often to see if the file descriptor still matches a file with the same name off: Disables the cache In the following example, log entries will be compressed at a gzip level of 4. The buffer size is the default of 64 KB and will be flushed to disk at least every minute. access_log /var/log/nginx/access.log.gz combined gzip=4 flush=1m; Note that when specifying gzip the log_format parameter is not optional.The default combined log_format is constructed like this: log_format combined '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; As you can see, line breaks may be used to improve readability. They do not affect the log_format itself. Any variables may be used in the log_format directive. The variables in the following table which are marked with an asterisk ( *) are specific to logging and may only be used in the log_format directive. The others may be used elsewhere in the configuration, as well. Table: Log format variables Variable Name Value $body_bytes_sent The number of bytes sent to the client, excluding the response header. $bytes_sent The number of bytes sent to the client. $connection A serial number, used to identify unique connections. $connection_requests The number of requests made through a particular connection. $msec The time in seconds, with millisecond resolution. $pipe * Indicates if the request was pipelined (p) or not (.). $request_length * The length of the request, including the HTTP method, URI, HTTP protocol, header, and request body. $request_time The request processing time, with millisecond resolution, from the first byte received from the client to the last byte sent to the client. $status The response status. $time_iso8601 * Local time in ISO8601 format. $time_local * Local time in common log format (%d/%b/%Y:%H:%M:%S %z). In this section, we have focused solely on access_log and how that can be configured. You can also configure NGINX to log errors. Finding files In order for NGINX to respond to a request, it passes it to a content handler, determined by the configuration of the location directive. The unconditional content handlers are tried first: perl, proxy_pass, flv, mp4, and so on. 
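As a hedged illustration of that last point, error logging is configured with the separate error_log directive, which takes a path and an optional severity level (debug, info, notice, warn, error, crit, alert, or emerg). The paths and levels below are placeholders, not values from the text.

```nginx
# In the main context of nginx.conf: a global error log at the warn level.
error_log  /var/log/nginx/error.log  warn;

http {
    server {
        server_name www.example.com;
        # A per-server error log can override the global one,
        # here with a more verbose level for this virtual host.
        error_log  /var/log/nginx/www.example.com.error.log  notice;
    }
}
```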
If none of these is a match, the request is passed to one of the following, in order: random index, index, autoindex, gzip_static, static. Requests with a trailing slash are handled by one of the index handlers. If gzip is not activated, then the static module handles the request. How these modules find the appropriate file or directory on the filesystem is determined by a combination of certain directives. The root directive is best defined in a default server directive, or at least outside of a specific location directive, so that it will be valid for the whole server: server { root /home/customer/html; location / { index index.html index.htm; } location /downloads { autoindex on; } } In the preceding example any files to be served are found under the root /home/customer/html. If the client entered just the domain name, NGINX will try to serve index.html. If that file does not exist, then NGINX will serve index.htm. When a user enters the /downloads URI in their browser, they will be presented with a directory listing in HTML format. This makes it easy for users to access sites hosting software that they would like to download. NGINX will automatically rewrite the URI of a directory so that the trailing slash is present, and then issue an HTTP redirect. NGINX appends the URI to the root to find the file to deliver to the client. If this file does not exist, the client receives a 404 Not Found error message. If you don't want the error message to be returned to the client, one alternative is to try to deliver a file from different filesystem locations, falling back to a generic page, if none of those options are available. The try_files directive can be used as follows: location / { try_files $uri $uri/ backups/$uri /generic-not-found.html; } As a security precaution, NGINX can check the path to a file it's about to deliver, and if part of the path to the file contains a symbolic link, it returns an error message to the client: server { root /home/customer/html; disable_symlinks if_not_owner from=$document_root; } In the preceding example, NGINX will return a "Permission Denied" error if a symlink is found after /home/customer/html, and that symlink and the file it points to do not both belong to the same user ID. The following table summarizes these directives: Table: HTTP file-path directives Directive Explanation disable_symlinks Determines if NGINX should perform a symbolic link check on the path to a file before delivering it to the client. The following parameters are recognized: off : Disables checking for symlinks (default) on: If any part of a path is a symlink, access is denied if_not_owner: If any part of a path contains a symlink in which the link and the referent have different owners, access to the file is denied from=part: When specified, the path up to part is not checked for symlinks, everything afterward is according to either the on or if_not_owner parameter root Sets the path to the document root. Files are found by appending the URI to the value of this directive. try_files Tests the existence of files given as parameters. If none of the previous files are found, the last entry is used as a fallback, so ensure that this path or named location exists, or is set to return a status code indicated by  =<status code>. Name resolution If logical names instead of IP addresses are used in an upstream or *_pass directive, NGINX will by default use the operating system's resolver to get the IP address, which is what it really needs to connect to that server. 
This will happen only once, the first time upstream is requested, and won't work at all if a variable is used in the *_pass directive. It is possible, though, to configure a separate resolver for NGINX to use. By doing this, you can override the TTL returned by DNS, as well as use variables in the *_pass directives. server { resolver 192.168.100.2 valid=300s; } Table: Name resolution directives Directive Explanation resolver   Configures one or more name servers to be used to resolve upstream server names into IP addresses. An optional  valid parameter overrides the TTL of the domain name record. In order to get NGINX to resolve an IP address anew, place the logical name into a variable. When NGINX resolves that variable, it implicitly makes a DNS look-up to find the IP address. For this to work, a resolver directive must be configured: server { resolver 192.168.100.2; location / { set $backend upstream.example.com; proxy_pass http://$backend; } } Of course, by relying on DNS to find an upstream, you are dependent on the resolver always being available. When the resolver is not reachable, a gateway error occurs. In order to make the client wait time as short as possible, the resolver_timeout parameter should be set low. The gateway error can then be handled by an error_ page designed for that purpose. server { resolver 192.168.100.2; resolver_timeout 3s; error_page 504 /gateway-timeout.html; location / { proxy_pass http://upstream.example.com; } } Client interaction There are a number of ways in which NGINX can interact with clients. This can range from attributes of the connection itself (IP address, timeouts, keepalive, and so on) to content negotiation headers. The directives listed in the following table describe how to set various headers and response codes to get the clients to request the correct page or serve up that page from its own cache: Table: HTTP client interaction directives Directive Explanation default_type Sets the default MIME type of a response. This comes into play if the MIME type of the file cannot be matched to one of those specified by the types directive. error_page Defines a URI to be served when an error level response code is encountered. Adding an = parameter allows the response code to be changed. If the argument to this parameter is left empty, the response code will be taken from the URI, which must in this case be served by an upstream server of some sort. etag Disables automatically generating the ETag response header for static resources (default is on). if_modified_since Controls how the modification time of a response is compared to the value of the If-Modified-Since request header: off: The If-Modified-Since header is ignored exact: An exact match is made (default) before: The modification time of the response is less than or equal to the value of the If-Modified-Since header ignore_invalid_headers Disables ignoring headers with invalid names (default is on). A valid name is composed of ASCII letters, numbers, the hyphen, and possibly the underscore (controlled by the underscores_in_headers directive). merge_slashes Disables the removal of multiple slashes. The default value of on means that NGINX will compress two or more / characters into one. recursive_error_pages Enables doing more than one redirect using the error_page directive (default is off). types Sets up a map of MIME types to file name extensions. NGINX ships with a conf/mime.types file that contains most MIME type mappings. 
Using include to load this file should be sufficient for most purposes. underscores_in_headers Enables the use of the underscore character in client request headers. If left at the default value off , evaluation of such headers is subject to the value of the ignore_invalid_headers directive. The error_page directive is one of NGINX's most flexible. Using this directive, we may serve any page when an error condition presents. This page could be on the local machine, but could also be a dynamic page produced by an application server, and could even be a page on a completely different site. http { # a generic error page to handle any server-level errors error_page 500 501 502 503 504 share/examples/nginx/50x.html; server { server_name www.example.com; root /home/customer/html; # for any files not found, the page located at # /home/customer/html/404.html will be delivered error_page 404 /404.html; location / { # any server-level errors for this host will be directed # to a custom application handler error_page 500 501 502 503 504 = @error_handler; } location /microsite { # for any non-existent files under the /microsite URI, # the client will be shown a foreign page error_page 404 http://microsite.example.com/404.html; } # the named location containing the custom error handler location @error_handler { # we set the default type here to ensure the browser # displays the error page correctly default_type text/html; proxy_pass http://127.0.0.1:8080; } } } Using limits to prevent abuse We build and host websites because we want users to visit them. We want our websites to always be available for legitimate access. This means that we may have to take measures to limit access to abusive users. We may define "abusive" to mean anything from one request per second to a number of connections from the same IP address. Abuse can also take the form of a DDOS (distributed denial-of-service) attack, where bots running on multiple machines around the world all try to access the site as many times as possible at the same time. In this section, we will explore methods to counter each type of abuse to ensure that our websites are available. First, let's take a look at the different configuration directives that will help us achieve our goal: Table: HTTP limits directives Directive Explanation limit_conn Specifies a shared memory zone (configured with limit_conn_zone) and the maximum number of connections that are allowed per key value. limit_conn_log_level When NGINX limits a connection due to the limit_conn directive, this directive specifies at which log level that limitation is reported. limit_conn_zone Specifies the key to be limited in limit_conn as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and current number of connections per key and the size of that zone (name:size). limit_rate Limits the rate (in bytes per second) at which clients can download content. The rate limit works on a connection level, meaning that a single client could increase their throughput by opening multiple connections. limit_rate_after Starts the limit_rate after this number of bytes have been transferred. limit_req Sets a limit with bursting capability on the number of requests for a specific key in a shared memory store (configured with limit_req_zone). The burst can be specified with the second parameter. If there shouldn't be a delay in between requests up to the burst, a third parameter nodelay needs to be configured. 
limit_req_log_level When NGINX limits the number of requests due to the limit_req directive, this directive specifies at which log level that limitation is reported. A delay is logged at a level one less than the one indicated here. limit_req_zone Specifies the key to be limited in limit_req as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and current number of requests per key and the size of that zone ( name:size). The third parameter, rate, configures the number of requests per second (r/s) or per minute (r/m) before the limit is imposed. max_ranges Sets the maximum number of ranges allowed in a byte-range request. Specifying 0 disables byte-range support. Here we limit access to 10 connections per unique IP address. This should be enough for normal browsing, as modern browsers open two to three connections per host. Keep in mind, though, that any users behind a proxy will all appear to come from the same address. So observe the logs for error code 503 (Service Unavailable), meaning that this limit has come into effect: http { limit_conn_zone $binary_remote_addr zone=connections:10m; limit_conn_log_level notice; server { limit_conn connections 10; } } Limiting access based on a rate looks almost the same, but works a bit differently. When limiting how many pages per unit of time a user may request, NGINX will insert a delay after the first page request, up to a burst. This may or may not be what you want, so NGINX offers the possibility to remove this delay with the nodelay parameter: http { limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s; limit_req_log_level warn; server { limit_req zone=requests burst=10 nodelay; } } Using $binary_remote_addr We use the $binary_remote_addr variable in the preceding example to know exactly how much space storing an IP address will take. This variable takes 32 bytes on 32-bit platforms and 64 bytes on 64-bit platforms. So the 10m zone we configured previously is capable of holding up to 320,000 states on 32-bit platforms or 160,000 states on 64-bit platforms. We can also limit the bandwidth per client. This way we can ensure that a few clients don't take up all the available bandwidth. One caveat, though: the limit_rate directive works on a connection basis. 
A single client that is allowed to open multiple connections will still be able to get around this limit: location /downloads { limit_rate 500k; } Alternatively, we can allow a kind of bursting to freely download smaller files, but make sure that larger ones are limited: location /downloads { limit_rate_after 1m; limit_rate 500k; } Combining these different rate limitations enables us to create a configuration that is very flexible as to how and where clients are limited: http { limit_conn_zone $binary_remote_addr zone=ips:10m; limit_conn_zone $server_name zone=servers:10m; limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s; limit_conn_log_level notice; limit_req_log_level warn; reset_timedout_connection on; server { # these limits apply to the whole virtual server limit_conn ips 10; # only 1000 simultaneous connections to the same server_name limit_conn servers 1000; location /search { # here we want only the /search URL to be rate-limited limit_req zone=requests burst=3 nodelay; } location /downloads { # using limit_conn to ensure that each client is # bandwidth-limited # with no getting around it limit_conn connections 1; limit_rate_after 1m; limit_rate 500k; } } } Restricting access In the previous section, we explored ways to limit abusive access to websites running under NGINX. Now we will take a look at ways to restrict access to a whole website or certain parts of it. Access restriction can take two forms here: restricting to a certain set of IP addresses, or restricting to a certain set of users. These two methods can also be combined to satisfy requirements that some users can access the website either from a certain set of IP addresses or if they are able to authenticate with a valid username and password. The following directives will help us achieve these goals: Table: HTTP access module directives Directive Explanation allow Allows access from this IP address, network, or all. auth_basic Enables authentication using HTTP Basic Authentication. The parameter string is used as the realm name. If the special value off is used, this indicates that the auth_basic value of the parent configuration level is negated. auth_basic_user_file Indicates the location of a file of username:password:comment tuples used to authenticate users. The password field needs to be encrypted with the crypt algorithm. The comment field is optional. deny Denies access from this IP address, network, or all. satisfy Allows access if all or any of the preceding directives grant access. The default value all indicates that a user must come from a specific network address and enter the correct password. To restrict access to clients coming from a certain set of IP addresses, the allow and deny directives can be used as follows: location /stats { allow 127.0.0.1; deny all; } This configuration will allow access to the /stats URI from the localhost only. To restrict access to authenticated users, the auth_basic and auth_basic_user_file directives are used as follows: server { server_name restricted.example.com; auth_basic "restricted"; auth_basic_user_file conf/htpasswd; } Any user wanting to access restricted.example.com would need to provide credentials matching those in the htpasswd file located in the conf directory of NGINX's root. The entries in the htpasswd file can be generated using any available tool that uses the standard UNIX crypt() function. 
For example, the following Ruby script will generate a file of the appropriate format: #!/usr/bin/env ruby # setup the command-line options require 'optparse' OptionParser.new do |o| o.on('-f FILE') { |file| $file = file } o.on('-u', "--username USER") { |u| $user = u } o.on('-p', "--password PASS") { |p| $pass = p } o.on('-c', "--comment COMM (optional)") { |c| $comm = c } o.on('-h') { puts o; exit } o.parse! if $user.nil? or $pass.nil? puts o; exit end end # initialize an array of ASCII characters to be used for the salt ascii = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a + [ ".", "/" ] $lines = [] begin # read in the current http auth file File.open($file) do |f| f.lines.each { |l| $lines << l } end rescue Errno::ENOENT # if the file doesn't exist (first use), initialize the array $lines = ["#{$user}:#{$pass}n"] end # remove the user from the current list, since this is the one we're editing $lines.map! do |line| unless line =~ /#{$user}:/ line end end # generate a crypt()ed password pass = $pass.crypt(ascii[rand(64)] + ascii[rand(64)]) # if there's a comment, insert it if $comm $lines << "#{$user}:#{pass}:#{$comm}n" else $lines << "#{$user}:#{pass}n" end # write out the new file, creating it if necessary File.open($file, File::RDWR|File::CREAT) do |f| $lines.each { |l| f << l} end Save this file as http_auth_basic.rb and give it a filename (-f), a user (-u), and a password (-p), and it will generate entries appropriate to use in NGINX's auth_ basic_user_file directive: $ ./http_auth_basic.rb -f htpasswd -u testuser -p 123456 To handle scenarios where a username and password should only be entered if not coming from a certain set of IP addresses, NGINX has the satisfy directive. The any parameter is used here for this either/or scenario: server { server_name intranet.example.com; location / { auth_basic "intranet: please login"; auth_basic_user_file conf/htpasswd-intranet; allow 192.168.40.0/24; allow 192.168.50.0/24; deny all; satisfy any; } If, instead, the requirements are for a configuration in which the user must come from a certain IP address and provide authentication, the all parameter is the default. So, we omit the satisfy directive itself and include only allow, deny, auth_basic, and auth_basic_user_file: server { server_name stage.example.com; location / { auth_basic "staging server"; auth_basic_user_file conf/htpasswd-stage; allow 192.168.40.0/24; allow 192.168.50.0/24; deny all; } Streaming media files NGINX is capable of serving certain video media types. The flv and mp4 modules, included in the base distribution, can perform what is called pseudo-streaming. This means that NGINX will seek to a certain location in the video file, as indicated by the start request parameter. In order to use the pseudo-streaming capabilities, the corresponding module needs to be included at compile time: --with-http_flv_module for Flash Video (FLV) files and/or --with-http_mp4_module for H.264/AAC files. The following directives will then become available for configuration: Table: HTTP streaming directives Directive Explanation flv Activates the flv  module for this location. mp4 Activates the mp4  module for this location. mp4_buffer_size Sets the initial buffer size for delivering MP4 files. mp4_max_buffer_size Sets the maximum size of the buffer used to process MP4 metadata. 
Activating FLV pseudo-streaming for a location is as simple as just including the flv keyword: location /videos { flv; } There are more options for MP4 pseudo-streaming, as the H.264 format includes metadata that needs to be parsed. Seeking is available once the "moov atom" has been parsed by the player. So to optimize performance, ensure that the metadata is at the beginning of the file. If an error message such as the following shows up in the logs, the mp4_max_buffer_size needs to be increased: mp4 moov atom is too large mp4_max_buffer_size can be increased as follows: location /videos { mp4; mp4_buffer_size 1m; mp4_max_buffer_size 20m; } Predefined variables NGINX makes constructing configurations based on the values of variables easy. Not only can you instantiate your own variables by using the set or map directives, but there are also predefined variables used within NGINX. They are optimized for quick evaluation and the values are cached for the lifetime of a request. You can use any of them as a key in an if statement, or pass them on to a proxy. A number of them may prove useful if you define your own log file format. If you try to redefine any of them, though, you will get an error message as follows: <timestamp> [emerg] <master pid>#0: the duplicate "<variable_name>" variable in <path-to-configuration-file>:<line-number> They are also not made for macro expansion in the configuration—they are mostly used at run time. Summary In this article, we have explored a number of directives used to make NGINX serve files over HTTP. Not only does the http module provide this functionality, but there are also a number of helper modules that are essential to the normal operation of NGINX. These helper modules are enabled by default. Combining the directives of these various modules enables us to build a configuration that meets our needs. We explored how NGINX finds files based on the URI requested. We examined how different directives control how the HTTP server interacts with the client, and how the error_page directive can be used to serve a number of needs. Limiting access based on bandwidth usage, request rate, and number of connections is all possible. We saw, too, how we can restrict access based on either IP address or through requiring authentication. We explored how to use NGINX's logging capabilities to capture just the information we want. Pseudo-streaming was examined briefly, as well. NGINX provides us with a number of variables that we can use to construct our configurations. Resources for Article : Further resources on this subject: Nginx HTTP Server FAQs [Article] Nginx Web Services: Configuration and Implementation [Article] Using Nginx as a Reverse Proxy [Article]

Learning to Fly with Force.com

Packt
17 Apr 2013
20 min read
(For more resources related to this topic, see here.) What is cloud computing? If you have been in the IT industry for some time, you probably know what cloud means. For the rest, it is used as a metaphor for the worldwide network or the Internet. Computing normally indicates the use of computer hardware and software. Combining these two terms, we get a simple definition—use of computer resources over the Internet (as a service). In other words, when the computing is delegated to resources available over the Internet, we get what is called cloud computing. As Wikipedia defines it: Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). Still confused? A simple example will help clarify it. Say you are managing the IT department of an organization, where you are responsible for purchasing hardware and software (licenses) for your employees and making sure they have the right resources to do their jobs. Whenever there is a new hire, you need to go through all the purchase formalities once again to get your user the necessary resources. Soon this turns out to be a nightmare of managing all your software licenses! Now, what if you could find an alternative where you host an application on the Web, which your users can access through their browsers and interact with it? You are freed from maintaining individual licenses and maintaining high-end hardware at the user machines. Voila, we just discovered cloud computing! Cloud computing is the logical conclusion drawn from observing the drawbacks of in-house solutions. The trend is now picking up and is quickly replacing the onpremise software application delivery models that are accompanied with high costs of managing data centers, hardware, and software. All users pay for is the quantum of the services that they use. That is why it's sometimes also known as utility-based computing, as the corresponding payment is resource usage based. Chances are that even before you ever heard of this term, you had been using it unknowingly. Have you ever used hosted e-mail services such as Yahoo, Hotmail, or Gmail where you accessed all of their services through the browser instead of an e-mail client on your computer? Now that is a typical example of cloud computing. Anything that is offered as a service (aaS) is usually considered in the realm of cloud computing. Everything in the cloud means no hardware, no software, so no maintenance and that is what the biggest advantage is. Different types of services that are most prominently delivered on the cloud are as follows: Infrastructure as a service (IaaS) Platform as a service (PaaS) Software as a service (SaaS) Infrastructure as a service (IaaS) Sometimes referred to hardware as a service, infrastructure as a service offers the IT infrastructure, which includes servers, routers, storages, firewalls, computing resources, and so on, in physical or virtualized forms as a service. Users can subscribe to these services and pay on the basis of need and usage. The key player in this domain is Amazon.com, with EC2 and S3 as examples of typical IaaS. Elastic Cloud Computing (EC2) is a web service that provides resizable computing capacity in the cloud. Computing resources can be scaled up or down within minutes, allowing users to pay for the actual capacity being used. 
Similarly, S3 is an online storage web service offered by Amazon, which provides 99.999999999 percent durability and 99.99 percent availability of objects over a given year and stores arbitrary objects (computer files) up to 5 terabytes in size! Platform as a service (PaaS) PaaS provides the infrastructure for development of software applications. Accessed over the cloud, it sits between IaaS and SaaS where it hides the complexities of dealing with underlying hardware and software. It is an application-centric approach that allows developers to focus more on business applications rather than infrastructure-level issues. Developers no longer have to worry about the server upgrades, scalability, load balancing, service availability, and other infrastructure hassles, as these are delegated to the platform vendors. Paas allows development of custom applications by providing the appropriate building blocks and the necessary infrastructure available as a service. An excellent example, in this category, is the Force.com platform, which is a game changer in the aaS, specially in the PaaS domain. It exposes a proprietary application development platform, which is woven around a relational database. It stands at a higher level than another key player in this domain, Google App Engine, which supports scalable web application development in Java and Python on the appropriate application server stack, but does not provide equivalent robust proprietary components or the building blocks as Force.com. Another popular choice (or perhaps not) is Microsoft's application platform called Widows Azure, which can be used to build websites (developed in ASP.NET, PHP, Node.JS), provision virtual machines, and provide cloud services (containers of hosted applications). A limitation with applications built on these platforms is the quota limits, or the strategy to prohibit the monopolization of the shared resources in the multitenant environment. Some developers see this as a restriction, which allows them to build applications with limited capability, but we reckon this as an opportunity to build highly efficient solutions to work within governor limits, while still maintaining the business process sanctity. Specificcally for the Force.com platform, some people consider shortage of skilled resources as a possible limitation, but we think the learning curve is steep on this platform and an experienced resource can pick proprietary languages pretty quickly, average ramp up time spanning anywhere from 15 to 30 days! Software as a service (SaaS) The opposite end of IaaS is SaaS. Business applications are offered as services over the Internet to users who don't have to go through the complex custom application development and implementation cycles. They also don't invest upfront on the IT infrastructure or maintain their software with regular upgrades. All this is taken care of by the SaaS vendors. These business applications normally provide the customization capability to accommodate specific business needs such as user interfaces, business workflows, and so on. Some good examples in this category are the Salesforce.com CRM system and Google Apps services. What is Force.com? Force.com is a natural progression from Salesforce.com, which was started as a sales force automation system offered as a service (SaaS). The need to go beyond the initially offered customizable CRM application and develop custom-based solutions, resulted in a radical shift of cloud delivery model from SaaS to PaaS. 
The technology that powers Salesforce CRM, whose design fulfills all the prerequisites of being a cloud application, is now available for developing enterprise-level applications. An independent study of the Force.com platform concluded that compared to the traditional Java-based application development platform, development with the Force.com platform is almost five times faster, with about a 40 percent smaller overall project cost and better quality due to rapid prototyping during the requirement gathering—thanks to the declarative aspect of the Force.com development—and less testing due to proven code re-use. What empowers Force.com? Why is Force.com application development so successful? Primarily because of its key architectural features, discussed in the following sections. Multitenancy Multitenancy is a concept that is the opposite of single-tenancy. In the Cloud Computing jargon, a customer or an organization is referred to as tenant. The various downsides and cost inefficiencies of single-tenant models are overcame by the multitenant model. A multitenant application caters to multiple organizations, each working in its own isolated virtual environment called org and sharing a single physical instance and version of the application hosted on the Force.com infrastructure. It is isolated because although the infrastructure is shared, every customer's data, customizations, and code remain secure and insulated from other customers. Multitenant applications run on a single physical instance and version of the application, providing the same robust infrastructure to all their customers. This also means freedom from upfront costs, ongoing upgrades, and maintenance costs. The test methods written by the customers on respective orgs ensure more than 75 percent code coverage and thus help Salesforce.com in regression testing of the Force.com upgrades, releases, and patches. The same is difficult to even visualize with an in-house software application development. Metadata What drives the multitenant applications on Force.com? Nothing else but the metadata-driven architecture of the platform! Think about the following: The platform allows all tenants to coexist at the same time Tenants can extend the standard common object model without affecting others Tenants' data is kept isolated from others in a shared database The platform customizes the interface and business logic without disrupting the services for others The platform's codebase can be upgraded to offer new features without affecting the tenants' customizations The platform scales up with rising demands and new customers To meet all the listed challenges, Force.com has been built upon a metadata-driven architecture, where the runtime engine generates application components from the metadata. All customizations to the standard platform for each tenant are stored in the form of metadata, thus keeping the core Force.com application and the client customizations distinctly separate, making it possible to upgrade the core without affecting the metadata. The core Force.com application comprises the application data and the metadata describing the base application, thus forming three layers sitting on top of each other in a common database, with the runtime engine interpreting all these and rendering the final output in the client browser. 
As metadata is a virtual representation of the application components and customizations of the standard platform, the statically compiled Force.com application's runtime engine is highly optimized for dynamic metadata access and advanced caching techniques to produce remarkable application response times. Understanding the Force.com stack A white paper giving an excellent explanation of the Force.com stack has been published. It describes various layers of technologies and services that make up the platform. We will also cover it here briefly. The application stack is shown in the following diagram: Infrastructure as a service Infrastructure is the first layer of the stack on top of which other services function. It acts as the foundation for securely and reliably delivering the cloud applications developed by the customers as well as the core Salesforce CRM applications. It powers more than 200 million transactions per day and more than 1.5 million subscribers. The highly managed data centers provide unparalleled redundancy with near-real-time replication, world class security at physical, network, host, data transmission, and database levels, and excellent design to scale both vertically and horizontally. Database as a service The powerful and reliable data persistence layer in the Force.com stack is known as the Force.com database. It sits on top of the infrastructure and provides the majority of the Force.com platform capabilities. The declarative web interface allows user to create objects and fields generating the native application UI around them. Users can also define relationships between objects, create validation rules to ensure data integrity, track history on certain fields, create formula fields to logically derive new data values, create fine-grained security access with the point and click operations, and all of this without writing a single line of code or even worrying about the database backup, tuning, upgrade, and scalability issues! As compared with the relational database, it is similar in the sense that the object (a data instance) and fields are analogous to tables and columns, and Force.com relationships are similar to the referential integrity constraints in a relation DB. But unlike physically separate tables with dedicated storage, Force.com objects are maintained as a set of metadata interpreted on the fly by the runtime engine and all of the application data is stored in a set of a few large database tables. This data is represented as virtual records based on the interpretation of tenants' customizations stored as metadata. Integration as a service Integration as a service utilizes the underlying Force.com database layer and provides the platform's integration capabilities through the open-standards-based web services API. In today's world, most organizations have their applications developed on disparate platforms, which have to work in conjunction to correctly represent and support their internal business processes. Customers' existing applications can connect with Force.com through the SOAP or REST web services to access data and create mashups to combine data from multiple sources. The Force.com platform also allows native applications to integrate with third-party web services through callouts to include information from external systems in organizations' business processes. 
These integration capabilities of the platform through API (for example, Bulk API, Chatter API, Metadata API, Apex REST API, Apex SOAP API, Streaming API, and so on) can be used by developers to build custom integration solutions to both produce and consume web services. Accordingly, it's been leveraged by many third parties such as Informatica, Cast Iron, Talend, and so on, to create prepackaged connectors for applications and systems such as Outlook, Lotus Notes, SAP, Oracle Financials, and so on. It also allows clouds such as Facebook, Google, and Amazon to talk to each other and build useful mashups. The integration ability is the key for developing mobile applications for various device platforms, which solely rely on the web services exposed by the Force.com platform. Logic as a service A development platform has to have the capability to create business processes involving complex logic. The Force.com platform oversimplifies this task to automate a company's business processes and requirements. The platform logic features can be utilized by both developers and business analysts to build smart database applications that help increase user productivity, improve data quality, automate manual processes, and adapt quickly to changing requirements. The platform allows creating the business logic either through a declarative interface in the form of workflow rules, approval processes, required and unique fields, formula fields, validation rules, or in an advanced form by writing triggers and classes in the platform's programming language—Apex—to achieve greater levels of flexibility, which help define any kind of functionality and business requirement that otherwise may not be possible through the point and click operations. User interface as a service The user interface of platform applications can be created and customized by either of the two approaches. The Force.com builder application, an interface based on point-and-click/drag-and-drop, allows users to build page layouts that are interpreted from the data model and validation rules with user defined customizations, define custom application components, create application navigation structures through tabs, and define customizable reports and user-specific views. For more complex pages and tighter control over the presentation layer, a platform allows users to build custom user interfaces through a technology called Visualforce (VF), which is based on the XML markup tags. The custom VF pages may or may not adopt the standard look and feel based on the stylesheet applied and present data returned from the controller or the logic layer in the structured format. The Visualforce interfaces are either public, private, or a mix of the two. Private interfaces require users to log in to the system before they can access resources, whereas public interfaces, called sites, can be made available on the Internet to anonymous users. Development as a service This a set of features that allow developers to utilize traditional practices for building cloud applications. 
These features include the following: Force.com Metadata API: Lets developers push changes directly into the XML files describing the organization's customizations and acts as an alternative to platform's interface to manage applications IDE (Integrated Development Environment): A powerful client application built on the Eclipse platform, allowing programmers to code, compile, test, package, and deploy applications A development sandbox: A separate application environment for development, quality assurance, and training of programmers Code Share: A service for users around the globe to collaborate on development, testing, and deployment of the cloud applications Force.com also allows online browser based development providing code assist functionality, repository search, debugging, and so on, thus eliminating the need of a local machine specific IDE. DaaS expands the Cloud Computing development process to include external tools such as integrated development environments, source control systems, and batch scripts to facilitate developments and deployments. Force.com AppExchange This is a cloud marketplace (accessible at http://appexchange.salesforce.com/) that helps commercial application vendors to publish their custom development applications as packages and then reach out to potential customers who can install them on their orgs with merely a button click through the web interface, without going through the hassles of software installation and configuration. Here, you may find good apps that provide functionality, that are not available in Salesforce, or which may require some heavy duty custom development if carried out on-premises! Introduction to governor limits Any introduction to Force.com is incomplete without a mention of governor limits. By nature, all multitenant architecture based applications such as Force.com have to have a mechanism that does not allow the code to abuse the shared resources so that other tenants in the infrastructure remain unaffected. In the Force.com world, it is the Apex runtime engine that takes care of such malicious code by enforcing runtime limits (called governor limits) in almost all areas of programming on the Force.com platform. If these governor limits had not been in place, even the simplest code, such as an endless loop, would consume enough resources to disrupt the service to the other users of the system, as they all share the same physical infrastructure. The concept of governor limits is not just limited to Force.com, but extends to all SaaS/PaaS applications, such as Google App Engine, and is critical for making the cloud-based development platform stable. This concept may prove to be very painful for some people, but there is a key logic to it. The platform enforces the best practices so that the application is practically usable and makes an optimal usage of resources, keeping the code well under governor limits. So the longer you work on Force.com, the more you become familiar with these limits, the more stable your code becomes over time, and the easier it becomes to work around these limits. In one of the forthcoming chapters, we will discover how to work with these governor limits and not against them, and also talk about ways to work around them, if required. Salesforce environments An environment is a set of resources, physical or logical, that let users build, test, deploy, and use applications. 
In the traditional development model, one would expect to have application servers, web servers, databases, and their costly provisioning and configuration. But in the Force.com paradigm, all that's needed is a computer and an Internet connection to immediately get started to build and test a SaaS application. An environment, or a virtual or logical instance of the Force.com infrastructure and platform, is also called an organization or just org, which is provisioned in the cloud on demand. It has the following characteristics: Used for development, testing, and/or production Contains data and customizations Based on the edition containing specific functionality, objects, storage, and limits Certain restricted functionalities, such as the multicurrency feature (which is not available by default), can be enabled on demand All environments are accessible through a web browser There are broadly three types of environments available for developing, testing, and deploying applications: Production environments: The Salesforce.com environments that have active paying users accessing the business critical data. Development environments: These environments are used strictly for the development and testing applications with data that is not business critical, without affecting production environment. Developer environments are of two types: Developer Edition: This is a free, full-featured copy of the Enterprise Edition, with less storage and users. It allows users to create packaged applications suitable for any Salesforce production environment. It can be of two types: Regular Developer Edition: This is a regular DE org whose sign up is free and the user can register for any number of DE orgs. This is suitable when you want to develop managed packages for distribution through AppExchange or Trialforce, when you are working with an edition where sandbox is not available, or if you just want to explore the Force.com platform for free. Partner Developer Edition: This is a regular DE org but with more storage, features, and licenses. This is suitable when you expect a larger team to work who need a bigger environment to test the application against a larger real-life dataset. Note that this org can only be created with the Salesforce Consulting partners or Force.com ISV. Sandbox: This is nearly an identical copy of the production environment available to Enterprise or Unlimited Edition customers, and can contain data and/or customizations. This is suitable when developing applications for production environments only with no plans to distribute applications commercially through AppExchange or Trialforce, or if you want to test the beta-managed packages. Note that sandboxes are completely isolated from your Salesforce production organization, so operations you perform in your sandboxes do not affect your Salesforce production organization, and vice versa. Types of sandboxes are as follows: Full copy sandbox: Nearly an identical copy of the production environment, including data and customizations Configuration-only sandbox: Contains only configurations and not data from the production environment Developer sandbox: Same as Configuration-only sandbox but with less storage Test environments: These can be either production or developer environments, used speficially for testing application functionality before deploying to production or releasing to customers. 
These environments are suitable when you want to test applications in production such as environments with more users and storage to run real-life tests. Summary This article talked about the basic concepts of cloud computing. The key takeaway items from this article are the explanations of the different types of cloud-based services such as IaaS, SaaS, and PaaS. We introduced the Force.com platform and its key architectural features that power the platform types, such as multitenant and metadata. We briefly covered the application stack—technology and services layers—that makes up the Force.com platform. We gave an overview of governor limits without going too much detail about their use. We discussed situations where adopting cloud computing may be beneficial. We also discussed the guidelines that help you decide whether your software project should be developed on the Force.com platform or not. Last, but not least, we discussed various environments available to developers and business users and their characteristics and usage. Resources for Article : Further resources on this subject: Monitoring and Responding to Windows Intune Alerts [Article] Sharing a Mind Map: Using the Best of Mobile and Web Featuressil [Article] Force.com: Data Management [Article]

Liferay, its Installation and setup

Packt
15 Apr 2013
7 min read
(For more resources related to this topic, see here.) Overview about portals Well, to understand more about what portals are, let me throw some familiar words at you. Have you used, heard, or seen iGoogle, the Yahoo! home page, or MSN? If the answer is yes, then you have been using portals already. All these websites have two things in common. A common dashboard Information from various sources shown on a single page, giving a uniform experience For example, on iGoogle, you can have a gadget showing the weather in Chicago, another gadget to play your favorite game of Sudoku, and a third one to read news from around the globe, everything on the same page without you knowing that all of these are served from different websites! That is what a portal is all about. So, a portal (or web portal) can be thought of as a website that shows, presents, displays, or brings together information or data from various sources and gives the user a uniform browsing experience. The small chunks of information that form the web page are given different names such as gadgets or widgets, portlets or dashlets. Introduction to Liferay Now that you have some basic idea about what portals are, let us revisit the initial statement I made about Liferay. Liferay is an open source portal solution. If you want to create a portal, you can use Liferay to do this. It is written in Java. It is an open source solution, which means the source code is freely available to everyone and people can modify and distribute it. With Liferay you can create basic intranet sites with minimal tweaking. You can also go for a full-fledged enterprise banking portal website with programming, and heavy customizations and integrations. Besides the powerful portal capabilities, Liferay also provides the following: Awesome enterprise and web content management capabilities Robust document management which supports protocols such as CMIS and WebDAV Good social collaboration features Liferay is backed up by a solid and active community, whose members are ever eager to help. Sounds good? So what are we waiting for? Let's take a look at Liferay and its features. Installation and setup In four easy steps, you can install Liferay and run it on your system. Step 1 – Prerequisites Before we go and start our Liferay download, we need to check if we have the requirements for the installation. They are as follows: Memory: 2 GB (minimum), 4 GB (recommended). Disk space: Around 5 GB of free space should be more than enough for the exercises mentioned in the book. The exercises performed in this book are done on Windows XP. So you can use the same or any subsequent versions of Windows OS. Although Liferay can be run on Mac OSX and Linux, it is beyond the scope of this book how to set up Liferay on them. The MySQL database should be installed. As with the OS, Liferay can be run on most of the major databases out there in the market. Liferay is shipped with the Hypersonic database by default for demo purpose, which should not be used for a production environment. Unzip tools such as gzip or 7-Zip. Step 2 – Downloading Liferay You can download the latest stable version of Liferay from https://www.liferay.com/downloads/liferay-portal/available-releases. Liferay comes in the following two versions: Enterprise Edition: This version is not free and you would have to purchase it. This version has undergone rigorous testing cycles to make sure that all the features are bug free, providing the necessary support and patches. 
Community Edition: This is a free downloadable version that has all the features but no enterprise support provided. Liferay is supported by a lot of open source application servers and the folks at Liferay have made it easy for end users by packaging everything as a bundle. What this means is that if you are asked to have Liferay installed in a JBoss application server, you can just go to the URL previously mentioned and select the Liferay-JBoss bundle to download, which gives you the JBoss Application server installed with Liferay. We will download the Community Edition of the Liferay-Tomcat bundle, which has Liferay preinstalled in the Tomcat server. The stable version at the time of writing this book was Liferay 6.1 GA2. As shown in the following screenshot, just click on Download after making sure that you have selected Liferay bundled with Tomcat and save the ZIP file at an appropriate location: Step 3 – Starting the server After you have downloaded the bundle, extract it to the location of your choice on your machine. You can see a folder named liferay-portal-6.1.1-ce-ga2. The latter part of the name can change based on the version that you download. Let us take a moment to have a look at the folder structure as shown in the following screenshot: The liferay-portal-6.1.1-ce-ga2 folder is what we will refer to as LIFERAY_HOME. This folder contains the server, which in our case is tomcat-7.0.27. Let's refer to this folder as SERVER_HOME. Liferay is created using Java, so to run Liferay we need Java Runtime Environment (JRE). The Liferay bundle is shipped with a JRE by default (as you can see inside our SERVER_HOME). So if you are running a Windows OS, you can directly start and run Liferay. If you are using any other OS, you need to set the JAVA_HOME environment variable. Navigate to SERVER_HOME/webapps. This is where all the web applications are deployed. Delete everything in this folder except marketplace-portlet and ROOT. Now go to SERVER/bin and double-click on startup.bat, since we are using Windows OS. This will bring up a console showing the server startup. Wait till you see the Server Startup message in the console, after which you can access Liferay from the browser. Step 4 – Doing necessary first-time configurations Once the server is up, open your favorite browser and type in http://localhost:8080. You will be shown a screen that performs basic configurations, such as changing the database and name of your portal, deciding what should be the admin name and e-mail address, or changing the default locale. This is a new feature introduced in Liferay 6.1 to ease the first-time setup, which on previous versions had to be done using the property file. Go change the name of the portal, administrator username, and e-mail address. Keep the locale as it is. As I stated earlier, Liferay is shipped with a default Hypersonic database which is normally used for demo purposes. You can change it to MySQL if you want, by selecting the database type from the drop-down list presented, and typing in the necessary JDBC details. I have created a database in MySQL by the name Portal Starter; hence my JDBC URL would contain that. You can create a blank database in MySQL and accordingly change the JDBC URL. Once you are done making your changes, click on the Finish Configuration button as shown in the following screenshot: This will open up a screen, which will show the path where this configuration is saved. 
Behind the scenes, Liferay creates a property file named portal-setup-wizard.properties and puts all of these configurations in it. As mentioned earlier, this file had to be created manually in previous versions of Liferay. Clicking on the Go to my portal button on this screen takes the user to the Terms of Use page. Agree to the terms and proceed further. A screen will then prompt you to change the password for the admin user you specified earlier on the Basic Configuration screen. After you change the password, you will be asked to set a password reminder question. Select a question, or create your own from the drop-down list, set the reminder, and move on. Finally, you can see the home page of Liferay, and you are done setting up your very first Liferay instance.
Summary So, we gained a quick understanding of portals and Liferay, and walked through installing and setting up Liferay on your local machine. Resources for Article : Further resources on this subject: Vaadin Portlets in Liferay User Interface Development [Article] Setting up and Configuring a Liferay Portal [Article] User Interface in Production [Article]

Improving Performance with Parallel Programming

Packt
12 Apr 2013
11 min read
(For more resources related to this topic, see here.) Parallelizing processing with pmap The easiest way to parallelize data is to take a loop we already have and handle each item in it in a thread. That is essentially what pmap does. If we replace a call to map with pmap, it takes each call to the function argument and executes it in a thread pool. pmap is not completely lazy, but it's not completely strict, either: it stays just ahead of the output consumed. So if the output is never used, it won't be fully realized. For this recipe, we'll calculate the Mandelbrot set. Each point in the output takes enough time that this is a good candidate to parallelize. We can just swap map for pmap and immediately see a speed-up. How to do it... The Mandelbrot set can be found by looking for points that don't settle on a value after passing through the formula that defines the set quickly. We need a function that takes a point and the maximum number of iterations to try and return the iteration that it escapes on. That just means that the value gets above 4. (defn get-escape-point [scaled-x scaled-y max-iterations] (loop [x 0, y 0, iteration 0] (let [x2 (* x x), y2 (* y y)] (if (and (< (+ x2 y2) 4) (< iteration max-iterations)) (recur (+ (- x2 y2) scaled-x) (+ (* 2 x y) scaled-y) (inc iteration)) iteration)))) The scaled points are the pixel points in the output, scaled to relative positions in the Mandelbrot set. Here are the functions that handle the scaling. Along with a particular x-y coordinate in the output, they're given the range of the set and the number of pixels each direction. (defn scale-to ([pixel maximum [lower upper]] (+ (* (/ pixel maximum) (Math/abs (- upper lower))) lower))) (defn scale-point ([pixel-x pixel-y max-x max-y set-range] [(scale-to pixel-x max-x (:x set-range)) (scale-to pixel-y max-y (:y set-range))])) The function output-points returns a sequence of x, y values for each of the pixels in the final output. (defn output-points ([max-x max-y] (let [range-y (range max-y)] (mapcat (fn [x] (map #(vector x %) range-y)) (range max-x))))) For each output pixel, we need to scale it to a location in the range of the Mandelbrot set and then get the escape point for that location. (defn mandelbrot-pixel ([max-x max-y max-iterations set-range] (partial mandelbrot-pixel max-x max-y max-iterations set-range)) ([max-x max-y max-iterations set-range [pixel-x pixel-y]] (let [[x y] (scale-point pixel-x pixel-y max-x max-y set-range)] (get-escape-point x y max-iterations)))) At this point, we can simply map mandelbrot-pixel over the results of outputpoints. We'll also pass in the function to use (map or pmap). (defn mandelbrot ([mapper max-iterations max-x max-y set-range] (doall (mapper (mandelbrot-pixel max-x max-y max-iterations set-range) (output-points max-x max-y))))) Finally, we have to define the range that the Mandelbrot set covers. (def mandelbrot-range {:x [-2.5, 1.0], :y [-1.0, 1.0]}) How do these two compare? A lot depends on the parameters we pass them. 
user=> (def m (time (mandelbrot map 500 1000 1000 mandelbrot-range))) "Elapsed time: 28981.112 msecs" #'user/m user=> (def m (time (mandelbrot pmap 500 1000 1000 mandelbrot-range))) "Elapsed time: 34205.122 msecs" #'user/m user=> (def m (time (mandelbrot map 1000 10001000 mandelbrot-range))) "Elapsed time: 85308.706 msecs" #'user/m user=> (def m (time (mandelbrot pmap 1000 10001000 mandelbrot-range))) "Elapsed time: 49067.584 msecs" #'user/m Refer to the following chart: If we only iterate at most 500 times for each point, it's slightly faster to use map and work sequentially. However, if we iterate 1,000 times each, pmap is faster. How it works... This shows that parallelization is a balancing act. If each separate work item is small, the overhead of creating the threads, coordinating them, and passing data back and forth takes more time than doing the work itself. However, when each thread has enough to do to make it worth it, we can get nice speed-ups just by using pmap. Behind the scenes, pmap takes each item and uses future to run it in a thread pool. It forces only a couple more items than you have processors, so it keeps your machine busy, without generating more work or data than you need. There's more... For an in-depth, excellent discussion of the nuts and bolts of pmap, along with pointers about things to watch out for, see David Liebke's talk, From Concurrency to Parallelism (http://blip.tv/clojure/david-liebke-from-concurrency-to-parallelism-4663526). See also The Partitioning Monte Carlo Simulations for better pmap performance recipe Parallelizing processing with Incanter One of its nice features is that it uses the Parallel Colt Java library (http://sourceforge.net/projects/parallelcolt/) to actually handle its processing, so when you use a lot of the matrix, statistical, or other functions, they're automatically executed on multiple threads. For this, we'll revisit the Virginia housing-unit census data and we'll fit it to a linear regression. Getting ready We'll need to add Incanter to our list of dependencies in our Leiningen project.clj file: :dependencies [[org.clojure/clojure "1.5.0"] [incanter "1.3.0"]] We'll also need to pull those libraries into our REPL or script: (use '(incanter core datasets io optimize charts stats)) We can use the following filename: (def data-file "data/all_160_in_51.P35.csv") How to do it... For this recipe, we'll extract the data to analyze and perform the linear regression. We'll then graph the data afterwards. First, we'll read in the data and pull the population and housing unit columns into their own matrix. (def data (to-matrix (sel (read-dataset data-file :header true) :cols [:POP100 :HU100]))) From this matrix, we can bind the population and the housing unit data to their own names. (def population (sel data :cols 0)) (def housing-units (sel data :cols 1)) Now that we have those, we can use Incanter to fit the data. (def lm (linear-model housing-units population)) Incanter makes it so easy, it's hard not to look at it. (def plot (scatter-plot population housing-units :legend true)) (add-lines plot population (:fitted lm)) (view plot) Here we can see that the graph of housing units to families makes a very straight line: How it works… Under the covers, Incanter takes the data matrix and partitions it into chunks. It then spreads those over the available CPUs to speed up processing. Of course, we don't have to worry about this. That's part of what makes Incanter so powerful. 
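As a brief aside, the map returned by linear-model carries more than the :fitted values used above. The exact keys can differ between Incanter releases, so treat this as a sketch rather than a definitive API reference:

; the second coefficient is the slope: roughly how many housing
; units appear per additional person
(def slope (second (:coefs lm)))
; r-square indicates how much of the variation the fitted line explains
(def fit-quality (:r-square lm))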
Partitioning Monte Carlo simulations for better pmap performance In the Parallelizing processing with pmap recipe, we found that while using pmap is easy enough, knowing when to use it is more complicated. Processing each task in the collection has to take enough time to make the costs of threading, coordinating processing, and communicating the data worth it. Otherwise, the program will spend more time concerned with how (parallelization) and not enough time with what (the task). The way to get around this is to make sure that pmap has enough to do at each step that it parallelizes. The easiest way to do that is to partition the input collection into chunks and run pmap on groups of the input. For this recipe, we'll use Monte Carlo methods to approximate pi . We'll compare a serial version against a naïve parallel version against a version that uses parallelization and partitions. Getting ready We'll use Criterium to handle benchmarking, so we'll need to include it as a dependency in our Leiningen project.clj file, shown as follows: :dependencies [[org.clojure/clojure "1.5.0"] [criterium "0.3.0"]] We'll use these dependencies and the java.lang.Math class in our script or REPL. (use 'criterium.core) (import [java.lang Math]) How to do it… To implement this, we'll define some core functions and then implement a Monte Carlo method for estimating pi that uses pmap. We need to define the functions necessary for the simulation. We'll have one that generates a random two-dimensional point that will fall somewhere in the unit square. (defn rand-point [] [(rand) (rand)]) Now, we need a function to return a point's distance from the origin. (defn center-dist [[x y]] (Math/sqrt (+ (* x x) (* y y)))) Next we'll define a function that takes a number of points to process, and creates that many random points. It will return the number of points that fall inside a circle. (defn count-in-circle [n] (->> (repeatedly n rand-point) (map center-dist) (filter #(<= % 1.0)) count)) That simplifies our definition of the base (serial) version. This calls count-incircle to get the proportion of random points in a unit square that fall inside a circle. It multiplies this by 4, which should approximate pi. (defn mc-pi [n] (* 4.0 (/ (count-in-circle n) n))) We'll use a different approach for the simple pmap version. The function that we'll parallelize will take a point and return 1 if it's in the circle, or 0 if not. Then we can add those up to find the number in the circle. (defn in-circle-flag [p] (if (<= (center-dist p) 1.0) 1 0)) (defn mc-pi-pmap [n] (let [in-circle (->> (repeatedly n rand-point) (pmap in-circle-flag) (reduce + 0))] (* 4.0 (/ in-circle n)))) For the version that chunks the input, we'll do something different again. Instead of creating the sequence of random points and partitioning that, we'll have a sequence that tells how large each partition should be and have pmap walk across that, calling count-in-circle. This means that creating the larger sequences are also parallelized. (defn mc-pi-part ([n] (mc-pi-part 512 n)) ([chunk-size n] (let [step (int (Math/floor (float (/ n chunk-size)))) remainder (mod n chunk-size) parts (lazy-seq (cons remainder (repeat step chunk-size))) in-circle (reduce + 0 (pmap count-in-circle parts))] (* 4.0 (/ in-circle n))))) Now, how do these work? We'll bind our parameters to names, and then we'll run one set of benchmarks before we look at a table of all of them. We'll discuss the results in the next section. 
user=> (def chunk-size 4096)
#'user/chunk-size
user=> (def input-size 1000000)
#'user/input-size
user=> (quick-bench (mc-pi input-size))
WARNING: Final GC required 4.001679309213317 % of runtime
Evaluation count : 6 in 6 samples of 1 calls.
Execution time mean : 634.387833 ms
Execution time std-deviation : 33.222001 ms
Execution time lower quantile : 606.122000 ms ( 2.5%)
Execution time upper quantile : 677.273125 ms (97.5%)
nil

Here's all the information in the form of a table:

Function     Input Size   Chunk Size   Mean        Std Dev.    GC Time
mc-pi        1,000,000    NA           634.39 ms   33.22 ms    4.0%
mc-pi-pmap   1,000,000    NA           1.92 sec    888.52 ms   2.60%
mc-pi-part   1,000,000    4,096        455.94 ms   4.19 ms     8.75%

How it works...
There are a couple of things we should talk about here. Primarily, we need to look at chunking the inputs for pmap, but we should also discuss Monte Carlo methods.

Estimating with Monte Carlo simulations
Monte Carlo simulations work by throwing random data at a problem that is fundamentally deterministic but practically infeasible to solve in a more straightforward way. Calculating pi is one example of this. By randomly filling in points in a unit square, pi/4 will be approximately the ratio of points that fall within a circle centered on (0, 0). The more random points we use, the better the approximation.
I should note that this makes a good demonstration of Monte Carlo methods, but it's a terrible way to calculate pi: it tends to be both slower and less accurate than the other methods. Although not good for this task, Monte Carlo methods have been used for designing heat shields, simulating pollution, ray tracing, financial option pricing, evaluating business or financial products, and many, many more things. For a more in-depth discussion, Wikipedia has a good introduction to Monte Carlo methods at http://en.wikipedia.org/wiki/Monte_Carlo_method.

Chunking data for pmap
The table we saw earlier makes it clear that partitioning helped: the partitioned version took just 72 percent of the time that the serial version did, while the naïve parallel version took more than three times longer. Based on the standard deviations, the results were also more consistent. The speed-up comes from each thread being able to spend longer on each task.
There is a performance penalty to spreading the work over multiple threads. Context switching (that is, switching between threads) costs time, and coordinating between threads does as well. We expect to make that time back, and more, by doing more things at once. However, if each task doesn't take long enough on its own, the benefit won't outweigh the costs. Chunking the input, and effectively creating larger individual tasks for each thread, gets around this by giving each thread more to do, so that less time is spent context switching and coordinating relative to the overall time spent running.
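To see the chunking principle outside Clojure, here is a rough Java sketch of the same approach (an illustration, not code from the recipe): the total number of samples is split into chunks, and each chunk becomes one task for a fixed-size thread pool, so every task carries enough work to justify being scheduled.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadLocalRandom;

// Illustration of chunked parallelism: one task per chunk of samples,
// mirroring the way mc-pi-part hands whole chunks to pmap.
public class ChunkedMonteCarloPi {

    // Count how many of `samples` random points in the unit square fall
    // inside the quarter circle of radius 1.
    static long countInCircle(int samples) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        long inside = 0;
        for (int i = 0; i < samples; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) {
                inside++;
            }
        }
        return inside;
    }

    static double estimatePi(long n, int chunkSize) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        try {
            List<Future<Long>> results = new ArrayList<>();
            long remaining = n;
            while (remaining > 0) {
                // Each submitted task processes a whole chunk, not a single point.
                final int samples = (int) Math.min(chunkSize, remaining);
                Callable<Long> chunk = () -> countInCircle(samples);
                results.add(pool.submit(chunk));
                remaining -= samples;
            }
            long inside = 0;
            for (Future<Long> result : results) {
                inside += result.get();
            }
            return 4.0 * inside / n;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Same shape as the Clojure benchmark: a million samples, 4,096 per chunk.
        System.out.println(estimatePi(1_000_000, 4096));
    }
}

The chunk size plays the same role as the 4,096-sample partitions handed to pmap by mc-pi-part: large enough that coordination overhead is negligible, small enough that all cores stay busy until the end.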

Advanced Performance Strategies

Packt
12 Apr 2013
6 min read
(For more resources related to this topic, see here.) General tips Before diving into some advanced strategies for improving performance and scalability, let's briefly recap some of the general performance tips already spread across the book: When mapping your entity classes for Hibernate Search, use the optional elements of the @Field annotation to strip the unnecessary bloat from your Lucene indexes: If you are definitely not using index-time boosting , then there is no reason to store the information needed to make this possible. Set the norms element to Norms.NO . By default, the information needed for a projection-based query is not stored unless you set the store element to Store.YES or Store. COMPRESS. If you had projection-based queries that are no longer being used, then remove this element as part of the cleanup. Use conditional indexing and partial indexing to reduce the size of Lucene indexes. Rely on filters to narrow your results at the Lucene level, rather than using a WHERE clause at the database query level. Experiment with projection-based queries wherever possible , to reduce or eliminate the need for database calls. Be aware that with advanced database caching, the benefits might not always justify the added complexity. Test various index manager options , such as trying the near-real-time index manager or the async worker execution mode. Running applications in a cluster Making modern Java applications scale in a production environment usually involves running them in a cluster of server instances. Hibernate Search is perfectly at home in a clustered environment, and offers multiple approaches for configuring a solution. Simple clusters The most straightforward approach requires very little Hibernate Search configuration. Just set up a file server for hosting your Lucene indexes and make it available to every server instance in your cluster (for example, NFS, Samba, and so on): A simple cluster with multiple server nodes using a common Lucene index on a shared drive Each application instance in the cluster uses the default index manager, and the usual filesystem directory provider. In this arrangement, all of the server nodes are true peers. They each read from the same Lucene index, and no matter which node performs an update, that node is responsible for the write. To prevent corruption, Hibernate Search depends on simultaneous writes being blocked, by the locking strategy (that is, either "simple" or "native"). Recall that the "near-real-time" index manager is explicitly incompatible with a clustered environment. The advantage of this approach is two-fold. First and foremost is simplicity. The only steps involved are setting up a filesystem share, and pointing each application instance's directory provider to the same location. Secondly, this approach ensures that Lucene updates are instantly visible to all the nodes in the cluster. However, a serious downside is that this approach can only scale so far. Very small clusters may work fine, but larger numbers of nodes trying to simultaneously access the same shared files will eventually lead to lock contention. Also, the file server on which the Lucene indexes are hosted is a single point of failure. If the file share goes down, then your search functionality breaks catastrophically and instantly across the entire cluster. Master-slave clusters When your scalability needs outgrow the limitations of a simple cluster, Hibernate Search offers more advanced models to consider. 
The common element among them is the idea of a master node being responsible for all Lucene write operations. Clusters may also include any number of slave nodes. Slave nodes may still initiate Lucene updates, and the application code can't really tell the difference. However, under the covers, slave nodes delegate that work to be actually performed by the master node. Directory providers In a master-slave cluster, there is still an "overall master" Lucene index, which logically stands apart from all of the nodes. This may be filesystem-based, just as it is with a simple cluster. However, it may instead be based on JBoss Infinispan (http://www.jboss.org/infinispan), an open source in-memory NoSQL datastore sponsored by the same company that principally sponsors Hibernate development: In a filesystem-based approach, all nodes keep their own local copies of the Lucene indexes. The master node actually performs updates on the overall master indexes, and all of the nodes periodically read from that overall master to refresh their local copies. In an Infinispan-based approach, the nodes all read from the Infinispan index (although it is still recommended to delegate writes to a master node). Therefore, the nodes do not need to maintain their own local index copies. In reality, because Infinispan is a distributed datastore, portions of the index will reside on each node anyway. However, it is still best to visualize the overall index as a separate entity. Worker backends There are two available mechanisms by which slave nodes delegate write operations to the master node: A JMS message queue provider creates a queue, and slave nodes send messages to this queue with details about Lucene update requests. The master node monitors this queue, retrieves the messages, and actually performs the update operations. You may instead replace JMS with JGroups (http://www.jgroups.org), an open source multicast communication system for Java applications. This has the advantage of being faster and more immediate. Messages are received in real-time, synchronously rather than asynchronously. However, JMS messages are generally persisted to a disk while awaiting retrieval, and therefore can be recovered and processed later, in the event of an application crash. If you are using JGroups and the master node goes offline, then all the update requests sent by slave nodes during that outage period will be lost. To fully recover, you would likely need to reindex your Lucene indexes manually. A master-slave cluster using a directory provider based on filesystem or Infinispan, and worker based on JMS or JGroups. Note that when using Infinispan, nodes do not need their own separate index copies.   Summary In this article, we explored the options for running applications in multi-node server clusters, to spread out and handle user requests in a distributed fashion. We also learned how to use sharding to help make our Lucene indexes faster and more manageable. Resources for Article : Further resources on this subject: Integrating Spring Framework with Hibernate ORM Framework: Part 1 [Article] Developing Applications with JBoss and Hibernate: Part 1 [Article] Hibernate Types [Article]
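Circling back to the mapping tips listed under General tips, a minimal entity sketch helps show what those options look like in code. This is an assumption based on the Hibernate Search 4.x annotation names referenced above (@Field, Norms.NO, Store.YES/Store.COMPRESS); the entity and its fields are invented for illustration and are not part of the original text.

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.Norms;
import org.hibernate.search.annotations.Store;

// Sketch only: strip index bloat by skipping norms where index-time boosting
// is not used, and store field values only when projection queries need them.
@Entity
@Indexed
public class Book {

    @Id
    private Long id;

    // Searched, but never boosted at index time and never projected:
    // no norms and no stored value keeps the Lucene index small.
    @Field(analyze = Analyze.YES, norms = Norms.NO, store = Store.NO)
    private String title;

    // Returned by a projection-based query, so the value is stored (compressed).
    @Field(analyze = Analyze.NO, norms = Norms.NO, store = Store.COMPRESS)
    private String isbn;

    // Getters and setters omitted for brevity.
}

In a clustered deployment the mapping stays the same on every node; only the directory provider and worker configuration differ between the master and the slaves.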

Show/hide rows and Highlighting cells

Packt
09 Apr 2013
7 min read
(For more resources related to this topic, see here.)

Show/hide rows
Click a link to trigger hiding or displaying of table rows.

Getting ready
Once again, start off with an HTML table. This one is not quite as simple a table as in previous recipes. You'll need to create a few <td> tags that span the entire table, as well as provide some specific classes to certain elements.

How to do it...
Again, give the table an id attribute. Each of the rows that represent a department, specifically the rows that span the entire table, should have a class attribute value of dept.
<table border="1" id="employeeTable">
  <thead>
    <tr>
      <th>Last Name</th>
      <th>First Name</th>
      <th>Phone</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="3" class="dept"> </td>
    </tr>
Each of the department names should be links where the <a> elements have a class of rowToggler.
<a href="#" class="rowToggler">Accounting</a>
Each table row that contains employee data should have a class attribute value that corresponds to its department. Note that class names cannot contain spaces, so in the case of the Information Technology department the class name should be InformationTechnology, without a space. The issue of the space will be addressed later.
<tr class="Accounting">
  <td>Frang</td>
  <td>Corey</td>
  <td>555-1111</td>
</tr>
The following script makes use of the class names to create a table whose rows can be easily hidden or shown by clicking a link:
<script type="text/javascript">
$( document ).ready( function() {
  $( "a.rowToggler" ).click( function( e ) {
    e.preventDefault();
    var dept = $( this ).text().replace( /\s/g, "" );
    $( "tr[class=" + dept + "]" ).toggle();
  });
});
</script>
With the jQuery implemented, departments are "collapsed", and will only reveal the employees when the link is clicked.

How it works...
The jQuery will "listen" for a click event on any <a> element that has a class of rowToggler. In this case, capture a reference to the event that triggered the action by passing e to the click handler function.
$( "a.rowToggler" ).click( function( e )
In this case, e is simply a variable name. It can be any valid variable name, but e is a standard convention. The important thing is that jQuery has a reference to the event. Why? Because in this case, the event was that an <a> was clicked. The browser's default behavior is to follow a link, and this default behavior needs to be prevented. As luck would have it, jQuery has a built-in function called preventDefault(). The first line of the function makes use of this by way of the following:
e.preventDefault();
Now that you've safely prevented the browser from leaving or reloading the page, set a variable with a value that corresponds to the name of the department that was just clicked.
var dept = $( this ).text().replace( /\s/g, "" );
Most of the preceding line should look familiar. $( this ) is a reference to the element that was clicked, and text() is something you've already used. You're getting the text of the <a> tag that was clicked, which will be the name of the department. But there's one small issue: if the department name contains a space, such as "Information Technology", then this space needs to be removed.
.replace( /\s/g, "" )
replace() is a standard JavaScript function. Here it uses a regular expression (\s matches any whitespace character) to replace spaces with an empty string. This turns "Information Technology" into "InformationTechnology", which is a valid class name.
The final step is to either show or hide any table row with a class that matches the department name that was clicked.
Ordinarily, the selector would look similar to the following: $( "tr.InformationTechnology" ) Because the class name is a variable value, an alternate syntax is necessary. jQuery provides a way to select an element using any attribute name and value. The selector above can also be represented as follows: $( "tr[class=InformationTechnology]" ) The entire selector is a literal string, as indicated by the fact that it's enclosed in quotes. But the department name is stored in a variable. So concatenate the literal string with the variable value: $( "tr[class=" + dept + "]" ) With the desired elements selected, either hide them if they're displayed, or display them if they're hidden. jQuery makes this very easy with its built-in toggle() method. Highlighting cells Use built-in jQuery traversal methods and selectors to parse the contents of each cell in a table and apply a particular style (for example, a yellow background or a red border) to all cells that meet a specified set of criteria. Getting ready Borrowing some data from Tiobe (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html), create a table of the top five programming languages for 2012. To make it "pop" a bit more, each <td> in the Ratings column that's over 10 percent will be highlighted in yellow, and each <td> in the Delta column that's less than zero will be highlighted in red. Each <td> in the Ratings column should have a class of ratings, and each <td> in the Delta column should have a class of delta. Additionally, set up two CSS classes for the highlights as follows: .highlight { background-color: #FFFF00; } /* yellow */ .highlight-negative { background-color: #FF0000; } /* red */ Initially, the table should look as follows: How to do it... Once again, give the table an id attribute (but by now, you knew that), as shown in the following code snippet: <table border="1" id="tiobeTable"> <thead> <tr> <th>Position<br />Dec 2012</th> <th>Position<br />Dec 2011</th> <th>Programming Language</th> <th>Ratings<br />Dec 2012</th> <th>Delta<br />Dec 2011</th> </tr> </thead> Apply the appropriate class names to the last two columns in each table row within the <tbody>, as shown in the following code snippet: <tbody> <tr> <td>1</td> <td>2</td> <td>C</td> <td class="ratings">18.696%</td> <td class="delta">+1.64%</td> </tr> With the table in place and properly marked up with the appropriate class names, write the script to apply the highlights as follows: <script type="text/javascript"> $( document ).ready( function() { $( "#tiobeTable tbody tr td.ratings" ).each( function( index ) { if ( parseFloat( $( this ).text() ) > 10 ) { $( this ).addClass( "highlight" ); } }); $( "#tiobeTable tbody tr td.delta" ).each( function( index ) { if ( parseFloat( $( this ).text() ) < 0 ) { $( this ).addClass( "highlight-negative" ); } }); }); </script> Now, you will see a much more interesting table with multiple visual cues: How it works... Select the <td> elements within the tbody tag's table rows that have a class of ratings. For each iteration of the loop, test whether or not the value (text) of the <td> is greater than 10. Because the values in <td> contain non-numeric characters (in this case, % signs), we use JavaScript's parseFloat() to convert the text to actual numbers: parseFloat( $( this ).text() ) Much of that should be review. $( this ) is a reference to the element in question. text() retrieves the text from the element. parseFloat() ensures that the value is numeric so that it can be accurately compared to the value 10. 
If the condition is met, use addClass() to apply the highlight class to <td>. Do the same thing for the Delta column. The only difference is in checking to see if the text is less than zero. If it is, apply the class highlight-negative. The end result makes it much easier to identify specific data within the table. Summary In this article we covered two recipes Show/hide rows and Highlighting cells. Resources for Article : Further resources on this subject: Tips and Tricks for Working with jQuery and WordPress5 [Article] Using jQuery Script for Creating Dynamic Table of Contents [Article] Getting Started with jQuery [Article]

Adding Feedback to the Moodle Quiz Questions

Packt
08 Apr 2013
4 min read
(For more resources related to this topic, see here.) Getting ready Any learner taking a quiz may want to know how well he/she has answered the questions posed. Often, working with Moodle, the instructor is at a distance from the learner. Providing feedback is a great way of enhancing communication between learner and instructor. Learner feedback can be provided at multiple levels using Moodle Quiz. You can create feedback at various levels in both the questions and the overall quiz. Here we will examine feedback at the question level. General feedback When we add General Feedback to a question, every student sees the feedback, regardless of their answer to the question. This is good opportunity to provide clarification for the learner who had guessed a correct answer, as well as for the learner whose response was incorrect. Individual response feedback We can create feedback tailored to each possible response in a multiple choice question. This feedback can be more focused in nature. Often, a carefully crafted distracter in a multiple choice can reveal misconceptions and the feedback can provide the correction required as soon as the learner completes the quiz. Feedback given when the question is fresh in the learner's mind, is very effective. How to do it... Let's create some learner feedback for some of the questions that we have created in the question bank: First of all, let's add general feedback to a question. Returning to our True-False question on Texture, we can see that general feedback is effective when there are only two choices. Remember that this type of feedback will appear for all learners, regardless of the answer they submitted. The intention of this feedback is to reflect the correct solution and also give more background information to enhance the teaching opportunity. Let's take a look at how to create a specific feedback for each possible response that a learner may submit. This is done by adding individual response feedback. Returning to our multiple choice question on application of the element line, a specific feedback response tailored to each possible choice will provide helpful clarification for the student. This type of feedback is entered after each possible choice. Here is an example of a feedback to reinforce a correct response and a feedback for an incorrect response: In this case, the feedback the learner receives is tailored to the response they have submitted. This provides much more specific feedback to the learner's choice of responses. For the embedded question (Cloze), feedback is easy to add in Moodle 2.0. In the following screenshot, we can see the question that we created with feedback added: And this is what the feedback looks like to the student: How it works... We have now improved questions in our exam bank by providing feedback for the learner. We have created both general feedback that all learners will see and specific feedback for each response the learner may choose. As we think about the learning experience for the learner, we can see that immediate feedback with our questions is an effective way to reinforce learning. This is another feature that makes Moodle Quiz such a powerful tool. There's more... As we think about the type of feedback we want for the learner, we can combine feedback for individual responses with general feedback. Also there are options for feedback for any correct response, for any partially correct response, or for any incorrect response. Feedback serves to engage the learners and personalize the experience. 
We created question categories, organized our questions into categories, and learned how to add learner feedback at various levels inside the questions. We are now ready to configure a quiz. Summary In the article we have seen how we can add feedback to the questions of the Moodle Quiz. Resources for Article : Further resources on this subject: Integrating Moodle 2.0 with Mahara and GoogleDocs for Business [Article] What's New in Moodle 2.0 [Article] Moodle 2.0 FAQs [Article]

Getting Started with PrimeFaces

Packt
04 Apr 2013
14 min read
Setting up and configuring the PrimeFaces library PrimeFaces is a lightweight JSF component library with one JAR file, which needs no configuration and does not contain any required external dependencies. To start with the development of the library, all we need is to get the artifact for the library. Getting ready You can download the PrimeFaces library from http://primefaces.org/downloads.html and you need to add the primefaces-{version}.jar file to your classpath. After that, all you need to do is import the namespace of the library, which is necessary to add the PrimeFaces components to your pages, to get started. If you are using Maven (for more information on installing Maven, please visit http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html), you can retrieve the PrimeFaces library by defining the Maven repository in your Project Object Model (POM) file as follows: <repository> <id>prime-repo</id> <name>PrimeFaces Maven Repository</name> <url>http://repository.primefaces.org</url> </repository> Add the dependency configuration as follows: <dependency> <groupId>org.primefaces</groupId> <artifactId>primefaces</artifactId> <version>3.4</version> </dependency> At the time of writing this book, the latest and most stable version of PrimeFaces was 3.4. To check out whether this is the latest available or not, please visit http://primefaces.org/downloads.html The code in this book will work properly with PrimeFaces 3.4. In prior versions or the future versions, some methods, attributes, or components' behaviors may change. How to do it... In order to use PrimeFaces components, we need to add the namespace declarations into our pages. The namespace for PrimeFaces components is as follows: For PrimeFaces Mobile, the namespace is as follows: That is all there is to it. Note that the p prefix is just a symbolic link and any other character can be used to define the PrimeFaces components. Now you can create your first page with a PrimeFaces component as shown in the following code snippet: <html > <f:view contentType="text/html"> <h:head /> <h:body> <h:form> <p:spinner /> </h:form> </h:body> </f:view> </html> This will render a spinner component with an empty value as shown in the following screenshot: A link to the working example for the given page is given at the end of this recipe. How it works... When the page is requested, the p:spinner component is rendered with the renderer implemented by the PrimeFaces library. Since the spinner component is a UI input component, the request-processing lifecycle will get executed when the user inputs data and performs a post back on the page. For the first page, we also needed to provide the contentType parameter for f:view, since the WebKit-based browsers, such as Google Chrome and Apple Safari, request the content type application/xhtml+xml by default. This would overcome unexpected layout and styling issues that might occur. There's more... PrimeFaces only requires Java 5+ runtime and a JSF 2.x implementation as mandatory dependencies. There are some optional libraries for certain features. 
Dependency          Version      Type       Description
JSF runtime         2.0 or 2.1   Required   Apache MyFaces or Oracle Mojarra
iText               2.1.7        Optional   DataExporter (PDF)
Apache POI          3.7          Optional   DataExporter (Excel)
Rome                1.0          Optional   FeedReader
commons-fileupload  1.2.1        Optional   FileUpload
commons-io          1.4          Optional   FileUpload

Please ensure that you have only one JAR file of PrimeFaces or specific PrimeFaces Theme in your classpath in order to avoid any issues regarding resource rendering. Currently PrimeFaces supports the web browsers IE 7, 8, or 9, Safari, Firefox, Chrome, and Opera.

PrimeFaces Cookbook Showcase application
This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. When the server is running, the showcase for the recipe is available at http://localhost:8080/primefaces-cookbook/views/chapter1/yourFirstPage.jsf

AJAX basics with Process and Update
PrimeFaces provides a partial page rendering (PPR) and view-processing feature based on standard JSF 2 APIs to enable choosing what to process in the JSF lifecycle and what to render in the end with AJAX. The PrimeFaces AJAX framework is based on standard server-side APIs of JSF 2. On the client side, rather than using the client-side API implementations of JSF implementations, such as Mojarra and MyFaces, PrimeFaces scripts are based on the jQuery JavaScript library.

How to do it...
We can create a simple page with a command button to update a string property with the current time in milliseconds on the server side and an output text to show the value of that string property, as follows:
<p:commandButton update="display" action="#{basicPPRController.updateValue}" value="Update" />
<h:outputText id="display" value="#{basicPPRController.value}"/>
If we would like to update multiple components with the same trigger mechanism, we can provide the IDs of the components to the update attribute by providing them a space, comma, or both, as follows:
<p:commandButton update="display1,display2" />
<p:commandButton update="display1 display2" />
<p:commandButton update="display1,display2 display3" />
In addition, there are reserved keywords that are used for a partial update. We can also make use of these keywords along with the IDs of the components, as described in the following table:

Keyword   Description
@this     The component that triggers the PPR is updated
@parent   The parent of the PPR trigger is updated
@form     The encapsulating form of the PPR trigger is updated
@none     PPR does not change the DOM with AJAX response
@all      The whole document is updated as in non-AJAX requests

We can also update a component that resides in a different naming container from the component that triggers the update. In order to achieve this, we need to specify the absolute component identifier of the component that needs to be updated. An example for this could be the following:
<h:form id="form1">
  <p:commandButton update=":form2:display" action="#{basicPPRController.updateValue}" value="Update" />
</h:form>
<h:form id="form2">
  <h:outputText id="display" value="#{basicPPRController.value}"/>
</h:form>

public String updateValue() {
  value = String.valueOf(System.currentTimeMillis());
  return null;
}

PrimeFaces also provides partial processing, which executes the JSF lifecycle phases (Apply Request Values, Process Validations, Update Model, and Invoke Application) for determined components with the process attribute.
This provides the ability to do group validation on the JSF pages easily. Group-validation needs mostly arise in situations where different values need to be validated in the same form, depending on the action that gets executed. By grouping components for validation, errors that would otherwise arise from other components when the page is submitted can be avoided easily. Components like commandButton, commandLink, autoComplete, fileUpload, and many others provide this attribute to process part of the view instead of the whole view.

Partial processing can become very handy when a drop-down list needs to be populated upon a selection on another drop down and there is an input field on the page with the required attribute set to true. This approach also makes immediate subforms and regions obsolete, and it prevents submission of the whole page, which results in lightweight requests. Without partially processing the view for the drop downs, a selection on one of the drop downs would result in a validation error on the required field. An example for this is shown in the following code snippet:
<h:outputText value="Country: " />
<h:selectOneMenu id="countries" value="#{partialProcessingController.country}">
  <f:selectItems value="#{partialProcessingController.countries}" />
  <p:ajax listener="#{partialProcessingController.handleCountryChange}" event="change" update="cities" process="@this"/>
</h:selectOneMenu>
<h:outputText value="City: " />
<h:selectOneMenu id="cities" value="#{partialProcessingController.city}">
  <f:selectItems value="#{partialProcessingController.cities}" />
</h:selectOneMenu>
<h:outputText value="Email: " />
<h:inputText value="#{partialProcessingController.email}" required="true" />
With this partial processing mechanism, when a user changes the country, the cities of that country will be populated in the drop down regardless of whether any input exists for the email field.

How it works...
As seen in the earlier example for updating a component in a different naming container, <p:commandButton> updates the <h:outputText> component that has the ID display and the absolute client ID :form2:display, which is the search expression for the findComponent method. An absolute client ID starts with the separator character of the naming container, which is : by default. <h:form>, <h:dataTable>, and composite JSF components, along with <p:tabView>, <p:accordionPanel>, <p:dataTable>, <p:dataGrid>, <p:dataList>, <p:carousel>, <p:galleria>, <p:ring>, <p:sheet>, and <p:subTable>, are the components that implement the NamingContainer interface. The findComponent method, which is described at http://docs.oracle.com/javaee/6/api/javax/faces/component/UIComponent.html, is used by both the JSF core implementation and PrimeFaces.

There's more...
JSF uses : (a colon) as the separator for the NamingContainer interface. The client IDs that are rendered in the source page will look like :id1:id2:id3. If needed, the separator can be changed for the web application to something other than the colon with a context parameter in the web.xml file of the web application, as follows:
<context-param>
  <param-name>javax.faces.SEPARATOR_CHAR</param-name>
  <param-value>_</param-value>
</context-param>
It's also possible to escape the : character, if needed, in CSS files with the \ character, as \:. The problem that might occur with the colon is that it's a reserved character for CSS and for JavaScript frameworks like jQuery, so it might need to be escaped.
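The recipe shows only the page markup for this example; the backing bean is not listed. As a sketch of what the partialProcessingController behind it might look like (this class is an assumption for illustration, including the no-argument listener method and the sample data), consider:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

// Hypothetical backing bean for the partial-processing example above.
@ManagedBean
@ViewScoped
public class PartialProcessingController {

    private String country;
    private String city;
    private String email;

    private final List<String> countries = Arrays.asList("Turkey", "Germany");
    private List<String> cities = new ArrayList<String>();

    // Sample data only; a real bean would load cities from a service or database.
    private final Map<String, List<String>> citiesByCountry = new HashMap<String, List<String>>();

    public PartialProcessingController() {
        citiesByCountry.put("Turkey", Arrays.asList("Istanbul", "Ankara"));
        citiesByCountry.put("Germany", Arrays.asList("Berlin", "Munich"));
    }

    // Invoked by <p:ajax listener="#{partialProcessingController.handleCountryChange}">.
    // Because process="@this" limits processing to the country menu, the required
    // email field is not validated when this listener fires.
    public void handleCountryChange() {
        List<String> found = citiesByCountry.get(country);
        cities = (found != null) ? found : new ArrayList<String>();
        city = null;
    }

    // Getters and setters required by the EL expressions in the page.
    public String getCountry() { return country; }
    public void setCountry(String country) { this.country = country; }
    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
    public List<String> getCountries() { return countries; }
    public List<String> getCities() { return cities; }
}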
PrimeFaces Cookbook Showcase application This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following: Basic Partial Page Rendering is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/basicPPR.jsf Updating Component in Different Naming Container is available at http://localhost:8080/primefaces-cookbook/views/chapter1/ componentInDifferentNamingContainer.jsf A Partial Processing example at http://localhost:8080/primefacescookbook/ views/chapter1/partialProcessing.jsf Internationalization (i18n) and Localization (L10n) Internationalization (i18n) and Localization (L10n) are two important features that should be provided in the web application's world to make it accessible globally. With Internationalization, we are emphasizing that the web application should support multiple languages; and with Localization, we are stating that the texts, dates, or any other fields should be presented in the form specific to a region. PrimeFaces only provides the English translations. Translations for the other languages should be provided explicitly. In the following sections, you will find the details on how to achieve this. Getting ready For Internationalization, first we need to specify the resource bundle definition under the application tag in faces-config.xml, as follows: <application> <locale-config> <default-locale>en</default-locale> <supported-locale>tr_TR</supported-locale> </locale-config> <resource-bundle> <base-name>messages</base-name> <var>msg</var> </resource-bundle> </application> A resource bundle would be a text file with the .properties suffix that would contain the locale-specific messages. So, the preceding definition states that the resource bundle messages_{localekey}.properties file will reside under classpath and the default value of localekey is en, which is English, and the supported locale is tr_TR, which is Turkish. For projects structured by Maven, the messages_{localekey}.properties file can be created under the src/main/resources project path. How to do it... For showcasing Internationalization, we will broadcast an information message via FacesMessage mechanism that will be displayed in the PrimeFaces growl component. We need two components, the growl itself and a command button, to broadcast the message. <p:growl id="growl" /> <p:commandButton action="#{localizationController.addMessage}" value="Display Message" update="growl" /> The addMessage method of localizationController is as follows: public String addMessage() { addInfoMessage("broadcast.message"); return null; } That uses the addInfoMessage method, which is defined in the static MessageUtil class as follows: public static void addInfoMessage(String str) { FacesContext context = FacesContext.getCurrentInstance(); ResourceBundle bundle = context.getApplication(). getResourceBundle(context, "msg"); String message = bundle.getString(str); FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_INFO, message, "")); } Localization of components, such as calendar and schedule, can be achieved by providing the locale attribute. By default, locale information is retrieved from the view's locale and it can be overridden by a string locale key or the java.util.Locale instance. Components such as calendar and schedule use a shared PrimeFaces.locales property to display labels. 
PrimeFaces only provides English translations, so in order to localize the calendar we need to put corresponding locales into a JavaScript file and include the scripting file to the page. The content for the German locale of the Primefaces.locales property for calendar would be as shown in the following code snippet. For the sake of the recipe, only the German locale definition is given and the Turkish locale definition is omitted. PrimeFaces.locales['de'] = { closeText: 'Schließen', prevText: 'Zurück', nextText: 'Weiter', monthNames: ['Januar', 'Februar', 'März', 'April', 'Mai', 'Juni', 'Juli', 'August', 'September', 'Oktober', 'November', 'Dezember'], monthNamesShort: ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun', 'Jul', 'Aug', 'Sep', 'Okt', 'Nov', 'Dez'], dayNames: ['Sonntag', 'Montag', 'Dienstag', 'Mittwoch', 'Donnerstag', 'Freitag', 'Samstag'], dayNamesShort: ['Son', 'Mon', 'Die', 'Mit', 'Don', 'Fre', 'Sam'], dayNamesMin: ['S', 'M', 'D', 'M ', 'D', 'F ', 'S'], weekHeader: 'Woche', FirstDay: 1, isRTL: false, showMonthAfterYear: false, yearSuffix: '', timeOnlyTitle: 'Nur Zeit', timeText: 'Zeit', hourText: 'Stunde', minuteText: 'Minute', secondText: 'Sekunde', currentText: 'Aktuelles Datum', ampm: false, month: 'Monat', week: 'Woche', day: 'Tag', allDayText: 'Ganzer Tag' }; Definition of the calendar components with the locale attribute would be as follows: <p:calendar showButtonPanel="true" navigator="true" mode="inline" id="enCal"/> <p:calendar locale="tr" showButtonPanel="true" navigator="true" mode="inline" id="trCal"/> <p:calendar locale="de" showButtonPanel="true" navigator="true" mode="inline" id="deCal"/> They will be rendered as follows: How it works... For Internationalization of the Faces message, the addInfoMessage method retrieves the message bundle via the defined variable msg. It then gets the string from the bundle with the given key by invoking the bundle.getString(str) method. Finally, the message is added by creating a new Faces message with severity level FacesMessage.SEVERITY_INFO. There's more... For some components, Localization could be accomplished by providing labels to the components via attributes, such as with p:selectBooleanButton. <p:selectBooleanButton value="#{localizationController.selectedValue}" onLabel="#{msg['booleanButton.onLabel']}" offLabel="#{msg['booleanButton.offLabel']}" /> The msg variable is the resource bundle variable that is defined in the resource bundle definition in Faces configuration file. The English version of the bundle key definitions in the messages_en.properties file that resides under classpath would be as follows: booleanButton.onLabel=Yes booleanButton.offLabel=No PrimeFaces Cookbook Showcase application This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following: Internationalization is available at http://localhost:8080/primefacescookbook/ views/chapter1/internationalization.jsf Localization of the calendar component is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/localization.jsf Localization with resources is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/localizationWithResources. jsf For already translated locales of the calendar, see https://code.google.com/archive/p/primefaces/wikis/PrimeFacesLocales.wiki
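The recipe shows the addMessage() method and the MessageUtil helper, but not the rest of the bean behind #{localizationController}. A possible shape for it, assuming a simple session-scoped managed bean (an illustration, not code from the source), is:

import java.io.Serializable;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;

// Hypothetical localizationController tying the i18n and L10n examples together.
@ManagedBean
@SessionScoped
public class LocalizationController implements Serializable {

    // Bound to <p:selectBooleanButton value="#{localizationController.selectedValue}">,
    // whose onLabel/offLabel come from the message bundle.
    private boolean selectedValue;

    // Adds the bundled "broadcast.message" info message shown in the growl component.
    public String addMessage() {
        MessageUtil.addInfoMessage("broadcast.message");
        return null;
    }

    public boolean isSelectedValue() {
        return selectedValue;
    }

    public void setSelectedValue(boolean selectedValue) {
        this.selectedValue = selectedValue;
    }
}

The resource bundle itself stays exactly as configured in faces-config.xml; the bean only needs property accessors for the EL expressions used in the pages.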

Building a Custom Version of jQuery

Packt
04 Apr 2013
9 min read
(For more resources related to this topic, see here.) Why Is It Awesome? While it's fairly common for someone to say that they use jQuery in every site they build (this is usually the case for me), I would expect it much rarer for someone to say that they use the exact same jQuery methods in every project, or that they use a very large selection of the available methods and functionality that it offers. The need to reduce file size as aggressively as possible to cater for the mobile space, and the rise of micro-frameworks such as Zepto for example, which delivers a lot of jQuery functionality at a much-reduced size, have pushed jQuery to provide a way of slimming down. As of jQuery 1.8, we can now use the official jQuery build tool to build our own custom version of the library, allowing us to minimize the size of the library by choosing only the functionality we require. For more information on Zepto, see http://zeptojs.com/. Your Hotshot Objectives To successfully conclude this project we'll need to complete the following tasks: Installing Git and Make Installing Node.js Installing Grunt.js Configuring the environment Building a custom jQuery Running unit tests with QUnit Mission Checklist We'll be using Node.js to run the build tool, so you should download a copy of this now. The Node website (http://nodejs.org/download/) has an installer for both 64 and 32-bit versions of Windows, as well as a Mac OS X installer. It also features binaries for Mac OS X, Linux, and SunOS. Download and install the appropriate version for your operating system. The official build tool for jQuery (although it can do much more besides build jQuery) is Grunt.js, written by Ben Alman. We don't need to download this as it's installed via the Node Package Manager (NPM). We'll look at this process in detail later in the project. For more information on Grunt.js, visit the official site at http://gruntjs.com. First of all we need to set up a local working area. We can create a folder in our root project folder called jquery-source. This is where we'll store the jQuery source when we clone the jQuery Github repository, and also where Grunt will build the final version of jQuery. Installing Git and Make The first thing we need to install is Git, which we'll need in order to clone the jQuery source from the Github repository to our own computer so that we can work with the source files. We also need something called Make, but we only need to actually install this on Mac platforms because it gets installed automatically on Windows when Git is installed. As the file we'll create will be for our own use only and we don't want to contribute to jQuery by pushing code back to the repository, we don't need to worry about having an account set up on Github. Prepare for Lift Off First we'll need to download the relevant installers for both Git and Make. Different applications are required depending on whether you are developing on Mac or Windows platforms. Mac developers Mac users can visit http://git-scm.com/download/mac for Git. Next we can install Make. Mac developers can get this by installing XCode. This can be downloaded from https://developer.apple.com/xcode/. Windows developers Windows users can install msysgit, which can be obtained by visiting https://code.google.com/p/msysgit/downloads/detail?name=msysGit-fullinstall-1.8.0-preview20121022.exe. Engage Thrusters Once the installers have downloaded, run them to install the applications. 
The defaults selected by the installers should be fine for the purposes of this mission. First we should install Git (or msysgit on Windows). Mac developers Mac developers simply need to run the installer for Git to install it to the system. Once this is complete we can then install XCode. All we need to do is run the installer and Make, along with some other tools, will be installed and ready. Windows developers Once the full installer for msysgit has finished, you should be left with a Command Line Interface (CLI) window (entitled MINGW32) indicating that everything is ready for you to hack. However, before we can hack, we need to compile Git. To do this we need to run a file called initialize.sh. In the MINGW32 window, cd into the msysgit directory. If you allowed this to install to the default location, you can use the following command: cd C:msysgitmsysgitsharemsysGit Once we are in the correct directory, we can then run initialize.sh in the CLI. Like the installation, this process can take some time, so be patient and wait for the CLI to return a flashing cursor at the $ character. An Internet connection is required to compile Git in this way. Windows developers will need to ensure that the Git.exe and MINGW resources can be reached via the system's PATH variable. This can be updated by going to Control Panel | System | Advanced system settings | Environment variables. In the bottom section of the dialog box, double-click on Path and add the following two paths to the git.exe file in the bin folder, which is itself in a directory inside the msysgit folder wherever you chose to install it: ;C:msysgitmsysgitbin; C:msysgitmsysgitmingwbin; Update the path with caution! You must ensure that the path to Git.exe is separated from the rest of the Path variables with a semicolon. If the path does not end with a semicolon before adding the path to Git.exe, make sure you add one. Incorrectly updating your path variables can result in system instability and/or loss of data. I have shown a semicolon at the start of the previous code sample to illustrate this. Once the path has been updated, we should then be able to use a regular command prompt to run Git commands. Post-installation tasks In a terminal or Windows Command Prompt (I'll refer to both simply as the CLI from this point on for conciseness) window, we should first cd into the jquery-source folder we created at the start of the project. Depending on where your local development folder is, this command will look something like the following: cd c:jquery-hotshotsjquery-source To clone the jQuery repository, enter the following command in the CLI: git clone git://github.com/jquery/jquery.git Again, we should see some activity on the CLI before it returns to a flashing cursor to indicate that the process is complete. Depending on the platform you are developing on, you should see something like the following screenshot: Objective Complete — Mini Debriefing We installed Git and then used it to clone the jQuery Github repository in to this directory in order to get a fresh version of the jQuery source. If you're used to SVN, cloning a repository is conceptually the same as checking out a repository. Again, the syntax of these commands is very similar on Mac and Windows systems, but notice how we need to escape the backslashes in the path when using Windows. Once this is complete, we should end up with a new directory inside our jquery-source directory called jquery. 
If we go into this directory, there are some more directories including: build: This directory is used by the build tool to build jQuery speed: This directory contains benchmarking tests src: This directory contains all of the individual source files that are compiled together to make jQuery Test: This directory contains all of the unit tests for jQuery It also has a range of various files, including: Licensing and documentation, including jQuery's authors and a guide to contributing to the project Git-specific files such as .gitignore and .gitmodules Grunt-specific files such as Gruntfile.js JSHint for testing and code-quality purposes Make is not something we need to use directly, but Grunt will use it when we build the jQuery source, so it needs to be present on our system. Installing Node.js Node.js is a platform for running server-side applications built with JavaScript. It is trivial to create a web-server instance, for example, that receives and responds to HTTP requests using callback functions. Server-side JS isn't exactly the same as its more familiar client-side counterpart, but you'll find a lot of similarities in the same comfortable syntax that you know and love. We won't actually be writing any server-side JavaScript in this project – all we need Node for is to run the Grunt.js build tool. Prepare for Lift Off To get the appropriate installer for your platform, visit the Node.js website at http://nodejs.org and hit the download button. The correct installer for your platform, if supported, should be auto-detected. Engage Thrusters Installing Node is a straightforward procedure on either the Windows or Mac platforms as there are installers for both. This task will include running the installer, which is obviously simple, and testing the installation using a CLI. Installing Node is a straightforward procedure on either the Windows or Mac platforms as there are installers for both. This task will include running the installer, which is obviously simple, and testing the installation using a CLI. On Windows or Mac platforms, run the installer and it will guide you through the installation process. I have found that the default options are fine in most cases. As before, we also need to update the Path variable to include Node and Node's package manager NPM. The paths to these directories will differ between platforms. Mac Mac developers should check that the $PATH variable contains a reference to usr/local/bin. I found that this was already in my $PATH, but if you do find that it's not present, you should add it. For more information on updating your $PATH variable, see http://www.tech-recipes.com/rx/2621/os_x_change_path_environment_variable/. Windows Windows developers will need to update the Path variable, in the same way as before, with the following paths: C:Program Filesnodejs; C:UsersDesktopAppDataRoamingnpm; Windows developers may find that the Path variable already contains an entry for Node so may just need to add the path to NPM. Objective Complete — Mini Debriefing Once Node is installed, we will need to use a CLI to interact with it. To verify Node has installed correctly, type the following command into the CLI: node -v The CLI should report the version in use, as follows: We can test NPM in the same way by running the following command: npm -v