
How-To Tutorials - Web Development


Getting Started with Adobe Premiere Pro CS6 Hotshot

Packt
11 Jul 2013
14 min read
(For more resources related to this topic, see here.)

Getting the story right!

This is basic housekeeping, and ignoring it only makes your editing life more frustrating. So take a deep breath, think of calm blue oceans, and begin by getting this project organized. First you need to set up the Timeline correctly, and then you will create a short storyboard of the interview; again, you will do this by focusing on the beginning, middle, and end of the story. Always start this way, as a good story needs these elements to make sense. For frame-accurate editing it's advisable to use the keyboard as much as possible, although some actions will need to be performed with the mouse. Towards the end of this task you will cover some new ground as you add and expand Timeline tracks in preparation for the tasks ahead.

Prepare for Lift Off

Once you have completed all the preparations detailed in the Mission Checklist section, you are ready to go. Launch Premiere Pro CS6 in the usual way and then proceed to the first task.

Engage Thrusters

First you will open the project template, save it as a new file, and then create a three-clip sequence; the rough assembly of your story. Once done, perform the following steps:

1. When the Recent Projects splash screen appears, select Hotshots Template – Montage. Wait for the project to finish loading and save it as Hotshots – Interview Project.
2. Close any sequences open on the Timeline, and select the Editing Optimized workspace.
3. Select the Project panel and open the Video bin without creating a separate window.

   If you would like Premiere Pro to always open a bin without creating a separate window, select Edit | Preferences | General from the menu. When the General Options window displays, locate the Bins option area and change the Double-Click option to Open in Place.

4. Import all eight video files from inside the Project 3 folder into the Video bin.
5. Create a new sequence; pick any settings at random, as you will correct them in the next step. Rename the sequence Project 3.
6. Match the Timeline settings with any clip from the Video bin, and then delete the clip from the Timeline.
7. Set the Project panel as the active panel and switch to List View if it is not already displayed.
8. Create the basic elements of a short story for this scene using only three of the available clips in the Video bin. To do this, hold down the Ctrl or command key and click on the following clips. Make sure you click on them in the same order as they are presented here: Intro_Shot.avi, Two_Shot.avi, Exit_Shot.avi.
9. Ensure the Timeline indicator is at the start of the Timeline, and then click on the Automate to Sequence icon.
10. When the Automate To Sequence window appears, change Ordering to Selection Order and leave Placement as the default (Sequentially). Uncheck the Apply Default Audio Transition, Apply Default Video Transition, and Ignore Audio checkboxes. Click on OK or press Enter on the keyboard to complete this action.
11. Right-click on the Video 1 track and select Add Tracks from the context menu.
12. When the Add Tracks window appears, set the number of video tracks to be added to 2 and the number of audio tracks to be added to 0. Click on OK or press Enter to confirm these changes.
13. Dial open the Audio 1 track (hint – the small triangle next to Audio 1), then expand the track by placing the cursor at the bottom of the Audio 1 area, clicking, and dragging downwards. Stop before the Master audio track disappears below the bottom of the Timeline panel.
The Master audio track is used to control the output of all the audio tracks present on the Timeline; this is especially useful when you come to prepare your timeline for exporting to DVD. The Master audio track also allows you to view the left and right audio channels of your project. More details on the use of the Master audio track can be found in the Premiere Pro reference guide, which can be downloaded from http://helpx.adobe.com/pdf/premiere_pro_reference.pdf.

Make sure the Timeline panel is active and zoom in to show all the clips present (hint – press backslash). You should end this section with a Timeline that looks something like the following screenshot. Save your project (press Ctrl + S or command + S) before moving on to the next task.

Objective Complete - Mini Debriefing

How did you do? Review the shortcuts listed next. Did you remember them all? In this task you should have automatically matched up the Timeline to the clips with one drag-and-drop, plus a delete. You should have then sent three clips from the Project panel to the Timeline using the Automate to Sequence function. Finally, you should have added two new video tracks and expanded the Audio 1 track.

Keyboard shortcuts covered in this task are as follows:

- \ (backslash): Zoom the Timeline to show all populated clips
- Ctrl or command + double-click: Open a bin without creating a separate Project panel (also see the tip after step 3 in the Engage Thrusters section)
- Ctrl or command + N: Create a new sequence
- Ctrl or command + \ (backslash): Create a new bin in the Project panel
- Ctrl or command + I: Open the Import window
- Shift + 1: Set the Project panel as active
- Shift + 3: Set the Timeline as active

Classified Intel

In this project, the Automate to Timeline function is used to create a rough assembly of three clips. These are placed on the Timeline in the order in which you clicked on them in the project bin. This is known as the selection order, and it allows the Automate to Timeline function to ignore the clips' relative locations in the project bin. This is not a practical workflow if you have too many clips in your Project panel (how would you remember the selection order of twenty clips?). However, for a small number of clips, this is a practical workflow to quickly and easily send a rough draft of your story to the Timeline in just a few clicks. If you remember nothing else from this book, always remember how to correctly use Automate To Timeline!

Extracting audio fat

Raw material from every interview ever filmed will have lulls, pauses, and some stuttering. People aren't perfect, and time spent trying to get lines and timing just right can lead to an unfortunate waste of filming time. As this performance is not live, you, the all-seeing editor, have the power to cut those distracting lulls and pauses, keeping the pace on beat and the audience's attention on track. In this task you will move through the Timeline, cutting out some of the audio fat using Premiere Pro's Extract function, and to keep this frame accurate, you will use as many keyboard shortcuts as possible.

Engage Thrusters

You will now use the Extract function to remove "dead" audio areas from the Timeline. Perform the following steps:

1. Set the Timeline panel as active, then play the timeline back by pressing the L key once. Make a mental note of the silences that occur in the first clip (Intro_Shot.avi).
2. Return the Timeline indicator to the start of the Timeline using the Home key.
3. Zoom in on the Timeline by pressing the + (plus) key in the main keyboard area.
   Do this until your Timeline looks something like the screenshot just after the following tip:

   To zoom in and out of the Timeline, use the + (plus) and - (minus) keys in the main keyboard area, not the ones in the number pad area. Pressing the plus or minus key in the number pad area allows you to enter an exact number of frames into whichever tool is currently active.

4. You should be able to clearly see the first area of silence, starting at around 06;09 on the Timeline. Use the J, K, and L keyboard shortcuts to place the Timeline indicator at this point.
5. Press the I key to set an In point here, then move the Timeline indicator to the end of the silence (around 08;17), and press the O key to set an Out point.
6. Press the # (hash) key on your keyboard to remove the marked section of silence using Premiere Pro's Extract function.

Important information on Sync Locking tracks

The above step will only work if you have the Sync Lock icons toggled on for both the Video 1 and Audio 1 tracks. The Sync Lock icon controls which Timeline tracks will be altered when using a function such as Extract. For example, if the Sync Lock icon were toggled off for the Audio 1 track, then only the video would be extracted, which is counterproductive to what you are trying to achieve in this task! By default, each new project should open with the Sync Lock icon toggled on for all video and audio tracks that already exist on the Timeline, and for those added at a later point in the project. More information on Sync Lock can be found in the Premiere Pro reference guide (tinyurl.com/cz5fvh9).

7. Repeat steps 5 and 6 to remove silences from the following Timeline areas (you should judge these points for yourself rather than slavishly following the suggestions given next):
   i. Set an In point at 07;11 and an Out point at 08;10.
   ii. Press # (hash).
   iii. Set an In point at 11;05 and an Out point at 12;13.
   iv. Press # (hash).
8. Play back the Timeline to make sure you haven't extracted away too much audio and clipped the end of a sentence. If you have, use the Trim tool to restore the full sentence.
9. You may have spotted other silences on the Timeline; for the moment, leave them alone. You will deal with these using other methods later in this project.
10. Save the project before moving on to the next section.

Objective Complete - Mini Debriefing

At the end of this section you should have successfully removed three areas of silence from the Intro_Shot.avi clip. You did this using the Extract function, an elegant way of removing unwanted areas from your clips. You may also have refreshed your working knowledge of the Trim tool. If this still feels a little alien to you, don't worry; you will have a chance to practice trimming skills later in this project.

Classified Intel

Extract is another cunningly simple function that does exactly what it says: it extracts a section of footage from the Timeline, and then closes the gap created by this action. In one step it replicates a razor cut and a ripple delete.

Creating a J-cut (away)

One of the most common video techniques used in interviews and documentaries (not to mention a number of films) is called a J-cut. This describes cutting away some of the video while leaving the audio beneath intact. The deleted video area is then replaced with alternative footage. This creates a voice-over effect that allows for a seamless transfer between the alternative viewpoints and the original speaker.
In this task you will create a J-cut by replacing the video at the start of Intro_Shot.avi, leaving the voice of the newsperson and replacing his image with cutaway shots of what he is describing. You will make full use of four-point edits.

Engage Thrusters

Create J-cuts and cutaway shots using workflows you should now be familiar with. Perform the following steps to do so:

1. Send the Cutaways_1.avi clip from the Project panel to the Source Monitor.
2. In the Source Monitor, create an In point at 00;00 and an Out point just before the shot changes (around 04;24).
3. Switch to the Timeline and send the Timeline indicator to the start of the Timeline (00;00). Create an In point here.
4. Use a keyboard shortcut of your choice to identify the point just before the newsperson mentions the "local village shop" (hint – roughly at 06;09). Create an Out point here.
5. You want to create a J-cut, which means protecting the audio track that is already on the Timeline. To do this, click once on the Audio 1 track header so it turns dark gray.
6. Switch back to the Source Monitor and send the marked Cutaways_1.avi clip to the Timeline using the Overwrite function (hint – press the '.' (period) key).
7. When the Fit Clip window appears, select Change Clip Speed (Fit to Fill), and click on OK or press Enter on the keyboard.

The village scene cutaway shot should now appear on Video 1, but Audio 1 should retain the newsperson's dialog. The inserted village scene clip will also have slowed slightly to match what's being said by the newsperson.

8. Repeat steps 2 to 7 to place the Cutaways_1.avi shots of the village shop and the village church, and then the pub shot, on the Timeline to match the newsperson's dialog. The following are some suggestions on times, but try to do this step first of all without looking too closely at them:
   - For the village shop cutaway, set the Source Monitor In point at 05;00 and the Out point at 09;24. Set the Timeline In point at 06;10 and the Out point at 07;13. Switch back to the Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill.
   - For the village church cutaway, set the Source Monitor In point at 10;00 and the Out point at 14;24. Set the Timeline In point at 07;14 and the Out point at 09;03. Switch back to the Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill.
   - For the pub cutaway, send Reconstruction_1.avi to the Source Monitor. Set the Source Monitor In point at 04;11 and the Out point at 04;17. Set the Timeline In point at 09;04 and the Out point at 12;00. Switch back to the Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill.

The last cutaway shot here is part of the reconstruction reel, and has been used because your camera person was unable (or forgot) to film a cutaway shot of the pub. This does sometimes happen, and then it's down to you, the editor in charge, to get the piece on air with as few errors as possible. To do this you may find yourself scavenging footage from any of the other clips. In this case you have used just seven frames of Reconstruction_1.avi, but using the Premiere Pro feature Fit to Fill, you are able to match the clip to the duration of the dialog, saving your camera person from a production meeting dressing down!

9. Review your edit decisions and use the Trim tool or the Undo command to alter edit points that you feel need adjustment.
As always, being an editor is about experimentation, so don't be afraid to try something out of the box; you never know where it might lead.

10. Once you are happy with your edit decisions, render any clips on the Timeline that display a red line above them.

You should end up with a Timeline that looks something like the following screenshot; save your project before moving on to the next section.

Objective Complete - Mini Debriefing

In this task you have learned how to piece together cutaway shots to match the voice-over, creating an effective J-cut, as seen in the way the dialog seamlessly blends between the pub cutaway shot and the news reporter finishing his last sentence. You also learned how to scavenge source material from other reels in order to find the necessary shot to match the dialog.

Classified Intel

The last set of time suggestions given in this task allows the pub cutaway shot to run over the top of the newsperson saying "And now, much to the surprise…". This is an editorial decision you can make on whether or not this cutaway should run over the dialog. It is simply a matter of taste, but you are the editor, and the final decision is yours!

In this article, we learned how to extract audio fat and create a J-cut.


Making Your Store Look Amazing

Packt
10 Jul 2013
6 min read
(For more resources related to this topic, see here.)

Looks are everything on the web. If your store doesn't look enticing and professional to your customers, then everything else is a waste. This article looks at how to make your VirtueMart store look stunning.

There are many different approaches to creating a hot-looking store. The one that is best for you or your client will depend upon your budget and your skill set. The sections in this article cater to all budgets and skill sets. For example, we will cover the very simple task of finding and installing a free Joomla! template, or installing a VirtueMart theme. Then we will look at the pros and cons of using two different professional frameworks, namely Warp and Gantry. In the middle of all this, we will also look at the stunningly versatile Artisteer design software, which won't quite give you the perfect professional job but does a very fine job of letting you choose just about every aspect of your design without any CSS/coding skills.

Removing the Joomla! branding at the footer

With each version of Joomla! and VirtueMart being better than the last in terms of looks and performance, it is not unheard of to launch your store with the default looks of Joomla! and VirtueMart. The least you will probably want to do is remove the Powered by Joomla!® link at the footer of your store. This will make your store appear entirely your own, and perhaps have a minor benefit to SEO as well by removing the outbound link.

Getting ready

Log in to your Joomla! control panel. This section was tested using the Beez_20 template, but should work on any template where the same message appears. We will also be using the Firefox web browser's search function, but again, this is almost identical in other browsers. Identify the message to be removed on the frontend of your site, as shown in the following screenshot:

How to do it...

This is going to be nice and easy, so let's get started and perform the following steps:

1. Navigate to Extensions | Template Manager from the main Joomla! drop-down menu, as shown in the following screenshot:
2. Now click on the Templates link (it is the one next door to the Styles link), as shown in the following screenshot:
3. Scroll down until you see Beez_20 Details and Files, and click on it, as shown in the following screenshot:
4. Now scroll down and click on Edit main page template.
5. Next, press Ctrl + F on your keyboard to bring up the Firefox search bar and enter <div id="footer"> as your search term. Firefox will present you with the following code:
6. Delete everything between <p> and </p>, both tags inclusive.
7. Click on Save & Close.

How it works...

Check your Joomla! home page. We now have a nice, clean, and empty footer. We can add Joomla! and VirtueMart modules, or just leave it empty.

Installing a VirtueMart template

In this section we will look at how to install a theme to make your store look great with a couple of clicks. There are a few things to consider first. Is your website just a store? That is, are all your pages going to be VirtueMart pages? If the answer is yes, then this is definitely the section for you. Alternatively, you might just have a few shop pages in amongst an extensive Joomla!-based content site. If this is the case, then you might be better off installing a Joomla! template and then setting VirtueMart to use that. If this describes your situation, then the next section, Installing a Joomla! template, is more appropriate for you. And there is a third option as well: you have content pages and a large number of VirtueMart pages.
In this situation some experimentation and planning is required. You will either need to choose a Joomla! template that you are happy with for everything, or a Joomla! template and a VirtueMart theme that look good together. Or you could use two templates. This last scenario is covered in the Creating and installing a template with Artisteer design software section.

Getting ready

Find a template, either free or paid, and download the files from the template provider's site (they will be in the form of a single compressed archive) to your computer.

How to do it...

Installing a VirtueMart template has never been as easy as it is in VirtueMart 2. Perform the following steps:

1. Navigate to Extensions | Extension Manager from the top Joomla! menu.
2. Click on the Browse... button in the Upload Package File area, then find and select your template file, as shown in the following screenshot:
3. Click on the Upload & Install button and you are done!

How it works...

The VirtueMart template is now installed. Take a look at your shiny new store.

Installing a Joomla! template

As there is clearly something of a supply problem when it comes to VirtueMart-specific free templates, this section will look at installing a regular Joomla! template and using it in your VirtueMart store. Installing a Joomla! template is a very easy thing to do, but if you have never done it before, read on.

Getting ready

Check the resources appendix for a choice of places to get free and paid templates. Download your chosen template to your desktop; it should be in the form of a ZIP file. Log in to your Joomla! admin area and read on.

How to do it...

This simple section is in two steps: first we upload the template, then we set it as the active template.

1. Select Extensions | Extension Manager from the top Joomla! menu.
2. Click on the Browse... button in the Upload Package File area, then find and select your template file, as shown in the following screenshot:
3. Click on the Upload & Install button.
4. Now select Extensions | Template Manager.
5. Click on the checkbox of the template you just installed, and then click on Make Default.

How it works...

What we did was install the template through the usual Joomla! installation mechanism, and once the template was installed, we simply told Joomla! to use it. That's it. You can now go and assign all your modules to your new template.


Understanding Express Routes

Packt
10 Jul 2013
10 min read
(For more resources related to this topic, see here.)

What are Routes?

Routes are URL schemas that describe the interfaces for making requests to your web app. Combining an HTTP request method (a.k.a. HTTP verb) and a path pattern, you define URLs in your app. Each route has an associated route handler, which does the job of performing any action in the app and sending the HTTP response.

Routes are defined using an HTTP verb and a path pattern. Any request to the server that matches a route definition is routed to the associated route handler. Route handlers are middleware functions, which can send the HTTP response or pass on the request to the next middleware in line. They may be defined in the app file or loaded via a Node module.

A quick introduction to HTTP verbs

The HTTP protocol recommends various methods of making requests to a web server. These methods are known as HTTP verbs. You may already be familiar with the GET and the POST methods; there are more of them, about which you will learn in a short while. Express, by default, supports the following HTTP request methods, which allow us to define flexible and powerful routes in the app: GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS, CONNECT, PATCH, M-SEARCH, NOTIFY, SUBSCRIBE, and UNSUBSCRIBE.

GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS, CONNECT, and PATCH are part of the Hyper Text Transfer Protocol (HTTP) specification as drafted by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). M-SEARCH, NOTIFY, SUBSCRIBE, and UNSUBSCRIBE are specified by the UPnP Forum. There are some obscure HTTP verbs such as LINK, UNLINK, and PURGE, which are currently not supported by Express and the underlying Node HTTP library.

Routes in Express are defined using methods named after the HTTP verbs, on an instance of an Express application: app.get(), app.post(), app.put(), and so on. We will learn more about defining routes in a later section. Even though a total of 13 HTTP verbs are supported by Express, you need not use all of them in your app. In fact, for a basic website, only GET and POST are likely to be used.

Revisiting the router middleware

This article would be incomplete without revisiting the router middleware. The router middleware is very special middleware. While other Express middlewares are inherited from Connect, router is implemented by Express itself. This middleware is solely responsible for empowering Express with Sinatra-like routes. Connect-inherited middlewares are referred to in Express from the express object (express.favicon(), express.bodyParser(), and so on); the router middleware is referred to from the instance of the Express app (app.router).

To ensure predictability and stability, we should explicitly add router to the middleware stack:

    app.use(app.router);

The router middleware is a middleware system of its own. The route definitions form the middlewares in this stack. Meaning, a matching route can respond with an HTTP response and end the request flow, or pass on the request to the next middleware in line. This fact will become clearer as we work with some examples in the upcoming sections. Though we won't be directly working with the router middleware, it is responsible for running the whole routing show in the background. Without the router middleware, there can be no routes and routing in Express.

Defining routes for the app

We know what routes and route handler callback functions look like.
Here is an example to refresh your memory:

    app.get('/', function(req, res) {
      res.send('welcome');
    });

Routes in Express are created using methods named after HTTP verbs. For instance, in the previous example, we created a route to handle GET requests to the root of the website. You have a corresponding method on the app object for all the HTTP verbs listed earlier. Let's create a sample application to see if all the HTTP verbs are actually available as methods on the app object:

    var http = require('http');
    var express = require('express');

    var app = express();

    // Include the router middleware
    app.use(app.router);

    // GET request to the root URL
    app.get('/', function(req, res) {
      res.send('/ GET OK');
    });

    // POST request to the root URL
    app.post('/', function(req, res) {
      res.send('/ POST OK');
    });

    // PUT request to the root URL
    app.put('/', function(req, res) {
      res.send('/ PUT OK');
    });

    // PATCH request to the root URL
    app.patch('/', function(req, res) {
      res.send('/ PATCH OK');
    });

    // DELETE request to the root URL
    app.delete('/', function(req, res) {
      res.send('/ DELETE OK');
    });

    // OPTIONS request to the root URL
    app.options('/', function(req, res) {
      res.send('/ OPTIONS OK');
    });

    // M-SEARCH request to the root URL
    app['m-search']('/', function(req, res) {
      res.send('/ M-SEARCH OK');
    });

    // NOTIFY request to the root URL
    app.notify('/', function(req, res) {
      res.send('/ NOTIFY OK');
    });

    // SUBSCRIBE request to the root URL
    app.subscribe('/', function(req, res) {
      res.send('/ SUBSCRIBE OK');
    });

    // UNSUBSCRIBE request to the root URL
    app.unsubscribe('/', function(req, res) {
      res.send('/ UNSUBSCRIBE OK');
    });

    // Start the server
    http.createServer(app).listen(3000, function() {
      console.log('App started');
    });

We did not include the HEAD method in this example, because it is best left for the underlying HTTP API to handle, which it already does. You can always do it if you want to, but it is not recommended to mess with it, because the protocol will be broken unless you implement it as specified. The browser address bar isn't capable of making any type of request except GET requests. To test these routes we will have to use HTML forms or specialized tools. Let's use Postman, a Google Chrome plugin for making customized requests to the server.

We learned that route definition methods are based on HTTP verbs. Actually, that's not completely true; there is a method called app.all() that is not based on an HTTP verb. It is an Express-specific method for listening to requests to a route using any request method:

    app.all('/', function(req, res, next) {
      res.set('X-Catch-All', 'true');
      next();
    });

Place this route at the top of the route definitions in the previous example. Restart the server and load the home page. Using a browser debugger tool, you can examine the HTTP response header added to all the requests made to the home page, as shown in the following screenshot. Something similar can be achieved using a middleware, but the app.all() method makes it a lot easier when the requirement is route-specific.

Route identifiers

So far we have been dealing exclusively with the root URL (/) of the app. Let's find out how to define routes for other parts of the app. Routes are defined only for the request path; GET query parameters are not and cannot be included in route definitions. Route identifiers can be strings or regular expression objects. String-based routes are created by passing a string pattern as the first argument of the routing method. They support a limited pattern-matching capability.
The following example demonstrates how to create string-based routes:

    // Will match /abcd
    app.get('/abcd', function(req, res) {
      res.send('abcd');
    });

    // Will match /acd (and /abcd)
    app.get('/ab?cd', function(req, res) {
      res.send('ab?cd');
    });

    // Will match /abbcd
    app.get('/ab+cd', function(req, res) {
      res.send('ab+cd');
    });

    // Will match /abxyzcd
    app.get('/ab*cd', function(req, res) {
      res.send('ab*cd');
    });

    // Will match /abe and /abcde
    app.get('/ab(cd)?e', function(req, res) {
      res.send('ab(cd)?e');
    });

The characters ?, +, *, and () are subsets of their regular expression counterparts. The hyphen (-) and the dot (.) are interpreted literally by string-based route identifiers.

There is another set of string-based route identifiers, which is used to specify named placeholders in the request path. Take a look at the following example:

    app.get('/user/:id', function(req, res) {
      res.send('user id: ' + req.params.id);
    });

    app.get('/country/:country/state/:state', function(req, res) {
      res.send(req.params.country + ', ' + req.params.state);
    });

The value of the named placeholder is available in the req.params object, in a property with the same name. Named placeholders can also be used with special characters for interesting and useful effects, as shown here:

    app.get('/route/:from-:to', function(req, res) {
      res.send(req.params.from + ' to ' + req.params.to);
    });

    app.get('/file/:name.:ext', function(req, res) {
      res.send(req.params.name + '.' + req.params.ext.toLowerCase());
    });

The pattern-matching capability of routes can also be used with named placeholders. In the following example, we define a route that makes the format parameter optional:

    app.get('/feed/:format?', function(req, res) {
      if (req.params.format) {
        res.send('format: ' + req.params.format);
      } else {
        res.send('default format');
      }
    });

Routes can be defined as regular expressions too. While not the most straightforward approach, regular expression routes help you create very flexible and powerful route patterns. Regular expression routes are defined by passing a regular expression object as the first parameter to the routing method. Do not quote the regular expression object, or else you will get unexpected results.

Using regular expressions to create routes is best understood by taking a look at some examples. The following route will match pineapple, redapple, redaple, and aaple, but not apple or apples:

    app.get(/.+app?le$/, function(req, res) {
      res.send('/.+ap?le$/');
    });

The following route will match anything with an a in the route name:

    app.get(/a/, function(req, res) {
      res.send('/a/');
    });

You will mostly be using string-based routes in a general web app. Use regular expression-based routes only when absolutely necessary; while powerful, they can often be hard to debug and maintain.

Order of route precedence

Like in any middleware system, the route that is defined first takes precedence over other matching routes. So the ordering of routes is crucial to the behavior of an app. Let's review this fact via some examples. In the following case, http://localhost:3000/abcd will always print "abcd", even though the next route also matches the pattern:

    app.get('/abcd', function(req, res) {
      res.send('abcd');
    });

    app.get('/abc*', function(req, res) {
      res.send('abc*');
    });

Reversing the order will make it print "abc*":

    app.get('/abc*', function(req, res) {
      res.send('abc*');
    });

    app.get('/abcd', function(req, res) {
      res.send('abcd');
    });

The earlier matching route need not always gobble up the request.
We can make it pass on the request to the next handler, if we want to. In the following example, even though the order remains the same, it will print "abcd" this time, with a little modification in the code. Route handler functions accept a third parameter, commonly named next, which refers to the next middleware in line. We will learn more about it in the next section:

    app.get('/abc*', function(req, res, next) {
      // If the request path is /abcd, don't handle it
      if (req.path == '/abcd') {
        next();
      } else {
        res.send('abc*');
      }
    });

    app.get('/abcd', function(req, res) {
      res.send('abcd');
    });

So bear in mind that the order of route definition is very important in Express. Forgetting this will cause your app to behave unpredictably. We will learn more about this behavior in the examples in the next section.


Improving the Snake Game

Packt
08 Jul 2013
41 min read
The game

Two new features were added to this second version of the game. First, we now keep track of the highest score achieved by a player, saving it through local storage. Even if the player closes the browser application, or turns off the computer, that value will still be safely stored on the player's hard drive, and will be loaded when the game starts again. Second, we use session storage to save the game state every time the player eats a fruit in the game, and whenever the player kills the snake.

This is used as an extra touch of awesomeness, where after the player loses, we display a snapshot of all the individual level-ups the player achieved in that game, as well as a snapshot of when the player hit a wall or ran the snake into itself, as shown in the following screenshot. At the end of each game, an image is shown of each moment when the player acquired a level-up, as well as a snapshot of when the player eventually died. These images are created through the canvas API (calling the toDataURL function), and the data that composes each image is saved throughout the game and stored using the web storage API.

With a feature such as this in place, we make the game much more fun, and potentially much more social. Imagine how powerful it would be if the player could post not only his or her high score to their favorite social network website, but also pictures of their game at key moments. Of course, only the foundation of this feature is implemented in this article (in other words, we only take the snapshots of these critical moments in the game). Adding the actual functionality to send that data to a real social network application is left as an exercise for the reader.

A general description and demonstration of each of the APIs used in the game are given in the following sections. For an explanation of how each piece of functionality was incorporated into the final game, look at the code section. For the complete source code for this game, check out the book's page on Packt Publishing's website.

Web messaging

Web messaging allows us to communicate with other HTML document instances, even if they're not in the same domain. For example, suppose our snake game, hosted at http://snake.fun-html5-games.com, is embedded into a social website through an iframe (let's say this social website is hosted at http://www.awesome-html5-games.net). When the player achieves a new high score, we want to post that data from the snake game directly into the host page (the page with the iframe from which the game is loaded). With the web messaging API, this can be done natively, without the need for any server-side scripting whatsoever.

Before web messaging, documents were not allowed to communicate with documents in other domains, mostly because of security. Of course, web applications can still be vulnerable to malicious external applications if we just blindly take messages from any application. However, the web messaging API provides some solid security measures to protect the page receiving the message. For example, we can specify the domains that the message is going to, so that other domains cannot intercept the message. On the receiving end, we can also check the origin from whence the message came, thus ignoring messages from any untrusted domains. Finally, the DOM is never directly exposed through this API, providing yet another layer of security.

How to use it

Similar to web workers, the way in which two or more HTML contexts can communicate through the web messaging API is by registering an event handler for the message event, and sending messages out by using the postMessage function, as sketched below.
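A minimal sketch of that exchange, using the high score scenario and the example domains from above (the payload shape is illustrative, not the book's actual code):

    // Inside the game at http://snake.fun-html5-games.com:
    // post the new high score to the page embedding the game
    window.parent.postMessage({ highScore: 1200 },
        'http://www.awesome-html5-games.net');

    // Inside the host page at http://www.awesome-html5-games.net
    window.addEventListener('message', function(event) {
        // Ignore messages from untrusted origins
        if (event.origin !== 'http://snake.fun-html5-games.com') {
            return;
        }
        console.log('New high score: ' + event.data.highScore);
    }, false);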
The first step to using the web messaging API is to get a reference to some document with whom we wish to communicate. This can be done by getting the contentWindow property of an iframe reference, or by opening a new window and holding on to that reference. The document that holds this reference is called the parent document, since this is where the communication is initiated. Although a child window can communicate with its parent, this can only happen when and for as long as this relationship holds true. In other words, a window cannot communicate with just any window; it needs a reference to it, either through a parent-child relationship, or through a child-parent relationship.

Once the child window has been referenced, the parent can fire messages to its children through the postMessage function. Of course, if the child window hasn't defined a callback function to capture and process the incoming messages, there is little purpose in sending those messages in the first place. Still, the parent has no way of knowing if a child window has defined a callback to process incoming messages, so the best we can do is assume (and hope) that the child window is ready to receive our messages.

The parameters used in the postMessage function are fairly similar to the version used in web workers. That is, any JavaScript value can be sent (numbers, strings, Boolean values, object literals, and arrays, including typed arrays). If a function is sent as the first parameter of postMessage (either directly, or as part of an object), the browser will raise a DATA_CLONE_ERR: DOM Exception 25 error. The second parameter is a string, and represents the domain that we allow our message to be received by. This can be an absolute domain, a forward slash (representing the same origin domain as the document sending the message), or a wildcard character (*), representing any domain. If the message is received by a domain that doesn't match the second parameter in postMessage, the entire message fails.

When receiving the message, the child window first registers a callback on the message event. This function is passed a MessageEvent object, which contains the following attributes:

- event.data: returns the data of the message
- event.origin: returns the origin of the message, for server-sent events and cross-document messaging
- event.lastEventId: returns the last event ID string, for server-sent events
- event.source: returns the WindowProxy of the source window, for cross-document messaging
- event.ports: returns the MessagePort array sent with the message, for cross-document messaging and channel messaging

Source: http://www.w3.org/TR/webmessaging/#messageevent

As an example of the sort of things we could use this feature for in the real world, and in terms of game development, imagine being able to play our snake game, but where the snake moves through a couple of windows. How creative is that?! Of course, in terms of being practical, this may not be the best way to play a game, but I find it hard to argue with the fact that this would indeed be a very unique and engaging presentation of an otherwise common game. With the help of the web messaging API, we can set up a snake game where the snake is not constrained to a single window.

Imagine the possibilities when we combine this clever API with another very powerful HTML5 feature, which just happens to lend itself incredibly well to games: web sockets. By combining web messaging with web sockets, we could play a game of snake not only across multiple windows, but also with multiple players at the same time. Perhaps each player would control the snake when it got inside a given window, and all players could see all windows at the same time, even though they are each using a separate computer. The possibilities are endless, really.

Surprisingly, the code used to set up a multi-window port of snake is incredibly simple. The basic setup is the same: we have a snake that only moves in one direction at a time, and one or more windows where the snake can go. If we store each window in an array, we can calculate which screen the snake needs to be rendered in, given its current position. Finding out which screen the snake is supposed to be in, given its world position, is the trickiest part. For example, imagine that each window is 200 pixels wide. Now, suppose there are three windows opened. Each window's canvas is only 200 pixels wide as well, so when the snake is at position 350, it would be printed too far to the right in all of the canvases. So what we need to do is first determine the total world width (canvas width multiplied by the total number of canvases), calculate which window the snake is in (position divided by canvas width), then convert the position from world space down to canvas space, given the canvas the snake is in.

The parent document drives all of this. It defines the shared structures, spawns a new window whenever a button is clicked (adding that window to the array of frames, so that we can iterate through the array and tell every window where the snake is), and, on every update, converts the snake's position to canvas space and tells each window individually whether or not it should render the information sent to it. Only the window that we calculate the snake is in is told to go ahead and render. A condensed parent-side sketch follows.
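Condensing the parent-side listings this section describes into one hedged sketch (the names frames, canvasWidth, snake, add-window, and child.html are all illustrative, not the book's actual code):

    // Parent document: shared structures
    var frames = [];                // every spawned window
    var canvasWidth = 200;          // each child canvas is 200px wide
    var snake = { x: 0, y: 100 };   // snake position in world space

    // Spawn a new window on demand and remember it
    document.getElementById('add-window').onclick = function() {
        frames.push(window.open('child.html', '_blank',
            'width=' + canvasWidth + ',height=200'));
    };

    // Game tick: move the snake, then tell every window about it
    function update() {
        if (frames.length === 0) {
            return;
        }
        var worldWidth = canvasWidth * frames.length;
        snake.x = (snake.x + 5) % worldWidth;

        var screen = Math.floor(snake.x / canvasWidth); // window index
        var localX = snake.x % canvasWidth;             // canvas space

        for (var i = 0; i < frames.length; i++) {
            frames[i].postMessage({
                x: localX,
                y: snake.y,
                shouldDraw: (i === screen) // only one window draws
            }, '*'); // the children are same-origin, so a specific
                     // origin string could be passed here instead
        }
    }
    setInterval(update, 100);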
That's really all there is to it. The code that makes up all the other windows is the same for all of them. In fact, we only open a bunch of windows pointing to the exact same script. As far as each window is concerned, it is the only window open. All each one does is take a bunch of data through the messaging API, then render that data if the shouldDraw flag is set. Otherwise, it just clears its canvas and sits tight, waiting for further instructions from its parent window.
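A matching sketch of the script every child window loads (again illustrative, assuming a 2D canvas with the id canvas):

    // child.html: the same script runs in every spawned window
    var canvas = document.getElementById('canvas');
    var context = canvas.getContext('2d');

    window.addEventListener('message', function(event) {
        // Always wipe the previous frame
        context.clearRect(0, 0, canvas.width, canvas.height);

        // Only draw if the parent says the snake is in this window
        if (event.data.shouldDraw) {
            context.fillRect(event.data.x, event.data.y, 10, 10);
        }
    }, false);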
Web storage

Before HTML5 came along, the only way web developers had to store data on the client was through cookies. While limited in scope, cookies did what they were meant to, although they had several limitations. For one thing, whenever a cookie was saved to the client, every HTTP request after that included the data for that cookie. This meant that the data was always explicitly exposed, and each of those HTTP requests was heavily laden with extra data that didn't belong there. This is especially inefficient when considering web applications that may need to store relatively large amounts of data. With the new web storage API, these issues have been addressed and satisfied.

There are now three different options for client storage, all of which solve a different problem. Keep in mind, however, that any and all data stored on the client is still exposed to the client in plain text, and is therefore not meant for a secure storage solution. These three storage solutions are session storage, local storage, and the IndexedDB NoSQL data store.

Session storage allows us to store key-value data pairs that persist until the browser is closed (in other words, until the session finishes). Local storage is similar to session storage in every way, except that the duration that the data persists is longer. Even when a session is closed, data stored in local storage still persists; that data is only cleared when the user specifically tells the browser to do so, or when the application itself deletes data from the storage. Finally, IndexedDB is a robust data store that allows us to store custom objects (not including objects that contain functions), then query the database for those objects. Of course, with much robustness comes great complexity. Although having a dedicated NoSQL database built right into the browser may sound exciting, don't be fooled. While using IndexedDB can be a fascinating addition to the world of HTML, it is by no means a trivial task for beginners. Compared to local storage and session storage, IndexedDB has somewhat of a steep learning curve, since it involves mastering some complex database concepts.

As mentioned earlier, the only real difference between local storage and session storage is the fact that session storage clears itself whenever the browser closes down. Besides that, everything about the two is exactly the same. Thus, learning how to use both will be a simple experience, since learning one also means learning the other. However, knowing when to use one over the other might take a bit more thinking on your part. For best results, try to focus on the unique characteristics and needs of your own application before deciding which one to use. More importantly, realize that it is perfectly legal to use both storage systems in the same application. The key is to focus on a unique feature, and decide what storage API best suits those specific needs.

Both the local storage and session storage objects are instances of the class Storage. The interface defined by the Storage class, through which we can interact with these storage objects, is defined as follows (source: Web Storage W3C Candidate Recommendation, December 08, 2011, http://www.w3.org/TR/webstorage/):

- getItem(key): Returns the current value associated with the given key. If the given key does not exist in the list associated with the object, this method returns null.
- setItem(key, value): First checks if a key/value pair with the given key already exists in the list associated with the object. If it does not, then a new key/value pair is added to the list, with the given key and with its value set to value. If the given key does exist in the list, then its value is updated to value. If the new value couldn't be set, the method throws a QuotaExceededError exception. (Setting could fail if, for example, the user has disabled storage for the site, or if the quota has been exceeded.)
- removeItem(key): Causes the key/value pair with the given key to be removed from the list associated with the object, if it exists. If no item with that key exists, the method does nothing.
- clear(): Causes the list associated with the object to be emptied of all key/value pairs, if there are any. If there are none, the method does nothing.
- key(n): Returns the name of the nth key in the list. The order of keys is user-agent defined, but must be consistent within an object so long as the number of keys doesn't change. (Thus, adding or removing a key may change the order of the keys, but merely changing the value of an existing key must not.) If n is greater than or equal to the number of key/value pairs in the object, this method returns null. The supported property names on a Storage object are the keys of each key/value pair currently present in the list associated with the object.
- length: Returns the number of key/value pairs currently present in the list associated with the object.

Local storage

The local storage mechanism is accessed through a property of the global object, which on browsers is the window object. Thus, we can access the storage property explicitly through window.localStorage, or implicitly as simply localStorage. Since only DOMString values are allowed to be stored in localStorage, any value other than a string is converted into a string before being stored. That is, we can't store arrays, objects, functions, and so on in localStorage; only plain JavaScript strings are allowed.

Now, while this might seem like a limitation of the storage API, this is in fact done by design. If your goal is to store complex data types for later use, localStorage wasn't necessarily designed to solve this problem. In those situations, we have a much more powerful and convenient storage solution, which we'll look at soon (that is, IndexedDB). However, there is a way to store complex data (including arrays, typed arrays, objects, and so on) in localStorage. The key lies in the wonderful JSON data format. Modern browsers have the very handy JSON object available in the global scope, where we can access two important functions, namely JSON.stringify and JSON.parse. With these two methods, we can serialize complex data, store it in localStorage, then unserialize the data retrieved from the storage and continue using it in the application; a sketch of this round trip follows.
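A hedged sketch of the string-only behavior and of the JSON round trip (the key names and the player object are illustrative):

    // Strings are stored as-is
    localStorage.setItem('highScore', '1200');

    // Non-string values are converted to strings first: this array
    // is stored (and comes back) as the string "1,2,3"
    localStorage.setItem('scores', [1, 2, 3]);
    console.log(typeof localStorage.getItem('scores')); // "string"

    // Complex data survives if we serialize it ourselves...
    var player = { name: 'Snake', topScores: [10, 250, 1200] };
    localStorage.setItem('player', JSON.stringify(player));

    // ...and parse it on the way back out
    var saved = JSON.parse(localStorage.getItem('player'));
    console.log(saved.topScores[2]); // 1200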
While this is a nice little trick, you will notice what can be a major limitation: JSON.stringify does not serialize functions. Also, if you pay close attention to the way that JSON.stringify works, you will realize that when you serialize an instance of a custom type (say, a Person), the result is a simple object literal with no constructor or prototype information. Still, given that localStorage was never intended to fill the role of object persistence (but rather, simple key-value string pairs), this should be seen as nothing more than a limited, yet very neat trick.

Session storage

Since the sessionStorage interface is identical to that of localStorage, there is no reason to repeat all of the information just described. For a more in-depth discussion about sessionStorage, look at the two previous sections, and replace the word "local" with "session". Everything mentioned above that applies to local storage is also true for session storage; again, the only difference between the two is that any data saved in sessionStorage is erased when the session with the client ends (that is, whenever the browser is shut down).

Some examples of how to use sessionStorage are sketched after this paragraph. First, we attempt to store a value in sessionStorage only if that value doesn't already exist. Remember, when we set a key-value pair to the storage, if that key already exists in the storage, then whatever value was associated with that key will be overwritten; if the key doesn't exist, it gets created automatically. Note that we can also query the sessionStorage object for a specific key using the in operator, which returns a Boolean value. Finally, although we can check the total number of keys in the storage through sessionStorage.length, that by itself may not be very useful if we don't know what all the different keys are. Thankfully, the sessionStorage.key function allows us to get a specific key, through which we can then get a hold of the value stored with that key. Thus, we can query sessionStorage for a key at a given position, and receive the string representing that key; then, with the key, we can get a hold of the value stored with it. Note, however, that the order in which items are stored within the sessionStorage object is totally arbitrary. While some browsers may keep the list of stored items sorted alphabetically by key value, this is clearly specified in the HTML5 spec as a decision to be left up to browser makers.
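A combined sketch of those three sessionStorage idioms (key names are illustrative):

    // Store a value only if its key isn't taken yet
    if (sessionStorage.getItem('sessionStart') === null) {
        sessionStorage.setItem('sessionStart', new Date().toString());
    }

    // The in operator answers "is this key present?" with a Boolean
    console.log('sessionStart' in sessionStorage); // true

    // Walk every key/value pair, in whatever order the browser
    // happens to keep them
    for (var i = 0; i < sessionStorage.length; i++) {
        var key = sessionStorage.key(i);
        console.log(key + ' = ' + sessionStorage.getItem(key));
    }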
As exciting as the web storage API might seem so far, there are cases when our needs might be such that serializing and unserializing data, as we use local or session storage, might not be quite sufficient. For example, imagine we have a few hundred (or perhaps several thousand) similar records stored in local storage (say we're storing enemy description cards that are part of an RPG game). Think about how you would do the following using local storage:

- Retrieve, in alphabetical order, the first five records stored
- Delete all records stored that contain a particular characteristic (such as an enemy that doesn't survive in water, for example)
- Retrieve up to three records stored that contain a particular characteristic (for example, the enemy has a Hit Point score of 42,000 or more)

The point is this: any querying that we may want to make against the data stored in local storage or session storage must be handled by our own code. In other words, we'd be spending a lot of time and effort writing code just to help us get to some data. Let alone the fact that any complex data stored in local or session storage is converted to literal objects, and any and all functions that were once part of those objects are now gone, unless we write even more code to handle some sort of custom unserializing.

In case you have not guessed it by now, IndexedDB solves these and other problems very beautifully. At its heart, IndexedDB is a NoSQL database engine that allows us to store whole objects and index them for fast insertions, deletions, and retrievals. The database system also provides us with a powerful querying engine, so that we can perform very advanced computations on the data that we have persisted.

The following figure shows some of the similarities between IndexedDB and a traditional relational database: in relational databases, data is stored as groups of rows within specific table structures, while in IndexedDB, data is grouped in broadly-defined buckets known as data stores. The architecture of IndexedDB is somewhat similar to the popular relational database systems used in most web development projects today. One core difference is that, whereas a relational database is a collection of related tables, an IndexedDB system is a collection of data stores. While conceptually similar, in practice these two architectures are actually quite different.

Note: If you come from a relational database background, and the concept of databases, tables, columns, and rows makes sense to you, then you're well on your way to becoming an IndexedDB expert. As you'll see, there are some significant distinctions between both systems and methodologies. While you might be tempted to simply replace the words data store with tables, know that the difference between the two concepts extends beyond a name difference.

One key feature of data stores is that they don't have any specific schema associated with them. In relational databases, a table is defined by its very particular structure. Each column is specified ahead of time, when the table is first created, and every record saved in such a table follows the exact same format. In NoSQL databases (of which IndexedDB is one), a data store can hold any object, with whatever format it may have. Essentially, this concept would be the same as having a relational database table with a different schema for each record in it.

IDBFactory

To get started with IndexedDB, we first need to create a database. This is done through an implementation of IDBFactory, which in the browser is the window.indexedDB object. Deleting a database is also done through the indexedDB object, as we'll see soon. In order to open a database (or create one if it doesn't exist yet), we simply call the indexedDB.open method, passing in a database name along with a version number. If no version number is supplied, the default version number of 1 will be used. As you'll soon notice, every method for asynchronous requests in IndexedDB (such as indexedDB.open, for example) will return a request object of type IDBRequest, or an implementation of it. Once we have that request object, we can set up callback functions on its properties, which get executed as the various events related to them are fired; the sketch below shows this pattern.
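A sketch of that open/request pattern (the database name gameDB is illustrative):

    // Open gameDB, or create it if it doesn't exist yet (version 1)
    var request = indexedDB.open('gameDB', 1);

    request.onerror = function(event) {
        console.log('Could not open the database');
    };

    request.onsuccess = function(event) {
        // The IDBDatabase instance we'll run transactions against
        var db = event.target.result;
        console.log('Database ready: ' + db.name);
    };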
IDBOpenDBRequest

As mentioned in the previous section, once we make an asynchronous request to the IndexedDB API, the immediately returned object will be of type IDBRequest. In the particular case of an open request, the object that is returned to us is of type IDBOpenDBRequest. Two events that we might want to listen to on this object were shown in the preceding sketch (onerror and onsuccess). There is also a very important event, wherein we can create an object store, which is the foundation of this storage system. This event is the onupgradeneeded (that is, on upgrade needed) event. It is fired when the database is first created and, as you might expect, whenever the version number used to open the database is higher than the last value used when the database was opened. A sketch of this callback follows.
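A sketch of the upgrade callback, continuing the request from the previous snippet (the store name tasks and the key name myKey are illustrative; the autoIncrement flag is explained just below):

    request.onupgradeneeded = function(event) {
        var db = event.target.result;

        // Every object stored here is uniquely identified by 'myKey';
        // autoIncrement asks the browser to generate those values
        var store = db.createObjectStore('tasks', {
            keyPath: 'myKey',
            autoIncrement: true
        });
    };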
The call to createObjectStore made on the database object takes two parameters. The first is a string representing the name of the object store. This store can be thought of as a table in the world of relational databases, although instead of inserting records into a table's columns, we insert whole objects into the data store. The second parameter is an object defining properties of the data store. One important attribute that this object must define is keyPath, which is what makes each object we store unique; the value assigned to this property can be anything we choose.

Any object that we persist in this data store must have an attribute with the same name as the value assigned to keyPath. In this example, our objects need an attribute called myKey. When a new object is persisted, it is indexed by the value of this property. Any additional object stored with the same value for myKey replaces the old object with that key, so we must provide a unique value for this property every time we want a distinct object persisted. Alternatively, we can let the browser provide a unique value for this key for us. Comparing this concept to a relational database, we can think of the keyPath attribute as the equivalent of a unique ID for a particular record. And just as most relational database systems support some sort of auto increment, so does IndexedDB: to get auto-incremented values, we simply add the autoIncrement flag to the object store properties object when the data store is first created (or upgraded), as the preceding sketch does.

With that flag in place, we can persist an object without having to provide a unique value for the myKey property. As a matter of fact, we don't even need to include this attribute at all on the objects we store; IndexedDB handles it for us. Using Google Chrome's developer tools, we can inspect all of the databases and data stores created for our domain, and see that the primary object key (whatever name we gave it during the creation of the data store) holds IndexedDB-generated values that are incremented over the last value, just as we specified.

With this simple, yet verbose, boilerplate code in place, we can now start using our databases and data stores. From this point on, the actions we take on the database are done on the individual data store objects, which are accessed through the database objects that created them.

IDBTransaction

The last general thing to remember when dealing with IndexedDB is that every interaction we have with a data store is done inside a transaction. If something goes wrong during a transaction, the entire transaction is rolled back and nothing takes effect. Similarly, if the transaction is successful, IndexedDB automatically commits it for us, which is a pretty handy bonus. To use a transaction, we get a reference to our database, then request a transaction for a particular data store. Once we have a reference to the data store, we can perform the various functions it supports, such as putting data into it, reading data from it, updating data, and deleting data. Storing an item is shown in the sketch below.
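A minimal sketch of storing an object, assuming the tasks store created earlier (the function name and the object stored are illustrative):

    function saveTask(db, taskName) {
        try {
            // Request a readwrite transaction against our data store
            var transaction = db.transaction(['tasks'], 'readwrite');
            var store = transaction.objectStore('tasks');

            // put() inserts the object, or replaces one with the same key
            var request = store.put({ task: taskName });

            request.onsuccess = function (event) {
                // event.target.result holds the key the object was saved under
            };
            request.onerror = function (event) {
                // Fired if this particular request fails
            };
        } catch (error) {
            // Any error thrown during the transaction lands here, and
            // execution continues uninterrupted
        }
    }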
To store an item in the data store, we need to follow a couple of steps. Note that if anything goes wrong during the transaction, we simply catch whatever error is thrown by the browser, and execution continues uninterrupted because of the try/catch block. The first step to persisting an object is to start a transaction. This is done by requesting a transaction object from the database we opened earlier. A transaction is always related to a particular data store. Also, when requesting a transaction, we can specify what type of transaction we'd like to start. The possible types of transactions in IndexedDB are as follows:

readwrite: This transaction mode allows objects to be stored into the data store, retrieved from it, updated, and deleted. In other words, readwrite mode allows for full CRUD functionality.

readonly: This transaction mode is similar to readwrite, but restricts interactions with the data store to reading only. Anything that would modify the data store is not allowed, so any attempt to create a new record (in other words, persist a new object into the data store), update an existing object (that is, save an object that was already in the data store), or delete an object from the data store will result in the transaction failing and an exception being raised.

versionchange: This transaction mode allows us to create or modify an object store or the indexes used in the data store. Within a transaction of this mode, we can perform any action or operation, including modifying the structure of the database.

Getting elements

Simply storing data into a black box is not at all useful if we're not able to retrieve that data at a later point in time. With IndexedDB, this can be done in several different ways. More commonly, the data store where we persist the data is set up with one or more indexes, which keep the objects organized by a particular field. Again, for those accustomed to relational databases, this is similar to indexing (applying a key to) a particular table column. If we want to get to an object, we can query it by its unique ID, or we can search the data store for objects that fit particular characteristics, which we can do through indexed values of that object.

To create an index on a data store, we must specify our intentions when the store is created (inside the onupgradeneeded callback when the store is first created, or inside a transaction of mode versionchange). A sketch of this follows.
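Index creation might look like the following, extending the earlier onupgradeneeded sketch; the index name taskIndex and the indexed attribute task come from the discussion below:

    request.onupgradeneeded = function (event) {
        var db = event.target.result;
        var store = db.createObjectStore('tasks',
            { keyPath: 'myKey', autoIncrement: true });

        // Index the task attribute so objects can be queried by it later
        store.createIndex('taskIndex', 'task',
            { unique: false, multiEntry: false });
    };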
In the preceding sketch, we create an index for the task attribute of our objects. The name of this index can be anything we want, and is commonly the same as the name of the object property to which it applies; in our case, we simply named it taskIndex. The possible settings we can configure are as follows:

unique – if true, an object being stored with a duplicate value for the same attribute is rejected
multiEntry – if true, and the indexed attribute is an array, each element will be indexed

Note that zero or more indexes can be created for a data store. Just like in any other database system, indexing your database or data store can really boost its performance. However, adding indexes just for the fun of it is not a good idea, as the size of your data store will grow accordingly. A good data store design takes the specific context of the data store within the application into account, and considers each indexed field carefully. The phrase to keep in mind when designing your data stores is: measure it twice, cut it once. Although any object can be saved in a data store (as opposed to a relational database, where the data stored must carefully follow the table structure defined by the table's schema), try to build your data stores with the data they will hold in mind in order to optimize the performance of your application. It is true that any data can be smacked into any data store, but a wise developer considers the data being stored very carefully before committing it to a database.

Once the data store is set up, and we have at least one meaningful index, we can start to pull data out of the data store. The easiest way to retrieve objects is to use an index and query for a specific object. In such a search, if an object whose indexed property matches the supplied value is found, it is retrieved from the data store and passed to the request through the event object handed to the onsuccess callback. If an error occurs in the process (for example, if the supplied index doesn't exist), the onerror event is triggered. Finally, if no object in the data store matches the search criteria, the result property passed in through the request parameter object will be null.

To search for multiple items, we take a similar approach, but instead request an IndexedDB cursor object. A cursor is basically a pointer to a particular result within a result set of zero or more objects. We can use the cursor to iterate through every object in the result set, until the current cursor points at no object (null), indicating that there are no more objects in the result set.

Both retrieval styles appear in the sketch below, and a few things about them are worth noting. First, any object that goes into an IndexedDB data store is stripped of its DNA, and only a simple hash is stored in its stead. Thus, if the prototype information of each object we retrieve from the data store is important to the application, we need to manually reconstruct each object from the data that we get back. Second, we can filter which subset of the data store we would like to read. This is done with an IndexedDB key range object, which specifies the offset from which to start fetching data; in the sketch we specify a lower bound of zero, meaning that the lowest primary key value we want is zero, so that particular query requests all of the records in the data store. Finally, remember that the result of the request is not a single result or an array of results. Instead, all of the results are returned one at a time in the form of a cursor. We check for the presence of a cursor, then use the cursor if one is indeed present; the way we request the next cursor is by calling the continue() function on the cursor itself.
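A sketch of both retrieval styles, reusing the tasks store and taskIndex index from earlier (function names are illustrative):

    // Fetch a single object whose indexed task attribute matches taskName
    function getTask(db, taskName) {
        var store = db.transaction(['tasks'], 'readonly').objectStore('tasks');
        var request = store.index('taskIndex').get(taskName);

        request.onsuccess = function (event) {
            var match = event.target.result; // null when nothing matched
        };
        request.onerror = function (event) {
            // Triggered, for example, if the index doesn't exist
        };
    }

    // Fetch every object with a primary key of zero or higher, one at a time
    function getAllTasks(db) {
        var store = db.transaction(['tasks'], 'readonly').objectStore('tasks');
        var request = store.openCursor(IDBKeyRange.lowerBound(0));

        request.onsuccess = function (event) {
            var cursor = event.target.result;
            if (cursor) {                  // null once the results run out
                var value = cursor.value;  // a plain hash, minus its prototype
                cursor.continue();         // ask for the next result
            }
        };
    }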
Another way to think of cursors is by imagining a spreadsheet application. Pretend that the 10 objects returned from our request each represent a row in this spreadsheet. IndexedDB fetches all 10 of those objects into memory, and sends a pointer to the first result through the event.target.result property in the onsuccess callback. By calling cursor.continue(), we simply tell IndexedDB to give us a reference to the next object in the result set (or, in other words, we ask for the next row in the spreadsheet). This goes on until the tenth object, after which no more objects exist in the result set (again, to go along with the spreadsheet metaphor: after we fetch the last row, the next row after that is null; it doesn't exist). As a result, the data store calls the onsuccess callback one last time and passes in a null object. If we attempt to read properties from this null reference as though we were working with a real object returned from the cursor, the browser will throw a null pointer exception.

Instead of trying to reconstruct an object from a cursor one property at a time, we can abstract this functionality away in a generic form. Since objects being persisted into the object store can't have any functions, we're not allowed to keep such functionality inside the object itself. However, thanks to JavaScript's ability to build an object from a reference to a constructor function, we can write a very generic object builder function that copies the stored values onto a freshly constructed instance.

Deleting elements

To remove specific elements from a data store, the same principles involved in retrieving data apply. In fact, the entire process looks fairly identical to retrieving data, only we call the delete function on the object store object. Needless to say, the transaction used in this action must be readwrite, since readonly limits the object so that no changes can be made to it (including deletion).

The first way to delete an object is to pass the object's primary key to the delete function. The difficulty with this approach is that we need to know the ID of the object, which in some cases requires a prior transaction in which we retrieve the object based on some easier-to-get data. For example, if we want to delete all tasks whose complete attribute is set to true, we'd need to query the data store for those objects first, then use the IDs associated with each result in the transaction that deletes the objects.

A second way to remove data from the data store is to simply call clear() on the object store object; again, the transaction must be readwrite. Doing this obliterates every last object in the data store, even if they're all of different types.

Finally, we can delete multiple records using a cursor, in much the same way that we retrieve objects: as we iterate through the result set, we delete the object at whatever position the cursor is currently on, and upon deletion the reference from the cursor object is set to null. This is pretty much the same routine as fetching data; the only detail is that we absolutely need to supply an object's key. The key is the value stored in the object's keyPath attribute, which can be user-provided or auto-generated. Fortunately for us, the cursor object returns at least two references to this key: through the cursor.primaryKey property, as well as through the object's own property that holds that value (in our case, the keyPath attribute we named myKey). All three deletion styles are shown in the sketch below.
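A sketch of the three deletion styles just described, again assuming the tasks store:

    // 1. Delete a single object by its primary key
    function deleteTask(db, key) {
        var store = db.transaction(['tasks'], 'readwrite').objectStore('tasks');
        store.delete(key);
    }

    // 2. Wipe out every object in the data store
    function clearTasks(db) {
        var store = db.transaction(['tasks'], 'readwrite').objectStore('tasks');
        store.clear();
    }

    // 3. Walk the store with a cursor, deleting each object along the way
    function deleteAllWithCursor(db) {
        var store = db.transaction(['tasks'], 'readwrite').objectStore('tasks');
        store.openCursor().onsuccess = function (event) {
            var cursor = event.target.result;
            if (cursor) {
                cursor.delete();   // remove the object the cursor points at
                cursor.continue();
            }
        };
    }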
The two upgrades we added to this second version of the game are simple, yet they add a lot of value to it. We added a persistent high score engine, so users can keep track of their latest record and have a sticky reminder of past successes. We also added a pretty nifty feature that takes a snapshot of the game board each time the player scores, as well as whenever the player ultimately dies. Once the player dies, we display all of the snapshots collected throughout the game, allowing the player to save those images and possibly share them with his or her friends.

Saving the high score

The first thing you probably noticed about the previous version of this game was that it had a placeholder for a high score, but that number never changed. Now that we know how to persist data, we can very easily take advantage of this and persist a player's high score across games. In a more realistic scenario, we'd probably send the high score data to a backend server, where we could keep track of the overall high score every time the game is served, so that every user playing the game would see a global score. In our situation, however, the high score is local to a single browser, since none of the persistence APIs (local and session storage, as well as IndexedDB) share data across other browsers, or natively with a remote server.

Since we want the high score to still exist in the player's browser even a month from now, after the computer (along with the browser, of course) has been powered off multiple times, storing it in sessionStorage would be silly. We could store this single number in either IndexedDB or localStorage, but since we don't care about any other information associated with the score (such as the date when it was achieved), all we're really storing is one number. For this reason, localStorage is a much better choice: it can all be done in as few as five lines of code, whereas using IndexedDB would work, but would be like using a cannon to kill a mosquito.
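A sketch of such a function follows; the function name, the high-score key, and the use of a display element are illustrative:

    function setHighScore(newScore, displayEl) {
        // Property access on a missing key gives undefined; multiplying by
        // one then yields NaN (the same conversion parseInt would do,
        // with slightly faster execution)
        var saved = localStorage['high-score'] * 1;

        if (isNaN(saved) || newScore > saved) {
            // No valid saved score, or a new record: persist and display it
            localStorage['high-score'] = newScore;
            displayEl.textContent = newScore;
        } else {
            // Otherwise the saved value is still the real high score
            displayEl.textContent = saved;
        }
    }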
This function is pretty straightforward. The two values we pass it are the score to consider as the new high score (this value may be both saved to localStorage and displayed to the user) and the HTML element where the value will be shown. First, we retrieve the existing value saved under the key high-score and convert it to a number. We could have used the parseInt() function, but multiplying a string by a number does the same thing with a slightly faster execution. Next, we check whether that value evaluated to something real. If there was no high-score value saved in local storage, the expression will have evaluated to undefined multiplied by one, which is not a number; and if there is a value saved under high-score but it cannot be converted into a number (such as a string of letters), we know it is not a valid value. In either of these cases we set the incoming score as the new high score, which covers both an invalid persisted value and the very first time the game loads. Finally, once we have a valid score retrieved from local storage, we check whether the new value is higher than the old, persisted value. If we have a higher score, we persist that value and display it to the screen; if the new value is not higher, we don't persist anything, but display the saved value, since that is the real high score at the time.

Taking screenshots of the game

This feature is not as trivial as saving the user's high score, but it is nonetheless very straightforward to implement. Since we don't care about snapshots captured more than one game ago, we'll use sessionStorage to save data from the game in real time as the player progresses. Behind the scenes, all we do to take these snapshots is save the game state into sessionStorage; at the end of the game we retrieve all of the pieces we have been saving, reconstruct the game at those points in time onto an invisible canvas, and then use the canvas.toDataURL() function to extract that data as an image.

Each time the player eats a fruit, we call a capture function, passing it a reference to the snake (our hero in this game) and the fruit (the goal of this game) objects. What we do is really quite simple: we maintain an array representing the state of the snake and the fruit at each event that we capture. Each element in this array is a string holding the serialized buffers that keep track of where the fruit was, and where each body part of the snake was located as well. First, we check whether this object already exists in sessionStorage; the first time we start the game it will not yet exist, so we create an object referencing those two objects, namely the snake and the fruit. Next, we stringify the buffers keeping track of the locations of the elements we want to track, and each time we add a new event, we simply append to those two buffers.

Of course, if the user closes down the browser, that data is erased by the browser itself, since that's how sessionStorage works. However, we probably don't want to hold on to data from a previous game either, so we also need a way to clear out our own data after each game. That is easy enough: all we need to know is the name of the key that holds each buffer. For our purposes, we call the snapshots of the snake eating "eat" and the buffer with the snapshots of the snake dying "die". So before each game starts, we simply call a clearing function with those two global key values, and the cache is cleared anew each time. Then, as each event takes place, we call the capture function with the appropriate data, as shown in the sketch below.
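A minimal sketch of this capture-and-clear plumbing, assuming the snake and fruit objects expose arrays describing their positions (all names here are illustrative):

    // Record the snake and fruit state under the given key ("eat" or "die")
    function captureEvent(key, snake, fruit) {
        var events = JSON.parse(sessionStorage.getItem(key)) || [];
        events.push(JSON.stringify({
            snake: snake.body,      // every body part location
            fruit: fruit.position   // where the fruit was
        }));
        sessionStorage.setItem(key, JSON.stringify(events));
    }

    // Forget the snapshots from the previous game
    function clearEvent(key) {
        sessionStorage.removeItem(key);
    }

    // Before each game starts:
    clearEvent('eat');
    clearEvent('die');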
Finally, whenever we wish to display all of these snapshots, we just need to create a separate canvas with the same dimensions as the one used in the game (so that the buffers we saved don't go out of bounds), and draw the buffers to that canvas. The reason we need a separate canvas element is that we don't want to draw on the same canvas the player can see; this way, the process of producing the snapshots is more seamless and natural. Once each state is drawn, we can extract each image, resize it, and display it back to the user.

Observe that we simply draw the points representing the snake and the fruit onto that canvas. All of the other points in the canvas are left untouched, meaning that we generate a transparent image. If we want the image to have an actual background color (even if it is just white), we can either call fillRect() over the entire canvas surface before drawing the snake and the fruit, or traverse each pixel in the pixel data array from the rendering context and set the alpha channel to 100 percent opaque. Even if we set a color on each pixel by hand but leave off the alpha channel, we'd have colorful pixels that are still 100 percent transparent.

Summary

In this article we took a few extra steps into the fascinating world of 2D rendering with the long-awaited canvas API, taking advantage of the canvas's ability to export images to make our game more engaging, and potentially more social. We also made the game more engaging and social by adding a persistence layer on top of it, whereby we were able to save a player's high score.

Two other powerful new features of HTML5, web messaging and IndexedDB, were explored in this article, although there were no uses for them in this version of the game. The web messaging API provides a mechanism for two or more windows to communicate directly through message passing. The exciting bit is that these windows (or HTML contexts) do not need to be in the same domain. Although this could sound like a security issue, there are several systems in place to ensure that cross-document and cross-domain messaging is secure and efficient.

The web storage interface brings with it three distinct solutions for long-term data persistence on the client: session storage, local storage, and IndexedDB. While IndexedDB is a full-blown, built-in, fully transactional and asynchronous NoSQL object store, local and session storage provide very simple key-value pair storage for simpler needs. All three of these systems introduce great benefits and gains over traditional cookie-based data storage, including the fact that the total amount of data that can be persisted in the browser is much greater, and that none of the data saved in the user's browser ever travels back and forth between the server and the client through HTTP requests.

Resources for Article:

Further resources on this subject: Interface Designing for Games in iOS [Article] Unity 3D Game Development: Don't Be a Clock Blocker [Article] Making Money with Your Game [Article]

Installing Drupal
Packt
04 Jul 2013
14 min read
(For more resources related to this topic, see here.)

Assumptions

To get Drupal up and running, you will need all of the following:

- A domain
- A web host
- Access to the web host's filesystem

or

- A local testing environment, which takes care of the first three things

For building sites, either a web host or a local testing environment will meet your needs. A site built on a web-accessible domain can be shared via the Internet, whereas sites built on local test machines will need to be moved to a web host before they can be used for your course.

In these instructions, we are assuming the use of phpMyAdmin, an open source, browser-based tool, for administering your database. A broad range of similar tools exist, and these general instructions can be used with most of them. Information on phpMyAdmin is available at http://www.phpmyadmin.net; information on other browser-based database administration tools can be found at http://en.wikipedia.org/wiki/PhpMyAdmin#Similar_products.

The domain

The domain is the address on the Web from where people can access your site. If you are building this site as part of your work, you will probably be using the domain associated with your school or organization. If you are hosting this on your own server, you can buy a domain for under US $10.00 a year. Enter purchase domain name in Google, and you will have a plethora of options.

The web host

Your web host provides you with the server space on which to run your site. Within many schools, your website will be hosted by your school. In other environments, you might need to arrange for your own web host by using a hosting company. In selecting a web host, you need to be sure that they run software that meets or exceeds the recommended software versions.

Web server

Drupal is developed and tested extensively in an Apache environment. Drupal also runs on other web servers, including Microsoft IIS and Nginx.

PHP version

Drupal 7 will run on PHP 5.2.5 or higher; however, PHP 5.3 is recommended. The Drupal 8 release will require PHP 5.3.10.

MySQL version

Drupal 7 will run on MySQL 5.0.15 or higher, and requires the PHP Data Objects (PDO) extension for PHP. Drupal 7 has also been tested with MariaDB as a drop-in replacement, and Version 5.1.44 or greater is recommended. PDO is a consistent way for programmers to write code that interacts with the database. You can find out more about PDO and how to install it at http://drupal.org/requirements/pdo. Drupal can technically use any database that PDO supports, but MySQL is by far the most tested and best supported. Third-party modules are required to use Drupal with other database systems; you can find these modules listed at http://drupal.org/project/modules/?f[0]=im_vid_3%3A13158&f[1]=drupal_core%3A103&f[2]=bs_project_sandbox%3A0.

FTP and shell access to your web host

Your web host should also offer FTP access to your web server. You will need FTP (or SFTP) access in order to upload the Drupal codebase to your web space. Shell access, or SSH access, is not essential for basic site maintenance; however, SSH access can simplify maintaining your site, so contracting with a web host that provides SSH access is recommended.

A local testing environment

Alternatively, you can set up a local testing environment for your site. This allows you to set up Drupal and other applications on your own computer. A local testing environment can be a great tool for learning a piece of software.
Fortunately, open source tools can automate the process of setting up your testing environment. PC users can use XAMPP (http://www.apachefriends.org) to set up a local testing environment; Mac users can use MAMP (http://www.mamp.info). If you are working in a local testing environment set up via XAMPP or MAMP, you have all the pieces you need to start working with Drupal: your domain, your web host, the ability to move files into your web directory, and phpMyAdmin.

Setting up a local environment using MAMP (Mac only)

While Apple's operating system includes most of the programs required to run Drupal, setting up a testing environment can be tricky for inexperienced users. Installing MAMP allows you to create a preconfigured local environment quickly and easily using the following steps:

1. Download the latest version of MAMP from http://www.mamp.info/en/index.html. Note that the paid version of the program will download as well. Feel free to pay for the software if you wish, but the free version will be sufficient for our needs.
2. Navigate to where you downloaded the .zip file, and double-click to unzip it. Once it is unzipped, double-click on the .pkg file that was contained in the .zip file.
3. Follow the directions in the wizard until you reach the Installation Type screen. If you want to use only the free version of the program, click on the Customize button.
4. In the Custom Install on "Macintosh HD" window, uncheck the MAMP PRO option and click on the Install button to install the application.
5. Navigate to /Applications/MAMP and open the MAMP application. The Apache and MySQL servers will start, and the start page will open in your default web browser. If the start page opens, MAMP is installed correctly.

Setting up a local environment using XAMPP (Windows only)

1. Download the latest version of XAMPP from http://www.apachefriends.org/en/xampp-windows.html#641. Download the .zip version.
2. Navigate to where you downloaded the file, right-click, and select Extract All.... Enter C:\ as the destination and click on Extract.
3. Navigate to C:\xampp and double-click the xampp-control application to start the XAMPP Control Panel Application.
4. Click on the Start buttons next to Apache and MySQL.
5. Open a web browser and enter http://localhost or http://127.0.0.1 in the address bar, and you should see the XAMPP start page.
6. Navigate to http://localhost/security/index.php, and enter a password for MySQL's root user. Make sure to remember this password or write it down in your notebook, because we will need it later.

Configuring your local environment for Drupal

Now that we have the programs required to run Drupal (Apache, MySQL, and PHP), we need to modify some of their settings to match Drupal's system requirements.

PHP configuration

As mentioned before, Drupal 7 requires PHP Version 5.2.5 or higher; as of the writing of this book, MAMP includes Version 5.4.4 (or you can switch to Version 5.2.17) and XAMPP includes Version 5.4.7. PHP configuration settings are found in the program's php.ini file. For MAMP, the php.ini file is located in /Applications/MAMP/bin/php/[php version number]/conf, where the PHP version number is either 5.4.4 or 5.2.17. For XAMPP, the php.ini file is located in C:\xampp\php.
Open the file in a text editor (not a word processor), find the Resource Limits section of the file, and edit the values to match the following:

max_execution_time = 60
max_input_time = 120
memory_limit = 128M
error_reporting = E_ALL & ~E_NOTICE

The last line is optional and is used if you want to display error messages in the browser, instead of only in the logs.

MySQL configuration

As mentioned before, Drupal 7 requires MySQL Version 5.0.15 or higher. MAMP includes Version 5.5.25 and XAMPP includes Version 5.5.27. MySQL's configuration settings are contained in a my.cnf or my.ini file. MAMP does not use a my.cnf file by default, so we need to copy the my-medium.cnf file from the /Applications/MAMP/Library/support-files directory to the /Applications/MAMP/conf folder; after copying the file, rename it to my.cnf. For XAMPP, the my.ini file is located in the C:\xampp\mysql\bin directory. Open the my.cnf or my.ini file in a text editor, find the following settings, and edit them to match these values:

# * Fine Tuning
#key_buffer = 16M
key_buffer_size = 32M
max_allowed_packet = 16M
thread_stack = 512K
thread_cache_size = 8
max_connections = 300
#
# * Query Cache Configuration
#
query_cache_type = 1
query_cache_limit = 15M
query_cache_size = 46M
join_buffer_size = 5M
# Sort buffer size for ORDER BY and GROUP BY queries, data
# gets spun out to disc if it does not fit
sort_buffer_size = 10M
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 4M
innodb_additional_mem_pool_size = 20M
# num cpu's/cores *2 is a good base line for innodb_thread_concurrency
innodb_thread_concurrency = 4

After you have made the edits, stop and restart the servers for the changes to take effect. Once you have restarted the servers, we are ready to install Drupal!

The most effective way versus the easy way

There are many different ways to install Drupal. People familiar with working via the command line can install Drupal very quickly without an FTP client or any web-based tools to create and administer databases. The instructions in this book are geared towards people who would rather not use the command line; they attempt to get you through the technical pieces as painlessly as possible, to speed up the process of building a site that supports teaching and learning.

Installing Drupal - the quick version

The following steps will get you up and running with your Drupal site. This quick-start version gives an overview of the steps required for most setups; a more detailed version follows immediately after this section. Once you are familiar with the setup process, installing a Drupal site takes between five and ten minutes.

1. Download the core Drupal codebase from http://drupal.org/project/drupal.
2. Extract the codebase on your local machine.
3. Using phpMyAdmin, create a database on your server. Write down the name of the database.
4. Using phpMyAdmin, create a user on the database using the following SQL statement:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER
ON databasename.*
TO 'username'@'localhost' IDENTIFIED BY 'password';

You will have created the databasename in step 3; write down the username and password values, as you will need them to complete the install.

5. Upload the Drupal codebase to your web folder.
6. Navigate to the URL of your site.
7. Follow the instructions of the install wizard. You will need your databasename (created in step 3), as well as the username and password for your database user (created in step 4).
Installing Drupal - the detailed version

This version goes over each step in more detail and includes screenshots.

1. Download the core Drupal codebase from http://drupal.org/project/drupal.
2. Extract the codebase on your local machine. The Drupal codebase (and all modules and themes) is compressed into a tarball: a file that is first tarred, and then gzipped. Such compressed files end in .tar.gz. On Macs and Linux machines, tar.gz files can be extracted automatically using tools that come preinstalled with the operating system. On PCs, you can use 7-zip, an open source compression utility available at http://www.7-zip.org.
3. In your web browser, navigate to your system's URL for phpMyAdmin. If you are using a different tool for creating and managing your database, use that tool to create your database and database user.
4. Create the database on your server and click on the Create button. Store your database name in a safe place; you will need to know it to complete your installation.
5. To create your database user, click on the SQL tab. In the text area, enter the following SQL statement:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER
ON databasename.*
TO 'username'@'localhost' IDENTIFIED BY 'password';

For databasename, use the name of the database you created in step 4. Replace the username and password with a username and password of your choice. Once you have entered the correct values, click on the Go button to create the user with rights on your database.

6. Store the username and the password of your database user in a safe place. You will need them to complete the installation.
7. Create and/or locate the directory from where you want Drupal to run. In this example, we are running Drupal from within a folder named drupal7, which means that our site will be available at http://ourdomain.org/drupal7. Running Drupal in a subfolder can make things a little trickier, so if at all possible, copy the Drupal files directly into your web root.
8. Using your FTP client, upload the Drupal codebase to your web folder.
9. Navigate to the URL of your site. The automatic install wizard will appear on your screen.
10. Click the Save and continue button with the Standard option selected.
11. Click the Save and continue button with the English (built-in) option selected.
12. To complete the Set up database screen, you will need the database name (created in step 4) and the database username and password (created in step 6). Select MySQL, MariaDB, or equivalent as the Database type, and then enter these values in their respective text boxes. Most installs will not need to use any of the settings under ADVANCED OPTIONS; however, if your database is located on a server other than localhost, you will need to adjust the settings accordingly. In most basic hosting setups, your database is accessible at localhost. To verify the name or location of your database host, you can use phpMyAdmin or contact an administrator for your web server. For the vast majority of installs, none of the advanced options will need to be adjusted.
13. Click on the Save and continue button. You will see a progress meter as Drupal installs itself on your web server.
14. On the Configure site screen, you can enter some general information about your site, and create the first user account.
The first user account has full rights over every aspect of your site. When you have finished with the settings on this page, click on the Save and continue button. When the install is finished, you will see a splash screen confirming the installation. Additional details on installing Drupal are available in the handbook at http://drupal.org/documentation/install.

Enabling core modules

For a full description of the modules included in Drupal core, see http://drupal.org/node/1283408. To see the modules included in Drupal core, navigate to Modules or admin/modules. The Standard installation profile enables the most commonly used core modules.

Assigning rights to the authenticated user role

Within your Drupal site, you can use roles to assign specific permissions to groups of users. Anonymous users are all people visiting the site who are not site members; all site members (that is, all people with a username and password) belong to the authenticated user role. To assign rights to specific roles, navigate to People | Permissions | Roles or admin/people/permissions/roles, and click on the edit permissions link for authenticated users. The following rights are assigned:

- The Comment module: Authenticated users can see comments and post comments. With these rights, comments go into a moderation queue for approval, as we haven't checked the Skip comment approval box.
- The Node module: Authenticated users can see published content.
- The Search module: Authenticated users can search the site.
- The User module: Authenticated users can change their own username.

Once these options have been selected, click on the Save permissions button at the bottom of the page.

Summary

In this article, we installed the core Drupal codebase, enabled some core modules, and assigned rights to the authenticated user role. We are now ready to start building a feature-rich site that will help support teaching and learning. In the next article, we will take a look around the new site and begin to get familiar with how to make it do what we want.

Resources for Article:

Further resources on this subject: Creating Content in Drupal 7 [Article] Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article] Introduction to Drupal Web Services [Article]

Breaching Wireless Security
Packt
01 Jul 2013
5 min read
(For more resources related to this topic, see here.)

Different types of attacks

We will now discuss each of these attacks briefly. Probing and discovery attacks are accomplished by sending out probes and looking for wireless networks. We have used several tools for discovery so far, but they have all been passive in how they gather information. A passive probing tool can detect the SSID of a network even when it is cloaked, as we have shown with the Kismet tool. With active probing, we send out probes that contain the SSID; this type of probing will not discover a hidden or cloaked SSID. An active probing tool for this is NetStumbler (www.netstumbler.com). With an active probe, the tool actively sends out probes and elicits responses from the access points in order to gather information. It is very difficult to prevent an attacker from gathering information about our wireless access points, because an access point has to be available for connection; the best we can do is cloak or hide the SSID.

The next step an attacker will carry out is surveillance of the network. This is the technique we used with Kismet, airodump-ng, and ssidsniff. All three of these tools are passive, so they do not probe the network for information; they just capture it from the wireless frequencies received from the network. Each of these tools can discover the hidden SSID of a network.

Once the attacker has discovered the target network, they will move to the surveillance step and attempt to gather more information about the target, again using any of the three tools mentioned previously. The information an attacker is looking for includes the following:

- Whether or not the network is protected
- The encryption level used
- The signal strength and the GPS coordinates

When attackers scan a network, they are looking for an "easy" target. This is the motive of most attackers: they want an easy way in, and they almost always target the weakest link.

The next step an attacker will typically pursue is Denial of Service (DoS); unfortunately, this is one area where we really cannot do much. In the case of a wireless signal, the network can be jammed using simple and inexpensive tools, so if an attacker wants to perform a DoS attack, there is not much we can do to prevent it. For that reason, we will not spend any more time on this attack.

The next attack method is one that is shared between the "wired" network world and the wireless world. The attack of masquerading, or spoofing as it is sometimes called, involves impersonating an authorized client on a network. One of the protection mechanisms we have within our wireless networks is the capability to restrict or filter clients based on their Media Access Control (MAC) address. This address belongs to the network card itself; it is how data is delivered on our networks. There are a number of ways to change the MAC address: we have tools for it, and we can also change it from the command line in Linux. The simplest way to change our MAC address is to use the macchanger tool; a typical invocation is sketched below.
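In place of the original screenshot, here is a brief sketch of a typical macchanger session; the interface name wlan0 and the address shown are illustrative:

    # Bring the interface down, randomize its MAC address, and bring it back up
    ifconfig wlan0 down
    macchanger -r wlan0
    ifconfig wlan0 up

    # Or assign a specific address instead of a random one
    macchanger -m 00:11:22:33:44:55 wlan0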
In the Windows world, we can do it another way, though it involves editing the registry, which might be too difficult for some of you. The hardware address is stored in the registry; you can find it by searching for the term wireless within the registry.

The last category of attacks that we will cover here is the rogue access point. This attack takes advantage of the fact that all wireless networks transmit at a particular power level. What we do for this attack is create an access point with more power than the access point we are masquerading as; this results in a stronger signal being received by the client software. When would anyone take a three-bar signal over a five-bar signal? The answer is: never; that is why the attack is so powerful. An attacker can create a rogue access point, and there is no way for most clients to tell whether the access point is real or not. There really is nothing that you can do to stop this attack effectively, which is why it is a common attack in areas that have a public hotspot. We do have a recommended mechanism you can use to help mitigate the impact of this type of attack. Consider the list of access points most Windows clients see: when several access points share the same name, there is no way of knowing which one of them is the correct one.

Summary

Thus in this article we covered, albeit briefly, the steps that an attacker typically uses when preparing for an attack.

Resources for Article:

Further resources on this subject: Tips and Tricks on BackTrack 4 [Article] BackTrack Forensics [Article] BackTrack 5: Attacking the Client [Article]
Installing and Configuring Drupal Commerce
Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.)

Installing Drupal Commerce to an existing Drupal 7 website

There are two approaches to installing Drupal Commerce; this recipe covers installing Drupal Commerce on an existing Drupal 7 website.

Getting started

You will need to download Drupal Commerce from http://drupal.org/project/commerce. Download the most recent recommended release you see that couples with your Drupal 7 website's core version. You will also require the following modules to allow Drupal Commerce to function:

- Ctools: http://drupal.org/project/ctools
- Entity API: http://drupal.org/project/entity
- Views: http://drupal.org/project/views
- Rules: http://drupal.org/project/rules
- Address Field: http://drupal.org/project/addressfield

How to do it...

Now that you're ready, install Drupal Commerce by performing the following steps:

1. Install the modules that Drupal Commerce depends on, first by copying the preceding module files into your Drupal site's modules directory, sites/all/modules.
2. Install Drupal Commerce's modules next, by copying the files into the sites/all/modules directory, so that they appear in the sites/all/modules/commerce directory.
3. Enable the newly installed Drupal Commerce module in your Drupal site's administration panel (example.com/admin/modules if you've installed Drupal Commerce at example.com), under the Modules navigation option, by ensuring the checkbox to the left-hand side of the module name is checked.
4. Now that Drupal Commerce is installed, a new menu option will appear in the administration navigation at the top of your screen when you are logged in as a user with administration permissions. You may need to clear the cache to see this: navigate to Configuration | Development | Performance in the administration panel.

How it works...

Drupal Commerce depends on a number of other Drupal modules to function, and by installing and enabling these in your website's administration panel you're on your way to getting your Drupal Commerce store off the ground. You can also install the Drupal Commerce modules via Drush (the Drupal Shell); for more information on Drush, see http://drupal.org/project/drush.

Installing Drupal Commerce with Commerce Kickstart 2

Drupal Commerce requires quite a number of modules, and doing a basic installation can be quite time-consuming, which is where Commerce Kickstart 2 comes in. It packages Drupal 7 core and all of the necessary modules. Using Commerce Kickstart 2 is a good idea if you are building a Drupal Commerce website from scratch and don't already have Drupal core installed.

Getting started

Download Drupal Commerce Kickstart 2 from its drupal.org project page at http://drupal.org/project/commerce_kickstart.

How to do it...

Once you have decompressed the Commerce Kickstart 2 files to the location you want to install Drupal Commerce in, perform the following steps:

1. Visit the given location in your web browser. For this example, it is assumed that your website is at example.com, so visit this address in your web browser. You will be presented with a welcome screen.
2. Click the Let's Get Started button, and the installer moves to the next configuration option.
3. Next, your server's requirements are checked to ensure Drupal can run in this environment. There are some common problems that can prevent installation at this point.
In particular, ensure that you create the /sites/default/files directory in your Drupal installation and ensure it has permissions that allow Drupal to write to it (as this is where your website's images and files are stored). You will also need to copy the /sites/default/default.settings.php file to /sites/default/settings.php before you can start. Make sure this file is writable by Drupal too (you'll secure it after installation is complete).

4. Once these problems have been resolved, refresh the page and you will be taken to the Set up database screen. Enter the database username, password, and database name you want to use with Drupal, and click on Save and continue.
5. The next step is the Install profile section, which can take some time as Drupal Commerce is installed for you. There's nothing for you to do here; just wait for installation to complete! You can now safely remove write permissions for the settings.php file in the /sites/default directory of your Drupal Commerce installation.
6. The next step is Configure site. Enter the name of your new store and your e-mail address here, and provide a username and password for your Drupal Commerce administrator account. Don't forget to make a note of these, as you'll need them to access your website later! Below these options, you can specify the country of your server and the default time zone. These are usually picked up from your server itself, but you may want to change them.
7. Click on the Save and continue button to progress; the next step is Configure store. Here you can set your Default store country field (if it's different from your server settings) and opt to install Drupal Commerce's demo, which includes sample content and a sample Drupal Commerce theme too.
8. Further down on this screen, you're presented with more options. By checking the Do you want to be able to translate the interface of your store? field, Drupal Commerce provides you with the ability to translate your website for customers who speak different languages (for this simple store installation, leave this set to No). Finally, you can set the Default store currency field you wish to use, and whether you want Commerce Kickstart to set up a sales tax rule for your store (select whichever is more appropriate for your store, or leave it set to No sample tax rate for now).
9. Click on Create and finish at the bottom of the screen. If you chose to install the demo store in the previous screen, you will have to wait as it is added for you.
10. There are now options to allow Drupal to check for updates automatically, and to receive e-mails about security updates. Leave these both checked to help you stay on top of keeping your Drupal website secure and up to date.
11. Wait as Commerce Kickstart installs everything Drupal Commerce requires to run. That's it! Your Drupal Commerce store is now up and running thanks to Commerce Kickstart 2.

How it works...

The Commerce Kickstart package includes Drupal 7 core and the Drupal Commerce module. By packaging these together, installation and initial configuration for your Drupal Commerce store is made much easier!

Creating your first product

Now that you've installed Drupal Commerce, you can start to add products to display to customers and start making money. In this recipe you will learn how to add a basic product to your Drupal Commerce store.
Getting started

Log in to your Drupal Commerce store's administration panel, and navigate to Products | Add a product. If you haven't already, navigate to Site settings | Modules and ensure that the Commerce Kickstart Menu module is enabled for your store. Note the sample products from Drupal Kickstart's installation displaying there.

How to do it...

To get started adding a product to your store, click on the Add product button and follow these steps:

1. Click on the Product display. Product displays group multiple related product variations together for display on the frontend of your website.
2. Fill in the form that appears, entering a suitable Title, using the Body field for the product's description, as well as filling in the SKU (stock keeping unit; a unique reference for this product) and Price fields. Ensure that the Status field is set to Active. You can also optionally upload an image for the product here.
3. Optionally, you can assign the product to one of the pre-existing categories in the Product catalog tab underneath these fields, as well as give it a URL in the URL path settings tab.
4. Click on the Save product button, and you have now created a basic product in your store.
5. To view the product on the frontend of your store, you can navigate to the category listings if you imported Drupal Commerce's demo data, or else you can return to the Products menu and click on the name of the product in the Title column. You'll now see your product on the frontend of your Drupal Commerce store.

How it works...

In Drupal Commerce, a product can represent several things, listed as follows:

- A single product for sale (for example, a one-size-fits-all t-shirt)
- A variation of a product (for example, a medium-size t-shirt)
- An item that is not necessarily a purchase as such (for example, it may represent a donation to a charity)
- An intangible product which the site allows reservations for (for example, an event booking)

Product displays (for example, a blue t-shirt) are used to group product variations (for example, a medium-sized blue t-shirt and a large-sized blue t-shirt) and display them on your website to customers. So, depending on the needs of your Drupal Commerce website, products may be displayed on unique pages, or multiple products might be grouped onto one page as a product display.

Building a Chat Application
Packt
27 Jun 2013
4 min read
(For more resources related to this topic, see here.)

Creating a project

To begin developing our chat application, we need to create an Opa project using the following Opa command:

opa create chat

This command will create an empty Opa project, and it will generate the required directories and files automatically. Let's have a brief look at what these source code files do:

- controller.opa: This file serves as the entry point of the chat application; we start the web server in controller.opa
- view.opa: This file serves as the user interface
- model.opa: This is the model of the chat application; it defines the message, network, and the chat room
- style.css: This is an external stylesheet file
- Makefile: This file is used to build the application

As we do not need database support in the chat application, we can remove --import-package stdlib.database.mongo from the FLAG option in Makefile. Type make and make run to run the empty application.

Launching the web server

Let's begin with controller.opa, the entry point of our chat application, where we launch the web server. We have already discussed the function Server.start in the Server module section. In our chat application, we will use a handlers group to handle user requests:

Server.start(Server.http, [
    {resources: @static_resource_directory("resources")},
    {register: [{css: ["/resources/css/style.css"]}]},
    {title: "Opa Chat", page: View.page}
])

So, what exactly are the arguments that we are passing to the Server.start function? The line {resources: @static_resource_directory("resources")} registers a resource handler and will serve resource files in the resources directory. Next, the line {register: [{css: ["/resources/css/style.css"]}]} registers an external CSS file, style.css; this permits us to use styles from style.css in the application scope. Finally, the line {title: "Opa Chat", page: View.page} registers a single page handler that will dispatch all other requests to the function View.page. The server uses the default configuration Server.http and will run on port 8080.

Designing the user interface

When the application starts, all requests (except requests for resources) are distributed to the function View.page, which displays the chat page in the browser. Let's take a look at the view part; we define a module named View in view.opa:

import stdlib.themes.bootstrap.css

module View {
    function page() {
        user = Random.string(8)
        <div id=#title class="navbar navbar-inverse navbar-fixed-top">
            <div class=navbar-inner>
                <div id=#logo />
            </div>
        </div>
        <div id=#conversation class=container-fluid
             onready={function(_){ Model.join(updatemsg) }} />
        <div id=#footer class="navbar navbar-fixed-bottom">
            <div class=input-append>
                <input type=text id=#entry class=input-xxlarge
                       onnewline={broadcast(user)} />
                <button class="btn btn-primary" onclick={broadcast(user)}>Post</button>
            </div>
        </div>
    }
    ...
}

The module View contains functions to display the page in the browser. In the first line, import stdlib.themes.bootstrap.css, we import Bootstrap styles; this permits us to use Bootstrap markup in our code, such as navbar, navbar-fixed-top, and btn-primary. We also registered an external style.css file, so we can use styles from style.css such as conversation and footer. As we can see, the code in the function page follows almost the same syntax as HTML.
As we can see, the code in the function page follows almost the same syntax as HTML. As discussed earlier, we can use HTML freely in Opa code; HTML values have the predefined type xhtml in Opa.

Summary

In this article, we started by creating a project and launching the web server.


Choosing your shipping method

Packt
19 Jun 2013
9 min read
(For more resources related to this topic, see here.)

Getting ready

To view and edit our shipping methods we must first navigate to System | Configuration | Shipping Methods. Remember, our Current Configuration Scope field is important, as shipping methods can be set on a per-website scope basis. There are many shipping methods available by default, but the main generic methods are Flat Rate, Table Rates, and Free Shipping. By default, Magento comes with the Flat Rate method enabled. We are going to start off by disabling this shipping method.

Be careful when disabling shipping methods; if we leave our Magento installation without any active shipping methods then no orders can be placed—the customer would be presented with this error in the checkout: Sorry, no quotes are available for this order at this time. Likewise, manual orders placed through the administration panel will receive the same error.

How to do it...

To disable our Flat Rate method we need to navigate to its configuration options in System | Configuration | Shipping Methods | Flat Rate, choose Enabled as No, and click on Save. The following screenshot highlights our current configuration scope and the disabled Flat Rate method:

Next we need to configure our Table Rates method: click on the Table Rates tab, set Enabled to Yes, enter National Delivery within Title, and enter Shipping within Method Name. Finally, for the Condition option select Weight vs. Destination (all the other information can be left as default, as it will not affect our pricing for this scenario).

To upload our spreadsheet for our new Table Rates method we need to first change our scope (shipping rates imported via a .csv file are always entered at a website view level). To do this we need to select Main Website (this wording can differ depending on the System | Manage Stores settings) from our Current Configuration Scope field. The following screenshot shows the change in input fields when our configuration scope has changed:

Click on the Export CSV button and we should start downloading a blank .csv file (or, if there are rates already, a file containing our active rates).

Next we will populate our spreadsheet with the following information (shown in the screenshot) so that we can ship to anywhere in the USA:

After finishing our spreadsheet we can now import it: with our Current Configuration Scope field set to our website view, click on the Choose File/Browse button and upload the file. Once the browser has uploaded the file, we can click on Save.

Next we are going to configure our Free Shipping method to run alongside our Table Rates method. To start with, we need to switch back to our Default Config scope and then click on the Free Shipping tab.

Within this tab we will set Enabled to Yes and Minimum Order Amount to 50. We can leave the other options as default.

How it works...

The following is a brief explanation of each of our main shipping methods.

Flat Rate

The Flat Rate method allows us to specify a fixed shipping charge to be applied either per item or per order. The Flat Rate method also allows us to specify a handling fee—a percentage or fixed-amount surcharge on top of the flat rate fee. With this method we can also specify which countries we wish to make this shipping method applicable for (dependent solely on the customer's shipping address details). Unlike the Table Rates method, you cannot specify multiple flat rates for any given region of a country, nor can you specify flat rates individually per country.
Table Rates

The Table Rates method uses a spreadsheet of data to increase the flexibility of our shipping charges by allowing us to apply different prices to our orders, depending on the criteria we specify in the spreadsheet. Along with the liberty to specify which countries this method is applicable for, and the option to apply a handling fee, the Table Rates method also allows us to choose from a variety of shopping cart conditions. The condition we select affects the data that we can import via the spreadsheet. Inside this spreadsheet we can specify hundreds of rows of countries along with their specific states or Zip/Postal Codes. Each row has a condition, such as weight (and above), and a specific price. If a shopping cart matches the criteria entered on any of the rows, the shipping price will be taken from that row and applied to the cart.

In our example we have used Weight vs. Destination; there are two other conditions that come with a default Magento installation and could be used to calculate the shipping:

Price vs. Destination: This Table Rates condition takes into account the Order Subtotal (and above) amount, in whichever currency is currently set for the store
# of Items vs. Destination: This Table Rates condition calculates the shipping cost based on the # of Items (and above) within the customer's basket

Free Shipping

The Free Shipping method is one of the simplest and most commonly used of all the methods that come with a default Magento installation. One of the best ways to increase the conversion rate of your Magento store is to offer your customers free shipping, and Magento allows you to do this with its Free Shipping method. Selecting the countries that this method is applicable for and inputting a minimum order amount as the criteria will enable this method in the checkout for any matching shopping cart. Unfortunately, you cannot specify regions of a country within this method (although you can still offer a free shipping solution through table rates and promotional rules).

Our configuration

As mentioned previously, the Table Rates method provides us with three types of conditions. In our example we created a table rate spreadsheet that relies on the weight information of our products to work out the shipping price. Magento's default Free Shipping method is one of the most popular and useful shipping methods, and its most important configuration option is Minimum Order Amount. Setting this value to 50 tells Magento that any shopping cart with a subtotal greater than $50 should offer the Free Shipping method to the customer; we can see this demonstrated in the following screenshot:

The Enabled option is a standard feature among nearly all shipping method extensions. Whenever we wish to enable or disable a shipping method, all we need to do is set it to Yes to enable it or No to disable it. Once we have configured our Table Rates extension, Magento will use the values entered by our customer and try to match them against our imported data. In our case, if a customer has ordered a product weighing 2.5 kg and they live anywhere in the USA, they will be presented with our $6.99 price. However, a drawback of our example is that if they live outside of the USA, our shipping method will not be available. The .csv file for our Weight vs. Destination spreadsheet is slightly different from the spreadsheets used for the other Table Rates conditions.
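For reference, a populated Weight vs. Destination file consistent with this recipe could look like the following sketch; the column headers match what Magento exports for this condition, while the rows and prices (other than the $6.99 band) are assumptions:

    Country,Region/State,"Zip/Postal Code","Weight (and above)","Shipping Price"
    USA,*,*,0,4.99
    USA,*,*,2,6.99
    USA,*,*,5,9.99

With rows like these, a 2.5 kg parcel to any US address falls into the 2 (and above) band and is quoted $6.99, matching the example above.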
It is therefore important to make sure that, if we change our condition, we export a fresh spreadsheet with the correct column information.

One very important point to note when editing our shipping spreadsheets is the format of the file—programs such as Microsoft Excel sometimes save in an incompatible format. It is recommended to use the free, downloadable Open Office suite to edit any of Magento's spreadsheets, as it saves the file in a compatible format. We can download Open Office here: www.openoffice.org. If there is no alternative to using Microsoft Excel, then we must ensure we save as CSV for Windows or alternatively CSV (Comma Delimited).

A few key points when editing the Table Rates spreadsheet:

The * (asterisk) is a wildcard—similar to saying ANY
Weight (and above) is really a FROM weight, and will set the price UNTIL the next row value that is higher than itself (for the matching Country, Region/State, and Zip/Postal Code)—the downside of this is that you cannot set a maximum weight limit
The Country column takes three-letter codes—ISO 3166-1 alpha-3 codes
The Zip/Postal Code column takes either a full USA ZIP code or a full postal code
The Region/State column takes all two-letter state codes from the USA, or any other codes that are available in the drop-down select menus for regions on the checkout pages of Magento

One final note is that we can run as many shipping methods as we like at the same time—just as we did with our Free Shipping method and our Table Rates method.

There's more...

For more information on setting up the many shipping methods that are available within Magento, please see the following link: http://innoexts.com/magento-shipping-methods

We can also enable and disable shipping methods on a per-website view basis; for example, we could disable a shipping method for our French store.

Disabling Free Shipping for the French website

If we wanted to disable our Free Shipping method for just our French store, we could change our Current Configuration Scope field to our French website view and then perform the following steps:

Navigate to System | Configuration | Shipping Methods and click on the Free Shipping tab.
Uncheck Use Default next to the Enabled option, set Enabled to No, and then click on Save Config.

We can see that Magento normally defaults all of our settings to the Default Config scope; by unchecking the Use Default checkbox we can edit our method for our chosen store view.

Summary

This article explored the differences between the Flat Rate, Table Rates, and Free Shipping methods, as well as how to disable a shipping method and configure Table Rates.


Responsive Design with Media Queries

Packt
19 Jun 2013
6 min read
(For more resources related to this topic, see here.)

Web design for a multimedia web world

As noted in the introduction to this article, recent times have seen an explosion in the variety of media through which people interact with websites; in particular, smartphones and tablets are defining the browsing experience more and more. Moreover, a page design that is appropriate, even necessary, for a wide-screen experience is often inappropriate, overly cluttered, or just plain dysfunctional on a tiny screen. The solution is Media Queries—a new element of CSS stylesheets introduced with CSS3.

But before we examine the new media features in CSS3, it will be helpful to understand the basic evolutionary path that led to the development of CSS3 Media Queries. That background will be useful both in getting our heads around the concepts involved and because, in the crazy Wild West state of browsing environments these days (with emerging and yet-unresolved standards conflicts), designing for the widest range of media requires combining new CSS3 Media Queries with older CSS media detection tools. We'll see how this plays out in real life near the end of this article, when we examine the particular challenges of creating Media Queries that can detect, for example, an Apple iPhone.

How Media Queries work

Let's look at an example. If you open the Boston Globe (newspaper) site (http://www.bostonglobe.com/) in a browser window the width of a laptop, you'll see a three-column page layout (go ahead, I'll wait while you check; or just take a look at the following example). The three-column layout works well on laptops. But in a smaller viewport, the design adjusts to present content in two columns, as shown in the following screenshot:

The two-column layout is the same HTML page as the three-column layout, and the content of both pages (text, images, media, and so on) is the same. The crew at the Globe do not have to build a separate home page for tablets or smartphones. Instead, a media query has linked a different CSS file that displays in narrower viewports.

A short history of Media Queries

Stepping back in time a bit, the previous (pre-CSS3) version of CSS could already detect media and enable different stylesheets depending on the media. Moreover, Dreamweaver CS6 (as well as CS5.5, CS5, and previous versions) provides very nice, intuitive support for these features. The way this works in Dreamweaver is that when you click the Attach Style Sheet icon at the bottom of the CSS Styles panel (with a web page open in Dreamweaver's Document window), the Attach External Style Sheet dialog appears. The Media popup in the dialog allows you to attach a stylesheet specifically designed for print, aural (to be read aloud by reader software), Braille, handheld devices, and other "traditional" output options, as well as newer CSS3-based options. The handheld option, shown in the following screenshot, was available before CSS3:

So, to summarize the evolutionary path: detecting media and providing a custom style for that media is not new to HTML5 and its companion CSS3, and there is support for those features in Dreamweaver CS6. What is relatively new is the ability to detect and supply defined stylesheets for specific screen sizes. And that new feature opens the door to new levels of customized page design for specific media.

HTML5, CSS3, and Media Queries

With HTML5 and CSS3, Media Queries have been expanded. We can now define all kinds of criteria for selecting a stylesheet to apply to a viewing environment, including orientation (whether a mobile phone, tablet, and so on, is held in the portrait [up-down] or landscape [sideways] view), whether the device displays color, the shape of the viewing area, and—of most value—the width and height of the viewing area. All these options present a multitude of possibilities for creating custom stylesheets for different viewing environments. In fact, they open up a ridiculously large array of possibilities. But for most designers, simply creating three appropriate stylesheets, one for laptop/desktop viewing, one for mobile phones, and one for tablets, is sufficient.

In order to define the criteria for which stylesheet will display in an environment, HTML5 and CSS3 allow us to use if-then statements. So, for example, if we are assigning a stylesheet to tablets, we might specify that if the width of the viewing area is greater than that of a cell phone, but smaller than that of a laptop screen, we want the tablet stylesheet to be applied.
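A rough sketch of such an if-then rule follows; the breakpoint values are assumptions chosen for illustration, not figures from this article. The first form lives inside a stylesheet:

    /* If the viewport is wider than a phone but narrower than a laptop... */
    @media screen and (min-width: 481px) and (max-width: 1024px) {
        body {
            margin: 0 10px;   /* smaller margins */
            font-size: 110%;  /* larger, more readable type */
        }
    }

The same test can instead be attached to a whole stylesheet from the HTML head, so a hypothetical tablet.css is only applied between the two breakpoints:

    <link rel="stylesheet" media="screen and (min-width: 481px) and (max-width: 1024px)" href="css/tablet.css" />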
Styling for mobile devices and tablets

While a full exploration of the aesthetic dimensions of creating styles for different media is beyond the scope of our mission in this book, it is worth noting a few basic dos and don'ts of styling for mobile devices. I'll be back with more detailed advice on mobile styling later in this article, but in a word, the challenge is: simplify. In general, this means applying many or all of the following adjustments to your pages:

Smaller margins
Larger (more readable) type
Much less complex backgrounds; no image backgrounds
No sidebars or floated content (content around which other content wraps)
Often, no containers that define page width

Design advice online: If you search for "css for mobile devices" online, you'll find thousands of articles with different perspectives and advice on designing web pages that can be easily accessed with handheld devices.

Media Queries versus jQuery Mobile and apps

Before moving to the technical dimension of building pages with responsive design using Media Queries, let me briefly compare and contrast Media Queries with the two other options available for displaying content differently on fullscreen and mobile devices. One option is an app. Apps (short for applications) are full-blown computer programs created in a high-level programming language. Dreamweaver CS6 includes new tools to connect with and generate apps through the online PhoneGap resources. The second option is a jQuery Mobile site. jQuery Mobile sites are based on JavaScript. But, as we'll see later in this book, you don't need to know JavaScript to build jQuery Mobile sites. The main difference between jQuery Mobile sites and Media Query sites with mobile-friendly designs is that jQuery Mobile sites require different content, while Media Query sites simply repackage the same content with different stylesheets.

Which approach should you use, Media Queries or JavaScript? That is a judgment call. What I can advise here is that Media Queries provide the easiest way to create and maintain a mobile version of your site.

Using jQuery and jQueryUI Widget Factory plugins with RequireJS

Packt
18 Jun 2013
5 min read
(For more resources related to this topic, see here.)

How to do it...

We must declare the jquery alias name within our Require.js configuration file:

    require.config({
        // 3rd party script alias names
        paths: {
            // Core Libraries
            // --------------
            // jQuery
            "jquery": "libs/jquery",

            // Plugins
            // -------
            "somePlugin": "libs/plugins/somePlugin"
        }
    });

If a jQuery plugin does not register itself as AMD compatible, we must also create a Require.js shim configuration to make sure Require.js loads jQuery before the jQuery plugin:

    shim: {
        // Twitter Bootstrap plugins depend on jQuery
        "bootstrap": ["jquery"]
    }

We will now be able to dynamically load a jQuery plugin with the require() method:

    // Dynamically loads a jQuery plugin using the require() method
    require(["somePlugin"], function() {
        // The callback function is executed after the plugin is loaded
    });

We will also be able to list a jQuery plugin as a dependency to another module:

    // Sample file
    // -----------
    // The define method is passed a dependency array and a callback function
    define(["jquery", "somePlugin"], function ($) {
        // Wraps all logic inside of a jQuery.ready event
        $(function() {
        });
    });

When using a jQueryUI Widget Factory plugin, we create Require.js path names for both the jQueryUI Widget Factory and the jQueryUI Widget Factory plugin:

    "jqueryui": "libs/jqueryui",
    "selectBoxIt": "libs/plugins/selectBoxIt"

Next, we create a shim configuration property:

    // The jQueryUI Widget Factory depends on jQuery
    "jqueryui": ["jquery"],

    // The selectBoxIt plugin depends on both jQuery and the jQueryUI Widget Factory
    "selectBoxIt": ["jqueryui"]

We will now be able to dynamically load the jQueryUI Widget Factory plugin with the require() method:

    // Dynamically loads the jQueryUI Widget Factory plugin, selectBoxIt,
    // using the require() method
    require(["selectBoxIt"], function() {
        // The callback function is executed after selectBoxIt.js
        // (and all of its dependencies) have been loaded
    });

We will also be able to list the jQueryUI Widget Factory plugin as a dependency to another module:

    // Sample file
    // -----------
    // The define method is passed a dependency array and a callback function
    define(["jquery", "selectBoxIt"], function ($) {
        // Wraps all logic inside of a jQuery.ready event
        $(function() {
        });
    });

How it works...

Luckily for us, jQuery adheres to the AMD specification and registers itself as a named AMD module. If you are confused about how and why it does that, let's take a look at the jQuery source:

    // Expose jQuery as an AMD module
    if ( typeof define === "function" && define.amd && define.amd.jQuery ) {
        define( "jquery", [], function () { return jQuery; } );
    }

jQuery first checks that there is a global define() function available on the page. Next, jQuery checks whether the define function has an amd property, which all AMD loaders that adhere to the AMD API should have. Remember that in JavaScript, functions are first-class objects and can contain properties. Finally, jQuery checks whether the amd property contains a jQuery property, which should only be there for AMD loaders that understand the issues with loading multiple versions of jQuery in a page that might all call the define() function. Essentially, jQuery is checking that an AMD script loader is on the page, and then registering itself as a named AMD module (jquery). Since jQuery exports itself as the named AMD module jquery, you must use this exact name when setting the path configuration to your own version of jQuery, or Require.js will throw an error.

If a jQuery plugin registers itself as an anonymous AMD module, and jQuery is also listed with the proper lowercased jquery alias name within your Require.js configuration file, using the plugin with the require() and define() methods will work as you expect. Unfortunately, most jQuery plugins are not AMD compatible; they do not wrap themselves in an optional define() method and list jquery as a dependency. To get around this issue, we can use the Require.js shim object configuration, as we have seen before, to tell Require.js that a file depends on jQuery. The shim configuration is a great solution for jQuery plugins that do not register themselves as AMD modules.
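The shim entry can also take a longer object form when a non-AMD plugin needs to expose a module value. The following is a hedged sketch, reusing the hypothetical somePlugin path from the recipe above; the exported global is an assumption about how such a plugin attaches itself to jQuery:

    require.config({
        paths: {
            "jquery": "libs/jquery",
            "somePlugin": "libs/plugins/somePlugin"
        },
        shim: {
            "somePlugin": {
                deps: ["jquery"],                // load jQuery before the plugin
                exports: "jQuery.fn.somePlugin"  // use this global as the module value
            }
        }
    });

With this in place, require(["somePlugin"], ...) receives jQuery.fn.somePlugin as the module value instead of undefined.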
Unfortunately, unlike jQuery, the jQueryUI does not currently register itself as a named AMD module, which means that plugin authors who use the jQueryUI Widget Factory cannot provide AMD compatibility. Since the jQueryUI Widget Factory is not AMD compatible, we must use a workaround involving the paths and shim configuration objects to properly define the plugin as an AMD module.

There's more...

You will most likely always register your own files as anonymous AMD modules, but jQuery is a special case. Registering itself as a named AMD module allows other third-party libraries that depend on jQuery, such as jQuery plugins, to become AMD compatible by calling the define() method themselves and using the community-agreed-upon module name, jquery, to list jQuery as a dependency.

Summary

This article demonstrated how to use jQuery and jQueryUI Widget Factory plugins with Require.js.


Fundamental Razor syntaxes

Packt
18 Jun 2013
2 min read
(For more resources related to this topic, see here.)

Getting ready

In this view page you can try all the Razor syntaxes given in this section.

How to do it...

Here, let's start learning the fundamental Razor syntaxes. Razor code can be written using three different approaches: inline, code block, and mixed.

Inline code expressions

Inline code expressions are always written in a single line, as follows:

    I always enjoy @DateTime.Now.DayOfWeek with my family.

At runtime, the inline code expression @DateTime.Now.DayOfWeek will be converted into a day, such as Sunday. This can be seen in the following screenshot:

Let's look at one more example, which passes the controller's ViewBag and ViewData messages to the view. The rendered output will be as follows:

Code block expressions

A code block expression is a set of multiple code lines that start and end with @{}. The use of the opening (@{) and closing (}) characters is mandatory, even for a single line of C# or VB code, as shown in the following screenshot (a sketch of such a block also appears at the end of this article):

This will render the following output:

Mixed code expressions

A mixed code expression is a set of multiple inline code expressions in a code block, where we switch between C# and HTML. The magical key here is @:, which allows writing HTML in a code block, as follows:

This will render the following output:

So, this is all about how we write the code on a Razor view page.

Summary

In this article you learned about inline code expressions, code block expressions, and mixed code expressions.
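Since the code block and mixed examples above were shown only as screenshots, here is a rough sketch of what such expressions look like on a Razor view page; the variable names and messages are assumptions, not the article's original listings:

    @{
        // Code block expression: one or more lines of C# between @{ and }
        var day = DateTime.Now.DayOfWeek;
        var message = "Have a great " + day + "!";
    }

    @{
        // Mixed code expression: @: switches back to HTML/text inside a code block
        @:Today is <strong>@day</strong>. @message
    }

At runtime the second block would render markup such as "Today is Sunday. Have a great Sunday!", with the day name emphasized.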


So, what is KineticJS?

Packt
14 Jun 2013
3 min read
(For more resources related to this topic, see here.)

With KineticJS you can draw shapes on the stage and manipulate them using the following operations:

Move
Rotate
Animate

Even if your application has thousands of figures, the animation will run smoothly and with a high enough FPS. The items are organized into layers, of which you can have as many as you want. Shapes can also be organized into groups, and KineticJS allows unlimited nesting of shapes and groups. Stages, layers, groups, and figures are virtual nodes, similar to DOM nodes in HTML. Any node can be styled or transformed. There are several predefined shapes, such as rectangles, circles, images, text, lines, polygons, and stars, and you can also create custom drawing functions in order to create custom shapes. For each object you can assign different event handlers (touch or mouse), and you can apply filters or animations to the shapes; a minimal sketch of this structure appears at the end of this article.

Of course, you could implement all the necessary HTML5 Canvas functionality without KineticJS, but you would have to spend a lot more time, and not necessarily get the same level of performance. The creators of KineticJS put all their love and faith into a brighter future of HTML5 interactivity. The main advantage of the library is high performance, which is achieved by creating two canvas renderers: a scene renderer and a hit graph renderer. One renderer is what you see, and the second is a special hidden canvas that is used for high-performance event detection.

A huge advantage of KineticJS is that it is an extension to HTML5 Canvas, and thus is perfectly suited for developing applications for mobile platforms. High performance can hide all the flaws of the canvas on iOS, Android, and other platforms. It is a known fact that the iOS platform does not support Adobe Flash; in this case, KineticJS is a good Flash alternative for iOS devices. You can wrap up your KineticJS application with Cordova/PhoneGap and use it as an offline application, or publish it to the App Store.

In short, the following are the main advantages of KineticJS:

Speed
Scalability
Extensibility
Flexibility
Familiarity of the API (for developers with knowledge of HTML, CSS, JS, and jQuery)

If you are an active innovator and indomitable web developer, this library is for you.

Summary

In this article, we walked through the basics and main advantages of KineticJS.
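As the promised closing illustration of the stage/layer/shape hierarchy, here is a hedged sketch against the KineticJS API of this era (v4.x); the container id, sizes, and colors are assumptions:

    <div id="container"></div>
    <script src="kinetic.js"></script>
    <script>
      // Stage -> Layer -> Shape: the node hierarchy described above
      var stage = new Kinetic.Stage({
        container: 'container',  // id of the host div
        width: 400,
        height: 300
      });

      var layer = new Kinetic.Layer();

      var circle = new Kinetic.Circle({
        x: 200,
        y: 150,
        radius: 40,
        fill: 'green',
        draggable: true  // built-in move support
      });

      // Event handlers can be attached per shape (mouse or touch)
      circle.on('click tap', function () {
        this.setFill('red');
        layer.draw();  // redraw the layer to show the change
      });

      layer.add(circle);
      stage.add(layer);
    </script>

Behind the scenes, the click is detected on the hidden hit-graph canvas, which is why event handling stays fast even with many shapes on the stage.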

Getting started with Leaflet

Packt
14 Jun 2013
9 min read
(For more resources related to this topic, see here.)

Getting ready

First, we need an Internet browser, if we don't have one already installed. Leaflet is tested with modern desktop browsers: Chrome, Firefox, Safari 5+, Opera 11.11+, and Internet Explorer 7-10. Internet Explorer 6 support is stated as not perfect, but accessible. We can pick one of them, or all of them if we want to be thorough.

Then, we need an editor. Editors come in many shapes and flavors: free or not free, with or without syntax highlighting or remote file editing. A quick search on the Internet will provide thousands of capable editors. Notepad++ (http://notepad-plus-plus.org/) for Windows, Komodo Edit (http://www.activestate.com/komodo-edit) for Mac OS, or Vim (http://www.vim.org/) for Linux are among them.

We can download Leaflet's latest stable release (v0.5.1 at the time of writing) and extract the content of the ZIP file somewhere appropriate. The ZIP file contains the sources as well as a prebuilt version of the library, which can be found in the dist directory. Optionally, we can build from the sources included in the ZIP file; see this article's Building Leaflet from source section. Finally, let's create a new project directory on our hard drive and copy the dist folder from the extracted Leaflet package into it, ensuring we rename it to leaflet.

How to do it...

Note that the following code will constitute our code base throughout the rest of the article. Create a blank HTML file called index.html in the root of our project directory. Add the code given here and use the browser installed previously to execute it:

    <!DOCTYPE html>
    <html>
    <head>
        <link rel="stylesheet" type="text/css" href="leaflet/leaflet.css" />
        <!--[if lte IE 8]>
        <link rel="stylesheet" type="text/css" href="leaflet/leaflet.ie.css" />
        <![endif]-->
        <script src="leaflet/leaflet.js"></script>
        <style>
            html, body, #map { height: 100%; }
            body { padding: 0; margin: 0; }
        </style>
        <title>Getting Started with Leaflet</title>
    </head>
    <body>
        <div id="map"></div>
        <script type="text/javascript">
            var map = L.map('map', {
                center: [52.48626, -1.89042],
                zoom: 14
            });
            L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
                attribution: '© OpenStreetMap contributors'
            }).addTo(map);
        </script>
    </body>
    </html>

The following screenshot is of the first map we have created:

How it works...

The index.html file we created is a standardized file that all Internet browsers can read and display. Our file is based on the HTML doctype standard produced by the World Wide Web Consortium (W3C), which is only one of many that can be used, as seen at http://www.w3.org/QA/2002/04/valid-dtd-list.html. Our index file specifies the doctype on the first line of code, as required by the W3C, using the <!DOCTYPE html> markup.

We added a link to Leaflet's main CSS file in the head section of our code:

    <link rel="stylesheet" type="text/css" href="leaflet/leaflet.css" />

We also added a conditional statement to link an Internet Explorer 8 (or lower) only stylesheet when these browsers interpret the HTML code:

    <!--[if lte IE 8]>
    <link rel="stylesheet" type="text/css" href="leaflet/leaflet.ie.css" />
    <![endif]-->

This stylesheet mainly addresses Internet Explorer specific issues with borders and margins. Leaflet's JavaScript file is then referred to using a script tag:

    <script src="leaflet/leaflet.js"></script>

We are using the compressed JavaScript file, which is appropriate for production but very inefficient for debugging.
In the compressed version, every white space character has been removed, as the following comparison shows; it is a straight copy and paste from the source of both files for the function _onMouseClick.

Compressed:

    _onMouseClick:function(t){!this._loaded||this.dragging&&this.dragging.moved()||(this.fire("preclick"),this._fireMouseEvent(t))},

Uncompressed:

    _onMouseClick: function (e) {
        if (!this._loaded || (this.dragging && this.dragging.moved())) { return; }

        this.fire('preclick');
        this._fireMouseEvent(e);
    },

To make things easier, we can replace leaflet.js with leaflet-src.js, an uncompressed version of the library.

We also added styles to our document to make the map fit nicely in our browser window:

    html, body, #map { height: 100%; }
    body { padding: 0; margin: 0; }

The <div> tag with the id attribute map in the document's body is the container of our map. It must be given a height, otherwise the map won't be displayed:

    <div id="map" style="height: 100%;"></div>

Finally, we added a script section enclosing the map's initialization code, instantiating a Map object using the L.map(...) constructor and a TileLayer object using the L.tileLayer(...) constructor. The script section must be placed after the map container declaration, otherwise Leaflet will be referencing an element that does not yet exist when the page loads.

When instantiating a Map object, we pass the id of the container of our map and an array of Map options:

    var map = L.map('map', {
        center: [52.48626, -1.89042],
        zoom: 14
    });

There are a number of Map options affecting the state, the interactions, the navigation, and the controls of the map. See the documentation to explore those in detail at http://leafletjs.com/reference.html#map-options.

Next, we instantiated a TileLayer object using the L.tileLayer(...) constructor and added it to the map using the TileLayer.addTo(...) method:

    L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
        attribution: '© OpenStreetMap contributors'
    }).addTo(map);

Here, the first parameter is the URL template of our tile provider, that is, OpenStreetMap, and the second a non-compulsory array of TileLayer options, including the recommended attribution text for our map tiles' source. The TileLayer options are also numerous; refer to the documentation for the exhaustive list at http://leafletjs.com/reference.html#tilelayer-options.

There's more...

Let's have a look at some of the Map options, as well as how to build Leaflet from source and how to use different tile providers.

More on Map options

We have encountered a few Map options in the code for this recipe, namely center and zoom. We could have instantiated our OpenStreetMap TileLayer object before our Map object and passed it as a Map option using the layers option. We could also have specified a minimum and maximum zoom, or bounds, for our map, using minZoom and maxZoom (integers) and maxBounds, respectively. The latter must be an instance of LatLngBounds:

    var bounds = L.latLngBounds([
        L.latLng([52.312, -2.186]),
        L.latLng([52.663, -1.594])
    ]);

We also came across the TileLayer URL template that will be used to fetch the tile images, replacing {s} with a subdomain and {x}, {y}, and {z} with the tile coordinates and zoom. The subdomains can be configured by setting the subdomains property of a TileLayer object instance. Finally, the attribution property was set to display the owner of the copyright of the data and/or a description.
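Putting these options together, a map locked to the bounds above could be created as follows; the zoom limits are assumptions chosen for illustration, not values from the recipe:

    // Restrict panning to the bounds defined above
    var bounds = L.latLngBounds([
        L.latLng([52.312, -2.186]),
        L.latLng([52.663, -1.594])
    ]);

    var map = L.map('map', {
        center: [52.48626, -1.89042],  // same center as the recipe
        zoom: 14,
        minZoom: 12,       // assumed lower zoom limit
        maxZoom: 17,       // assumed upper zoom limit
        maxBounds: bounds  // the map cannot be panned outside these bounds
    });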
Building Leaflet from source

A Leaflet release comes with the source code, which we can build using Node.js. This will be a necessity if we want to fix annoying bugs or add awesome new features. The source code itself can be found in the src directory of the extracted release ZIP file. Feel free to explore it and look at how things get done within Leaflet.

First things first: go to http://nodejs.org and get the install file for your platform. It will install Node.js along with npm, a command-line utility that will download and install Node Packaged Modules and resolve their dependencies for us. The following is the list of modules we are going to install:

Jake: A JavaScript build program similar to make
JSHint: It will detect potential problems and errors in JavaScript code
UglifyJS: A mangler and compressor library for JavaScript

Hopefully, we won't need to delve into the specifics of these tools to build Leaflet from source. So let's open a command-line interpreter (cmd.exe on Windows, or a terminal on Mac OS X or Linux), navigate to Leaflet's src directory using the cd command, and then use npm to install Jake, JSHint, and UglifyJS:

    cd leaflet/src
    npm install -g jake
    npm install jshint
    npm install uglify-js

We can now run Jake in Leaflet's directory:

    jake

What about tile providers?

We could have chosen a different tile provider, as OpenStreetMap is free of charge but has its limitations in regard to a production environment. A number of web services provide tiles but might come at a price depending on your usage, for example CloudMade and MapQuest. These providers serve tiles using the OpenStreetMap tile scheme described at http://wiki.openstreetmap.org/wiki/Slippy_map_tilenames.

Remember the way we added the OpenStreetMap layer to the map? The equivalent for the other providers looks like this.

CloudMade:

    L.tileLayer('http://{s}.tile.cloudmade.com/API-key/997/256/{z}/{x}/{y}.png', {
        attribution: 'Map data © <a href="http://openstreetmap.org">OpenStreetMap</a> contributors, <a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, Imagery © <a href="http://cloudmade.com">CloudMade</a>'
    }).addTo(map);

MapQuest:

    L.tileLayer('http://{s}.mqcdn.com/tiles/1.0.0/map/{z}/{x}/{y}.png', {
        attribution: 'Tiles Courtesy of <a href="http://www.mapquest.com/" target="_blank">MapQuest</a> <img src="http://developer.mapquest.com/content/osm/mq_logo.png">',
        subdomains: ['otile1', 'otile2', 'otile3', 'otile4']
    }).addTo(map);

You will learn more about the Layer URL template and the subdomains option in the documentation at http://leafletjs.com/reference.html#tilelayer. Leaflet also supports Web Map Service (WMS) tile layers, described at http://leafletjs.com/reference.html#tilelayer-wms, and GeoJSON layers, documented at http://leafletjs.com/reference.html#geojson.

Summary

In this article we learned how to create a map using Leaflet, and created our first map. We learned about different Map options, and also how to build Leaflet from source.


So, what is Play?

Packt
14 Jun 2013
11 min read
(For more resources related to this topic, see here.)

Quick start – Creating your first Play application

Now that we have a working Play installation in place, we will see how easy it is to create and run a new application with just a few keystrokes. Besides walking through the structure of our Play application, we will also look at what we can do with the command-line interface of Play and how quickly modifications to our application are made visible. Finally, we will take a look at the setup of integrated development environments (IDEs).

Step 1 – Creating a new Play application

So, let's create our first Play application. In fact, we create two applications, because Play comes with APIs for both Java and Scala, and the sample accompanying us in this book is implemented twice, once in each language. Please note that it is generally possible to use both languages in one project. Following the DRY principle, we will show code only once if it is the same for the Java and the Scala application; in such cases we will use the play-starter-scala project.

First, we create the Java application. Open a command line and change to a directory where you want to place the project contents. Run the play script with the new command followed by the application name (which is used as the directory name for our project):

    $ play new play-starter-java

We are asked to provide two additional pieces of information:

The application name, for display purposes. Just press the Enter key here to use the same name we passed to the play script. You can change the name later by editing the appName variable in play-starter-java/project/Build.scala.
The template we want to use for the application. Here we choose 2 for Java.

Repeat these steps for our Scala application, but now choose 1 for the Scala template. Please note the difference in the application name:

    $ play new play-starter-scala

The following screenshot shows the output of the play new command:

On our way through the next sections, we will build an ongoing example step by step. We will see Java and Scala code side by side, so create both projects if you want to find out more about the differences between Java and Scala based Play applications.

Structure of a Play application

Physically, a Play application consists of a series of folders containing source code, configuration files, and web page resources. The play new command creates the standardized directory structure for these files:

    /path/to/play-starter-scala
    └ app            source code
      └ controllers  HTTP request processors
      └ views        templates for HTML files
    └ conf           configuration files
    └ project        sbt project definition
    └ public         folder containing static assets
      └ images       images
      └ javascripts  JavaScript files
      └ stylesheets  CSS style sheets
    └ test           source code of test cases

During development, Play generates several other directories, which can be ignored, especially when using a version control system:

    /path/to/play-starter-scala
    └ dist           releases in .zip format
    └ logs           log files
    └ project        THIS FOLDER IS NEEDED
      └ project      but this...
      └ target       ...and this can be ignored
    └ target         generated sources and binaries

There are more folders that can be found in a Play application depending on the IDE we use. In particular, a Play project has optional folders on more involved topics we do not discuss in this book; please refer to the Play documentation for more details.

The app/ folder

The app/ folder contains the source code of our application.
According to the MVC architectural pattern, we have three separate components in the form of the following directories:

app/models/: This directory is not generated by default, but it is very likely to be present in a Play application. It contains the business logic of the application, for example, querying or calculating data.
app/views/: In this directory we find the view templates. Play's view templates are basically HTML files with dynamic parts.
app/controllers/: The controllers contain the application-specific logic, for example, processing HTTP requests and error handling.

The default directory (or package) names, models, views, and controllers, can be changed if needed.

The conf/ directory

The conf/ directory is the place where the application's configuration files are placed. There are two main configuration files:

application.conf: This file contains standard configuration parameters
routes: This file defines the HTTP interface of the application

The application.conf file is the best place to add more configuration options if needed for our application. Configuration files for third-party libraries should also be put in the conf/ directory or an appropriate sub-directory of conf/. A sketch of a typical routes file appears after the remaining folder descriptions.

The project/ folder

Play builds applications with the Simple Build Tool (SBT). The project/ folder contains the SBT build definitions:

Build.scala: This is the application's build script, executed by SBT
build.properties: This definition contains properties such as the SBT version
plugins.sbt: This definition contains the SBT plugins used by the project

The public/ folder

Static web resources are placed in the public/ folder. Play offers standard sub-directories for images, CSS stylesheets, and JavaScript files. Use these directories to keep your Play applications consistent. Create additional sub-directories of public/ for third-party libraries, for clear resource management and to avoid file name clashes.

The test/ folder

Finally, the test/ folder contains unit tests or functional tests. This code is not distributed with a release of our application.
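As promised above, here is a hedged sketch of the conf/routes file as play new typically generates it from the default Java template (the Scala template omits the parentheses after index); the comments and the asset mapping are assumptions based on that template:

    # conf/routes
    # Home page
    GET     /                           controllers.Application.index()

    # Map static resources from the /public folder to the /assets URL path
    GET     /assets/*file               controllers.Assets.at(path="/public", file)

Each line maps an HTTP verb and URL pattern to a controller action, which is how the default page we are about to see in the browser is served.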
Step 2 – Using the Play console

Play provides a command-line interface (CLI), the so-called Play console. It is based on SBT and provides several commands to manage our application's development cycle.

Starting our application

To enter the Play console, open a shell, change to the root directory of one of our Play projects, and run the play script:

    $ cd /path/to/play-starter-scala
    $ play

On the Play console, type run to run our application in development (DEV) mode:

    [play-starter-scala] $ run

Use ~run instead of run to enable automatic compilation of file changes. This gives us an additional performance boost when accessing our application during development, and it is recommended by the author. All console commands can be called directly on the command line by running play <command>; multiple arguments have to be put in quotation marks, for example, play "~run 9001".

A web server is started by Play, which will listen for HTTP requests on localhost:9000 by default. Now open a web browser and go to this location. The page displayed by the web browser is the default implementation of a new Play application. To return to our shell, press Ctrl + D to stop the web server and get back to the Play console.

Play console commands

Besides run, we typically use the following console commands during development:

clean: This command deletes cached files, generated sources, and compiled classes
compile: This command compiles the current application
test: This command executes unit tests and functional tests

We get a list of available commands by typing help play in the Play development console. A release of an application is started with the start command, in production (PROD) mode; in contrast to the DEV mode, no internal state is displayed in the case of an error.

There are also commands of the play script that are available only on the command line:

clean-all: This command deletes all generated directories, including the logs
debug: This command runs the Play console in debug mode, listening on the JPDA port 9999. Setting the environment variable JPDA_PORT changes the port
stop: This command stops an application that is running in production mode

Closing the console

We exit the Play console and get back to the command line with the exit command, or by simply pressing Ctrl + D.

Step 3 – Modifying our application

We now come to the part that we love the most as impatient developers: the rapid development turnaround cycles. In the following sections, we will make some changes to the given code of our new application visible.

Fast turnaround – change your code and hit reload!

First we have to ensure that our applications are running. In the root of each of our Java and Scala projects, we start the Play console. We start our Play applications in parallel on two different ports, to compare them side by side, with the commands ~run and ~run 9001. We go to the browser and load both locations, localhost:9000 and localhost:9001.

Then we open the default controller, app/controllers/Application.java and app/controllers/Application.scala respectively, which was created along with the application, in a text editor of our choice, and change the message to be displayed, first in the Java code:
Currently, it offers the most sophisticated features compared to other IDEs.More information can be found here: http://www.jetbrains.com/idea and also here: http://confluence.jetbrains.com/display/IntelliJIDEA/Play+Framework+2.0 We generate the required IntelliJ IDEA project files by typing the idea command on the Play console or by running it on the command line: $ play idea We can also download the available source JAR files by running idea with-source=true on the console or on the command line: $ play "idea with-source=true" After that, the project can be imported into IntelliJ IDEA. Make sure you have the IDE plugins Scala, SBT , and Play 2 (if available) installed. The project files have to be regenerated by running play idea every time the classpath changes, for example, when adding or changing project dependencies. IntelliJ IDEA will recognize the changes and reloads the project automatically. The generated files should not be checked into a version control system, as they are specific to the current environment. Eclipse Eclipse is also supported by Play. The Eclipse Classic edition is fine, which can be downloaded here: http://www.eclipse.org/downloads. It is recommended to install the Scala IDE plugin, which comes up with great features for Scala developers and can be downloaded here: http://scala-ide.org. You need to download Version 2.1.0 (milestone) or higher to get Scala 2.10 support for Play 2.1. A Play 2 plugin exists also for Eclipse, but it is in a very early stage. It will be available in a future release of the Scala IDE. More information can be found here: https://github.com/scala-ide/scala-ide-play2/wiki The best way to edit Play templates with Eclipse currently is by associating HTML files with the Scala Script Editor. You get this editor by installing the Scala Worksheet plugin, which is bundled with the Scala IDE. We generate the required Eclipse project files by typing the eclipse command on the Play console or by running it on the command line: $ play eclipse Analogous to the previous code, we can also download available source JAR files by running eclipse with-source=true on the console or on the command line: $ play "eclipse with-source=true" Also, don't check in generated project files for a version control system or regenerate project files if dependencies change. Eclipse (Juno) is recognizing the changed project files automatically. Other IDEs Other IDEs are not supported by Play out of the box. There are a couple of plugins, which can be configured manually. For more information on this topic, please consult the Play documentation. Summary We saw how easy it is to create and run a new application with just a few keystrokes. Besides walking through the structure of our Play application, we also looked at what we can do with the command-line interface of Play and how fast modifications of our application are made visible. Finally, we looked at the setup of integrated development environments ( IDEs ). Resources for Article : Further resources on this subject: Play! Framework 2 – Dealing with Content [Article] Play Framework: Data Validation Using Controllers [Article] Play Framework: Binding and Validating Objects and Rendering JSON Output [Article]