Making a Web Server in Node.js

Packt
25 Feb 2016
38 min read
In this article, we will cover the following topics:

- Setting up a router
- Serving static files
- Caching content in memory for immediate delivery
- Optimizing performance with streaming
- Securing against filesystem hacking exploits

One of the great qualities of Node is its simplicity. Unlike PHP or ASP, there is no separation between the web server and our code, nor do we have to customize large configuration files to get the behavior we want. With Node, we can create the web server, customize it, and deliver content, all at the code level. This article demonstrates how to create a web server with Node and feed content through it, while implementing security and performance enhancements to cater for various situations.

If we don't have Node installed yet, we can head to http://nodejs.org and hit the INSTALL button on the homepage. This will download the relevant file to install Node on our operating system.

Setting up a router

In order to deliver web content, we need to make a Uniform Resource Identifier (URI) available. This recipe walks us through the creation of an HTTP server that exposes routes to the user.

Getting ready

First, let's create our server file. If our main purpose is to expose server functionality, it's general practice to call the file server.js (because the npm start command runs node server.js by default). We could put this new server.js file in a new folder.

It's also a good idea to install and use supervisor. We use npm (the module downloading and publishing command-line application that ships with Node) to install it. On the command line, we write the following:

sudo npm -g install supervisor

Essentially, sudo grants administrative privileges on Linux and Mac OS X systems. If we are using Node on Windows, we can drop the sudo part from any of our commands.

The supervisor module will conveniently restart our server whenever we save changes. To kick things off, we can start our server.js file with the supervisor module by executing the following command:

supervisor server.js

For more on possible arguments and the configuration of supervisor, check out https://github.com/isaacs/node-supervisor.

How to do it...

In order to create the server, we need the http module. So let's load it and use the http.createServer method as follows:

var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/html'});
  response.end('Woohoo!');
}).listen(8080);

Now, if we save our file and access localhost:8080 in a web browser or using curl, our browser (or curl) will exclaim Woohoo! But the same will occur at localhost:8080/foo; indeed, any path will render the same behavior. So let's build in some routing. We can use the path module to extract the basename of the path (the final part of the path) and reverse any URI encoding from the client with decodeURI as follows:

var http = require('http');
var path = require('path');

http.createServer(function (request, response) {
  var lookup = path.basename(decodeURI(request.url));

We now need a way to define our routes. One option is to use an array of objects as follows:

var pages = [
  {route: '', output: 'Woohoo!'},
  {route: 'about', output: 'A simple routing with Node example'},
  {route: 'another page', output: function() { return 'Here\'s ' + this.route; }},
];

Our pages array should be placed above the http.createServer call.
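Before wiring the routes into the server, it may help to see what the lookup expression actually produces. The following standalone snippet is our own illustration (the request path is a hypothetical value, not taken from a live request) and can be run directly with node:

var path = require('path');

// A hypothetical request.url value containing URI encoding
var requestUrl = '/another%20page';

// decodeURI reverses the client's percent-encoding, then
// path.basename keeps only the final part of the path
console.log(path.basename(decodeURI(requestUrl))); // 'another page'

// Without decodeURI, the encoded form would never match our routes
console.log(path.basename(requestUrl)); // 'another%20page'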
Within our server, we need to loop through our array and see whether the lookup variable matches any of our routes. If it does, we can supply the output. We'll also implement some 404 error handling as follows:

http.createServer(function (request, response) {
  var lookup = path.basename(decodeURI(request.url));
  pages.forEach(function(page) {
    if (page.route === lookup) {
      response.writeHead(200, {'Content-Type': 'text/html'});
      response.end(typeof page.output === 'function'
      ? page.output() : page.output);
    }
  });
  if (!response.finished) {
    response.writeHead(404);
    response.end('Page Not Found!');
  }
}).listen(8080);

How it works...

The callback function we provide to http.createServer gives us all the functionality we need to interact with our server through the request and response objects. We use request to obtain the requested URL, and then we acquire its basename with path. We also use decodeURI, without which the another page route would fail: our code would try to match another%20page against our pages array and find no match.

Once we have our basename, we can match it in any way we want. We could send it in a database query to retrieve content, use regular expressions to effect partial matches, or match it to a filename and load that file's contents. We could have used a switch statement to handle routing, but our pages array has several advantages: it's easier to read, easier to extend, and can be seamlessly converted to JSON.

We loop through our pages array using forEach. Node is built on Google's V8 engine, which provides us with a number of ECMAScript 5 (ES5) features. These features can't be used in all browsers, as they're not yet universally implemented, but using them in Node is no problem! The forEach function is an ES5 implementation; the ES3 way is to use the less convenient for loop.

While looping through each object, we check its route property. If we get a match, we write the 200 OK status and content-type headers, and then we end the response with the object's output property. The response.end method allows us to pass a parameter to it, which it writes just before finishing the response. In response.end, we have used a ternary operator (?:) to conditionally call page.output as a function or simply pass it as a string. Notice that the another page route contains a function instead of a string. The function has access to its parent object through the this variable, which allows for greater flexibility in assembling the output we want to provide.

In the event that there is no match in our forEach loop, response.end would never be called, and the client would keep waiting for a response until the request times out. To avoid this, we check the response.finished property, and if it's false, we write a 404 header and end the response. The response.finished flag is affected by the forEach callback, yet it's not nested within the callback. Callback functions are mostly used for asynchronous operations, so on the surface this looks like a potential race condition; however, forEach does not operate asynchronously: it blocks until all iterations are complete.

There's more...

There are many ways to extend and alter this example. There are also some great non-core modules available that do the legwork for us.

Simple multilevel routing

Our routing so far only deals with a single-level path. A multilevel path (for example, /about/node) will simply return a 404 error message.
We can alter our object to reflect a subdirectory-like structure, remove path, and use request.url for our routes instead of path.basename as follows:

var http = require('http');
var pages = [
  {route: '/', output: 'Woohoo!'},
  {route: '/about/this', output: 'Multilevel routing with Node'},
  {route: '/about/node', output: 'Evented I/O for V8 JavaScript.'},
  {route: '/another page', output: function () { return 'Here\'s ' + this.route; }}
];

http.createServer(function (request, response) {
  var lookup = decodeURI(request.url);

When serving static files, request.url must be cleaned prior to fetching a given file. Check out the Securing against filesystem hacking exploits recipe in this article.

Multilevel routing could be taken further; we could build and then traverse a more complex object as follows:

{route: 'about', childRoutes: [
  {route: 'node', output: 'Evented I/O for V8 JavaScript'},
  {route: 'this', output: 'Complex Multilevel Example'}
]}

After the third or fourth level, this object would become a leviathan to look at. We could alternatively create a helper function to define our routes that essentially pieces our object together for us (see the sketch at the end of this recipe). Alternatively, we could use one of the excellent non-core routing modules provided by the open source Node community. Excellent solutions already exist that provide helper methods to handle the increasing complexity of scalable multilevel routing.

Parsing the query string

Two other useful core modules are url and querystring. The url.parse method allows two parameters: first, the URL string (in our case, this will be request.url), and second, a Boolean parameter named parseQueryString. If the latter is set to true, it lazy loads the querystring module (saving us the need to require it) to parse the query into an object. This makes it easy for us to interact with the query portion of a URL, as shown in the following code:

var http = require('http');
var url = require('url');
var pages = [
  {id: '1', route: '', output: 'Woohoo!'},
  {id: '2', route: 'about', output: 'A simple routing with Node example'},
  {id: '3', route: 'another page', output: function () {
    return 'Here\'s ' + this.route; }
  },
];

http.createServer(function (request, response) {
  var id = url.parse(decodeURI(request.url), true).query.id;
  if (id) {
    pages.forEach(function (page) {
      if (page.id === id) {
        response.writeHead(200, {'Content-Type': 'text/html'});
        response.end(typeof page.output === 'function'
        ? page.output() : page.output);
      }
    });
  }
  if (!response.finished) {
    response.writeHead(404);
    response.end('Page Not Found');
  }
}).listen(8080);

With the added id properties, we can access our object data by, for instance, visiting localhost:8080?id=2.

The routing modules

There's an up-to-date list of various routing modules for Node at https://github.com/joyent/node/wiki/modules#wiki-web-frameworks-routers. These community-made routers cater to various scenarios. It's important to research the activity and maturity of a module before taking it into a production environment. NodeZoo (http://nodezoo.com) is an excellent tool to research the state of a Node module.

See also

The Serving static files and Securing against filesystem hacking exploits recipes discussed in this article
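Closing out this recipe, here is one possible shape for the route-building helper mentioned above. This is a minimal sketch of our own, not code from the recipe: the addRoute name, the flat-map approach, and the example routes are all illustrative assumptions.

var routes = {};

// Illustrative helper: register each route by its full path,
// building a flat map so request-time lookup is a property access
// instead of a traversal of nested childRoutes objects.
function addRoute(routePath, output) {
  routes[routePath] = output;
}

addRoute('/', 'Woohoo!');
addRoute('/about/this', 'Multilevel routing with Node');
addRoute('/about/node', 'Evented I/O for V8 JavaScript.');

// At request time (inside http.createServer) the lookup would be:
// var output = routes[decodeURI(request.url)];
console.log(routes['/about/node']); // 'Evented I/O for V8 JavaScript.'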
Serving static files

If we have information stored on disk that we want to serve as web content, we can use the fs (filesystem) module to load our content and pass it through the http.createServer callback. This is a basic conceptual starting point for serving static files; as we will learn in the following recipes, there are much more efficient solutions.

Getting ready

We'll need some files to serve. Let's create a directory named content, containing the following three files: index.html, styles.css, and script.js.

Add the following code to the HTML file index.html:

<html>
  <head>
    <title>Yay Node!</title>
    <link rel=stylesheet href=styles.css type=text/css>
    <script src=script.js type=text/javascript></script>
  </head>
  <body>
    <span id=yay>Yay!</span>
  </body>
</html>

Add the following code to the JavaScript file script.js:

window.onload = function() { alert('Yay Node!'); };

And finally, add the following code to the CSS file styles.css:

#yay {font-size:5em;background:blue;color:yellow;padding:0.5em}

How to do it...

As in the previous recipe, we'll be using the core modules http and path. We'll also need to access the filesystem, so we'll require fs as well. With the help of the following code, let's create the server and use the path module to check whether a file exists:

var http = require('http');
var path = require('path');
var fs = require('fs');

http.createServer(function (request, response) {
  var lookup = path.basename(decodeURI(request.url)) || 'index.html';
  var f = 'content/' + lookup;
  fs.exists(f, function (exists) {
    console.log(exists ? lookup + " is there"
    : lookup + " doesn't exist");
  });
}).listen(8080);

If we haven't already done so, we can initialize our server.js file by running the following command:

supervisor server.js

Try loading localhost:8080/foo. The console will say foo doesn't exist, because it doesn't. The localhost:8080/script.js URL will tell us that script.js is there, because it is.

Before we can serve a file, we are supposed to let the client know the content-type, which we can determine from the file extension. So let's make a quick map using an object as follows:

var mimeTypes = {
  '.js' : 'text/javascript',
  '.html': 'text/html',
  '.css' : 'text/css'
};

We could extend our mimeTypes map later to support more types. Modern browsers may be able to interpret certain mime types (such as text/javascript) without the server sending a content-type header, but older browsers or less common mime types will rely upon the correct content-type header being sent from the server. Remember to place mimeTypes outside of the server callback, since we don't want to initialize the same object on every client request.

If the requested file exists, we can convert our file extension into a content-type header by feeding path.extname into mimeTypes and then pass the retrieved content-type to response.writeHead. If the requested file doesn't exist, we'll write out a 404 error and end the response as follows:

//requires variables, mimeTypes object...
http.createServer(function (request, response) {
  var lookup = path.basename(decodeURI(request.url)) || 'index.html';
  var f = 'content/' + lookup;
  fs.exists(f, function (exists) {
    if (exists) {
      var headers = {'Content-type': mimeTypes[path.extname(lookup)]};
      response.writeHead(200, headers);
      return;
    }
    response.writeHead(404); //no such file found!
    response.end();
  });
}).listen(8080);

At the moment, there is still no content sent to the client.
We have to get this content from our file, so we wrap the response handling in an fs.readFile callback as follows:

//http.createServer, inside fs.exists:
if (exists) {
  fs.readFile(f, function(err, data) {
    var headers = {'Content-type': mimeTypes[path.extname(lookup)]};
    response.writeHead(200, headers);
    response.end(data);
  });
  return;
}

Before we finish, let's apply some error handling to our fs.readFile callback as follows:

//http.createServer, inside fs.exists:
if (exists) {
  fs.readFile(f, function(err, data) {
    if (err) { response.writeHead(500); response.end('Server Error!'); return; }
    var headers = {'Content-type': mimeTypes[path.extname(lookup)]};
    response.writeHead(200, headers);
    response.end(data);
  });
  return;
}

Notice that the outer return stays outside of the fs.readFile callback. We are returning from the fs.exists callback to prevent further code execution (for example, sending the 404 error). Placing a return statement inside an if block like this is similar to using an else branch. However, the return-inside-if pattern is encouraged over if-else, as it eliminates a level of nesting. Nesting can be particularly prevalent in Node because so much asynchronous work is done through callback functions.

So, now we can navigate to localhost:8080, which will serve our index.html file. The index.html file makes calls to our script.js and styles.css files, which our server also delivers with appropriate mime types.

This recipe serves to illustrate the fundamentals of serving static files. Remember, this is not an efficient solution! In a real-world situation, we don't want to make an I/O call every time a request hits the server; this is very costly, especially with larger files. In the following recipes, we'll learn better ways to serve static files.

How it works...

Our script creates a server and declares a variable called lookup. We assign a value to lookup using the double pipe || (OR) operator. This defines a default route if path.basename is empty. Then we pass lookup to a new variable named f in order to prepend our content directory to the intended filename. Next, we run f through the fs.exists method and check the exists parameter in our callback to see whether the file is there.

If the file does exist, we read it asynchronously using fs.readFile. If there is a problem accessing the file, we write a 500 server error, end the response, and return from the fs.readFile callback. We can test the error-handling functionality by removing read permissions from index.html as follows:

chmod -r index.html

Doing so will cause the server to respond with the 500 server error status code. To set things right again, run the following command:

chmod +r index.html

chmod is a Unix-type system-specific command. If we are using Windows, there's no need to set file permissions in this case.

As long as we can access the file, we grab the content-type using our handy mimeTypes mapping object, write the headers, end the response with data loaded from the file, and finally return from the function. If the requested file does not exist, we bypass all this logic, write a 404 error message, and end the response.
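As an aside, the early-return style used in this recipe can be illustrated generically. This is our own side-by-side sketch, not code from the recipe:

// Nested style: the two outcomes sit at different depths.
function deliverNested(exists) {
  if (exists) {
    console.log('serve the file');
  } else {
    console.log('send a 404');
  }
}

// Early-return style: handle one branch, return, and keep the
// rest of the function flat — one less level of nesting.
function deliverFlat(exists) {
  if (exists) {
    console.log('serve the file');
    return;
  }
  console.log('send a 404');
}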
There's more...

The favicon icon file is something to watch out for. We will explore it in this section.

The favicon gotcha

When using a browser to test our server, an unexpected server hit can sometimes be observed. This is the browser requesting the default favicon.ico icon file that servers can provide. Apart from the initial confusion of seeing additional hits, this is usually not a problem. If the favicon request does begin to interfere, we can handle it as follows:

if (request.url === '/favicon.ico') {
  console.log('Not found: ' + f);
  response.end();
  return;
}

If we wanted to be more polite to the client, we could also inform it of a 404 error by calling response.writeHead(404) before issuing response.end.

See also

The Caching content in memory for immediate delivery recipe
The Optimizing performance with streaming recipe
The Securing against filesystem hacking exploits recipe

Caching content in memory for immediate delivery

Directly accessing storage on each client request is not ideal. For this task, we will explore how to enhance server efficiency by accessing the disk on only the first request, caching the data from the file at that point, and serving all further requests out of the process memory.

Getting ready

We are going to improve upon the code from the previous task, so we'll be working with server.js and, in the content directory, with index.html, styles.css, and script.js.

How to do it...

Let's begin by looking at the script from the previous recipe, Serving static files:

var http = require('http');
var path = require('path');
var fs = require('fs');

var mimeTypes = {
  '.js' : 'text/javascript',
  '.html': 'text/html',
  '.css' : 'text/css'
};

http.createServer(function (request, response) {
  var lookup = path.basename(decodeURI(request.url)) || 'index.html';
  var f = 'content/' + lookup;
  fs.exists(f, function (exists) {
    if (exists) {
      fs.readFile(f, function(err, data) {
        if (err) {
          response.writeHead(500);
          response.end('Server Error!');
          return;
        }
        var headers = {'Content-type': mimeTypes[path.extname(lookup)]};
        response.writeHead(200, headers);
        response.end(data);
      });
      return;
    }
    response.writeHead(404); //no such file found!
    response.end('Page Not Found');
  });
}).listen(8080);

We need to modify this code to read each file only once, load its contents into memory, and respond to all subsequent requests for that file from memory. To keep things simple and preserve maintainability, we'll extract our cache handling and content delivery into a separate function. So, above http.createServer and below mimeTypes, we'll add the following:

var cache = {};

function cacheAndDeliver(f, cb) {
  if (!cache[f]) {
    fs.readFile(f, function(err, data) {
      if (!err) {
        cache[f] = {content: data};
      }
      cb(err, data);
    });
    return;
  }
  console.log('loading ' + f + ' from cache');
  cb(null, cache[f].content);
}
//http.createServer...

A new cache object and a new function called cacheAndDeliver have been added to store our files in memory. Our function takes the same parameters as fs.readFile, so we can replace fs.readFile in the http.createServer callback while leaving the rest of the code intact as follows:

//...inside http.createServer:
fs.exists(f, function (exists) {
  if (exists) {
    cacheAndDeliver(f, function(err, data) {
      if (err) {
        response.writeHead(500);
        response.end('Server Error!');
        return;
      }
      var headers = {'Content-type': mimeTypes[path.extname(f)]};
      response.writeHead(200, headers);
      response.end(data);
    });
    return;
  }
  //rest of fs.exists code (404 handling)...
When we execute our server.js file and access localhost:8080 twice in a row, the second request causes the console to display the following output:

loading content/index.html from cache
loading content/styles.css from cache
loading content/script.js from cache

How it works...

We defined a function called cacheAndDeliver, which, like fs.readFile, takes a filename and a callback as parameters. This is great because we can pass the exact same callback we gave fs.readFile to cacheAndDeliver, padding the server out with caching logic without adding any visual complexity to the inside of the http.createServer callback. As it stands, the worth of abstracting our caching logic into an external function is arguable, but the more we build on the server's caching abilities, the more feasible and useful this abstraction becomes.

Our cacheAndDeliver function checks whether the requested content is already cached. If not, we call fs.readFile and load the data from disk. Once we have this data, we may as well hold onto it, so it's placed into the cache object, referenced by its file path (the f variable). The next time anyone requests the file, cacheAndDeliver will see that we have the file stored in the cache object and will issue an alternative callback containing the cached data. Notice that we fill the cache[f] property with a new object containing a content property. This makes it easier to extend the caching functionality in the future, as we would just have to place extra properties into our cache[f] object and supply logic that interfaces with those properties accordingly.

There's more...

If we were to modify the files we are serving, the changes wouldn't be reflected until we restart the server. We can do something about that.

Reflecting content changes

To detect whether a requested file has changed since we last cached it, we must know when the file was cached and when it was last modified. To record when the file was last cached, let's extend the cache[f] object as follows:

cache[f] = {
  content: data,
  timestamp: Date.now() // store a Unix timestamp
};

That was easy! Now let's find out when the file was last updated. The fs.stat method returns an object as the second parameter of its callback. This object contains the same useful information as the stat command from the command-line GNU (GNU's Not Unix!) coreutils. The fs.stat function supplies three time-related properties: last accessed (atime), last modified (mtime), and last changed (ctime). The difference between mtime and ctime is that ctime reflects any alterations to the file, whereas mtime reflects only alterations to the content of the file. Consequently, if we changed the permissions of a file, ctime would be updated but mtime would stay the same. We want to pay attention to permission changes as they happen, so let's use the ctime property as shown in the following code:

//requires and mimeTypes object....
var cache = {};

function cacheAndDeliver(f, cb) {
  fs.stat(f, function (err, stats) {
    if (err) { return console.log('Oh no!, Error', err); }
    var lastChanged = Date.parse(stats.ctime),
    isUpdated = (cache[f]) && lastChanged > cache[f].timestamp;
    if (!cache[f] || isUpdated) {
      fs.readFile(f, function (err, data) {
        console.log('loading ' + f + ' from file');
        //rest of cacheAndDeliver
      });
    }
  }); //end of fs.stat
}

If we're using Node on Windows, we may have to substitute ctime with mtime, as ctime support on Windows depends on the Node version in use.
The contents of cacheAndDeliver have been wrapped in an fs.stat callback, two variables have been added, and the if (!cache[f]) statement has been modified. We parse the ctime property of the second parameter, dubbed stats, using Date.parse to convert it to milliseconds since midnight, January 1, 1970 (the Unix epoch) and assign it to our lastChanged variable. Then we check whether the requested file's last-changed time is greater than when we cached the file (provided the file is indeed cached) and assign the result to our isUpdated variable. After that, it's merely a case of adding the isUpdated Boolean to the conditional if (!cache[f]) statement via the || (OR) operator. If the file is newer than our cached version (or if it isn't yet cached), we load the file from disk into the cache object.

See also

The Optimizing performance with streaming recipe discussed in this article

Optimizing performance with streaming

Caching content certainly improves upon reading a file from disk for every request. However, with fs.readFile, we are reading the whole file into memory before sending it out in a response. For better performance, we can stream a file from disk and pipe it directly to the response object, sending data straight to the network socket a piece at a time.

Getting ready

We are building on our code from the last example, so let's get server.js, index.html, styles.css, and script.js ready.

How to do it...

We will be using fs.createReadStream to initialize a stream, which can be piped to the response object. In this case, implementing fs.createReadStream within our cacheAndDeliver function isn't ideal, because the event listeners of fs.createReadStream will need to interface with the request and response objects, which, for the sake of simplicity, would preferably be dealt with in the http.createServer callback. For brevity's sake, we will discard our cacheAndDeliver function and implement basic caching within the server callback as follows:

//...snip... requires, mime types, createServer, lookup and f vars...
fs.exists(f, function (exists) {
  if (exists) {
    var headers = {'Content-type': mimeTypes[path.extname(f)]};
    if (cache[f]) {
      response.writeHead(200, headers);
      response.end(cache[f].content);
      return;
    }
//...snip... rest of server code...

Later on, we will fill cache[f].content while we are interfacing with the readStream object. The following code shows how we use fs.createReadStream:

var s = fs.createReadStream(f);

The preceding code returns a readStream object that streams the file pointed at by the f variable. The readStream object emits events that we need to listen to. We can listen with addEventListener or use the shorthand on method as follows:

var s = fs.createReadStream(f).on('open', function () {
  //do stuff when the readStream opens
});

Because createReadStream returns the readStream object, we can latch our event listener straight onto it using method chaining with dot notation. Each stream is only going to open once; we don't need to keep listening to it.
Therefore, we can use the once method instead of on to automatically stop listening after the first event occurrence, as follows:

var s = fs.createReadStream(f).once('open', function () {
  //do stuff when the readStream opens
});

Before we fill out the open event callback, let's implement some error handling as follows:

var s = fs.createReadStream(f).once('open', function () {
  //do stuff when the readStream opens
}).once('error', function (e) {
  console.log(e);
  response.writeHead(500);
  response.end('Server Error!');
});

The key to this whole endeavor is the stream.pipe method. This is what enables us to take our file straight from disk and stream it directly to the network socket via our response object, as follows:

var s = fs.createReadStream(f).once('open', function () {
  response.writeHead(200, headers);
  this.pipe(response);
}).once('error', function (e) {
  console.log(e);
  response.writeHead(500);
  response.end('Server Error!');
});

But what about ending the response? Conveniently, stream.pipe detects when the stream has ended and calls response.end for us. There's one other event we need to listen to, for caching purposes. Within our fs.exists callback, underneath the createReadStream code block, we write the following code:

fs.stat(f, function(err, stats) {
  var bufferOffset = 0;
  cache[f] = {content: new Buffer(stats.size)};
  s.on('data', function (chunk) {
    chunk.copy(cache[f].content, bufferOffset);
    bufferOffset += chunk.length;
  });
}); //end of createReadStream

We've used the data event to capture each buffer as it's being streamed and copied it into the buffer that we supplied to cache[f].content, using fs.stat to obtain the file size for the file's cache buffer. For this case, we're using the classic-mode data event instead of the readable event coupled with stream.read() (see http://nodejs.org/api/stream.html#stream_readable_read_size_1) because it best suits our aim, which is to grab data from the stream as soon as possible.

How it works...

Instead of the client waiting for the server to load the entire file from disk before sending it, we use a stream to load the file in small, ordered pieces and promptly send them to the client. With larger files this is especially useful, as there is minimal delay between the file being requested and the client starting to receive it.

We did this by using fs.createReadStream to start streaming our file from disk. The fs.createReadStream method creates a readStream object, which inherits from the EventEmitter class. The EventEmitter class accomplishes the "evented" part of Node pretty well. Because of this, we use listeners instead of callbacks to control the flow of stream logic.

We then added an open event listener using the once method, since we want to stop listening to open once it has been triggered. We respond to the open event by writing the headers and using the stream.pipe method to shuffle the incoming data straight to the client. If the client becomes overwhelmed with processing, stream.pipe applies backpressure, which means that the incoming stream is paused until the backlog of data is handled.

While the response is being piped to the client, the content cache is simultaneously being filled. To achieve this, we had to create an instance of the Buffer class for our cache[f].content property. A Buffer must be supplied with a size (or an array or string), which in our case is the size of the file.
To get the size, we used the asynchronous fs.stat method and captured the size property in the callback. The data event returns a Buffer as its only callback parameter.

The default bufferSize for a stream is 64 KB; any file smaller than the bufferSize will trigger only one data event, because the whole file fits into the first chunk of data. For files larger than the bufferSize, we have to fill our cache[f].content property one piece at a time.

Changing the default readStream buffer size

We can change the buffer size of our readStream object by passing an options object with a bufferSize property as the second parameter of fs.createReadStream. For instance, to double the buffer, we could use fs.createReadStream(f, {bufferSize: 128 * 1024});.

We cannot simply concatenate each chunk onto cache[f].content, because doing so would coerce the binary data into string format which, though no longer binary, would later be interpreted as binary. Instead, we have to copy all the little binary buffer chunks into our binary cache[f].content buffer. We created a bufferOffset variable to assist with this. Each time we add another chunk to our cache[f].content buffer, we update bufferOffset by adding the length of the chunk buffer to it. When we call the Buffer.copy method on the chunk buffer, we pass bufferOffset as the second parameter, so our cache[f].content buffer is filled correctly.

Moreover, operating with the Buffer class yields performance enhancements with larger files, because it bypasses the V8 garbage-collection methods, which tend to fragment large amounts of data, slowing down Node's ability to process them.

There's more...

While streaming has solved the problem of waiting for files to load into memory before delivering them, we are nevertheless still loading files into memory via our cache object. With larger files, or a large number of files, this could have potential ramifications.

Protecting against process memory overruns

Streaming allows for intelligent and minimal use of memory when processing large items. But even with well-written code, some apps may require significant memory. There is a limited amount of heap memory. By default, V8's memory is set to 1400 MB on 64-bit systems and 700 MB on 32-bit systems. This can be altered by running node with --max-old-space-size=N, where N is the number of megabytes (the actual maximum it can be set to depends on the OS and on whether we're running a 32-bit or 64-bit architecture; a 32-bit system may peak out at around 2 GB, and of course the amount of physical RAM available matters). The --max-old-space-size flag doesn't apply to buffers, since it governs the V8 heap (memory allocated for JavaScript objects and primitives), and buffers are allocated outside of the V8 heap.

If we absolutely had to be memory intensive, we could run our server on a large cloud platform, divide up the logic, and start new instances of node using the child_process class, or better still, the higher-level cluster module. There are other, more advanced ways to increase the usable memory, including editing and recompiling the V8 code base. The http://blog.caustik.com/2012/04/11/escape-the-1-4gb-v8-heap-limit-in-node-js link has some tips along these lines.
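As a concrete invocation of the flag just mentioned (the 4096 value is an arbitrary example of ours, not a recommendation), it is passed straight to node on the command line:

node --max-old-space-size=4096 server.js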
In this case, high memory usage isn't necessarily required, and we can optimize our code to significantly reduce the potential for memory overruns. There is less benefit to caching larger files, because the slight speed improvement relative to the total download time is negligible, while the cost of caching them is quite significant relative to our available process memory. We can also improve cache efficiency by implementing an expiration time on cache objects, which can then be used to clean the cache, removing files in low demand and prioritizing high-demand files for faster delivery. Let's rearrange our cache object slightly, as follows:

var cache = {
  store: {},
  maxSize: 26214400 //(bytes) 25mb
};

For a clearer mental model, we're making a distinction between the cache object as a functioning entity and the cache object as a store (which is a part of the broader cache entity). Our first goal is to only cache files under a certain size; we've defined cache.maxSize for this purpose. All we have to do now is insert an if condition within the fs.stat callback, as follows:

fs.stat(f, function (err, stats) {
  if (stats.size < cache.maxSize) {
    var bufferOffset = 0;
    cache.store[f] = {content: new Buffer(stats.size),
      timestamp: Date.now() };
    s.on('data', function (data) {
      data.copy(cache.store[f].content, bufferOffset);
      bufferOffset += data.length;
    });
  }
});

Notice that we also slipped a new timestamp property into cache.store[f]. This is for our second goal—cleaning the cache. Let's extend cache as follows:

var cache = {
  store: {},
  maxSize: 26214400, //(bytes) 25mb
  maxAge: 5400 * 1000, //(ms) 1 and a half hours
  clean: function(now) {
    var that = this;
    Object.keys(this.store).forEach(function (file) {
      if (now > that.store[file].timestamp + that.maxAge) {
        delete that.store[file];
      }
    });
  }
};

So, in addition to maxSize, we've created a maxAge property and added a clean method. We call cache.clean at the bottom of the server with the help of the following code:

//all of our code prior
  cache.clean(Date.now());
}).listen(8080); //end of the http.createServer

The cache.clean method loops through cache.store and checks whether each entry has exceeded its specified lifetime. If it has, we remove it from the store. One further improvement and then we're done. The cache.clean method is called on each request, which means cache.store is looped through on every server hit—neither necessary nor efficient. It would be better if we cleaned the cache, say, every two hours or so. We'll add two more properties to cache: cleanAfter, to specify the time between cache cleans, and cleanedAt, to determine how long it has been since the cache was last cleaned, as follows:

var cache = {
  store: {},
  maxSize: 26214400, //(bytes) 25mb
  maxAge: 5400 * 1000, //(ms) 1 and a half hours
  cleanAfter: 7200 * 1000, //(ms) two hours
  cleanedAt: 0, //to be set dynamically
  clean: function (now) {
    if (now - this.cleanedAt > this.cleanAfter) {
      this.cleanedAt = now;
      var that = this;
      Object.keys(this.store).forEach(function (file) {
        if (now > that.store[file].timestamp + that.maxAge) {
          delete that.store[file];
        }
      });
    }
  }
};

So we wrap the body of cache.clean in an if statement, which allows a loop through cache.store only if it has been longer than two hours (or whatever cleanAfter is set to) since the last clean.
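A possible variation, as our own aside rather than part of the recipe: the clean could be scheduled with a timer instead of being checked on every request, so request handling never pays for it. This sketch assumes the cache object defined above is in scope:

// Illustrative alternative: run the clean on a fixed schedule.
var CLEAN_INTERVAL = 7200 * 1000; // two hours, mirroring cleanAfter

setInterval(function () {
  cache.clean(Date.now());
}, CLEAN_INTERVAL);

The trade-off is an always-armed timer in exchange for strictly request-independent cleaning; the per-request check used in the recipe keeps the server free of timers and costs almost nothing once the cleanAfter gate is in place.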
See also

The Securing against filesystem hacking exploits recipe discussed in this article

Securing against filesystem hacking exploits

For a Node app to be insecure, there must be something an attacker can interact with for exploitation purposes. Due to Node's minimalist approach, the onus is on the programmer to ensure that their implementation doesn't expose security flaws. This recipe will help identify some security risk anti-patterns that can occur when working with the filesystem.

Getting ready

We'll be working with the same content directory as in the previous recipes, but we'll start a new insecure_server.js file (there's a clue in the name!) from scratch to demonstrate mistaken techniques.

How to do it...

Our previous static file recipes tend to use path.basename to acquire a route, but this ignores intermediate paths. If we accessed localhost:8080/foo/bar/styles.css, our code would take styles.css as the basename and deliver content/styles.css to us. How about we make a subdirectory in our content folder? Call it subcontent and move the script.js and styles.css files into it. We'd have to alter the script and link tags in index.html as follows:

<link rel=stylesheet type=text/css href=subcontent/styles.css>
<script src=subcontent/script.js type=text/javascript></script>

We can use the url module to grab the entire pathname property. So let's include the url module in our new insecure_server.js file, create our HTTP server, and use pathname to get the whole requested path, as follows:

var http = require('http');
var url = require('url');
var fs = require('fs');

http.createServer(function (request, response) {
  var lookup = url.parse(decodeURI(request.url)).pathname;
  lookup = (lookup === "/") ? '/index.html' : lookup;
  var f = 'content' + lookup;
  console.log(f);
  fs.readFile(f, function (err, data) {
    response.end(data);
  });
}).listen(8080);

If we navigate to localhost:8080, everything works great! We've gone multilevel, hooray! For demonstration purposes, a few things have been stripped out from the previous recipes (such as fs.exists), but even with them, this code presents the same security hazards if we type the following:

curl localhost:8080/../insecure_server.js

Now we have our server's code. An attacker could also access /etc/passwd with a few attempts at guessing its relative path, as follows:

curl localhost:8080/../../../../../../../etc/passwd

If we're using Windows, we can download and install curl from http://curl.haxx.se/download.html. In order to test these attacks, we have to use curl or an equivalent, because modern browsers filter out this sort of request.

As a solution, what if we added a unique suffix to each file we wanted to serve and made it mandatory for the suffix to exist before the server coughs it up? That way, an attacker could not request /etc/passwd or our insecure_server.js file, because those files wouldn't have the unique suffix. To try this, let's copy the content folder, call it content-pseudosafe, and rename our files to index.html-serve, script.js-serve, and styles.css-serve. Let's create a new server file and name it pseudosafe_server.js. Now all we have to do is make the -serve suffix mandatory, as follows:

//requires section ...snip...
http.createServer(function (request, response) {
  var lookup = url.parse(decodeURI(request.url)).pathname;
  lookup = (lookup === "/") ? '/index.html-serve'
    : lookup + '-serve';
  var f = 'content-pseudosafe' + lookup;
//...snip... rest of the server code...
For feedback purposes, we'll also include some 404 handling with the help of fs.exists, as follows:

//requires, create server etc
fs.exists(f, function (exists) {
  if (!exists) {
    response.writeHead(404);
    response.end('Page Not Found!');
    return;
  }
  //read file etc

So, let's start our pseudosafe_server.js file and try out the same exploit by executing the following command:

curl -i localhost:8080/../insecure_server.js

We've used the -i argument so that curl will output the headers. The result? A 404, because the file it's actually looking for is ../insecure_server.js-serve, which doesn't exist. So what's wrong with this method? Well, it's inconvenient and prone to error. But more importantly, an attacker can still work around it! Try this by typing the following:

curl localhost:8080/../insecure_server.js%00/index.html

And voilà! There's our server code again. The solution to our problem is path.normalize, which cleans up our pathname before it gets to fs.readFile, as shown in the following code:

//also require the path module alongside http, url, and fs
http.createServer(function (request, response) {
  var lookup = url.parse(decodeURI(request.url)).pathname;
  lookup = path.normalize(lookup);
  lookup = (lookup === "/") ? '/index.html' : lookup;
  var f = 'content' + lookup;

Prior recipes haven't used path.normalize, and yet they're still relatively safe. The path.basename method gives us the last part of the path, thus removing any preceding double-dot paths (../) that would take an attacker higher up the directory hierarchy than should be allowed.

How it works...

Here we have two filesystem exploitation techniques: the relative directory traversal and poison null byte attacks. These attacks can take different forms, such as in a POST request or from an external file, and they can have different effects; if we were writing to files instead of reading them, an attacker could potentially start making changes to our server. The key to security in all cases is to validate and clean any data that comes from the user.

In insecure_server.js, we pass whatever the user requests to our fs.readFile method. This is foolish, because it allows an attacker to take advantage of the relative path functionality in our operating system by using ../, thus gaining access to areas that should be off limits. By adding the -serve suffix, we didn't solve the problem; we put a plaster on it, and it can be circumvented by the poison null byte. The key to this attack is the %00 value, which is the URL hex code for the null byte. In this case, the null byte blinds Node to the ../insecure_server.js portion, but when the same null byte is sent through to our fs.readFile method, it has to interface with the kernel, and the kernel is blinded to the index.html part. So our code sees index.html, but the read operation sees ../insecure_server.js. This is known as null byte poisoning.

To protect ourselves, we could use a regex to remove the ../ parts of the path. We could also check for the null byte and spit out a 400 Bad Request statement. But we don't have to, because path.normalize filters out the null byte and relative parts for us.
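For illustration only, here is roughly what those two manual checks might look like. This is our own sketch of the checks just described, not code from the recipe (the sanitize name and its contract are our invention), and path.normalize remains the simpler choice:

//our own illustrative sketch of manual request sanitization
function sanitize(pathname, response) {
  // reject requests containing a poison null byte
  // (decodeURI turns %00 into the '\u0000' character)
  if (pathname.indexOf('\u0000') !== -1) {
    response.writeHead(400);
    response.end('Bad Request');
    return null; // signal the caller to stop processing
  }
  // strip any ../ segments that would climb the directory tree
  return pathname.replace(/\.\.\//g, '');
}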
There's more...

Let's delve further into how we can protect our servers when it comes to serving static files.

Whitelisting

If security is an extreme priority, we can adopt a strict whitelisting approach. In this approach, we create a manual route for each file we are willing to deliver. Anything not on our whitelist returns a 404 error. We can place a whitelist array above http.createServer, as follows:

var whitelist = [
  '/index.html',
  '/subcontent/styles.css',
  '/subcontent/script.js'
];

And inside our http.createServer callback, we'll put an if statement to check whether the requested path is in the whitelist array, as follows:

if (whitelist.indexOf(lookup) === -1) {
  response.writeHead(404);
  response.end('Page Not Found!');
  return;
}

And that's it! We can test this by placing a file named non-whitelisted.html in our content directory and then executing the following command:

curl -i localhost:8080/non-whitelisted.html

This returns a 404 error, because non-whitelisted.html isn't on the whitelist.

node-static

The Node modules wiki page (https://github.com/joyent/node/wiki/modules#wiki-web-frameworks-static) has a list of static file server modules available for different purposes. It's a good idea to ensure that a project is mature and active before relying upon it to serve your content. The node-static module is a well-developed module with built-in caching. It's also compliant with the RFC 2616 HTTP standards specification, which defines how files should be delivered over HTTP. The node-static module implements all the essentials discussed in this article and more.

For the next example, we'll need the node-static module. We can install it by executing the following command:

npm install node-static

The following piece of code is slightly adapted from the node-static module's GitHub page at https://github.com/cloudhead/node-static:

var static = require('node-static');
var fileServer = new static.Server('./content');

require('http').createServer(function (request, response) {
  request.addListener('end', function () {
    fileServer.serve(request, response);
  });
}).listen(8080);

The preceding code interfaces with the node-static module to handle server-side and client-side caching, uses streams to deliver content, and filters out relative requests and null bytes, among other things.

Summary

To learn more about Node.js and creating web servers, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Node Cookbook Second Edition (https://www.packtpub.com/web-development/node-cookbook-second-edition)
- Node.js Design Patterns (https://www.packtpub.com/web-development/nodejs-design-patterns)
- Node Web Development Second Edition (https://www.packtpub.com/web-development/node-web-development-second-edition)

Further resources on this subject: Working with Commands and Plugins; Node.js Fundamentals and Asynchronous JavaScript; Building a Movie API with Express.

Understanding PHP basics

Packt
17 Feb 2016
27 min read
In this article by Antonio Lopez Zapata, the author of the book Learning PHP 7, you will learn not only the syntax of the language but also its grammatical rules, that is, when and why to use each element of the language. Luckily for you, some languages come from the same root. For example, Spanish and French are Romance languages, as they both evolved from spoken Latin; this means that these two languages share a lot of rules, and learning Spanish if you already know French is much easier.

Programming languages are much the same. If you already know another programming language, it will be very easy for you to go through this chapter. If it is your first time though, you will need to understand all the grammatical rules from scratch, so it might take some more time. But fear not! We are here to help you in this endeavor. In this chapter, you will learn about these topics:

- PHP in web applications
- Control structures
- Functions

PHP in web applications

Even though the main purpose of this chapter is to show you the basics of PHP, doing so in a reference-manual way is not interesting enough. If we were to copy and paste what the official documentation says, you might as well go there and read it yourself. Instead, let's not forget the main purpose of this book and your main goal—to write web applications with PHP. We will show you how you can apply everything you learn as soon as possible, before you get too bored. In order to do that, we will go through the journey of building an online bookstore. At the very beginning, you might not see the usefulness of this, but that is just because we still haven't seen all that PHP can do.

Getting information from the user

Let's start by building a home page. On this page, we are going to figure out whether the user is looking for a book or just browsing. How do we find this out? The easiest way right now is to inspect the URL that the user used to access our application and extract some information from there. Save this content as your index.php file:

<?php
$looking = isset($_GET['title']) || isset($_GET['author']);
?>
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Bookstore</title>
</head>
<body>
  <p>Are you looking for a book? <?php echo (int) $looking; ?></p>
  <p>The book you are looking for is</p>
  <ul>
    <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
    <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
  </ul>
</body>
</html>

Now, access http://localhost:8000/?author=Harper Lee&title=To Kill a Mockingbird. You will see that the page prints some of the information that you passed in the URL.

For each request, PHP stores all the parameters that come from the query string in an array called $_GET. Each key of the array is the name of a parameter, and its associated value is the value of the parameter. So, $_GET contains two entries: $_GET['author'] contains Harper Lee and $_GET['title'] contains To Kill a Mockingbird.

In the opening PHP block, we assign a Boolean value to the $looking variable. If either $_GET['title'] or $_GET['author'] exists, this variable will be true; otherwise, false. Just after that, we close the PHP tag and start printing some HTML, but as you can see, we are actually mixing HTML with PHP code.

Another interesting line is the one that echoes $looking. We print its content, but before that, we cast the value. Casting means forcing PHP to transform one type of value into another. Casting a Boolean to an integer means that the resultant value will be 1 if the Boolean is true, or 0 if the Boolean is false.
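Here is a quick standalone illustration of that cast—our own snippet, runnable as a small script, not part of the bookstore code:

<?php
// Our own illustration of casting Booleans to integers.
$found = true;
$missing = false;

echo (int) $found;    // prints 1
echo (int) $missing;  // prints 0

// Without the cast, echo prints the string version instead:
echo $found;   // prints 1, because "1" is true's string form
echo $missing; // prints nothing: false's string form is ""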
As $looking is true, since $_GET contains valid keys, the page shows 1. If we try to access the same page without sending any information, as in http://localhost:8000, the browser will say "Are you looking for a book? 0". Depending on the settings of your PHP configuration, you will also see two notice messages complaining that you are trying to access keys of the array that do not exist.

Casting versus type juggling

We already know that when PHP needs a specific type of variable, it will try to transform it, which is called type juggling. But PHP is quite flexible, so sometimes you have to be the one specifying the type that you need. When printing something with echo, PHP tries to transform everything it gets into a string. Since the string version of the false Boolean is an empty string, this would not be useful for our application. Casting the Boolean to an integer first assures that we will see a value, even if it is just "0".

HTML forms

HTML forms are one of the most popular ways to collect information from users. They consist of a series of fields, called inputs in the HTML world, and a final submit button. In HTML, the form tag contains two important attributes: action, which points to where the form will be submitted, and method, which specifies the HTTP method the form will use—GET or POST. Let's see how it works. Save the following content as login.html and go to http://localhost:8000/login.html:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Bookstore - Login</title>
</head>
<body>
  <p>Enter your details to login:</p>
  <form action="authenticate.php" method="post">
    <label>Username</label>
    <input type="text" name="username" />
    <label>Password</label>
    <input type="password" name="password" />
    <input type="submit" value="Login"/>
  </form>
</body>
</html>

This form contains two fields, one for the username and one for the password. You can see that they are identified by the name attribute. If you try to submit this form, the browser will show you a Page Not Found message, as it is trying to access http://localhost:8000/authenticate.php and the web server cannot find it. Let's create it then:

<?php
$submitted = !empty($_POST);
?>
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Bookstore</title>
</head>
<body>
  <p>Form submitted? <?php echo (int) $submitted; ?></p>
  <p>Your login info is</p>
  <ul>
    <li><b>username</b>: <?php echo $_POST['username']; ?></li>
    <li><b>password</b>: <?php echo $_POST['password']; ?></li>
  </ul>
</body>
</html>

As with $_GET, $_POST is an array that contains the parameters received via POST. In this piece of code, we first ask whether that array is not empty—note the ! operator. Afterwards, we just display the information received, as in index.php. Note that the keys of the $_POST array are the values of the name attribute of each input field.

Control structures

So far, our files have been executed line by line. Because of that, we get notices in some scenarios, such as when the array does not contain what we are looking for. Would it not be nice if we could choose which lines to execute? Control structures to the rescue!

A control structure is like a traffic diversion sign. It directs the execution flow depending on some predefined conditions. There are different control structures, but we can categorize them into conditionals and loops.
A conditional allows us to choose whether or not to execute a statement. A loop executes a statement as many times as we need. Let's take a look at each of them.

Conditionals

A conditional evaluates a Boolean expression, that is, something that returns a value. If the expression is true, it executes everything inside its block of code. A block of code is a group of statements enclosed by {}. Let's see how it works:

<?php
echo "Before the conditional.";
if (4 > 3) {
  echo "Inside the conditional.";
}
if (3 > 4) {
  echo "This will not be printed.";
}
echo "After the conditional.";

In this piece of code, we are using two conditionals. A conditional is defined by the keyword if followed by a Boolean expression in parentheses and a block of code. If the expression is true, the block is executed; otherwise, it is skipped.

You can increase the power of conditionals by adding the keyword else. This tells PHP to execute a block of code if the previous conditions were not satisfied. Let's see an example:

if (2 > 3) {
  echo "Inside the conditional.";
} else {
  echo "Inside the else.";
}

This executes the code inside else, as the condition of if was not satisfied.

Finally, you can also add an elseif keyword followed by another condition and block of code to keep asking PHP for more conditions. You can add as many elseif clauses as you need after if. If you add else, it has to be the last one in the chain of conditions. Also keep in mind that as soon as PHP finds a condition that resolves to true, it stops evaluating the rest of the conditions:

<?php
if (4 > 5) {
  echo "Not printed";
} elseif (4 > 4) {
  echo "Not printed";
} elseif (4 == 4) {
  echo "Printed.";
} elseif (4 > 2) {
  echo "Not evaluated.";
} else {
  echo "Not evaluated.";
}
if (4 == 4) {
  echo "Printed";
}

In this last example, the first condition that evaluates to true is 4 == 4. After that, PHP does not evaluate any more conditions until a new if starts.

With this knowledge, let's try to clean up our application a bit, executing statements only when needed. Copy this code to your index.php file:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Bookstore</title>
</head>
<body>
  <p>
    <?php
    if (isset($_COOKIE['username'])) {
      echo "You are " . $_COOKIE['username'];
    } else {
      echo "You are not authenticated.";
    }
    ?>
  </p>
  <?php if (isset($_GET['title']) && isset($_GET['author'])) { ?>
    <p>The book you are looking for is</p>
    <ul>
      <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
      <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
    </ul>
  <?php } else { ?>
    <p>You are not looking for a book?</p>
  <?php } ?>
</body>
</html>

In this new code, we are mixing conditionals and HTML code in two different ways. The first opens a PHP tag and adds an if-else clause that prints, with echo, whether we are authenticated or not. No HTML is merged within the conditionals, which keeps things clear. The second approach shows an uglier solution, but one that is sometimes necessary. When you have to print a lot of HTML code, echo is not that handy, and it is better to close the PHP tag, print all the HTML you need, and then open the tag again. You can do that even inside the code block of an if clause, as you can see in the code.

Mixing PHP and HTML

If you feel like the last file we edited looks rather ugly, you are right. Mixing PHP and HTML is confusing, and you should avoid it by all means.
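One common way to keep the two apart is to compute everything in plain PHP first and leave the HTML below as a thin template. This is our own illustrative sketch, not code from the chapter:

<?php
// Our own sketch: all logic at the top, no HTML yet.
$authenticated = isset($_COOKIE['username']);
$greeting = $authenticated
    ? 'You are ' . $_COOKIE['username']
    : 'You are not authenticated.';
?>
<!DOCTYPE html>
<html lang="en">
<head><meta charset="UTF-8"><title>Bookstore</title></head>
<body>
  <!-- the template only echoes precomputed values -->
  <p><?php echo $greeting; ?></p>
</body>
</html>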
Let's edit our authenticate.php file too, as it is trying to access $_POST entries that might not be there. The new content of the file would be as follows:

<?php
$submitted = isset($_POST['username']) && isset($_POST['password']);
if ($submitted) {
    setcookie('username', $_POST['username']);
}
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <?php if ($submitted): ?>
        <p>Your login info is</p>
        <ul>
            <li><b>username</b>: <?php echo $_POST['username']; ?></li>
            <li><b>password</b>: <?php echo $_POST['password']; ?></li>
        </ul>
    <?php else: ?>
        <p>You did not submit anything.</p>
    <?php endif; ?>
</body>
</html>

This code also contains conditionals, which we already know. We are setting a variable to know whether we have submitted a login or not, and setting the cookie if we have. However, the if-else in the body shows a new way of mixing conditionals with HTML. This syntax tries to be more readable when working with HTML code, avoiding the use of {} and instead using : and endif. Both syntaxes are correct, and you should use the one that you consider more readable in each case.

Switch-case

Another control structure similar to if-else is switch-case. This structure evaluates only one expression and executes the block depending on its value. Let's see an example:

<?php
switch ($title) {
    case 'Harry Potter':
        echo "Nice story, a bit too long.";
        break;
    case 'Lord of the Rings':
        echo "A classic!";
        break;
    default:
        echo "Dunno that one.";
        break;
}

The switch-case takes an expression; in this case, a variable. It then defines a series of cases. When a case matches the current value of the expression, PHP executes the code inside it. As soon as PHP finds break, it will exit the switch-case. If none of the cases are suitable for the expression, PHP will execute the default case if there is one, but this is optional. You also need to know that breaks are mandatory if you want to exit the switch-case. If you do not specify any, PHP will keep on executing statements, even if it encounters a new case. Let's see a similar example but without breaks:

<?php
$title = 'Twilight';
switch ($title) {
    case 'Harry Potter':
        echo "Nice story, a bit too long.";
    case 'Twilight':
        echo 'Uh...';
    case 'Lord of the Rings':
        echo "A classic!";
    default:
        echo "Dunno that one.";
}

If you test this code in your browser, you will see that it prints "Uh...A classic!Dunno that one.". PHP found that the second case was valid, so it executed its content. But as there are no breaks, it kept on executing until the end. This might be the desired behavior sometimes, but not usually, so we need to be careful when using it!

Loops

Loops are control structures that allow you to execute certain statements several times—as many times as you need. You might use them in several different scenarios, but the most common one is when interacting with arrays. For example, imagine you have an array with elements, but you do not know what is in it. You want to print all its elements, so you loop through all of them. There are four types of loops. Each of them has its own use cases, but in general, you can transform one type of loop into another. Let's look at them closely.

While

While is the simplest of the loops. It executes a block of code until the expression to evaluate returns false. Let's see one example:

<?php
$i = 1;
while ($i < 4) {
    echo $i . " ";
    $i++;
}

Here, we are defining a variable with the value 1. Then, we have a while clause in which the expression to evaluate is $i < 4. This loop will execute the content of the block of code until that expression is false. As you can see, inside the loop we are incrementing the value of $i by 1 each time, so after three iterations, the loop will end. Check out the output of that script, and you will see "1 2 3". The last value printed is 3, so by that time, $i was 3. After that, we increased its value to 4, so when the while evaluated whether $i < 4, the result was false.

Whiles and infinite loops

One of the most common problems with while loops is creating an infinite loop. If you do not add any code inside the while that updates one of the variables considered in the while expression, so that it can be false at some point, PHP will never exit the loop!

For

This is the most complex of the four loops. For defines an initialization expression, an exit condition, and an end-of-iteration expression. When PHP first encounters the loop, it executes what is defined in the initialization expression. Then, it evaluates the exit condition, and if it resolves to true, it enters the loop. After executing everything inside the loop, it executes the end-of-iteration expression. Once this is done, it evaluates the exit condition again, going through the loop code and the end-of-iteration expression until it evaluates to false. As always, an example will help clarify this:

<?php
for ($i = 1; $i < 10; $i++) {
    echo $i . " ";
}

The initialization expression is $i = 1 and is executed only the first time. The exit condition is $i < 10, and it is evaluated at the beginning of each iteration. The end-of-iteration expression is $i++, which is executed at the end of each iteration. This example prints numbers from 1 to 9. Another more common usage of the for loop is with arrays:

<?php
$names = ['Harry', 'Ron', 'Hermione'];
for ($i = 0; $i < count($names); $i++) {
    echo $names[$i] . " ";
}

In this example, we have an array of names. As it is defined as a list, its keys will be 0, 1, and 2. The loop initializes the $i variable to 0, and it will iterate until the value of $i is no longer less than the number of elements in the array, 3. In the first iteration $i is 0, in the second it is 1, and in the third it is 2. When $i is 3, it will not enter the loop, as the exit condition evaluates to false. On each iteration, we are printing the content of the $i position of the array; hence, the result of this code will be all three names in the array.

Be careful with exit conditions

It is very common to set an exit condition that is not exactly what we need, especially with arrays. Remember that arrays start at 0 if they are a list, so an array of 3 elements will have entries 0, 1, and 2. Defining the exit condition as $i <= count($array) will cause an error in your code: when $i is 3, it still satisfies the exit condition, and the loop will try to access the key 3, which does not exist.

Foreach

The last, but not least, type of loop is foreach. This loop is exclusive to arrays, and it allows you to iterate an array entirely, even if you do not know its keys. There are two options for the syntax, as you can see in these examples:

<?php
$names = ['Harry', 'Ron', 'Hermione'];
foreach ($names as $name) {
    echo $name . " ";
}
foreach ($names as $key => $name) {
    echo $key . " -> " . $name . " ";
}

The foreach loop accepts an array; in this case, $names. It specifies a variable, which will contain the value of each entry of the array. You can see that we do not need to specify any end condition, as PHP will know when the array has been iterated. Optionally, you can specify a variable that will contain the key of each iteration, as in the second loop. Foreach loops are also useful with maps, where the keys are not necessarily numeric. The order in which PHP will iterate the array will be the same order in which you inserted the content into the array.

Let's use some loops in our application. We want to show the available books on our home page. We have the list of books in an array, so we will have to iterate through all of them with a foreach loop, printing some information from each one. Append the following code to the body tag in index.php:

<?php
endif;
$books = [
    [
        'title' => 'To Kill A Mockingbird',
        'author' => 'Harper Lee',
        'available' => true,
        'pages' => 336,
        'isbn' => 9780061120084
    ],
    [
        'title' => '1984',
        'author' => 'George Orwell',
        'available' => true,
        'pages' => 267,
        'isbn' => 9780547249643
    ],
    [
        'title' => 'One Hundred Years Of Solitude',
        'author' => 'Gabriel Garcia Marquez',
        'available' => false,
        'pages' => 457,
        'isbn' => 9785267006323
    ],
];
?>
<ul>
    <?php foreach ($books as $book): ?>
        <li>
            <i><?php echo $book['title']; ?></i> - <?php echo $book['author']; ?>
            <?php if (!$book['available']): ?>
                <b>Not available</b>
            <?php endif; ?>
        </li>
    <?php endforeach; ?>
</ul>

This code shows a foreach loop using the : notation, which is better when mixing it with HTML. It iterates through all the books in $books, and for each book, it prints some information as an HTML list. Also note that we have a conditional inside a loop, which is perfectly fine. Of course, this conditional will be executed for each entry in the array, so you should keep the block of code in your loops as simple as possible.

Functions

A function is a reusable block of code that, given an input, performs some actions and optionally returns a result. You already know several predefined functions, such as empty, in_array, or var_dump. These functions come with PHP so you do not have to reinvent the wheel, but you can create your own very easily. You can define functions when you identify portions of your application that have to be executed several times or just to encapsulate some functionality.

Function declaration

Declaring a function means writing it down so that it can be used later. A function has a name, takes arguments, and has a block of code. Optionally, it can define what kind of value it returns. The name of the function has to follow the same rules as variable names; that is, it has to start with a letter or underscore and can contain any letters, numbers, or underscores. It cannot be a reserved word. Let's see a simple example:

function addNumbers($a, $b) {
    $sum = $a + $b;
    return $sum;
}
$result = addNumbers(2, 3);

Here, the function's name is addNumbers, and it takes two arguments: $a and $b. The block of code defines a new variable, $sum, that is the sum of both arguments, and then returns its content with return. In order to use this function, you just need to call it by its name, sending all the required arguments, as shown in the last line.

PHP does not support overloaded functions. Overloading refers to the ability to declare two or more functions with the same name but different arguments. As you can see, you can declare the arguments without knowing what their types are, so PHP would not be able to decide which function to use.

Another important thing to note is variable scope. We are declaring a $sum variable inside the block of code, so once the function ends, the variable is not accessible any more. This means that the scope of variables declared inside the function is just the function itself. Furthermore, if you had a $sum variable declared outside the function, it would not be affected at all, since the function cannot access that variable unless we send it as an argument.

Function arguments

A function gets information from outside via arguments. You can define any number of arguments—including 0. These arguments need at least a name so that they can be used inside the function, and there cannot be two arguments with the same name. When invoking the function, you need to send the arguments in the same order as we declared them.

A function may contain optional arguments; that is, you are not forced to provide a value for those arguments. When declaring the function, you need to provide a default value for those arguments, so in case the user does not provide a value, the function will use the default one:

function addNumbers($a, $b, $printResult = false) {
    $sum = $a + $b;
    if ($printResult) {
        echo 'The result is ' . $sum;
    }
    return $sum;
}
$sum1 = addNumbers(1, 2);
$sum2 = addNumbers(3, 4, false);
$sum3 = addNumbers(5, 6, true); // it will print the result

This new function takes two mandatory arguments and an optional one. The default value is false, and it is used as a normal value inside the function. The function will print the result of the sum only if the user provides true as the third argument, which happens only the third time the function is invoked. For the first two calls, $printResult is set to false.

The arguments that the function receives are just copies of the values that the user provided. This means that if you modify these arguments inside the function, it will not affect the original values. This feature is known as sending arguments by value. Let's see an example:

function modify($a) {
    $a = 3;
}
$a = 2;
modify($a);
var_dump($a); // prints 2

We are declaring the $a variable with the value 2, and then we are calling the modify function, sending $a. The modify function modifies the $a argument, setting its value to 3. However, this does not affect the original value of $a, which remains 2, as you can see in the var_dump output.

If what you want is to actually change the value of the original variable used in the invocation, you need to pass the argument by reference. To do that, you add & in front of the argument when declaring the function:

function modify(&$a) {
    $a = 3;
}

Now, after invoking the modify function, $a will always be 3.

Arguments by value versus by reference

PHP allows you to pass arguments by reference, and in fact, some native functions of PHP use arguments by reference—remember the array sorting functions; they did not return the sorted array; instead, they sorted the array provided. But using arguments by reference is a way of confusing developers. Usually, when someone uses a function, they expect a result, and they do not want their provided arguments to be modified. So, try to avoid it; people will be grateful!

The return statement

You can have as many return statements as you want inside your function, but PHP will exit the function as soon as it finds one. This means that if you have two consecutive return statements, the second one will never be executed. Still, having multiple return statements can be useful if they are inside conditionals. Add this function inside your functions.php file:

function loginMessage() {
    if (isset($_COOKIE['username'])) {
        return "You are " . $_COOKIE['username'];
    } else {
        return "You are not authenticated.";
    }
}

Let's use it in your index.php file by replacing the equivalent content—note that to save some trees, I replaced most of the code that was not changed at all with //…:

//...
<body>
    <p><?php echo loginMessage(); ?></p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])): ?>
//...

Additionally, you can omit the return statement if you do not want the function to return anything. In this case, the function will end once it reaches the end of the block of code.

Type hinting and return types

With the release of PHP 7, the language allows developers to be more specific about what functions get and return. You can—always optionally—specify the type of argument that the function needs (type hinting) and the type of result the function will return (return type). Let's first see an example:

<?php
declare(strict_types=1);

function addNumbers(int $a, int $b, bool $printSum): int {
    $sum = $a + $b;
    if ($printSum) {
        echo 'The sum is ' . $sum;
    }
    return $sum;
}

addNumbers(1, 2, true);
addNumbers(1, '2', true); // it fails when strict_types is 1
addNumbers(1, 'something', true); // it always fails

This function states that the arguments need to be two integers and a Boolean, and that the result will be an integer. Now, you know that PHP has type juggling, so it can usually transform a value of one type to its equivalent value of another type; for example, the string '2' can be used as the integer 2. To stop PHP from using type juggling with the arguments and results of functions, you can declare the strict_types directive as shown in the second line of the example. This directive has to be declared at the top of each file where you want to enforce this behavior. The three invocations work as follows:

The first invocation sends two integers and a Boolean, which is what the function expects. So, regardless of the value of strict_types, it will always work.

The second invocation sends an integer, a string, and a Boolean. The string has a valid integer value, so if PHP were allowed to use type juggling, the invocation would work normally. But in this example, it fails because of the declaration at the top of the file.

The third invocation will always fail, as the 'something' string cannot be transformed into a valid integer.

Let's try to use a function within our project. In our index.php file, we have a foreach loop that iterates through the books and prints them. The code inside the loop is kind of hard to understand, as it mixes HTML with PHP, and there is a conditional too. Let's try to abstract the logic inside the loop into a function. First, create the new functions.php file with the following content:

<?php
function printableTitle(array $book): string {
    $result = '<i>' . $book['title'] . '</i> - ' . $book['author'];
    if (!$book['available']) {
        $result .= ' <b>Not available</b>';
    }
    return $result;
}

This file will contain our functions. The first one, printableTitle, takes an array representing a book and builds a string with a nice representation of the book in HTML. The code is the same as before, just encapsulated in a function. Now, index.php will have to include the functions.php file and then use the function inside the loop. Let's see how this is done:

<?php require_once 'functions.php' ?>
<!DOCTYPE html>
<html lang="en">
//...
?>
<ul>
    <?php foreach ($books as $book): ?>
        <li><?php echo printableTitle($book); ?></li>
    <?php endforeach; ?>
</ul>
//...

Well, now our loop looks way cleaner, right?
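As a quick illustration, the same function can render a single book anywhere in the site; this snippet uses made-up data just to show the call (it is not from the book's files):

<?php
require_once 'functions.php';

// hypothetical book, only for demonstration
$book = [
    'title' => '1984',
    'author' => 'George Orwell',
    'available' => false,
];
// prints: <i>1984</i> - George Orwell <b>Not available</b>
echo printableTitle($book);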
This is exactly the kind of reuse the function buys us: wherever we need to print the title of a book, we call the function instead of duplicating code!

Summary

In this article, we went through the basics of procedural PHP while writing simple examples in order to practice them. You now know how to use variables and arrays with control structures and functions, and how to get information from HTTP requests, among other things.

Resources for Article:

Further resources on this subject:
Getting started with Modernizr using PHP IDE [article]
PHP 5 Social Networking: Implementing Public Messages [article]
Working with JSON in PHP jQuery [article]

GNU Octave: data analysis examples

Packt
28 Jun 2011
7 min read
Loading data files

When performing a statistical analysis of a particular problem, you often have some data stored in a file. You can save your variables (or the entire workspace) using different file formats and then load them back in again. Octave can, of course, also load data from files generated by other programs. There are certain restrictions when you do this, which we will discuss here. In what follows, we will only consider ASCII files, that is, readable text files.

When you load data from an ASCII file using the load command, the data is treated as a two-dimensional array. We can then think of the data as a matrix, where lines represent the matrix rows and columns the matrix columns. For this matrix to be well defined, the data must be organized such that all the rows have the same number of columns (and therefore the columns the same number of rows). For example, the content of a file called series.dat can be:

1 232 334
2 245 334
3 456 342
4 555 321

Next we load this into Octave's workspace:

octave:1> load -ascii series.dat;

whereby the data is stored in the variable named series. In fact, Octave is capable of loading the data even if you do not specify the ASCII format. The number of rows and columns are then:

octave:2> size(series)
ans =
   4   3

I prefer the file extension .dat, but again this is optional and can be anything you wish, say .txt, .ascii, .data, or nothing at all. In the data files you can have:

Octave comments
Data blocks separated by blank lines (or equivalent empty rows)
Tabs or single and multi-space number separation

Thus, the following data file will successfully load into Octave:

# First block
1 232 334
2 245 334
3 456 342
4 555 321

# Second block
1 231 334
2 244 334
3 450 341
4 557 327

The resulting variable is a matrix with 8 rows and 3 columns. If you know the number of blocks or the block sizes, you can then separate the blocked data. Now, the following data stored in the file bad.dat will not load into Octave's workspace:

1 232.1 334
2 245.2
3 456.23
4 555.6

because line 1 has three columns whereas lines 2-4 have two columns. If you try to load this file, Octave will complain:

octave:3> load -ascii bad.dat
error: load: bad.dat: inconsistent number of columns near line 2
error: load: unable to extract matrix size from file 'bad.dat'

Simple descriptive statistics

Consider an Octave function mcintgr and its vectorized version mcintgrv. This function can evaluate the integral of a mathematical function f in some interval [a; b] where the function is positive. The Octave function is based on the Monte Carlo method, and the return value, that is, the integral, is therefore a stochastic variable. When we calculate a given integral, we should as a minimum present the result as a mean or another appropriate measure of a central value together with an associated statistical uncertainty. This is true for any other stochastic variable, whether it is the height of the pupils in a class, the length of a plant's leaves, and so on. In this section, we will use Octave for the most simple statistical description of stochastic variables.

Histogram and moments

Let us calculate the integral given in Equation (5.9) one thousand times, using the vectorized version of the Monte Carlo integrator:

octave:4> for i=1:1000
> s(i) = mcintgrv("sin", 0, pi, 1000);
> endfor

The array s now contains a sequence of numbers which we know are approximately 2.
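The implementation of mcintgrv itself is not listed in this excerpt. For readers who want something runnable, a hit-or-miss Monte Carlo integrator along the following lines is consistent with how the function is called above—but treat it as an illustrative reconstruction under that assumption, not the book's own code:

function s = mcintgrv(f, a, b, N)
  % Hit-or-miss Monte Carlo estimate of the integral of a positive
  % function f over [a, b] using N random points.
  x = a + (b - a)*rand(N, 1);   % random abscissas in [a, b]
  fx = feval(f, x);             % function values (f must accept vectors)
  h = max(fx);                  % box height estimated from the samples
  y = h*rand(N, 1);             % random ordinates in [0, h]
  hits = sum(y < fx);           % points that fall below the curve
  s = (b - a)*h*hits/N;         % box area times the hit fraction
endfunction

With f = "sin", a = 0, and b = pi, each call returns a slightly different estimate of the true integral, 2, which is exactly the stochastic behavior analyzed below.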
Before we make any quantitative statistical description, it is always a good idea to first plot a histogram of the data, as this gives an approximation to the true underlying probability distribution of the variable s. The easiest way to do this is by using Octave's hist function, which can be called using:

octave:5> hist(s, 30, 1)

The first argument, s, to hist is the stochastic variable, the second is the number of bins that s should be grouped into (here we have used 30), and the third argument gives the sum of the heights of the histogram (here we set it to 1). The histogram is shown in the figure below. If hist is called via the command hist(s), s is grouped into ten bins, and the bar heights are simply the number of data points falling in each bin.

From the figure, we see that mcintgrv produces a sequence of random numbers that appear to be normal (or Gaussian) distributed with a mean of 2. This is what we expected. It then makes good sense to describe the variable via the sample mean, defined as:

$\bar{s} = \frac{1}{N}\sum_{i=1}^{N} s_i$

where N is the number of samples (here 1000) and $s_i$ is the i'th data point, as well as the sample variance, given by:

$\sigma^2 = \frac{1}{N-1}\sum_{i=1}^{N} (s_i - \bar{s})^2$

The variance is a measure of the distribution width and therefore an estimate of the statistical uncertainty of the mean value. Sometimes, one uses the standard deviation instead of the variance. The standard deviation is simply the square root of the variance. To calculate the sample mean, sample variance, and the standard deviation in Octave, you use:

octave:6> mean(s)
ans = 1.9999
octave:7> var(s)
ans = 0.002028
octave:8> std(s)
ans = 0.044976

In the statistical description of the data, we can also include the skewness, which measures the symmetry of the underlying distribution around the mean. If it is positive, it is an indication that the distribution has a long tail stretching towards positive values with respect to the mean. If it is negative, it has a long negative tail. The skewness is often defined as:

$\text{skewness} = \frac{1}{N\sigma^3}\sum_{i=1}^{N} (s_i - \bar{s})^3$

We can calculate this in Octave via:

octave:9> skewness(s)
ans = -0.15495

This result is a bit surprising, because we would assume from the histogram that the data set represents numbers picked from a normal distribution, which is symmetric around the mean and therefore has zero skewness. It illustrates an important point—be careful when using the skewness as a direct measure of the distribution's symmetry—you need a very large data set to get a good estimate.

You can also calculate the kurtosis, which measures the flatness of the sample distribution compared to a normal distribution. A negative kurtosis indicates a relatively flatter distribution around the mean, and a positive kurtosis means that the sample distribution has a sharp peak around the mean. The kurtosis is defined by the following:

$\text{kurtosis} = \frac{1}{N\sigma^4}\sum_{i=1}^{N} (s_i - \bar{s})^4 - 3$

It can be calculated with the kurtosis function:

octave:10> kurtosis(s)
ans = -0.02310

The kurtosis has the same problem as the skewness—you need a very large sample size to obtain a good estimate.

Sample moments

As you may know, the sample mean, variance, skewness, and kurtosis are examples of sample moments. The mean is related to the first moment, the variance to the second moment, and so forth. Now, the moments are not uniquely defined. One can, for example, define the k'th absolute sample moment $p_k^a$ and the k'th central sample moment $p_k^c$ as:

$p_k^a = \frac{1}{N}\sum_{i=1}^{N} s_i^k, \qquad p_k^c = \frac{1}{N}\sum_{i=1}^{N} (s_i - \bar{s})^k$

Notice that the first absolute moment is simply the sample mean, but the first central sample moment is zero.
In Octave, you can easily retrieve the sample moments using the moment function; for example, to calculate the second central sample moment, you use:

octave:11> moment(s, 2, 'c')
ans = 0.002022

Here, the first input argument is the sample data, the second defines the order of the moment, and the third argument specifies whether we want the central moment 'c' or the absolute moment 'a', which is the default. Compare the output with the output from Command 7—why is it not the same?
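A likely explanation, in case you want to check your answer: by default, Octave's var uses the unbiased estimator, dividing by N - 1, whereas the second central sample moment divides by N, so the two only coincide for large N (and up to display rounding). You can verify the relationship directly:

octave:12> N = length(s);
octave:13> moment(s, 2, 'c') * N / (N - 1)   # should reproduce var(s)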

Make phone calls and send SMS messages from your website using Twilio

Packt
21 Mar 2014
9 min read
(For more resources related to this topic, see here.)

Sending a message from a website

Sending messages from a website has many uses; sending notifications to users is one good example. In this example, we're going to present you with a form where you can enter a phone number and message and send it to your user. This can be quickly adapted for other uses.

Getting ready

The complete source code for this recipe can be found in the Chapter6/Recipe1/ folder.

How to do it...

OK, let's learn how to send an SMS message from a website. The user will be prompted to fill out a form that will send the SMS message to the phone number entered in the form.

1. Download the Twilio Helper Library from https://github.com/twilio/twilio-php/zipball/master and unzip it.
2. Upload the Services/ folder to your website.
3. Upload config.php to your website and make sure the following variables are set:

<?php
$accountsid = ''; // YOUR TWILIO ACCOUNT SID
$authtoken = '';  // YOUR TWILIO AUTH TOKEN
$fromNumber = ''; // PHONE NUMBER CALLS WILL COME FROM
?>

4. Upload a file called sms.php and add the following code to it:

<!DOCTYPE html>
<html>
<head>
    <title>Recipe 1 – Chapter 6</title>
</head>
<body>
<?php
include('Services/Twilio.php');
include("config.php");
include("functions.php");
$client = new Services_Twilio($accountsid, $authtoken);
if( isset($_POST['number']) && isset($_POST['message']) ){
    $sid = send_sms($_POST['number'], $_POST['message']);
    echo "Message sent to {$_POST['number']}";
}
?>
<form method="post">
    <input type="text" name="number" placeholder="Phone Number...." /><br />
    <input type="text" name="message" placeholder="Message...." /><br />
    <button type="submit">Send Message</button>
</form>
</body>
</html>

5. Create a file called functions.php and add the following code to it:

<?php
function send_sms($number, $message){
    global $client, $fromNumber;
    $sms = $client->account->sms_messages->create(
        $fromNumber,
        $number,
        $message
    );
    return $sms->sid;
}

How it works...

In steps 1 and 2, we downloaded and installed the Twilio Helper Library for PHP. This library is the heart of your Twilio-powered apps. In step 3, we uploaded config.php, which contains our authentication information to talk to Twilio's API. In steps 4 and 5, we created sms.php and functions.php, which will send a message to the phone number we enter. The send_sms function is handy for initiating SMS conversations; we'll be building on this function heavily in the rest of the article.

Allowing users to make calls from their call logs

We're going to give your user a place to view their call log. We will display a list of incoming calls and give them the option to call back on these numbers.

Getting ready

The complete source code for this recipe can be found in the Chapter9/Recipe4 folder.

How to do it...

Now, let's build a section for our users to log in to, using the following steps:

1. Update a file called index.php with the following content:

<?php
session_start();
include 'Services/Twilio.php';
require("system/jolt.php");
require("system/pdo.class.php");
require("system/functions.php");
$_GET['route'] = isset($_GET['route']) ? '/'.$_GET['route'] : '/';
$app = new Jolt('site', false);
$app->option('source', 'config.ini');
#$pdo = Db::singleton();
$mysiteURL = $app->option('site.url');
$app->condition('signed_in', function () use ($app) {
    $app->redirect( $app->getBaseUri().'/login', !$app->store('user'));
});
$app->get('/login', function() use ($app){
    $app->render( 'login', array(), 'layout' );
});
$app->post('/login', function() use ($app){
    $sql = "SELECT * FROM `user` WHERE `email`='{$_POST['user']}' AND `password`='{$_POST['pass']}'";
    $pdo = Db::singleton();
    $res = $pdo->query( $sql );
    $user = $res->fetch();
    if( isset($user['ID']) ){
        $_SESSION['uid'] = $user['ID'];
        $app->store('user', $user['ID']);
        $app->redirect( $app->getBaseUri().'/home');
    }else{
        $app->redirect( $app->getBaseUri().'/login');
    }
});
$app->get('/signup', function() use ($app){
    $app->render( 'register', array(), 'layout' );
});
$app->post('/signup', function() use ($app){
    $client = new Services_Twilio($app->store('twilio.accountsid'), $app->store('twilio.authtoken'));
    extract($_POST);
    $timestamp = strtotime( $timestamp );
    $subaccount = $client->accounts->create(array(
        "FriendlyName" => $email
    ));
    $sid = $subaccount->sid;
    $token = $subaccount->auth_token;
    $sql = "INSERT INTO `user` SET `name`='{$name}',`email`='{$email}',`password`='{$password}',`phone_number`='{$phone_number}',`sid`='{$sid}',`token`='{$token}',`status`=1";
    $pdo = Db::singleton();
    $pdo->exec($sql);
    $uid = $pdo->lastInsertId();
    $app->store('user', $uid); // log user in
    $app->redirect( $app->getBaseUri().'/phone-number');
});
$app->get('/phone-number', function() use ($app){
    $app->condition('signed_in');
    $user = $app->store('user');
    $client = new Services_Twilio($user['sid'], $user['token']);
    $app->render('phone-number');
});
$app->post("search", function() use ($app){
    $app->condition('signed_in');
    $user = get_user( $app->store('user') );
    $client = new Services_Twilio($user['sid'], $user['token']);
    $SearchParams = array();
    $SearchParams['InPostalCode'] = !empty($_POST['postal_code']) ? trim($_POST['postal_code']) : '';
    $SearchParams['NearNumber'] = !empty($_POST['near_number']) ? trim($_POST['near_number']) : '';
    $SearchParams['Contains'] = !empty($_POST['contains']) ? trim($_POST['contains']) : '';
    try {
        $numbers = $client->account->available_phone_numbers->getList('US', 'Local', $SearchParams);
        if(empty($numbers)) {
            $err = urlencode("We didn't find any phone numbers by that search");
            $app->redirect( $app->getBaseUri().'/phone-number?msg='.$err);
            exit(0);
        }
    } catch (Exception $e) {
        $err = urlencode("Error processing search: {$e->getMessage()}");
        $app->redirect( $app->getBaseUri().'/phone-number?msg='.$err);
        exit(0);
    }
    $app->render('search', array('numbers'=>$numbers));
});
$app->post("buy", function() use ($app){
    $app->condition('signed_in');
    $user = get_user( $app->store('user') );
    $client = new Services_Twilio($user['sid'], $user['token']);
    $PhoneNumber = $_POST['PhoneNumber'];
    try {
        $number = $client->account->incoming_phone_numbers->create(array(
            'PhoneNumber' => $PhoneNumber
        ));
        $phsid = $number->sid;
        if ( !empty($phsid) ){
            $sql = "INSERT INTO numbers (user_id,number,sid) VALUES('{$user['ID']}','{$PhoneNumber}','{$phsid}');";
            $pdo = Db::singleton();
            $pdo->exec($sql);
            $fid = $pdo->lastInsertId();
            $ret = editNumber($phsid, array(
                "FriendlyName" => $PhoneNumber,
                "VoiceUrl" => $mysiteURL."/voice?id=".$fid,
                "VoiceMethod" => "POST",
            ), $user['sid'], $user['token']);
        }
    } catch (Exception $e) {
        $err = urlencode("Error purchasing number: {$e->getMessage()}");
        $app->redirect( $app->getBaseUri().'/phone-number?msg='.$err);
        exit(0);
    }
    $msg = urlencode("Thank you for purchasing $PhoneNumber");
    header("Location: index.php?msg=$msg");
    $app->redirect( $app->getBaseUri().'/home?msg='.$msg);
    exit(0);
});
$app->route('/voice', function() use ($app){
});
$app->get('/transcribe', function() use ($app){
});
$app->get('/logout', function() use ($app){
    $app->store('user', 0);
    $app->redirect( $app->getBaseUri().'/login');
});
$app->get('/home', function() use ($app){
    $app->condition('signed_in');
    $uid = $app->store('user');
    $user = get_user( $uid );
    $client = new Services_Twilio($user['sid'], $user['token']);
    $app->render('dashboard', array(
        'user' => $user,
        'client' => $client
    ));
});
$app->get('/delete', function() use ($app){
    $app->condition('signed_in');
});
$app->get('/', function() use ($app){
    $app->render( 'home' );
});
$app->listen();

2. Upload a file called dashboard.php with the following content to your views folder:

<h2>My Number</h2>
<?php
$pdo = Db::singleton();
$sql = "SELECT * FROM `numbers` WHERE `user_id`='{$user['ID']}'";
$res = $pdo->query( $sql );
while( $row = $res->fetch() ){
    echo preg_replace("/[^0-9]/", "", $row['number']);
}
try {
?>
<h2>My Call History</h2>
<p>Here is a list of recent calls; you can click any number to call them back. We will call your registered phone number and then the caller.</p>
<table width=100% class="table table-hover table-striped">
    <thead>
        <tr>
            <th>From</th>
            <th>To</th>
            <th>Start Date</th>
            <th>End Date</th>
            <th>Duration</th>
        </tr>
    </thead>
    <tbody>
<?php
foreach ($client->account->calls as $call) {
    # echo "<p>Call from $call->from to $call->to at $call->start_time of length $call->duration</p>";
    if( !stristr($call->direction, 'inbound') ) continue;
    $type = find_in_list($call->from);
?>
        <tr>
            <td><a href="<?=$uri?>/call?number=<?=urlencode($call->from)?>"><?=$call->from?></a></td>
            <td><?=$call->to?></td>
            <td><?=$call->start_time?></td>
            <td><?=$call->end_time?></td>
            <td><?=$call->duration?></td>
        </tr>
<?php } ?>
    </tbody>
</table>
<?php
} catch (Exception $e) {
    echo 'Error: ' . $e->getMessage();
}
?>
<hr />
<a href="<?=$uri?>/delete" onclick="return confirm('Are you sure you wish to close your account?');">Delete My Account</a>

How it works...

In step 1, we updated the index.php file. In step 2, we uploaded dashboard.php to the views folder. This file checks if we're logged in using the $app->condition('signed_in') method, which we discussed earlier, and if we are, it displays all incoming calls we've had to our account. We can then push a button to call one of those numbers and whitelist or blacklist them.

Summary

Thus, in this article, we have learned how to send messages and make phone calls from your website using Twilio.

Resources for Article:

Further resources on this subject:
Make phone calls, send SMS from your website using Twilio [article]
Trunks in FreePBX 2.5 [article]
Trunks using 3CX: Part 1 [article]

Elgg Social Networking - Installation

Packt
28 Oct 2009
14 min read
Installing Elgg

In addition to its impressive feature list, Elgg is an admin's dolly. In this tutorial by Mayank Sharma, we will see how Elgg can be installed in the popular Linux web application rollout stack of Linux, Apache, MySQL, and PHP, fondly referred to as LAMP. As MySQL and PHP can run under the Windows operating system as well, you can set up Elgg to serve your purpose in such an environment.

Setting Up LAMP

Let's look at setting up the Linux, Apache, MySQL, PHP web server environment. There are several reasons for the LAMP stack's popularity. While most people enjoy the freedom offered by these Open Source software, small businesses and non-profits will also be impressed by its procurement cost—$0.

Step 1: Install Linux

The critical difference between setting up Elgg under Windows or Linux is installing the operating system. The Linux distribution I'm using to set up Elgg is Ubuntu Linux (http://www.ubuntu.com/). It's available as a free download and has a huge and active global community, should you run into any problems. Covering step-by-step installation of Ubuntu Linux is too much of a digression for this tutorial. Despite the fact that Ubuntu isn't too difficult to install, because of its popularity there are tons of installation and usage documentation available all over the Web. Linux.com has a set of videos that detail the procedure of installing Ubuntu (http://www.linux.com/articles/114152). Ubuntu has a dedicated help section (https://help.ubuntu.com/) for introduction and general usage of the distribution.

Step 2: Install Apache

Apache is the most popular web server used on the Internet. Reams and reams of documents have been written on installing Apache under Linux. Apache's documentation sub-project (http://httpd.apache.org/docs-project/) has information on installing various versions of Apache under Linux. Ubuntu, based on another popular Linux distribution, Debian, uses a very powerful and user-friendly packaging system. It's called apt-get and can install an Apache server within minutes. All you have to do is open a terminal and write this command telling apt-get what to install:

apt-get install apache2 apache2-common apache2-doc apache2-mpm-prefork apache2-utils libapr0 libexpat1 ssl-cert

This will download Apache and its most essential libraries. Next, you need to enable some of Apache's most critical modules:

a2enmod ssl
a2enmod rewrite
a2enmod include

The rewrite module is critical to Elgg, so make sure it's enabled, else Elgg won't work properly. That's it. Now, just restart Apache with:

/etc/init.d/apache2 restart

Step 3: MySQL

Installing MySQL isn't too much of an issue either. Again, like Ubuntu and Apache, MySQL can also boast of a strong and dedicated community. This means there's no dearth of MySQL installation or usage related documentation (http://www.mysql.org/doc/). If you're using MySQL under Ubuntu, like me, installation is just a matter of giving apt-get a set of packages to install:

apt-get install mysql-server mysql-client libmysqlclient12-dev

Finally, set up a password for MySQL with:

mysqladmin -h yourserver.example.com -u root password yourrootmysqlpassword

Step 4: Install PHP Support

You might think I am exaggerating things a little bit here, but I am not: PHP is one of the most popular and easy-to-learn languages for writing web applications. Why do you think we are setting up our Linux web server environment to execute PHP? It's because Elgg itself is written in PHP! And so are hundreds and thousands of other web applications. So I'm sure you've guessed by now that PHP has a good deal of documentation (http://www.php.net/docs.php) as well. You've also guessed it's now time to call upon Ubuntu's apt-get package manager to set up PHP:

apt-get install libapache2-mod-php5 php5 php5-common php5-gd php5-mysql php5-mysqli

As you can see, in addition to PHP, we are also installing packages that'll hook up PHP with the MySQL database and the Apache web server. That's all there is to setting up the LAMP architecture to power your Elgg network.

Setting Up WAMP

If you are used to Microsoft's Windows operating system or want to avoid the extra minor learning curve involved with setting up the web server on a Linux distribution, especially if you haven't done it before, you can easily replicate the Apache, MySQL, PHP web server on a Windows machine. Cost wise, all server components—the Apache web server, MySQL database, and the PHP development language—have freely available Windows versions as well. But the base component of this stack, the operating system—Microsoft Windows—isn't. Versions of Apache, MySQL, and PHP for Windows are all available on the same websites mentioned above. As Windows doesn't have an apt-get kind of utility, you'll have to download and install all three components from their respective websites, but you have an easier way to set up a WAMP server. There are several pre-packaged Apache, MySQL, and PHP software bundles available for Windows (http://en.wikipedia.org/wiki/Comparison_of_WAMPs). I've successfully run Elgg on the WAMP5 bundle (http://www.en.wampserver.com/). The developer updates the bundle, time and again, to make sure it's running the latest versions of all server components included in the bundle.

Note: While WAMP5 requires no configuration, make sure you have Apache's rewrite_module and PHP's php_gd2 extension enabled. They will have a bullet next to their name if they are enabled. If the bullet is missing, click on the respective entries under the Apache and PHP sub-categories and restart WAMP5.

Installing Elgg

Now that we have a platform ready for Elgg, let's move on to the most important step of setting up Elgg. Download the latest version of Elgg from its website. At the time of writing this tutorial, the latest version of Elgg was Elgg-0.8. Elgg is distributed as a zipped file.

To uncompress under Linux, move the zipped file to /tmp and uncompress it with the following command:

$ unzip /tmp/elgg-0.8.zip

To uncompress under Windows, right-click on the ZIP file and select the Extract here option.

After uncompressing the ZIP file, you should have a directory called Elgg-<version-number>, in my case, elgg-0.8/. This directory contains several subdirectories and files. The INSTALL file contains detailed installation instructions. The first step is to move this uncompressed directory to your web server.

Note: You can set up Elgg on your local web server that sits on the Internet or on a paid web server in a data center anywhere on the planet. The only difference between the two setups is that if you don't have access to the local web server, you'll have to contact the web service provider and ask them about the transfer options available to you. Most probably, you'll have FTP access to your web server, and you'll have to use one of the dozens of FTP clients, available for free, to transfer Elgg's files from your computer to the remote web server. Optionally, if you have "shell" access on the web server, you might want to save time by transferring just the zipped file and unzipping it on the web server itself. Contact your web server provider for this information.

The web server directory where you need to copy the contents of the Elgg directory depends upon your Apache installation and operating system. In Ubuntu Linux, the default web server directory is /var/www/. In Windows, WAMP5 asks where it should create this directory during installation. By default, it's the www directory and is created within the directory you installed WAMP5 under.

Note: Another important decision you need to make while installing Elgg is how you want your users to access your network. If you're setting up the network to be part of your existing web infrastructure, you'll need to install Elgg inside a directory. If, on the other hand, you are setting up a new site just for the Elgg-powered social network, copy the contents of the Elgg directory inside the www directory itself and not within a subdirectory.

Once you have the Elgg directory within your web server's www directory, it's time to set things in motion. Start by renaming the config-dist.php file to config.php and htaccess-dist to .htaccess. Simply right-click on the file and give it a new name, or use the mv command in this format:

$ mv <original-file-name> <new-file-name>

Note: To rename htaccess-dist to .htaccess in Windows, you'll have to open the htaccess-dist file in Notepad and then go to File | Save As and specify the name as ".htaccess", with the quotes.

Editing config.php

Believe it or not, we've completed the "installation" bit of setting up Elgg. But we still need to configure it before throwing the doors open to visitors. Not surprisingly, all this involves is creating a database and editing the config.php file to our liking.

Creating a Database

Making an empty database in MySQL isn't difficult at all. Just enter the MySQL interactive shell using the username, password, and hostname you specified while installing MySQL:

$ mysql -u root -h localhost -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9 to server version: 5.0.22-Debian_0ubuntu6.06.3-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> CREATE DATABASE elgg;

You can also create a MySQL database using a graphical front-end manager like phpMyAdmin, which comes with WAMP5. Just look for a database field, enter a new name (Elgg), and hit the Create button to create an empty Elgg database.

Initial Configuration

Elgg has a front-end interface to set up config.php, but there are a couple of things you need to do before you can use that interface:

Create a data directory outside your web server root. As described in the configuration file, this is a special directory where uploaded files will go. It's also advisable to create this directory outside your main Elgg install. This is because this directory will be writable by everyone accessing the Elgg site, and having such a "world-accessible" directory under your Elgg installation is a security risk. If you call the directory elgg-data, make it world-writable with the following command:

$ chmod 777 elgg-data

Set up the admin username and password. Before you can access Elgg's configuration web front-end, you need an admin user and a password. For that, open the config.php file in your favorite text editor and scroll down to the following variables:

$CFG->adminuser = "";
$CFG->adminpassword = "";

Specify your chosen admin username and password between the quotes, so that it looks something like this:

$CFG->adminuser = "admin";
$CFG->adminpassword = "765thyr3";

Make sure you don't forget the username and password of the admin user.

Important Settings

When you have created the data directory and specified an admin username and password, it's time to go ahead with the rest of the configuration. Open a web browser and point it to http://<your-web-server>/<Elgg-installation>/elggadmin/

This will open up a simple web page with lots of fields. All fields have a title and a brief description of the kind of information you need to fill in that field. There are some drop-down lists as well, from which you have to select one of the listed options. Here are all the options and their descriptions:

Administration panel username: Username to log in to this admin panel, in future, to change your settings.
Admin password: Password to log in to this admin panel in future.
Site name: Enter the name of your site here (e.g. Elgg, Apcala, University of Bogton's Social Network, etc.).
Tagline: A tagline for your site (e.g. Social network for Bogton).
Web root: External URL to the site (e.g. http://elgg.bogton.edu/).
Elgg install root: Physical path to the files (e.g. /home/Elggserver/httpdocs/).
Elgg data root: This is a special directory where uploaded files will go. If possible, this should live outside your main Elgg installation (you'll need to create it by hand). It must have world-writable permissions set, and have a final slash at the end.

Note: Even in Windows, where we use backslashes (\) to separate directories, use Unix's forward slashes (/) to specify the path to the install root, data root, and other path names. For example, if you have Elgg files under WAMP's default directory on your C drive, use this path: C:/wamp/www/elgg/.

Database type: Acceptable values are mysql and postgres - MySQL is highly recommended.
System administrator email: The email address your site will send emails from (e.g. elgg-admin@bogton.edu).
News account initial password: The initial password for the 'news' account. This will be the first administrator user within your system, and you should change the password immediately after the first time you log in.
Default locale: Country code to set the language to, if you have gettext installed.
Public registration: Can general members of the public register for this system?
Public invitations: Can users of this system invite other users?
Maximum users: The maximum number of users in your system. If you set this to 0, you will have an unlimited number of users.
Maximum disk space: The maximum disk space taken up by all uploaded files.
Disable public comments: Set the following to true to force users to log in before they can post comments, overriding the per-user option. This is a handy sledgehammer-to-crack-a-nut tactic to protect against comment spam (although an Akismet plug-in is available from elgg.org).
Email filter: Anything you enter here must be present in the email address of anyone who registers; e.g. @mycompany.com will only allow email addresses from mycompany.com to register.
Default access: The default access level for all new items in the system.
Disable user templates: If this is set, users can only choose from available templates rather than defining their own.
Persistent connections: Should Elgg use persistent database connections?
Debug: Set this to 2047 to get ADOdb error handling.
RSS posts maximum age: Number of days for which to keep incoming RSS feed entries before deleting them. Set this to 0 if you don't want RSS posts to be removed.
Community create flag: Set this to admin if you would like to restrict the ability to create communities to admin users.
cURL path: Set this to the cURL executable if cURL is installed; otherwise leave blank.

Note: According to Wikipedia, cURL is a command-line tool for transferring files with URL syntax, supporting FTP, FTPS, HTTP, HTTPS, TFTP, SCP, SFTP, Telnet, DICT, FILE, and LDAP. The main purpose and use of cURL is to automate unattended file transfers or sequences of operations. For example, it is a good tool for simulating a user's actions at a web browser. Under Ubuntu Linux, you can install cURL using the following command:

apt-get install curl

Templates location: The full path of your Default_Template directory.
Profile location: The full path to your profile configuration file (usually, it's best to leave this in mod/profile/).

Finally, when you're done, click on the Save button to save the settings.

Note: The next version of Elgg, Elgg 0.9, will further simplify installation. Already, an early release candidate of this version (elgg-0.9rc1) is a lot more straightforward to install and configure for initial use.

First Log In

Now, it's time to let Elgg use these settings and set things up for you. Just point your browser to your main Elgg installation (http://<your-web-server>/<Elgg-installation>). It'll connect to the MySQL database and create some tables, then upload some basic data, before taking you to the main page. On the main page, you can use the news account and the password you specified for this account during configuration to log in to your Elgg installation.

Implementing AJAX Grid using jQuery data grid plugin jqGrid

Packt
05 Feb 2010
9 min read
In this article by Audra Hendrix, Bogdan Brinzarea, and Cristian Darie, authors of AJAX and PHP: Building Modern Web Applications 2nd Edition, we will discuss the usage of an AJAX-enabled data grid plugin, jqGrid. One of the most common ways to render data is in the form of a data grid. Grids are used for a wide range of tasks, from displaying address books to controlling inventories and logistics management. Because centralizing data in repositories has multiple advantages for organizations, it wasn't long before a large number of applications were being built to manage data through Internet and intranet applications by using data grids. But compared to their desktop cousins, online applications using data grids were less than stellar - they felt cumbersome and time consuming, were not always the easiest things to implement (especially when you had to control varying access levels across multiple servers), and from a usability standpoint, time lags during page reloads, sorts, and edits made online data grids a bit of a pain to use, not to mention the resources that all of this consumed.

As you are a clever reader, you have undoubtedly surmised that you can use AJAX to update the grid content; we are about to show you how to do it! Your grids can update without refreshing the page, cache data for manipulation on the client (rather than asking the server to do it over and over again), and change their looks with just a few keystrokes! Gone forever are the blinking pages of partial data and sessions that time out just before you finish your edits. Enjoy!

In this article, we're going to use a jQuery data grid plugin named jqGrid. jqGrid is freely available for private and commercial use (although your support is appreciated) and can be found at: http://www.trirand.com/blog/. You may have guessed that we'll be using PHP on the server side, but jqGrid can be used with any of several server-side technologies. On the client side, the grid is implemented using JavaScript's jQuery library and JSON. The look and style of the data grid will be controlled via CSS using themes, which make changing the appearance of your grid easy and very fast. Let's start looking at the plugin and how easily your newly acquired AJAX skills enable you to quickly add functionality to any website. Our finished grid will look like the one in Figure 9-1:

Figure 9-1: AJAX Grid using jQuery

Let's take a look at the code for the grid and get started building it.

Implementing the AJAX data grid

The files and folders for this project can be obtained directly from the code download (Chapter 9) for this article, or can be created by typing them in. We encourage you to use the code download to save time and for accuracy. If you choose to do so, there are just a few steps you need to follow:

Copy the grid folder from the code download to your ajax folder.
Connect to your ajax database and execute the product.sql script.
Update config.php with the correct database username and password.
Load http://localhost/ajax/grid to verify the grid works fine - it should look just like Figure 9-1. You can test the editing feature by clicking on a row, making changes, and hitting the Enter key. Figure 9-2 shows a row in editing mode:

Figure 9-2: Editing a row

Code overview

If you prefer to type the code yourself, you'll find a complete step-by-step exercise a bit later in this article. Before then, though, let's quickly review what our grid is made of. We'll review the code in greater detail at the end of this article.

The editable grid feature is made up of a few components:

product.sql is the script that creates the grid database
config.php and error_handler.php are our standard helper scripts
grid.php and grid.class.php make up the server-side functionality
index.html contains the client-side part of our project
The scripts folder contains the jQuery scripts that we use in index.html

Figure 9-3: The components of the AJAX grid

The database

Our editable grid displays a fictional database with products. On the server side, we store the data in a table named product, which contains the following fields:

product_id: A unique number automatically generated by auto-increment in the database and used as the Primary Key
name: The actual name of the product
price: The price of the product for sale on the website
on_promotion: A numeric field that we use to store 0/1 (or true/false) values. In the user interface, the value is expressed via a checkbox

The Primary Key is defined as product_id; as this will be unique for each product, it is a logical choice. This field cannot be empty and is set to auto-increment as entries are added to the database:

CREATE TABLE product(
  product_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(50) NOT NULL DEFAULT '',
  price DECIMAL(10,2) NOT NULL DEFAULT '0.00',
  on_promotion TINYINT NOT NULL DEFAULT '0',
  PRIMARY KEY (product_id));

The other fields are rather self-explanatory—none of the fields may be left empty, and each field, with the exception of product_id, has been assigned a default value. The on_promotion field is set to tinyint, as it will only need to hold a true (1) or false (0) value; it will be shown as a checkbox in our grid that the user can simply set on or off.

Styles and colors

Leaving the database aside, it's useful to look at the more pertinent and immediate aspects of the application code so as to get a general overview of what's going on here. We mentioned earlier that control of the look of the grid is accomplished through CSS. Looking at the index.html file's head region, we find the following code:

<link rel="stylesheet" type="text/css" href="scripts/themes/coffee/grid.css" title="coffee" media="screen" />
<link rel="stylesheet" type="text/css" media="screen" href="themes/jqModal.css" />

Several themes have been included in the themes folder; coffee is the theme being used in the code above. To change the look of the grid, you need only modify the theme name to another theme, green, for example, to modify the color theme for the entire grid. Creating a custom theme is possible by creating your own images for the grid (following the naming convention of the images), collecting them in a folder under the themes folder, and changing this line to reflect your new theme name. There is one exception here though, and it affects which buttons will be used. The buttons' appearance is controlled by imgpath: 'scripts/themes/green/images', found in index.html; you must alter this to reflect the path to the proper theme. Changing the theme name in two different places is error prone, and we should do this carefully. By using jQuery and a nifty trick, we will be able to define the theme as a simple variable. We will be able to dynamically load the CSS file based on the current theme, and imgpath will also be composed dynamically. The nifty trick involves dynamically creating the <link> tag inside head and setting the appropriate href attribute to the chosen theme. Changing the current theme simply consists of changing the theme JavaScript variable.
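The trick itself is not spelled out in this excerpt, but a sketch of it could look like the following—the theme variable and the append call are illustrative, not lifted from the book's code:

<script type="text/javascript">
// define the theme once...
var theme = 'coffee';
// ...build the stylesheet link from it...
$('head').append('<link rel="stylesheet" type="text/css" media="screen" '
    + 'href="scripts/themes/' + theme + '/grid.css" />');
// ...and derive the image path passed to jqGrid from the same variable
var imgpath = 'scripts/themes/' + theme + '/images';
</script>

Switching the whole look of the grid then comes down to editing the single theme variable.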
JqModal.css controls the style of our pop-up or overlay window and is a part of the jqModal plugin. (Its functionality is controlled by the file jqModal.js found in the scripts/js folder.) You can find the plugin and its associated CSS file at http://dev.iceburg.net/jquery/jqModal/.

In addition, in the head region of index.html, there are several script src declarations for the files used to build the grid (and jqModal.js for the overlay):

<script src="scripts/jquery-1.3.2.js" type="text/javascript"></script>
<script src="scripts/jquery.jqGrid.js" type="text/javascript"></script>
<script src="scripts/js/jqModal.js" type="text/javascript"></script>
<script src="scripts/js/jqDnR.js" type="text/javascript"></script>

There are a number of files that are used to make our grid function, and we will talk about these scripts in more detail later. Looking at the body of our index page, we find the declaration of the table that will house our grid, and the code for getting the grid on the page and populated with our product data:

<script type="text/javascript">
var lastSelectedId;
$('#list').jqGrid({
  url: 'grid.php',  // name of our server-side script
  datatype: 'json',
  mtype: 'POST',    // specifies whether we are using POST or GET
  // define the columns the grid should expect to use (table columns)
  colNames: ['ID', 'Name', 'Price', 'Promotion'],
  // define the data of each column, and whether the data is editable
  colModel: [
    {name: 'product_id', index: 'product_id', width: 55, editable: false},
    // text data that is editable gets defined
    {name: 'name', index: 'name', width: 100, editable: true,
     edittype: 'text', editoptions: {size: 30, maxlength: 50}},
    // editable currency
    {name: 'price', index: 'price', width: 80, align: 'right',
     formatter: 'currency', editable: true},
    // true/false checkbox for on_promotion
    {name: 'on_promotion', index: 'on_promotion', width: 80,
     formatter: 'checkbox', editable: true, edittype: 'checkbox'}
  ],
  // define how pages are displayed and paged
  rowNum: 10,
  rowList: [5, 10, 20, 30],
  imgpath: 'scripts/themes/green/images',
  pager: $('#pager'),
  sortname: 'product_id',  // initially sorted on product_id
  viewrecords: true,
  sortorder: "desc",
  caption: "JSON Example",
  width: 600,
  height: 250,
  // what we will display based on whether a row is selected
  onSelectRow: function(id) {
    if (id && id !== lastSelectedId) {
      $('#list').restoreRow(lastSelectedId);
      $('#list').editRow(id, true, null, onSaveSuccess);
      lastSelectedId = id;
    }
  },
  // what to call for saving edits
  editurl: 'grid.php?action=save'
});

// indicate if/when the save was successful
function onSaveSuccess(xhr) {
  var response = xhr.responseText;
  if (response == 1)
    return true;
  return false;
}
</script>
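For context, grid.php is expected to return each page of rows as JSON. The exact shape depends on the jqGrid version and any jsonReader settings, but the default reader expects roughly the structure below; the field values here are purely illustrative:

// an illustrative response from grid.php for one page of data
var sampleResponse = {
  page: 1,      // current page number
  total: 2,     // total number of pages
  records: 13,  // total number of rows across all pages
  rows: [
    // each row carries an id plus the cell values, in colModel order
    {id: '1', cell: ['1', 'Example product', '39.99', '1']},
    {id: '2', cell: ['2', 'Another product', '19.99', '0']}
  ]
};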

Shipping and Tax Calculations with PHP 5 Ecommerce
Packt, 20 Jan 2010
Shipping

Shipping is a very important aspect of an e-commerce system; without it, customers will not accurately know the cost of their order. The only situation where we wouldn't want to include shipping costs is where we always offer free shipping. Even in that situation, we could either add provisions to ignore shipping costs, or set all values to zero and remove references to shipping costs from the user interface.

Shipping methods

The first requirement for calculating shipping costs is a shipping method. We may wish to offer a number of different shipping methods to our customers, such as standard shipping, next-day shipping, international shipping, and so on. The system will require a default shipping method, so that when the customer visits their basket, they see shipping costs calculated based on the default method. There should be a suitable drop-down list on the basket page containing the list of shipping methods; when this is changed, the costs in the basket should be updated to reflect the selected method.

We should store the following details for each shipping method:

- An ID number
- A name for the shipping method
- Whether the shipping method is active, indicating if it should be selectable by customers
- Whether the shipping method is the default method for the store
- A default shipping cost, which would be pre-populated in a suitable field when creating new products (when the product is created through the administration interface, the shipping cost for the product is stored with the product), and automatically assigned to existing products when a new shipping method is created in a store that already contains products

This could be suitably stored in our database as the following:

Field        | Type                                 | Description
ID           | Integer, primary key, auto-increment | ID number for the shipping method
Name         | Varchar                              | The name of the shipping method
Active       | Boolean                              | Indicates if the shipping method is active
Is_default   | Boolean                              | Indicates if this is the store's default shipping method
Default_cost | Float                                | The default cost for products for this shipping method

This can be represented in the database using the following SQL:

CREATE TABLE `shipping_methods` (
  `ID` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR( 50 ) NOT NULL,
  `active` BOOL NOT NULL,
  `is_default` BOOL NOT NULL,
  `default_cost` DOUBLE NOT NULL,
  INDEX ( `active`, `is_default` )
) ENGINE = INNODB COMMENT = 'Shipping methods';

Shipping costs

There are several different ways to calculate the cost of shipping products to customers:

- We could associate a cost with each product for each shipping method we have in our store
- We could associate costs for each shipping method with ranges of weights, and charge the customer either the combined weight-based shipping cost of each product, or a cost based on the combined weight of the order
- We could base the cost on the customer's delivery address

The exact methods used, and the way they are used, depend on the nature of the store, as there are implications to each. If we were to use location-based shipping cost calculations, the customer would not know the total cost of their order until they entered their delivery address. There are a few ways this can be avoided: the system could assume a default delivery location and associated costs, and then update the customer's delivery cost at a later stage.
Alternatively, if we enabled delivery methods for different locations or countries, we could associate the appropriate costs to these methods, although this relies on the customer selecting the correct shipping method for their order; appropriate notifications would be required to ensure they select the correct one.

For this article we will implement:

- Weight-based shipping costs: Here the cost of shipping is based on the weight of the products.
- Product-based shipping costs: Here the cost of shipping is set on a per-product basis for each product in the customer's basket.

We will also discuss location-based shipping costs, and look at how we might implement them. To account for international or long-distance shipping, we will use varying shipping methods; perhaps:

- Shipping within state X
- Shipping outside of state X
- International shipping (this could be broken down per continent if we wanted, without imposing on the customer too much)

Product-based shipping costs

Product-based shipping costs simply require each product to have a shipping cost associated with it for each shipping method in the store. As discussed earlier, when a new method is added to an existing store, a default value will initially be used, so in theory the administrator only needs to alter products whose shipping costs shouldn't be the default, and when creating new products, the relevant text box for that method's shipping cost will be pre-populated with the default cost. To facilitate these costs, we need a new table in our database storing:

- Product IDs
- Shipping method IDs
- Shipping costs

The following SQL represents this table in our database:

CREATE TABLE `shipping_costs_product` (
  `shipping_id` int(11) NOT NULL,
  `product_id` int(11) NOT NULL,
  `cost` float NOT NULL,
  PRIMARY KEY (`shipping_id`,`product_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Weight-based shipping costs

Depending on the store being operated from our framework, we may need to base shipping costs on the weights of products. If a particular courier for a particular shipping method charges based on weight, then there isn't any point in creating costs for each product for that shipping method. Our framework can calculate the shipping cost from the weight ranges and costs defined for the method, and the weight of the product. Within our database we would need to store:

- The shipping method in question
- A lower bound for the product weight, so we know which cost to apply to a product
- A cost associated with anything between this bound and the next weight bound

The table below illustrates these fields in our database:

Field        | Type                                 | Description
ID           | Integer, primary key, auto-increment | A unique reference for the weight range
Shipping_id  | Integer                              | The shipping method the range applies to
Lower_weight | Float                                | For working out which products this weight range cost applies to
Cost         | Float                                | The shipping cost for a product of this weight

The following SQL represents this table:

CREATE TABLE `shipping_costs_weight` (
  `ID` int(11) NOT NULL auto_increment,
  `shipping_id` int(11) NOT NULL,
  `lower_weight` float NOT NULL,
  `cost` float NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;

To think about: location-based shipping costs

One thing we should still think about is location-based shipping costs, and how we might implement them.
There are two primary ways in which we can do this:

- Assign shipping costs, or cost surpluses/reductions, to delivery addresses (either countries or states) and shipping methods
- Calculate costs using third-party service APIs

These two methods share one issue, which is why we are not going to implement them: the costs are calculated later in the checkout process. We want our customers to be well informed and aware of all of their costs as early as possible. As mentioned earlier, however, we could get around this by assuming a default delivery location and providing customers with a guideline shipping cost, subject to change based on their delivery address. Alternatively, we could allow customers to select their delivery region from a drop-down list on the main "shopping basket" page. This way they would know the costs right away.

Regional shipping costs

We could look at storing:

- Shipping method IDs
- Region types (states or countries)
- Region values (an ID corresponding to a list of states or countries)
- A priority (in some cases, we may need to consider only the state delivery costs and not the country costs; in other cases, it may be the other way around)
- The associated cost changes (a positive or negative value to be added to a product's delivery cost, as calculated by the other shipping systems already)

By doing this, we can combine the delivery address with the products and look up a price alteration, which is applied to the product's already calculated delivery cost. Ideally, we would use all of the shipping cost calculation systems discussed, to make something as flexible as possible, based on the needs of a particular product, shipping method, courier, store, or business.

Third-party APIs

The most accurate method of charging delivery costs, encompassing weights and delivery addresses, is via APIs provided by the couriers themselves, such as UPS. The following web pages may be of reference:

http://www.ups.com/onlinetools
http://answers.google.com/answers/threadview/id/429083.html

Using such an API means our shipping costs would be accurate, assuming our weight values are correct for our products, and we would not over- or undercharge customers for shipping. One additional consideration is that third-party APIs may also require the dimensions of products, if their costs are based on product sizes as well.

Grunt makes it easy to test and optimize your website. Here's how. [Tutorial]
Sugandha Lahoti, 18 Jun 2018
Meet Grunt, the JavaScript task runner. As implied by its name, Grunt is a tool that allows us to automatically run any set of tasks. Grunt can even wait while you code, pick up changes made to your source files (CSS, HTML, or JavaScript), and then execute a pre-configured set of tasks every time you save your changes. This way, you are no longer required to manually execute a set of commands in order for the changes to take effect.

In this article, we will learn how to optimize responsive web design using Grunt. Read this article about tips and tricks to optimize your responsive web design before we get started with Grunt. This article is an excerpt from Mastering Bootstrap 4 - Second Edition by Benjamin Jakobus and Jason Marah.

Let's go ahead and install Grunt:

npm install grunt

Before we can start using Grunt with MyPhoto, we need to tell Grunt:

- What tasks to run, that is, what to do with the input (the input being our MyPhoto files) and where to save the output
- What software is to be used to execute the tasks
- How to name the tasks so that we can invoke them when required

With this in mind, we create a new JavaScript file (assuming UTF-8 encoding) called Gruntfile.js inside our project root. We will also need to create a JSON file, called package.json, inside our project root. Our project folder should have the following structure (note how we created one additional folder, src, and moved our source code and development assets inside it):

src
|__bower_components
|__images
|__js
|__styles
|__index.html
Gruntfile.js
package.json

Open the newly created Gruntfile.js and insert the following function definition:

module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON("package.json")
  });
};

As you can see, this is plain, vanilla JavaScript. Anything that we need to make Grunt aware of (such as the Grunt configuration) will go inside the grunt.initConfig function definition. Adding configuration outside the scope of this function will cause Grunt to ignore it.

Now open package.json and insert the following:

{
  "name": "MyPhoto",
  "version": "0.1.0",
  "devDependencies": {
  }
}

The preceding code should be self-explanatory: the name property refers to the project name, version refers to the project's version, and devDependencies lists any development dependencies that are required (we will be adding to those in a while). Great, now we are ready to start using Grunt!

Minification and concatenation using Grunt

The first thing that we want Grunt to be able to do is minify our files. Yes, we already have a minifier installed, but remember that we want to use Grunt so that we can automatically execute a bunch of tasks (such as minification) in one go. To do so, we will need to install the grunt-contrib-cssmin package, a Grunt package that performs minification and concatenation (visit https://github.com/gruntjs/grunt-contrib-cssmin for more information):

npm install grunt-contrib-cssmin --save-dev

Once installed, inspect package.json. Observe how it has been modified to include the newly installed package as a development dependency:

{
  "name": "MyPhoto",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-cssmin": "^0.14.0"
  }
}

We must tell Grunt about the plugin.
To do so, insert the following line inside the function definition within our Gruntfile.js:

grunt.loadNpmTasks("grunt-contrib-cssmin");

Our Gruntfile.js should now look as follows:

module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON("package.json")
  });
  grunt.loadNpmTasks("grunt-contrib-cssmin");
};

As such, we still cannot do much. The preceding code makes Grunt aware of the grunt-contrib-cssmin package (that is, it tells Grunt to load it). In order to use the package to minify our files, we need to create a Grunt task, which we call cssmin:

module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON("package.json"),
    "cssmin": {
      "target": {
        "files": {
          "dist/styles/myphoto.min.css": [
            "styles/*.css",
            "!styles/myphoto-hcm.css"
          ]
        }
      }
    }
  });
  grunt.loadNpmTasks("grunt-contrib-cssmin");
};

Whoa! That's a lot of code at once. What just happened here? Well, we registered a new task called cssmin. We then specified the target, that is, the input files that Grunt should use for this task. Specifically, we wrote this:

"dist/styles/myphoto.min.css": ["styles/*.css", "!styles/myphoto-hcm.css"]

The name property here is interpreted as denoting the output, while the value represents the input (a leading exclamation mark excludes a file from the match). Therefore, in essence, we are saying something along the lines of "In order to produce myphoto.min.css, use the files a11yhcm.css, alert.css, carousel.css, and myphoto.css".

Go ahead and run the Grunt task by typing the following:

grunt cssmin

Upon completion, you should see output along the lines of the following:

Figure 8.1: The console output after running cssmin

The first line indicates that a new output file (myphoto.min.css) has been created and that it is 3.25 kB in size (down from the original 4.99 kB). The second line is self-explanatory, that is, the task executed successfully without any errors. Now that you know how to use grunt-contrib-cssmin, go ahead and take a look at the documentation for some nice extras!

Running tasks automatically

Now that we know how to configure and use Grunt to minify our style sheets, let's turn our attention to task automation, that is, how we can execute our Grunt minification task automatically as soon as we make changes to our source files. To this end, we will learn about a second Grunt package, called grunt-contrib-watch (https://github.com/gruntjs/grunt-contrib-watch). As with grunt-contrib-cssmin, this package can be installed using npm:

npm install grunt-contrib-watch --save-dev

Open package.json and verify that grunt-contrib-watch has been added as a dependency:

{
  "name": "MyPhoto",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-cssmin": "^0.14.0",
    "grunt-contrib-watch": "^0.6.1"
  }
}

Next, tell Grunt about our new package by adding grunt.loadNpmTasks('grunt-contrib-watch'); to Gruntfile.js. Furthermore, we need to define the watch task by adding a new, empty property called watch:

module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON("package.json"),
    "cssmin": {
      "target": {
        "files": {
          "src/styles/myphoto.min.css": ["src/styles/*.css", "!src/styles/*.min.css"]
        }
      }
    },
    "watch": {
    }
  });
  grunt.loadNpmTasks("grunt-contrib-cssmin");
  grunt.loadNpmTasks("grunt-contrib-watch");
};

Now that Grunt loads our newly installed watch package, we can execute the grunt watch command.
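As an aside that is not part of the original walkthrough, Grunt also supports task aliases via grunt.registerTask, which lets a bare grunt command run a chosen chain of tasks; a minimal sketch:

// inside the module.exports function, after the loadNpmTasks calls:
// running plain `grunt` will now execute the cssmin task by default
grunt.registerTask("default", ["cssmin"]);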
If we run grunt watch at this point, however, Grunt will terminate with the following, as we have not yet configured the task:

Figure 8.2: The console output after running the watch task

The first thing that we need to do is tell our watch task which files to actually "watch". We do this by setting the files property, just as we did with grunt-contrib-cssmin:

"watch": {
  "target": {
    "files": ["src/styles/myphoto.css"]
  }
}

This tells the watch task to use the myphoto.css file located within our src/styles folder as input (it will only watch for changes made to myphoto.css). Even better, we can watch all files:

"watch": {
  "target": {
    "files": [
      "styles/*.css",
      "!styles/myphoto-hcm.css"
    ]
  }
}

In reality, you would want to be watching all CSS files inside styles/; however, to keep things simple, let's just watch myphoto.css. Go ahead and execute grunt watch again. Unlike the first time that we ran the command, the task should not terminate now. Instead, it should halt with the Waiting... message. Go ahead and make a trivial change (such as removing a white space) to our myphoto.css file. Then, save this change. Note what the terminal output is now:

Figure 8.3: The console output after running the watch task

Great! Our watch task is now successfully listening for file changes made to any style sheet within src/styles. The next step is to put this achievement to good use, that is, we need to get our watch task to execute the minification task that we created in the previous section. To do so, simply add the tasks property to our target:

"watch": {
  "target": {
    "files": [
      "styles/*.css",
      "!styles/myphoto-hcm.css"
    ],
    "tasks": ["cssmin"]
  }
}

Once again, run grunt watch. This time, make a visible change to our myphoto.css style sheet. For example, you can add an obvious rule such as body {background-color: red;}. Observe how, as you save your changes, our watch task now runs our cssmin task:

Figure 8.4: The console output after making a change to the style sheet that is being watched

Refresh the page in your browser and observe the changes. Voilà! We no longer need to run our minifier manually every time we change our style sheet.

Stripping our website of unused CSS

Dead code is never good. Whatever project you are working on, you should always strive to eliminate code that is no longer in use, as early as possible. This is especially important when developing websites, as unused code will inevitably be sent to the client, resulting in additional, unnecessary bytes over the wire (although maintainability is also a major concern). Programmers are not perfect, and we all make mistakes. As such, unused code or style rules are bound to slip past us during development and testing. Consequently, it would be nice if we could establish a safeguard to ensure that at least no unused style makes it past us into production. This is where grunt-uncss fits in; visit https://github.com/addyosmani/grunt-uncss for more. UnCSS strips any unused CSS from our style sheet. When configured properly, it can therefore be very useful in ensuring that our production-ready website is as small as possible.

Let's go ahead and install UnCSS:

npm install grunt-uncss --save-dev

Once installed, we need to tell Grunt about our plugin. Just as in the previous subsections, update Gruntfile.js by adding the grunt.loadNpmTasks('grunt-uncss'); line to our Grunt configuration.
Next, go ahead and define the uncss task:

"uncss": {
  "target": {
    "files": {
      "src/styles/output.css": ["src/index.html"]
    }
  }
},

In the preceding code, we specified a target consisting of the index.html file. This index.html will be parsed by UnCSS. The class and id names used within it will be compared to those appearing in our style sheets. Should our style sheets contain selectors that are unused, those are removed from the output. The output itself will be written to src/styles/output.css.

Let's go ahead and test this. Add a new style to our myphoto.css that is not used anywhere within our index.html. Consider this example:

#foobar {
  color: red;
}

Save and then run this:

grunt uncss

Upon successful execution, the terminal should display output along the lines of this:

Figure 8.5: The console output after executing our uncss task

Go ahead and open the generated output.css file. The file will contain a concatenation of all of our CSS files (including Bootstrap). Search for #foobar. Can't find it? That's because UnCSS detected that it was no longer in use and removed it for us.

We have now successfully configured a Grunt task to strip our website of unused CSS. However, we still need to run this task manually. Would it not be nice if we could configure the task to run with the other watch tasks? If we were to do this, the first thing we would need to ask ourselves is how to combine the CSS minification task with UnCSS. After all, grunt watch runs one before the other, so we would need to use the output of one task as the input of the other. How would we go about doing this?

Well, we know that our cssmin task writes its output to myphoto.min.css. We also know that index.html references myphoto.min.css. Furthermore, we know that uncss receives its input by checking the style sheets referenced in index.html. Therefore, the output produced by our cssmin task is sure to be used by uncss as long as it is referenced within index.html. In order for the output produced by uncss to take effect, we would need to reconfigure the task to write its output into myphoto.min.css, and then add uncss to our list of watch tasks, taking care to insert it into the list after cssmin. However, this leads to a problem: running uncss after cssmin would produce an un-minified style sheet. It also requires the presence of myphoto.min.css, and as myphoto.min.css is actually produced by cssmin, the sheet would not be present when running the task for the first time.

Therefore, we need a different approach. We will use the original myphoto.css as input to uncss, which then writes its output into a file called myphoto.min.css. Our cssmin task then uses this file as input, minifying it as discussed earlier. Since uncss parses the style sheet references in index.html, we will need to first revert our index.html to reference our development style sheet, myphoto.css. Go ahead and do just that: replace the <link rel="stylesheet" href="styles/myphoto.min.css" /> line with <link rel="stylesheet" href="styles/myphoto.css" />.

Processing HTML

For the minified changes to take effect, we now need a tool that replaces our development style sheet references with our production-ready style sheets. Meet grunt-processhtml; visit https://www.npmjs.com/package/grunt-processhtml for more.
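Before wiring it in, it may help to see the intended flow at a glance. The following comment block is merely a summary sketch of the chain just described, with paths as assumed throughout this article:

// the intended build chain, summarized:
//
//   src/styles/myphoto.css            (hand-written; referenced by src/index.html)
//     -> uncss:       strips unused rules, writes myphoto.min.css
//     -> cssmin:      minifies myphoto.min.css into dist/styles/
//     -> processhtml: rewrites the <link> tags so that dist/index.html
//                     references only dist/styles/myphoto.min.css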
Go ahead and install grunt-processhtml using the following command:

npm install grunt-processhtml --save-dev

Add grunt.loadNpmTasks('grunt-processhtml'); to our Gruntfile.js to enable our freshly installed tool. While grunt-processhtml is very powerful, we will only cover how to replace style sheet references; we recommend that you read the tool's documentation to discover further features.

In order to replace our style sheets with myphoto.min.css, we wrap them inside special grunt-processhtml comments:

<!-- build:css styles/myphoto.min.css -->
<link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css" />
<link href='https://fonts.googleapis.com/css?family=Poiret+One' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Lato&subset=latin,latin-ext' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href="bower_components/Hover/css/hover-min.css" />
<link rel="stylesheet" href="styles/myphoto.css" />
<link rel="stylesheet" href="styles/alert.css" />
<link rel="stylesheet" href="styles/carousel.css" />
<link rel="stylesheet" href="styles/a11yhcm.css" />
<link rel="stylesheet" href="bower_components/components-font-awesome/css/font-awesome.min.css" />
<link rel="stylesheet" href="bower_components/lightbox-for-bootstrap/css/bootstrap.lightbox.css" />
<link rel="stylesheet" href="bower_components/DataTables/media/css/dataTables.bootstrap.min.css" />
<link rel="stylesheet" href="resources/animate/animate.min.css" />
<!-- /build -->

Note how the first line, inside the comment, references the style sheet that is meant to replace the style sheets contained within the special comments:

<!-- build:css styles/myphoto.min.css -->

Last but not least, add the following task:

"processhtml": {
  "dist": {
    "files": {
      "dist/index.html": ["src/index.html"]
    }
  }
},

Note how the output of our processhtml task will be written to dist. Test the newly configured task through the grunt processhtml command. The task should execute without errors:

Figure 8.6: The console output after executing the processhtml task

Open dist/index.html and observe how, instead of the 12 link tags, we only have one:

<link rel="stylesheet" href="styles/myphoto.min.css">

Next, we need to reconfigure our uncss task to write its output to myphoto.min.css. To do so, simply replace the 'src/styles/output.css' output path with 'dist/styles/myphoto.min.css' inside our Gruntfile.js (note how myphoto.min.css will now be written to dist/styles as opposed to src/styles). We then need to add uncss to our list of watch tasks, taking care to insert the task into the list after cssmin:

"watch": {
  "target": {
    "files": ["src/styles/myphoto.css"],
    "tasks": ["uncss", "cssmin", "processhtml"],
    "options": {
      "livereload": true
    }
  }
}

Next, we need to configure our cssmin task to use myphoto.min.css as input:

"cssmin": {
  "target": {
    "files": {
      "dist/styles/myphoto.min.css": ["src/styles/myphoto.min.css"]
    }
  }
},

Note how we removed the !src/styles/*.min.css exclusion, which would have prevented cssmin from reading files ending with the min.css extension. Running grunt watch and making a change to our myphoto.css file should now trigger the uncss task and then the cssmin task, resulting in console output indicating the successful execution of all tasks. That is, the console output should indicate that first uncss, then cssmin, and then processhtml were successfully executed. Go ahead and check myphoto.min.css inside the dist folder.
You should see how the following things were done:

- The CSS file contains an aggregation of all of our style sheets
- The CSS file is minified
- The CSS file contains no unused style rules

However, you will also note that the dist folder contains none of our assets: neither images nor Bower components, nor our custom JavaScript files. As such, you would be forced to copy any assets manually. Of course, this is less than ideal, so let's see how we can copy our assets to our dist folder automatically.

The dangers of using UnCSS

UnCSS may cause you to lose styles that are applied dynamically, so care should be taken when using this tool. Take a closer look at the MyPhoto style sheet and see whether you spot any issues. You should note that our style rules for overriding the background color of our navigation pills were removed. One potential fix for this is to write a dedicated class for gray nav-pills (as opposed to overriding them with the Bootstrap classes).

Deploying assets

To copy our assets from src into dist, we will use grunt-contrib-copy; visit https://github.com/gruntjs/grunt-contrib-copy for more on this. Go ahead and install it:

npm install grunt-contrib-copy --save-dev

Once installed, enable it by adding grunt.loadNpmTasks('grunt-contrib-copy'); to our Gruntfile.js. Then, configure the copy task:

"copy": {
  "target": {
    "files": [
      {
        "cwd": "src/images",
        "src": ["*"],
        "dest": "dist/images/",
        "expand": true
      },
      {
        "cwd": "src/bower_components",
        "src": ["*"],
        "dest": "dist/bower_components/",
        "expand": true
      },
      {
        "cwd": "src/js",
        "src": ["**/*"],
        "dest": "dist/js/",
        "expand": true
      }
    ]
  }
},

The preceding configuration should be self-explanatory. We are specifying a list of copy operations to perform: src indicates the source, dest indicates the destination, and cwd indicates the current working directory. Note how, instead of a wildcard expression, we can also match a certain src pattern. For example, to copy only minified JS files, we can write this:

"src": ["*.min.js"]

Take a look at the following screenshot:

Figure 8.7: The console output indicating the number of copied files and directories after running the copy task

Update the watch task:

"watch": {
  "target": {
    "files": ["src/styles/myphoto.css"],
    "tasks": ["uncss", "cssmin", "processhtml", "copy"]
  }
},

Test the changes by running grunt watch. All tasks should execute successfully, with the copy task executing last. Note that myphoto-hcm.css needs to be included in the process and copied to dist/styles/, otherwise the HCM will not work. Try this yourself using the lessons learned so far (one possible answer is sketched at the end of this article)!

Stripping CSS comments

Another common source of unnecessary bytes is comments. While needed during development, they serve no practical purpose in production. As such, we can configure our cssmin task to strip our CSS files of any comments by simply creating an options property and setting its nested keepSpecialComments property to 0:

"cssmin": {
  "target": {
    "options": {
      "keepSpecialComments": 0
    },
    "files": {
      "dist/styles/myphoto.min.css": ["src/styles/myphoto.min.css"]
    }
  }
},

We saw how to use the build tool Grunt to automate the more common and mundane optimization tasks. To build responsive, dynamic, and mobile-first applications on the web with Bootstrap 4, check out the book Mastering Bootstrap 4 - Second Edition.
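As promised, here is one possible answer to the myphoto-hcm.css exercise above; it is a sketch only, adding a fourth entry to the copy task's files array so that the HCM style sheet reaches dist/styles/:

// additional entry for the "files" array of the copy task's target:
{
  // copy the high-contrast-mode style sheet, which the build chain
  // deliberately excludes from minification, straight into dist/styles/
  "cwd": "src/styles",
  "src": ["myphoto-hcm.css"],
  "dest": "dist/styles/",
  "expand": true
}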

Managing Users with PHP-Nuke
Packt, 09 Mar 2010
PHP-Nuke is about web communities, and communities need members. PHP-Nuke enables visitors to your site to create and maintain their own user account and add their personal details. This is usually required for them to post their own news stories, make comments, or contribute to discussions in the forums. Those annoying little tasks like managing lost passwords are also taken care of for you by PHP-Nuke.

User accounts can be created in two ways:

- By the super user (that's you)
- By the user registering on your site

The second method involves a confirmation email sent to the user's email account. This email contains a link for them to click to confirm their registration and activate their account (this needs to be done within 24 hours or the registration expires). Once a visitor is registered on your site, the gates to the full glory of your site are thrown wide open. Visitors, or users as you can now call them, will be able to contribute to discussions on forums, comment on posted stories, add their own news stories, and access parts of the site that are off-limits to the 'riff-raff' unregistered visitor.

Ingredients of a User

Every user requires a certain amount of information to uniquely identify them in PHP-Nuke. Three things are required of every user:

- A nickname: This is an alias, or username if you like. It identifies who the user is, and is their online identity in PHP-Nuke.
- A password: This is required to verify that the user is who they claim to be.
- A valid email address: This is where the confirmation email is sent.

Once the user account is created, the user is of course able to modify their details, and also view the details of other users. Information such as the URL of the user's own website, messenger ID (MSN, AIM, and others), their location, and their interests are also part of the user 'profile', but are not compulsory. By default, the real email address of any user is never made public, both for security and to prevent harvesting by spammers. Users can specify a 'fake email' address, possibly in spam-obfuscated form (for example, address_at_mydomain.com), which will be displayed to other users, although this is not required. A user's privacy is always protected.

Setting Up a New User

User management starts by clicking the Users icon in the Modules Administration menu. Clicking on this icon brings you to the User's Administration panel. This panel consists of two mini-panels, Edit User and Add a New User, whose use is given away by their titles. We'll start by setting up a new user, imaginatively called testuser.

Time For Action—Setting Up a New User Manually

1. If you're not at the User's Administration panel, click on the Users icon in the Modules Administration menu.
2. In the Add a New User panel, enter testuser into the Nickname field.
3. Enter Test User into the Name field.
4. Enter your own email address into the Email field.
5. Scroll down to the Password field. Enter testuser as the password.
6. Click the Add User button. When the page reloads, you will be taken straight back to the administration homepage.

What Just Happened?

We created a new user. For this simple user, we only specified the required fields Nickname, Email, and Password, and provided a single piece of personal information, Name. Failing to specify the required fields will mean that the user is not set up, and you will be prompted to go back and add the missing fields.
No email notification is sent to the user when the user is set up in this way, and no confirmation of the registration is required. As soon as you click Add User, provided all the required fields have been entered, the user is ready to go.

Editing the details of a user is equally easy, but you do have to know their nickname. Simply enter the nickname into the Nickname field of the Edit User panel, select Modify from the drop-down box, and click Ok! If you have taken a sudden dislike to a particular user, enter their nickname into the Nickname field, select Delete from the drop-down box, click Ok!, and they are gone forever (the account, not the person).

Subscribing a User

Once a user has been created, you have the option to subscribe this user. We mentioned the idea of Subscribed Users in earlier articles; it's a mechanism for restricting module access to specific groups of people, such as fee-paying customers. There is only one group of Subscribed Users in PHP-Nuke at present, so once a user has a subscription, they are able to access any module restricted to Subscribed Users only.

The option to subscribe a user is not available when you create the user manually, as we did above. To find the option, you have to edit the user's details. This is done by entering their username into the Edit User panel, selecting Modify from the drop-down box, and clicking on the Ok! button. The subscription options are near the bottom of the user details, underneath the newsletter option. Note that the Subscribe User option does not refer to 'subscribing to' the newsletter; you sign the user up or remove them from your newsletter mailing list with the Newsletter option. The Subscribe User option makes the user into one of the site's elite, a Subscribed User.

If you subscribe the user, you must also specify the Subscription Period. This is the length of time that the user remains subscribed, and ranges from 1 year to 10 years, in yearly increments. If you leave the Subscription Period at None, the user will not be subscribed. Once a user has been subscribed, you can change their subscription details from the same panel: you can unsubscribe the user, or extend their subscription period. To shorten the subscription period, you would have to unsubscribe the user, subscribe them again, and then set the new period. Subscribed users are reminded of the passing of time and the impending expiry of their subscriptions when they visit the Your Account module; we'll explore this module further later in the article.

Time For Action—Registering as a User

This time we'll register to create a user account as a normal visitor would. We'll call the user account userdude. If you do not have your mail server set up, then you will just have to follow the text and screenshots for now. The confirmation email sent by PHP-Nuke is a key part of the registration process, and includes a special link for the visitor to click to activate their account. Don't worry though; when your site is live on your web hosting account, you will undoubtedly be able to access a mail server.

1. If you are still logged in as the super user, log out by clicking the Logout icon in either of the administration menus, or click the Logout link in the Administration block.
2. If you are still logged in as testuser, log out by clicking on the Your Account link in the modules block, then click the Logout/Exit link in the navigation bar that appears. Alternatively, you can enter the logout URL directly:

http://localhost/nuke/modules.php?name=Your_Account&op=logout

You will be redirected to the site homepage.

3. Now click the Your Account link in the Modules block.
4. Click the New User Registration link. This brings you to the New User Registration panel.
5. Enter the Nickname of userdude.
6. Enter your own email address into the Email field.
7. We are going to use userdude for the password as well as the nickname. If you think of another password at this point, enter it instead. Then put the password into the Re-type password field as well.
8. Click the New User button. You will come to the final step of the registration process.
9. Click the Finish button.
10. Open up your email client, and log in to check your mail. You should find a mail with the subject New User Account Activation waiting for you. It will be from the email address you specified in the Administrator Email field in the Site Configuration Menu. The body of that email will look something like this:

Welcome to the Dinosaur Portal

You or someone else has used your email account (myaddress@packtpub.com) to register an account at the Dinosaur Portal. To finish the registration process you should visit the following link in the next 24 hours to activate your user account, otherwise the information will be automatically deleted by the system and you should apply again:

http://thedinosaurportal.com/modules.php?name=Your_Account&op=activate&username=userdude&check_num=64ad845758d7f8f572b12800f60842ba

Following is the member information:

- Nickname: userdude
- Password: userdude

11. Click the link in the email, or copy the link and paste it into your browser, and you will be taken to the New User Activation page, where you will see a message of the form: "userdude: Your account has been activated. Please login from this link using your assigned Nickname and Password." Clicking on this link takes you back to the User Registration/Login page of the Your Account module, and you can use your nickname and password to log in.

What Just Happened?

You just created a new user account. The page for logging in is the homepage of the Your Account module. We'll talk more about this module in a minute; as you might guess, it handles everything to do with 'your' user account. If a visitor is not logged in, they are presented with the login panel when they visit the Your Account module page. From here they can enter their nickname and password to log in, or click the New User Registration link to register a new user account, as we did.

For visitors who have forgotten their password, clicking on the Lost your Password? link will take them to a screen where they can enter their nickname, and an email will be sent to their registered email address containing a confirmation code, a random-looking 10-digit string, with which they can have their password changed. A new, random password is generated and emailed to them. PHP-Nuke never stores raw passwords in its database, so it can never reveal any password. With the new password, the user can log in and change their password to something easier to remember. The registration process for the user is straightforward; they only require a nickname, a valid email address, and a password.
There are certain rules, however, that are followed by PHP-Nuke:

- Only one occurrence of an email address is allowed on the system; if someone uses an email address that belongs to another user account, that address will be rejected, and the user will have to choose another.
- Only one occurrence of a particular nickname is allowed as well; the system will check the uniqueness of the nickname before creating the account.

After the visitor clicks Finish on the final step, the user account is created. Following that, the confirmation email is sent to the email address. If the email address specified is invalid, or not the visitor's email address, then that visitor will have to create their account with a new email address. If the user doesn't mind being embarrassed, they can contact the site administrator, or wait 24 hours for the account to be deleted from the list of 'waiting to be activated' accounts, and then try again.

You will notice that the link to activate the account contains the URL of your PHP-Nuke site:

http://thedinosaurportal.com/modules.php?name=Your_Account&op=activate&username=userdude&check_num=64ad845758d7f8f572b12800f60842ba

It is very important that you have configured your Site URL option correctly in the Web Site Configuration menu (we saw this in Article 4). If you haven't done that, the activation link will point to the wrong site! The check_num part of the URL is what identifies the unregistered visitor to the system. When the visitor registers their details, PHP-Nuke stores them in the database along with the check_num value. When the visitor visits the above link, PHP-Nuke checks the value of check_num against the values stored in the database; if it finds a match, it moves that visitor's details to the proper users table in the database, and removes them from the table of visitors waiting to confirm their registration.

That's all there is to creating user accounts. It is possible to turn off registration, so that only the administrator can create accounts. If you feel the need for this, you can read more about it in the PHP-Nuke HOWTO: http://www.karakas-online.de/EN-Book/disable-registration.html. That section of the PHP-Nuke HOWTO also has a number of other user account hacks that you can make use of.

Graphical Code for User Registration

PHP-Nuke enables you to add a security code to the login or registration pages on the site. The security code is a small graphic with some digits, shown under the password fields, along with a textbox for the visitor to type in the digits from the graphic. The point of this device is to prevent automated registrations; without typing the correct digits into the Type Security Code field, the submission will not be accepted. The digits displayed in the image are not part of the page HTML, and the only way for the digits to be read is to actually see them displayed on a monitor.

Use of the security code is controlled by a setting in the file config.php in the root of your PHP-Nuke installation. (This was the file in which we made some database settings in Article 2.) The setting to change is the value of the $gfx_chk variable. By default, it looks like this in the file, which means that the security code is not used:

$gfx_chk = 0;

The config.php file itself has a description of the values for this variable, as seen in the table:

Value | Effect on the security code
0     | Security code is never used.
1     | Security code only appears on the administrator's login page (admin.php).
2     | Security code only appears on the normal user login page.
3     | Security code only appears for new user registrations.
4     | Security code appears for user login and new user registrations.

Thus, to have the security code appear only at the administrator login, you would set $gfx_chk to 1 and then save the config.php file:

$gfx_chk = 1;

For the graphical code to function properly, the GD extension needs to work properly with PHP on the web server. The GD extension takes care of drawing the graphics; if it isn't functioning for whatever reason (possibly it's not installed), the graphic will not be displayed properly, and it will be impossible to determine the security code. In that case, you will have to change the setting in config.php to remove the graphical code. If you are running your site on a web hosting account and the graphical security code is not being displayed when it should be, then you should contact your host's technical support to find out if there is a problem with the GD extension.

You can tell whether the GD extension is installed by using the phpinfo() PHP function. Open a text editor and enter the following code:

<?php
phpinfo();
?>

Save this file as phpinfo.php in the web server root (xampp\htdocs). When you navigate to that page in your browser, a number of PHP settings are displayed, including the status of the GD extension. If GD is not listed on the page, or if it does not say enabled next to GD Support, then contact your host's technical support. The XAMPP package we install in Appendix A has GD installed and working.

Seeing Who's Who

Log in to your site as the super user and activate the Members List module (deactivated by default). After activation, there will be an additional link available in the Modules block for the Members List module, which provides anyone able to view this module with a list of the registered users. Clicking on a username brings up a view of that user's profile. This is only a view of the user profile; it is not an editable form.

You will notice the word Forum on the profile page. The user profile displayed here is actually the user profile from the Forums module (note also that the Forums module needs to be activated for this screen to be seen). You may also notice that the name of the site is wrong; it says MySite.com, which is not the value we set for our site name. This is because the Forums module has its own set of configuration settings. We will see how to set these in Article 8. Also note that the Members List module takes its information from the Forums module configuration settings.

The Forums module is a complete application (phpBB, one of the best pieces of free, open-source forum software around) integrated into PHP-Nuke. One aspect of the integration is the shared user account: the user account you create for the PHP-Nuke site also functions as a user account on the forums. As a user, it is possible to work with your details in two places in PHP-Nuke, from the Your Account module and also from within the Forums module. Although there are two views of the information, and two places to edit your details, there is still only one user account. At the moment, the Your Account module offers more user details than are found in the Forums module, such as newsletter subscription information.
The integration between the PHP-Nuke user account and the user account for the Forums module has gradually become tighter over the versions of PHP-Nuke, and they are likely to 'converge' further in future versions. Once a user account is created and the user has logged in, a whole new world opens up to them.

The Your Account Module

The Your Account module is a visitor's space. The visitor is guided around their space by a graphical navigation bar. Before we look at each of its links, let's mention what else is on the front page of the Your Account module:

- My Headlines: The user can view a list of headlines from an RSS news feed of another site. The user can select one of the headline sites that we saw in the previous article, or enter the URL of a site directly.
- Broadcast Public Message: The user can enter the text of a public message to be shown to all current visitors of the site. We'll look at this in a moment.

These two features are not always displayed; their display is controlled by options in the Web Site Configuration menu. However, the user is always able to see their Last 10 Comments posted and their Last 10 News Submissions on this page.

Returning to the links in the navigation bar of the Your Account module, we've already seen what the Logout/Exit link does; it logs the visitor out. The Themes link takes the visitor to a page from where they can choose one of the themes installed on the site. We'll look at the Comments link in detail in the next article; it leads to options for viewing and posting comments on stories. Note that when you are logged in as the super user, the Your Account module displays another panel called Administration Functions. This panel allows you to modify certain details of that user; we will meet these in their natural context in the next article.

Editing the User Profile

The Your Info link takes the user to their user profile. We saw some of the options here when we looked at creating the user manually. These options are generally for personal details (name, email, and so on), newsletter subscription, private message options, and forum configuration, among others. The options themselves are straightforward. A number of options in the user profile correspond to forum profile options, and don't particularly affect the user outside of the Forums module. After making any changes to a user profile, the Save Changes button needs to be clicked to save the changes. Note that the Save Changes button is not the button at the very bottom of the user details page; the Save Changes button is above the Avatar Control Panel. The button at the bottom of the form is marked Submit, and is only active when the options in the Avatar Control Panel are enabled.

The Avatar Control Panel, seen at the bottom of the user profile, contains an interesting option. An avatar is a small graphic representing you as an online character. You can choose a graphic from the existing library by clicking on the Show Gallery button next to the Select Avatar from gallery option. Clicking on this button brings up a selection of little images for the user to choose from. Simply click on the required image and it will be assigned to the user profile. Clicking the Back to Profile link will return you to the Your Info page.

The library of images you just saw can be found in the modules/Forums/images/avatars/gallery folder of your PHP-Nuke installation.
If you want, you can add more images here, but make sure your image is a GIF file, and that it is no more than 80 pixels wide or 80 pixels high.

Your Account Configuration

The Your Home link provides some options for configuring Your Account further. From this panel, the number of news stories displayed on the homepage of the site can be controlled. Remember, this setting applies only to you, and only when you are logged in.

Adding Real-time Functionality Using Socket.io
Packt, 22 Sep 2014
In this article by Amos Q. Haviv, the author of MEAN Web Development, we see how Socket.io enables Node.js developers to support real-time communication using WebSockets in modern browsers, with legacy fallback protocols for older browsers.

Introducing WebSockets

Modern web applications such as Facebook, Twitter, or Gmail incorporate real-time capabilities, which enable the application to continuously present the user with recently updated information. Unlike traditional applications, in real-time applications the common roles of browser and server can be reversed, since the server needs to update the browser with new data regardless of the browser's request state. This means that unlike common HTTP behavior, the server won't wait for the browser's requests. Instead, it will send new data to the browser whenever this data becomes available.

This reverse approach is often called Comet, a term coined by a web developer named Alex Russell back in 2006 (the term was a word play on the AJAX term; both Comet and AJAX are common household cleaners in the US). In the past, there were several ways to implement Comet functionality using the HTTP protocol.

The first and easiest way is XHR polling. In XHR polling, the browser makes periodic requests to the server. The server then returns an empty response unless it has new data to send back. Upon a new event, the server will return the new event data to the next polling request. While this works quite well for most browsers, this method has two problems. The most obvious one is that it generates a large number of requests that hit the server for no particular reason, since many requests return empty. The second problem is that the update time depends on the request period. This means that new data will only get pushed to the browser on the next request, causing delays in updating the client state. To solve these issues, a better approach was introduced: XHR long polling.

In XHR long polling, the browser makes an XHR request to the server, but a response is not sent back unless the server has new data. Upon an event, the server responds with the event data, and the browser makes a new long polling request. This cycle enables better management of requests, since there is only a single request per session. Furthermore, the server can update the browser immediately with new information, without having to wait for the browser's next request. Because of its stability and usability, XHR long polling has become the standard approach for real-time applications and has been implemented in various ways, including Forever iFrame, multipart XHR, JSONP long polling using script tags (for cross-domain, real-time support), and the common long-living XHR.

However, all these approaches were actually hacks using the HTTP and XHR protocols in ways they were not meant to be used. With the rapid development of modern browsers and the increased adoption of the new HTML5 specifications, a new protocol emerged for implementing real-time communication: the full-duplex WebSockets protocol.

In browsers that support the WebSockets protocol, the initial connection between the server and browser is made over HTTP and is called an HTTP handshake. Once the initial connection is made, the browser and server open a single ongoing communication channel over a TCP socket. Once the socket connection is established, it enables bidirectional communication between the browser and server.
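To make this concrete, the browser's native WebSocket API is tiny. The following is a minimal sketch of opening a socket and exchanging messages, where the URL and message payloads are purely illustrative:

// open a WebSocket over the ws:// scheme (wss:// when using TLS)
var ws = new WebSocket('ws://localhost:3000');

ws.onopen = function() {
  // the channel is now open in both directions
  ws.send('hello from the browser');
};

ws.onmessage = function(event) {
  // fires whenever the server pushes data, without any browser request
  console.log('server said: ' + event.data);
};

ws.onclose = function() {
  console.log('connection closed');
};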
However, WebSockets still suffer from two major problems. First and foremost is browser compatibility. The WebSockets specification is fairly new, so older browsers don't support it, and though most modern browsers now implement the protocol, a large group of users are still using these older browsers. The second problem is HTTP proxies, firewalls, and hosting providers. Since WebSockets use a different communication protocol than HTTP, a lot of these intermediaries don't support it yet and block any socket communication. As it has always been with the Web, developers are left with a fragmentation problem, which can only be solved using an abstraction library that optimizes usability by switching between protocols according to the available resources. Fortunately, a popular library called Socket.io was already developed for this purpose, and it is freely available to the Node.js developer community.

Introducing Socket.io

Created in 2010 by JavaScript developer Guillermo Rauch, Socket.io aimed to abstract Node.js real-time application development. Since then, it has evolved dramatically, through nine major releases, before being broken in its latest version into two different modules: Engine.io and Socket.io.

Previous versions of Socket.io were criticized for being unstable, since they first tried to establish the most advanced connection mechanisms and then fell back to more primitive protocols. This caused serious issues with using Socket.io in production environments and posed a threat to the adoption of Socket.io as a real-time library. To solve this, the Socket.io team redesigned it and wrapped the core functionality in a base module called Engine.io.

The idea behind Engine.io was to create a more stable real-time module, which first opens a long-polling XHR communication and then tries to upgrade the connection to a WebSockets channel. The new version of Socket.io uses the Engine.io module and provides the developer with various features such as events, rooms, and automatic connection recovery, which you would otherwise have to implement by yourself. In this article's examples, we will use the new Socket.io 1.0, which is the first version to use the Engine.io module.

Older versions of Socket.io prior to Version 1.0 do not use the new Engine.io module and are therefore much less stable in production environments.

When you include the Socket.io module, it provides you with two objects: a socket server object that is responsible for the server functionality and a socket client object that handles the browser's functionality. We'll begin by examining the server object.

The Socket.io server object

The Socket.io server object is where it all begins. You start by requiring the Socket.io module, and then use it to create a new Socket.io server instance that will interact with socket clients. The server object supports both a standalone implementation and the ability to use it in conjunction with the Express framework. The server instance then exposes a set of methods that allow you to manage the Socket.io server operations. Once the server object is initialized, it will also be responsible for serving the socket client JavaScript file for the browser.
A simple implementation of the standalone Socket.io server will look as follows:

var io = require('socket.io')();
io.on('connection', function(socket){ /* ... */ });
io.listen(3000);

This will open a Socket.io server on port 3000 and serve the socket client file at the URL http://localhost:3000/socket.io/socket.io.js. Implementing the Socket.io server in conjunction with an Express application will be a bit different:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){ /* ... */ });
server.listen(3000);

This time, you first use the http module of Node.js to create a server and wrap the Express application. The server object is then passed to the Socket.io module and serves both the Express application and the Socket.io server. Once the server is running, it will be available for socket clients to connect. A client trying to establish a connection with the Socket.io server will start by initiating the handshaking process.

Socket.io handshaking

When a client wants to connect to the Socket.io server, it will first send a handshake HTTP request. The server will then analyze the request to gather the necessary information for ongoing communication. It will then look for configuration middleware that is registered with the server and execute it before firing the connection event. When the client is successfully connected to the server, the connection event listener is executed, exposing a new socket instance.

Once the handshaking process is over, the client is connected to the server and all communication with it is handled through the socket instance object. For example, handling a client's disconnection event will be as follows:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){
  socket.on('disconnect', function() {
    console.log('user has disconnected');
  });
});
server.listen(3000);

Notice how the socket.on() method adds an event handler to the disconnection event. Although the disconnection event is a predefined event, this approach works the same for custom events as well, as you will see in the following sections.

While the handshake mechanism is fully automatic, Socket.io does provide you with a way to intercept the handshake process using a configuration middleware.

The Socket.io configuration middleware

Although the Socket.io configuration middleware existed in previous versions, in the new version it is even simpler and allows you to manipulate socket communication before the handshake actually occurs. To create a configuration middleware, you will need to use the server's use() method, which is very similar to the Express application's use() method:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.use(function(socket, next) {
  /* ... */
  next(null, true);
});
io.on('connection', function(socket){
  socket.on('disconnect', function() {
    console.log('user has disconnected');
  });
});
server.listen(3000);

As you can see, the io.use() method callback accepts two arguments: the socket object and a next callback. The socket object is the same socket object that will be used for the connection and it holds some connection properties. One important property is the socket.request property, which represents the handshake HTTP request. In the following sections, you will use the handshake request to incorporate the Passport session with the Socket.io connection. The next argument is a callback method that accepts two arguments: an error object and a Boolean value. The next callback tells Socket.io whether or not to proceed with the handshake process, so if you pass an error object or a false value to the next method, Socket.io will not initiate the socket connection.
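As a simple illustration (not the Passport integration itself, which comes later), a hypothetical middleware that aborts the handshake for clients that sent no cookies could look like this:

io.use(function(socket, next) {
  // socket.request is the handshake HTTP request described above;
  // a real application would parse and validate a session cookie here
  if (!socket.request.headers.cookie) {
    return next(new Error('No session cookie'), false);
  }
  next(null, true);
});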
Now that you have a basic understanding of how handshaking works, it is time to discuss the Socket.io client object.

The Socket.io client object

The Socket.io client object is responsible for the implementation of the browser socket communication with the Socket.io server. You start by including the Socket.io client JavaScript file, which is served by the Socket.io server. The Socket.io JavaScript file exposes an io() method that connects to the Socket.io server and creates the client socket object. A simple implementation of the socket client will be as follows:

<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io();
socket.on('connect', function() {
  /* ... */
});
</script>

Notice the default URL for the Socket.io client object. Although this can be altered, you can usually leave it like this and just include the file from the default Socket.io path. Another thing you should notice is that the io() method will automatically try to connect to the default base path when executed with no arguments; however, you can also pass a different server URL as an argument.

As you can see, the socket client is much easier to implement, so we can move on to discuss how Socket.io handles real-time communication using events.

Socket.io events

To handle the communication between the client and the server, Socket.io uses a structure that mimics the WebSockets protocol and fires event messages across the server and client objects. There are two types of events: system events, which indicate the socket connection status, and custom events, which you'll use to implement your business logic.

The system events on the socket server are as follows:

io.on('connection', ...): This is emitted when a new socket is connected
socket.on('message', ...): This is emitted when a message is sent using the socket.send() method
socket.on('disconnect', ...): This is emitted when the socket is disconnected

The system events on the client are as follows:

socket.io.on('open', ...): This is emitted when the socket client opens a connection with the server
socket.io.on('connect', ...): This is emitted when the socket client is connected to the server
socket.io.on('connect_timeout', ...): This is emitted when the socket client connection with the server times out
socket.io.on('connect_error', ...): This is emitted when the socket client fails to connect with the server
socket.io.on('reconnect_attempt', ...): This is emitted when the socket client tries to reconnect with the server
socket.io.on('reconnect', ...): This is emitted when the socket client is reconnected to the server
socket.io.on('reconnect_error', ...): This is emitted when a reconnection attempt fails
socket.io.on('reconnect_failed', ...): This is emitted when the socket client fails to reconnect after all reconnection attempts
socket.io.on('close', ...): This is emitted when the socket client closes the connection with the server
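For example, a client could surface connection problems to the user by listening to a few of these system events; here is a minimal sketch that only logs them:

<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io();
// log failed connection attempts
socket.io.on('connect_error', function(err) {
  console.log('could not connect to the server', err);
});
// log successful reconnections
socket.io.on('reconnect', function() {
  console.log('reconnected to the server');
});
</script>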
Handling events

While system events help us with connection management, the real magic of Socket.io relies on using custom events. In order to do so, Socket.io exposes two methods, both on the client and server objects. The first method is the on() method, which binds event handlers to events, and the second method is the emit() method, which is used to fire events between the server and client objects.

An implementation of the on() method on the socket server is very simple:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});
server.listen(3000);

In the preceding code, you bound an event listener to the customEvent event. The event handler is called when the socket client object emits the customEvent event. Notice how the event handler accepts the customEventData argument that is passed to the event handler from the socket client object.

An implementation of the on() method on the socket client is also straightforward:

<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io();
socket.on('customEvent', function(customEventData) {
  /* ... */
});
</script>

This time the event handler is called when the socket server emits the customEvent event that sends customEventData to the socket client event handler. Once you set your event handlers, you can use the emit() method to send events from the socket server to the socket client and vice versa.

Emitting events

On the socket server, the emit() method is used to send events to a single socket client or a group of connected socket clients. The emit() method can be called from the connected socket object, which will send the event to a single socket client, as follows:

io.on('connection', function(socket){
  socket.emit('customEvent', customEventData);
});

The emit() method can also be called from the io object, which will send the event to all connected socket clients, as follows:

io.on('connection', function(socket){
  io.emit('customEvent', customEventData);
});

Another option is to send the event to all connected socket clients except the sender using the broadcast property, as shown in the following lines of code:

io.on('connection', function(socket){
  socket.broadcast.emit('customEvent', customEventData);
});

On the socket client, things are much simpler. Since the socket client is only connected to the socket server, the emit() method will only send the event to the socket server:

var socket = io();
socket.emit('customEvent', customEventData);

Although these methods allow you to switch between personal and global events, they still lack the ability to send events to a group of connected socket clients. Socket.io offers two options to group sockets together: namespaces and rooms.

Socket.io namespaces

In order to easily control socket management, Socket.io allows developers to split socket connections according to their purpose using namespaces. So instead of creating different socket servers for different connections, you can just use the same server to create different connection endpoints. This means that socket communication can be divided into groups, which will then be handled separately.

Socket.io server namespaces

To create a socket server namespace, you will need to use the socket server's of() method, which returns a socket namespace. Once you retain the socket namespace, you can just use it the same way you use the socket server object:

var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.of('/someNamespace').on('connection', function(socket){
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});
io.of('/someOtherNamespace').on('connection', function(socket){
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});
server.listen(3000);
In fact, when you use the io object, Socket.io actually uses a default empty namespace as follows:

io.on('connection', function(socket){
  /* ... */
});

The preceding lines of code are actually equivalent to this:

io.of('').on('connection', function(socket){
  /* ... */
});

Socket.io client namespaces

On the socket client, the implementation is a little different:

<script src="/socket.io/socket.io.js"></script>
<script>
var someSocket = io('/someNamespace');
someSocket.on('customEvent', function(customEventData) {
  /* ... */
});
var someOtherSocket = io('/someOtherNamespace');
someOtherSocket.on('customEvent', function(customEventData) {
  /* ... */
});
</script>

As you can see, you can use multiple namespaces in the same application without much effort. However, once sockets are connected to different namespaces, you will not be able to send an event to all these namespaces at once. This means that namespaces are not very good for a more dynamic grouping logic. For this purpose, Socket.io offers a different feature called rooms.

Socket.io rooms

Socket.io rooms allow you to partition connected sockets into different groups in a dynamic way. Connected sockets can join and leave rooms, and Socket.io provides you with a clean interface to manage rooms and emit events to the subset of sockets in a room. The rooms functionality is handled solely on the socket server but can easily be exposed to the socket client.

Joining and leaving rooms

Joining a room is handled using the socket join() method, while leaving a room is handled using the leave() method. So, a simple subscription mechanism can be implemented as follows:

io.on('connection', function(socket) {
  socket.on('join', function(roomData) {
    socket.join(roomData.roomName);
  });
  socket.on('leave', function(roomData) {
    socket.leave(roomData.roomName);
  });
});

Notice that the join() and leave() methods both take the room name as the first argument.
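The client side of this subscription mechanism is then trivial; a minimal sketch might look as follows (the join and leave event names match the server snippet above):

<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io();
// ask the server to subscribe this client to a room
socket.emit('join', { roomName: 'someRoom' });
// ...and later to unsubscribe it
socket.emit('leave', { roomName: 'someRoom' });
</script>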
Emitting events to rooms

To emit events to all the sockets in a room, you will need to use the in() method. So, emitting an event to all socket clients who joined a room is quite simple and can be achieved with the help of the following code snippet:

io.on('connection', function(socket){
  io.in('someRoom').emit('customEvent', customEventData);
});

Another option is to send the event to all connected socket clients in a room except the sender by using the broadcast property and the to() method:

io.on('connection', function(socket){
  socket.broadcast.to('someRoom').emit('customEvent', customEventData);
});

This pretty much covers the simple yet powerful room functionality of Socket.io. In the next section, you will learn how to implement Socket.io in your MEAN application, and more importantly, how to use the Passport session to identify users in the Socket.io session. While we covered most of Socket.io's features, you can learn more about Socket.io by visiting the official project page at https://socket.io.

Summary

In this article, you learned how the Socket.io module works. You went over the key features of Socket.io and learned how the server and client communicate. You configured your Socket.io server and learned how to integrate it with your Express application. You also used the Socket.io handshake configuration to integrate the Passport session. In the end, you built a fully functional chat example and learned how to wrap the Socket.io client with an AngularJS service.
E-commerce with MEAN
These days e-commerce platforms are widely available. However, as common as they might be, there are instances where, after investing a significant amount of time learning how to use a specific tool, you might realize that it cannot fit your unique e-commerce needs as it promised. Hence, a great advantage of building your own application with an agile framework is that you can quickly meet your immediate and future needs with a system that you fully understand. Adrian Mejia Rosario, the author of the book Building an E-Commerce Application with MEAN, shows us how the MEAN stack (MongoDB, ExpressJS, AngularJS, and NodeJS) is a killer JavaScript and full-stack combination. It provides agile development without compromising on performance and scalability. It is ideal for the purpose of building responsive applications with a large user base, such as e-commerce applications. Let's have a look at a project using MEAN. (For more resources related to this topic, see here.)

Understanding the project structure

The applications built with the angular-fullstack generator have many files and directories. Some code goes in the client, other code executes in the backend, and another portion is just needed for development cases such as the test suites. It's important to understand the layout to keep the code organized.

The Yeoman generators are time savers! They are created and maintained by the community following the current best practices. A generator creates many directories and a lot of boilerplate code to get you started. The number of unknown files in there might be overwhelming at first.

On reviewing the directory structure created, we see that there are three main directories: client, e2e, and server:

The client folder will contain the AngularJS files and assets.
The server directory will contain the NodeJS files, which handle ExpressJS and MongoDB.
Finally, the e2e files will contain the AngularJS end-to-end tests.

File Structure

This is the overview of the file structure of this project:

meanshop
├── client
│   ├── app           - App specific components
│   ├── assets        - Custom assets: fonts, images, etc…
│   └── components    - Non-app specific/reusable components
│
├── e2e               - Protractor end to end tests
│
└── server
    ├── api           - App's server API
    ├── auth          - Authentication handlers
    ├── components    - App-wide/reusable components
    ├── config        - App configuration
    │   ├── local.env.js - Environment variables
    │   └── environment  - Node environment configuration
    └── views         - Server rendered views

Components

You might already be familiar with a number of tools used in this project. If that's not the case, you can read the brief descriptions here.

Testing

AngularJS comes with a default test runner called Karma, and we are going to leverage its default choices:

Karma: It's the JavaScript unit test runner.
Jasmine: It's a BDD framework to test JavaScript code. It is executed with Karma.
Protractor: These are end-to-end tests for AngularJS. They are the highest level of testing, running in the browser and simulating user interactions with the app.

Tools

The following are some of the tools/libraries that we are going to use in order to increase our productivity:

GruntJS: It's a tool that serves to automate repetitive tasks, such as CSS/JS minification, compilation, unit testing, and JS linting.
Yeoman (yo): It's a CLI tool to scaffold web projects. It automates directory and file creation through generators and also provides command lines for common tasks.
Travis CI: Travis CI is a continuous integration tool that runs your test suites every time you commit to the repository.
EditorConfig: EditorConfig is an IDE plugin that loads its configuration from a .editorconfig file. For example, you can set indent_size = 2, indent with spaces or tabs, and so on. It's a time saver and helps maintain consistency across multiple IDEs/teams.
SocketIO: It's a library that enables real-time bidirectional communication between the server and the client.
Bootstrap: It's a frontend framework for web development. We are going to use it to build the theme used throughout this project.
AngularJS full-stack: It's a generator for Yeoman that provides useful command lines to quickly generate server/client code and deploy it to Heroku or OpenShift.
BabelJS: It's a JS-to-JS compiler that allows you to use features from the next generation of JavaScript (ECMAScript 6) today, without waiting for browser support.
Git: It's a distributed code versioning control system.

Package managers

We have package managers for our third-party backend and frontend modules. They are as follows:

NPM: It is the default package manager for NodeJS.
Bower: It is the frontend package manager that can be used to handle versions and dependencies of libraries and assets used in a web project. The file bower.json contains the packages and versions to install, and the file .bowerrc contains the path where those packages are to be installed. The default directory is ./bower_components.

Bower packages

If you have followed the exact steps to scaffold our app, you will have the following frontend components installed:

angular
angular-cookies
angular-mocks
angular-resource
angular-sanitize
angular-scenario
angular-ui-router
angular-socket-io
angular-bootstrap
bootstrap
es5-shim
font-awesome
json3
jquery
lodash

Previewing the final e-commerce app

Let's take a pause from the terminal. In any project, before starting to code, we need to spend some time planning and visualizing what we are aiming for. That's exactly what we are going to do: draw some wireframes that walk us through the app. Our e-commerce app, MEANshop, will have three main sections:

Homepage
Marketplace
Back-office

Homepage

The home page will contain featured products, navigation, menus, and basic information, as you can see in the following image:

Figure 2 - Wireframe of the homepage

Marketplace

This section will show all the products, categories, and search results.

Figure 3 - Wireframe of the products page

Back-office

You need to be a registered user to access the back-office section, as shown in the following figure:

Figure 4 - Wireframe of the login page

After you log in, it will present you with different options depending on your role. If you are a seller, you can create new products, such as the following:

Figure 5 - Wireframe of the product creation page

If you are an admin, you can do everything that a seller does (create products), plus you can manage all the users and delete/edit products.

Understanding requirements for e-commerce applications

There's no better way to learn new concepts and technologies than developing something useful with them. This is why we are building a real-time e-commerce application from scratch. However, there are many kinds of e-commerce apps. In the following sections, we will delimit what we are going to do.

Minimum viable product for an e-commerce site

Even the largest applications that we see today started small and grew their way up.
The minimum viable product (MVP) is strictly the minimum that an application needs to work on. In the e-commerce example, it will be:

Add products with title, price, description, photo, and quantity.
Guest checkout page for products.
One payment integration (for example, PayPal).

This is strictly the minimum requirement to get an e-commerce site working. We are going to start with these, but by no means will we stop there. We will keep adding features as we go and build a framework that will allow us to extend the functionality with high quality.

Defining the requirements

We are going to capture our requirements for the e-commerce application with user stories. A user story is a brief description of a feature told from the perspective of a user, where he expresses his desire and benefit in the following format:

As a <role>, I want <desire> [so that <benefit>]

User stories and many other concepts were introduced with the Agile Manifesto. Learn more at https://en.wikipedia.org/wiki/Agile_software_development

Here are the features that we are planning to develop through this book that have been captured as user stories:

As a seller, I want to create products.
As a user, I want to see all published products and their details when I click on them.
As a user, I want to search for a product so that I can find what I'm looking for quickly.
As a user, I want to have a category navigation menu so that I can narrow down the search results.
As a user, I want to have real-time information so that I know immediately if a product just got sold out or became available.
As a user, I want to check out products as a guest user so that I can quickly purchase an item without registering.
As a user, I want to create an account so that I can save my shipping addresses, see my purchase history, and sell products.
As an admin, I want to manage user roles so that I can create new admins and sellers, and remove seller permissions.
As an admin, I want to manage all the products so that I can ban them if they are not appropriate.
As an admin, I want to see a summary of the activities and order statuses.

All these stories might seem verbose, but they are useful in capturing requirements in a consistent way. They are also handy for developing test cases against.

Summary

Now that we have a gist of an e-commerce app with MEAN, let's build a full-fledged e-commerce project with Building an E-Commerce Application with MEAN.
WebSockets in Wildfly
In this article by Michał Ćmil and Michał Matłoka, the authors of Java EE 7 Development with WildFly, we will cover WebSockets and how they are one of the biggest additions in Java EE 7. We will explore the new possibilities that they provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following:

JSF polling
Java Messaging Service (JMS) messages
REST requests
Remote EJB requests

All of them, besides JMS, were based on the assumption that the client will be responsible for asking the server about the state of the application. In some cases, such as checking whether someone else has booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most nontrivial web applications, and therefore, it deserves a standardized solution that can be applied by developers in multiple projects without much effort.

WebSockets are changing the game for developers. They replace the request-response paradigm, in which the client always initiates the communication, with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You have probably already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5.

In this article, you will learn the following topics:

How WebSockets work
How to create a WebSocket endpoint in Java EE 7
How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly

(For more resources related to this topic, see here.)

An overview of WebSockets

A WebSocket session between the client and server is built upon a standard TCP connection. The WebSocket protocol has its own control frames (mainly to create and sustain the connection), defined by the Internet Engineering Task Force in RFC 6455 (http://tools.ietf.org/html/rfc6455), but peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we have used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price.
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than to create everything from scratch.

What do we get from WebSockets compared to standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated as if it were a different one every time you make a request. You can, of course, simulate a stateful behavior (so that the server would recognize your machine between different requests) using cookies, and increase the performance by reusing the same connection in a short period of time for a specific client, but basically, these are workarounds to overcome the limitations of the HTTP protocol.

Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its connected WebSocket clients, and can even send data between them!

The current solutions that try to simulate real-time data delivery using the HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request to the moment when an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things it wasn't designed for, and they have their own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. The updates are delivered to the user with a smaller latency because we only need one round-trip through the network to get the desired information (it is pushed by the server immediately).

All of these features make WebSockets a great addition to the Java EE platform, which fills the gaps needed to easily finish specific tasks, such as sending updates, notifications, and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies.

How do WebSockets work

To initiate a WebSocket session, the client must send an HTTP request with an Upgrade: websocket header field. This informs the server that the peer client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to handle any protocol, other than HTTP, which is accepted by both sides (the client and server).
In WildFly, this allows the reuse of the HTTP port (80/8080) for other protocols and therefore minimizes the number of required ports that should be configured.

If the server can understand the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers can go to the data transfer phase. From now on, the communication is only done using the WebSocket protocol. It is not possible to exchange any HTTP frames using the current connection. The whole life cycle of a connection can be summarized in the following diagram:

A sample HTTP request from a JavaScript application to a WildFly server would look similar to this:

GET /ticket-agency-websockets/tickets HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: localhost:8080
Origin: http://localhost:8080
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Cookie: [45 bytes were stripped]

We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the requested protocol and all the required data is passed by the client, then it would respond with the following frame:

HTTP/1.1 101 Switching Protocols
X-Powered-By: Undertow 1
Server: Wildfly 8
Origin: http://localhost:8080
Upgrade: WebSocket
Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw=
Date: Sun, 13 Apr 2014 17:04:00 GMT
Connection: Upgrade
Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets
Content-Length: 0

The status code of the response is 101 (switching protocols), and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL which is only handled by another protocol, then the server can ask the client to do an upgrade request. The server uses the 426 (upgrade required) status code in such cases.

The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSockets will be an order of magnitude faster than REST protocols, simply because there is less data to transmit!

If you are wondering about the browser support of WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see it in the following screenshot:

After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint!
Creating our first endpoint

Let's start with a simple example:

package com.packtpub.wflydevelopment.chapter8.boundary;

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/hello")
public class HelloEndpoint {

   @OnOpen
   public void open(Session session, EndpointConfig conf) throws IOException {
       session.getBasicRemote().sendText("Hi!");
   }
}

The Java EE 7 specification has taken developer friendliness into account, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation, @ServerEndpoint("/hello"), defines the path to your endpoint. It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During deployment of the application, you can spot information in the WildFly log about endpoint creation, as shown in the following command line:

02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7) UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello
02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-7) Deploying javax.ws.rs.core.Application: class com.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy
02:21:35,437 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7) JBAS017534: Registered web context: /ticket-agency-websockets

The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with the endpoint path on the appropriate protocol.

The second annotation used, @OnOpen, defines the endpoint's behavior when a connection from a client is opened. It's not the only behavior-related annotation of a WebSocket endpoint. Let's look at the following table:

@OnOpen: The connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information.
@OnMessage: This annotation is executed when a message from the client is received. In such a method, you can just have Session and, for example, a String parameter, where the String parameter represents the received message.
@OnError: There are bad times when some errors occur. With this annotation, you can retrieve a Throwable object apart from the standard Session.
@OnClose: When the connection is closed, it is possible to get some data concerning this event in the form of a CloseReason type object.

There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that in WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message Hi! (sendText(String)). Of course, it's also possible to communicate asynchronously and to send, for example, binary messages using your own bandwidth-saving binary protocol. We will present some of these processes in the next example.
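To illustrate the remaining annotations from the table, here is a hypothetical echo endpoint (it is not part of the ticket application) that responds to every incoming text message and logs the close reason:

import javax.websocket.CloseReason;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/echo")
public class EchoEndpoint {

   @OnMessage
   public void onMessage(Session session, String message) throws IOException {
       // send the received text back to the same client
       session.getBasicRemote().sendText("Echo: " + message);
   }

   @OnClose
   public void onClose(Session session, CloseReason reason) {
       // the CloseReason object describes why the connection was closed
       System.out.println("Connection closed: " + reason.getReasonPhrase());
   }
}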
Expanding our client application

It's time to show how you can leverage the WebSocket features in real life. We created the ticket booking application based on the REST API and AngularJS framework. It was clearly missing one important feature: the application did not show information concerning ticket purchases of other users. This is a perfect use case for WebSockets! Since we're just adding a feature to our previous app, we will describe the changes we will introduce to it.

In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with the registry type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code:

@Singleton
public class SessionRegistry {

    private final Set<Session> sessions = new HashSet<>();

    @Lock(LockType.READ)
    public Set<Session> getAll() {
        return Collections.unmodifiableSet(sessions);
    }

    @Lock(LockType.WRITE)
    public void add(Session session) {
        sessions.add(session);
    }

    @Lock(LockType.WRITE)
    public void remove(Session session) {
        sessions.remove(session);
    }
}

We could use Collections.synchronizedSet from the standard Java libraries, but it's a great chance to remember what we described earlier about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view.

We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry, as shown:

@ServerEndpoint("/tickets")
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));
    }

    private String toJson(Seat seat) {
        final JsonObject jsonObject = Json.createObjectBuilder()
                .add("id", seat.getId())
                .add("booked", seat.isBooked())
                .build();
        return jsonObject.toString();
    }
}

Our endpoint is defined at the /tickets address. We injected a SessionRegistry into our endpoint. During @OnOpen, we add sessions to the registry, and during @OnClose, we just remove them. Message sending is performed on the CDI event (the @Observes annotation), which is already fired in our code during TheatreBox.buyTicket(int).

In our send method, we retrieve all sessions from SessionRegistry, and for each of them, we asynchronously send information about booked seats. We don't really need information about all the Seat fields to realize this feature. That's the reason why we don't use automatic JSON serialization here. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert the JSON to a String, which is sent in a text message.

Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones.
However, if we would like to do the same thing but, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method.

These are all the changes required on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature. The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are a few lines of code we have to add inside the module defined in the seat.js file, which are as follows:

var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets");
ws.onmessage = function (message) {
    var receivedData = message.data;
    var bookedSeat = JSON.parse(receivedData);
    $scope.$apply(function () {
        for (var i = 0; i < $scope.seats.length; i++) {
            if ($scope.seats[i].id === bookedSeat.id) {
                $scope.seats[i].booked = bookedSeat.booked;
                break;
            }
        }
    });
};

The code is very simple. We just create the WebSocket object using the URL of our endpoint, and then we define the onmessage function in that object. During the function execution, the received message is parsed from JSON into a JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if the ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions and see that when one user buys a ticket, the second user sees almost instantly that the seat state has changed to booked.

We can enhance our application a little to inform users whether the WebSocket connection is really working. Let's just define onopen and onclose functions for this purpose:

ws.onopen = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'info',
            msg: 'Push connection from server is working'
        });
    });
};
ws.onclose = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'warning',
            msg: 'Error on push connection from server '
        });
    });
};

To inform users about a connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function.

Running the described code results in the notification, which is visible in the following screenshot:

However, if the server fails after opening the website, you might get an error as shown in the following screenshot:

Transforming POJOs to JSON

In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath.
The required Maven dependency is as follows:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.3</version>
</dependency>

Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also versions of the javax.websocket.Encoder interface for binary and streamed data (for both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple. This is shown in the following code snippet:

public class JSONEncoder implements Encoder.Text<Object> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson(); [1]
    }

    @Override
    public void destroy() {
        // do nothing
    }

    @Override
    public String encode(Object object) throws EncodeException {
        return gson.toJson(object); [2]
    }
}

First, we create an instance of GSON in the init method [1]; this action will be executed when the endpoint is created. Next, in the encode method, which is called every time we send an object through an endpoint, we use GSON to create JSON from the object [2]. This is quite concise when we think how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before its creation. We have the encoder in place. Now it's time to alter our endpoint:

@ServerEndpoint(value = "/tickets", encoders={JSONEncoder.class}) [1]
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat)); [2]
    }
}

The first change is done on the @ServerEndpoint annotation. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array [1]. Additionally, we have to pass the endpoint name using the value attribute. Earlier, we used the sendText method to pass a string containing manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we'll use the getAsyncRemote().sendObject() method [2]. That's all! Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only id and booked.

After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output shown in the following screenshot:

Of course, it is possible to use formats other than JSON. If you want to achieve better performance (when it comes to serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but they may come in handy if you would like to use WebSockets for other clients too.
Tyrus (https://tyrus.java.net/) is the reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages.

An alternative to WebSockets

The example we presented in this article could also be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows for one-way communication from the server to the client over HTTP. It is much simpler than WebSockets but has built-in support for things such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but they are not the only way to pass events, so when you need to implement some notifications from the server side, remember about SSE.
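For instance, on the browser side, subscribing to an SSE stream needs only the standard EventSource API; here is a minimal sketch (the /notifications URL is a placeholder for whatever endpoint serves the text/event-stream response):

<script>
var source = new EventSource('/notifications');
source.onmessage = function (event) {
    // event.data holds the payload sent by the server
    console.log('update from server: ' + event.data);
};
</script>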
Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available, and most of them use different methods of transportation to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html.

Summary

In this article, we managed to introduce a new low-level type of communication. We presented how it works underneath and how it compares to the SOAP and REST approaches introduced earlier. We also discussed how the new approach changes the development of web applications. Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very few code changes in our existing project when we take into account how much we were able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of the flexibility that comes with the new version of the Java EE platform.

Multiple Templates in Django
Considering the different approaches

Though there are different approaches that can be taken to serve content in multiple formats, the best solution will be specific to your circumstances and implementation. Almost any approach you take will have maintenance overhead. You'll have multiple places to update when things change. As copies of your template files proliferate, a simple text change can become a large task.

Some of the cases we'll look at don't require much consideration. Serving a printable version of a page, for example, is straightforward and easily accomplished. Putting a pumpkin in your site header at Halloween or using a heart background around Valentine's Day can make your site seem timely and relevant, especially if you are in a seasonal business. Other techniques, such as serving different templates to different browsers, devices, or user-agents, might create serious debate among content authors. Since serving content to mobile devices is becoming a new standard of doing business, we'll make it the focus of this article.

Serving mobile devices

The Mobile Web will remind some old timers (like me!) of the early days of web design where we'd create different sites for Netscape and Internet Explorer. Hopefully, we take lessons from those days as we go forward and don't repeat our mistakes. Though we're not as apt to serve wholly different templates to different desktop browsers as we once were, the mobile device arena creates special challenges that require careful attention.

One way to serve both desktop and mobile devices is a one-size-fits-all approach. Through carefully structured and semantically correct XHTML markup and CSS selectors identified to be applied to handheld output, you can do a reasonable job of making your content fit a variety of contexts and devices. However, this method has a couple of serious shortcomings.

First, it does not take into account the limitations of devices for rich media presentation with Flash, JavaScript, DHTML, and AJAX, as they are largely unsupported on all but the highest-end devices. If your site depends on any of these technologies, your users can get frustrated when trying to experience it on a mobile device.

Also, it doesn't address the varying levels of CSS support by different mobile devices. What looks perfect on one device might look passable on another and completely unusable on a third because only some of the CSS rules were applied properly. It also does not take into account the potentially high bandwidth costs for large markup files and CSS for users who pay by the amount of data transferred. For example, putting display: none on an image doesn't stop a mobile device from downloading the file. It only prevents it from being shown.

Finally, this approach doesn't tailor the experience to the user's circumstances. Users tend to be goal-oriented and have specific actions in mind when using the mobile web, and content designers should recognize that simply recreating the desktop experience on a smaller screen might not solve their needs. Limiting the information to what a mobile user is looking for and designing a simplified navigation can provide a better user experience.

Adapting content

You know your users best, and it is up to you to decide the best way to serve them. You may decide to pass on the one-size-fits-all approach and serve a separate mobile experience through content adaptation.
The W3C's Mobile Web Initiative best practices guidelines suggest giving users the flexibility and freedom to choose their experience, and providing links between the desktop and mobile templates so that they can navigate between the two. It is generally not recommended to automatically redirect users on mobile devices to a mobile site unless you give them a way to access the full site.

The dark side of this kind of content adaptation is that you will have a second set of template files to keep updated when you make site changes. It can also cause your visitors to search through different bookmarks to find the content they have saved.

Before we get into multiple sites, let's start with some examples of showing alternative templates on our current site.

Setting up our example

Since we want to customize the output of our detail page based on the presence of a variable in the URL, we're going to use a view function instead of a generic view.

Let us consider a press release application for a company website. The press release object will have a title, body, published date, and author name.

In the root directory of your project (the projects/mycompany directory), create the press application by using the startapp command:

$ python manage.py startapp press

This will create a press folder in your site. Edit the mycompany/press/models.py file:

from django.db import models

class PressRelease(models.Model):
    title = models.CharField(max_length=100)
    body = models.TextField()
    pub_date = models.DateTimeField()
    author = models.CharField(max_length=100)

    def __unicode__(self):
        return self.title

Create a file called admin.py in the mycompany/press directory, adding these lines:

from django.contrib import admin
from mycompany.press.models import PressRelease

admin.site.register(PressRelease)

Add the press and admin applications to the INSTALLED_APPS variable in the settings.py file:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.admin',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'mycompany.press',
)

In the root directory of your project, run the syncdb command to add the new models to the database:

$ python manage.py syncdb

We will be prompted to create a superuser; go ahead and create one. We can then access the admin site by browsing to http://localhost:8000/admin/ and add some data.

Create your mycompany/press/urls.py file as shown. Note that the object_list generic view needs a dictionary of options, so we define a minimal press_list_dict that hands it the full queryset:

from django.conf.urls.defaults import *
from mycompany.press.models import PressRelease

press_list_dict = {
    'queryset': PressRelease.objects.all(),
}

urlpatterns = patterns('',
    (r'detail/(?P<pid>\d+)/$', 'mycompany.press.views.detail'),
    (r'list/$', 'django.views.generic.list_detail.object_list', press_list_dict),
    (r'latest/$', 'mycompany.press.views.latest'),
    (r'$', 'django.views.generic.simple.redirect_to', {'url': '/press/list/'}),
)

In your mycompany/press/views.py file, your detail view should look like this:

from django.http import HttpResponse
from django.shortcuts import get_object_or_404
from django.template import loader, Context
from mycompany.press.models import PressRelease

def detail(request, pid):
    '''
    Accepts a press release ID and returns the detail page
    '''
    p = get_object_or_404(PressRelease, id=pid)
    t = loader.get_template('press/detail.html')
    c = Context({'press': p})
    return HttpResponse(t.render(c))

Let's jazz up our template a little more for the press release detail by adding some CSS to it.
In mycompany/templates/press/detail.html, edit the file to look like this:

<html>
<head>
<title>{{ press.title }}</title>
<style type="text/css">
body {
    text-align: center;
}
#container {
    margin: 0 auto;
    width: 70%;
    text-align: left;
}
.header {
    background-color: #000;
    color: #fff;
}
</style>
</head>
<body>
<div id="container">
<div class="header">
<h1>MyCompany Press Releases</h1>
</div>
<div>
<h2>{{ press.title }}</h2>
<p>Author: {{ press.author }}<br/>
Date: {{ press.pub_date }}<br/></p>
<p>{{ press.body }}</p>
</div>
</div>
</body>
</html>

Start your development server and point your browser to http://localhost:8000/press/detail/1/. You should see a styled version of the press release data you entered earlier. If your press release detail page is serving correctly, you're ready to continue.

Remember that generic views can save us development time, but sometimes you'll need a regular view because you're doing something that requires a view function customized to the task at hand. The exercise we're about to do is one of those circumstances, and after going through it, you'll have a better idea of when to use one type of view over the other.

Serving printable pages

One of the easiest approaches we will look at is serving an alternative version of a page based on the presence of a variable in the URL (also known as a URL parameter). To serve a printable version of an article, for example, we can add ?printable to the end of the URL. To make this work, we'll add an extra step in our view that checks the URL for this variable. If it exists, we'll load a printer-friendly template file; if it doesn't, we'll load the normal template file.

Start by adding the highlighted lines to the detail function in the mycompany/press/views.py file:

def detail(request, pid):
    '''
    Accepts a press release ID and returns the detail page
    '''
    p = get_object_or_404(PressRelease, id=pid)
    if request.GET.has_key('printable'):
        template_file = 'press/detail_printable.html'
    else:
        template_file = 'press/detail.html'
    t = loader.get_template(template_file)
    c = Context({'press': p})
    return HttpResponse(t.render(c))

We're checking the request.GET object to see whether a query string parameter named printable was present in the current request. If it was, we load the press/detail_printable.html file; if not, we load the press/detail.html file. We've also changed the loader.get_template function to take the template_file variable.

To test our changes, we'll need to create a simple version of our template with only minimal formatting. Create a new file called detail_printable.html in the mycompany/templates/press/ directory and add these lines to it:

<html>
<head>
<title>{{ press.title }}</title>
</head>
<body>
<h1>{{ press.title }}</h1>
<p>Author: {{ press.author }}<br/>
Date: {{ press.pub_date }}<br/></p>
<p>{{ press.body }}</p>
</body>
</html>

Now that we have both regular and printable templates, let's test our view. Point your browser to http://localhost:8000/press/detail/1/, and you should see the original template as it was before. Change the URL to http://localhost:8000/press/detail/1/?printable and you should see the new, unstyled printable template.

Creating site themes

Depending on the audience and focus of your site, you may want to temporarily change its look for a season or holiday such as Halloween or Valentine's Day. This is easily accomplished by leveraging the power of the TEMPLATE_DIRS configuration setting.
The TEMPLATE_DIRS variable in the settings.py file specifies the location of the templates for your site, and it allows you to list multiple locations for your template files. When you specify multiple paths, Django looks for a requested template file in the first path; if it doesn't find it there, it keeps searching through the remaining paths until the file is located.

We can use this to our advantage by adding an override directory as the first element of the TEMPLATE_DIRS value. When we want to override a template with a special themed one, we add the file to the override directory. The next time the template loader tries to load the template, it finds it in the override directory and serves it.

For example, let's say we want to override our press release page from the previous example. Recall that the view loaded the template like this (from mycompany/press/views.py):

template_file = 'press/detail.html'
t = loader.get_template(template_file)

When the template engine loads the press/detail.html template file, it gets it from the mycompany/templates/ directory as specified in the mycompany/settings.py file:

TEMPLATE_DIRS = (
    '/projects/mycompany/templates/',
)

If we add an additional directory to our TEMPLATE_DIRS setting, Django will look in the new directory first:

TEMPLATE_DIRS = (
    '/projects/mycompany/templates/override/',
    '/projects/mycompany/templates/',
)

Now when the template is loaded, Django first checks for the file /projects/mycompany/templates/override/press/detail.html. If that file doesn't exist, it goes on to the next directory and looks for the file at /projects/mycompany/templates/press/detail.html.

If you're using Windows, use Windows-style file paths such as c:/projects/mycompany/templates/ for these examples.

Therein lies the beauty. If we want to override our press release template, we simply drop an alternative version with the same file name into the override directory. When we're done using it, we remove it from the override directory and the original version is served again (or we rename the file in the override directory to something other than detail.html).

If you're concerned about the performance overhead of a nearly empty override directory that is constantly checked for the existence of template files, consider caching techniques as a potential solution.

Testing the template overrides

Let's create a template override to test the concept we just learned. In your mycompany/settings.py file, edit the TEMPLATE_DIRS setting to look like this:

TEMPLATE_DIRS = (
    '/projects/mycompany/templates/override/',
    '/projects/mycompany/templates/',
)

Create a directory called override under mycompany/templates/ and another directory underneath that called press. You should now have these directories:

/projects/mycompany/templates/override/
/projects/mycompany/templates/override/press/

Create a new file called detail.html in mycompany/templates/override/press/ and add these lines to the file:

<html>
<head>
<title>{{ press.title }}</title>
</head>
<body>
<h1>Happy Holidays</h1>
<h2>{{ press.title }}</h2>
<p>Author: {{ press.author }}<br/>
Date: {{ press.pub_date }}<br/></p>
<p>{{ press.body }}</p>
</body>
</html>

You'll probably notice that this is just our printable detail template with an extra "Happy Holidays" line added to the top.
Point your browser to http://localhost:8000/press/detail/1/ and you should now see the press release rendered with the new "Happy Holidays" heading. By creating a new press release detail template and dropping it into the override directory, we caused Django to automatically pick up the new template and serve it, without us having to change the view. To change it back, simply remove the file from the override directory (or rename it).

One other thing to notice is that if you add ?printable to the end of the URL, Django still serves the printable version of the file we created earlier.

Delete the mycompany/templates/override/ directory and any files in it, as we won't need them again.
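Before moving on, one last thought on site themes: if you would rather not add and remove override files by hand each season, the same TEMPLATE_DIRS trick can be automated. The following sketch is an illustration only, not part of the book's example; the directory name and the date range are assumptions.

# In settings.py: prepend a themed override directory around Halloween.
# The path and date range here are assumptions for illustration.
import datetime

TEMPLATE_DIRS = (
    '/projects/mycompany/templates/',
)

today = datetime.date.today()
if today.month == 10 and today.day >= 24:
    TEMPLATE_DIRS = ('/projects/mycompany/templates/halloween/',) + TEMPLATE_DIRS

Keep in mind that settings.py is evaluated when the server process starts, so a long-running process won't pick up the date change until it is restarted.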
Using OpenShift
Packt
21 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Each of these utilities uses the OpenShift REST API at the backend; therefore, as a user, we could potentially orchestrate OpenShift through the API with common command-line utilities such as curl, and write scripts for automation. We could also use the API to write our own custom user interface, if we had the desire. In the following sections, we will explore each of the currently supported user experiences, all of which can be intermixed because they communicate with the backend in a uniform fashion using the REST API just mentioned.

Getting started using OpenShift

As discussed previously, we will be using the OpenShift Online free hosted service for the example portions. OpenShift Online has the lowest barrier to entry from a user's perspective because we will not have to deploy our own OpenShift PaaS before being able to utilize it.

Since we will be using the OpenShift Online service, the very first step is to visit the website and sign up for a free account via https://openshift.redhat.com/app/account/new.

New Account Form

Once this step is complete, we will find an e-mail in the inbox we provided during sign-up, with a subject line similar to "Confirm your Red Hat OpenShift account"; inside that e-mail is a URL that must be followed to complete the setup and verification step.

Now that we've successfully completed the sign-up phase, let's move on to exploring the different ways in which we can use and interact with OpenShift.

Command-line utilities

Due to the advancements in modern computing and the advent of mobile devices such as tablets and smartphones, we have grown accustomed to Graphical User Interfaces (GUIs) over Command-Line Interfaces (CLIs) for most of our computing needs. This trend is even stronger in the realm of web applications because of the rich visual experiences that can be delivered using next-generation web technologies. However, those of us in the development and system administration circles of the world are no strangers to the CLI, and we know that it is often the most powerful way to accomplish an array of development and administration tasks.

Much of this is a credit to powerful shell environments that have their roots in traditional UNIX environments; popular examples are bash and zsh. In more recent years, PowerShell for the Microsoft Windows platform has aimed to provide comparable CLI power. The shell, as referenced here, is a UNIX-style shell: a command interpreter that supports features such as variables, functions, pipes, I/O redirection, variable substitution, flow control, conditionals, and scripting. There is also a POSIX standard for shells that defines a set of features and behaviors that must be complied with, allowing complex scripts to be portable.

With this inherent power at the fingertips of anyone who wields the command line, the development team of the OpenShift PaaS has written a command-line utility in the same spirit, offering a powerful tool to its users and developers.
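As a brief illustration of the raw REST scripting mentioned at the start of this section, the following Python sketch lists the cartridges available on the service, roughly the equivalent of a curl call. Treat it as a sketch under stated assumptions: the endpoint path and the shape of the JSON response are assumptions based on the OpenShift Online broker API of this era, and the credentials are placeholders.

# Sketch: query the OpenShift REST API directly (the equivalent of
# a curl call). The endpoint path and JSON layout are assumptions
# based on the OpenShift Online broker API of this era.
import requests

response = requests.get(
    'https://openshift.redhat.com/broker/rest/cartridges',
    auth=('user@example.com', 'password'),   # placeholder credentials
    headers={'Accept': 'application/json'},
)
response.raise_for_status()
for cartridge in response.json().get('data', []):
    print(cartridge.get('name'))

Anything the rhc utility or the web console does could, in principle, be scripted this way, since they all talk to the same API.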
Before we get too deep into the details, let's quickly look at what a normal application creation and deployment requires in OpenShift, using the following commands:

$ rhc app create myawesomewebapp ruby-1.9
$ cd myawesomewebapp

(write, create, and commit code changes)

$ git commit -a -m "wrote awesome code"
$ git push

This will be discussed at length shortly, but as a quick rundown: the rhc app create myawesomewebapp ruby-1.9 command creates an application that runs on OpenShift using ruby-1.9 as the programming platform. Behind the scenes, it provisions space and resources and configures services for us. It also creates a git repository that is then cloned locally (in our example, named myawesomewebapp), and in order to access it, we need to change directories into the git repository, which is precisely what the next command, cd myawesomewebapp, does. After committing and pushing your code, you're live, running your web application in the cloud.

While this is an extremely high-level overview and there are some prerequisites, normal use of OpenShift really is that easy. In the following section, we will discuss at length all the steps necessary to launch a live application in OpenShift Online using the rhc command-line utility and git.

The rhc command-line utility is written in the Ruby programming language and is distributed as a RubyGem (https://rubygems.org/). This is the recommended method of installation for Ruby modules, libraries, and utilities, due to the platform-independent nature of Ruby and the ease of distribution of gems. The rhc utility is also available through the native package management of both Fedora and Red Hat Enterprise Linux (via the EPEL repository, available at https://fedoraproject.org/wiki/EPEL) by running the yum install rubygem-rhc command.

Another noteworthy benefit of RubyGems is that they can be installed to a user's home directory, allowing them to be used even in environments where systems are centrally managed by an IT department. RubyGems are installed using the gem package manager; users of GNU/Linux package managers such as yum, apt-get, and pacman, or of Mac OS X's community homebrew (brew) package manager, will be familiar with the concept. For those unfamiliar, a package manager tracks a unit of software called a "package", along with its dependencies, and handles installation, updates, and removal. We have taken this short tangent into the topic of RubyGems before moving on to the command-line utility for OpenShift to ensure that we don't leave out any background information.

Summary

Hopefully, we can each select our preferred method of deploying on OpenShift, and developers of all backgrounds, preferences, and development platforms will feel at home working with OpenShift as a development and deployment platform.

Resources for Article:

Further resources on this subject:
What is Oracle Public Cloud? [Article]
Features of CloudFlare [Article]
vCloud Networks [Article]

Creating the maze and animating the cube
Packt
07 Jul 2014
9 min read
(For more resources related to this topic, see here.)

A maze is a rather simple shape that consists of a number of walls and a floor. So, what we need is a way to create these shapes. Three.js, not very surprisingly, doesn't have a standard geometry that allows you to create a maze, so we need to create the maze by hand. To do this, we need to take two different steps: first, find a way to generate the layout of the maze so that not all the mazes look the same; second, convert that layout to a set of cubes (THREE.BoxGeometry) that we can use to render the maze in 3D.

There are many different algorithms we can use to generate a maze, and luckily there are a number of open source JavaScript libraries that implement such algorithms, so we don't have to start from scratch. For the example in this book, I've used the random-maze-generator project that you can find on GitHub at the following link:

https://github.com/felipecsl/random-maze-generator

Generating a maze layout

Without going into too much detail, this library allows you to generate a maze and render it on an HTML5 canvas as a classic two-dimensional maze. You can generate one by just using the following JavaScript:

var maze = new Maze(document, 'maze');
maze.generate();
maze.draw();

Even though this produces a nice-looking maze, we can't use it directly to create a 3D maze. What we need to do is change the code the library uses to write to the canvas, and have it create Three.js objects instead. This library draws lines on the canvas in a function called drawLine:

drawLine: function(x1, y1, x2, y2) {
  self.ctx.beginPath();
  self.ctx.moveTo(x1, y1);
  self.ctx.lineTo(x2, y2);
  self.ctx.stroke();
}

If you're familiar with the HTML5 canvas, you can see that this function draws lines based on the input arguments. Now that we've got this maze, we need to convert it to a number of 3D shapes so that we can render them in Three.js.

Converting the layout to a 3D set of objects

To change this library to create Three.js objects, all we have to do is change the drawLine function to the following code snippet:

drawLine: function(x1, y1, x2, y2) {
  var lengthX = Math.abs(x1 - x2);
  var lengthY = Math.abs(y1 - y2);

  // since there are only 90 degree angles, one of these is always 0;
  // to add a certain thickness to the wall, set it to 0.5
  if (lengthX === 0) lengthX = 0.5;
  if (lengthY === 0) lengthY = 0.5;

  // create a cube to represent the wall segment
  var wallGeom = new THREE.BoxGeometry(lengthX, 3, lengthY);
  var wallMaterial = new THREE.MeshPhongMaterial({
    color: 0xff0000,
    opacity: 0.8,
    transparent: true
  });

  // and create the complete wall segment
  var wallMesh = new THREE.Mesh(wallGeom, wallMaterial);

  // finally position it correctly
  wallMesh.position = new THREE.Vector3(
    x1 - ((x1 - x2) / 2) - (self.height / 2),
    wallGeom.height / 2,
    y1 - ((y1 - y2)) / 2 - (self.width / 2));

  self.elements.push(wallMesh);
  scene.add(wallMesh);
}

In this new drawLine function, instead of drawing on the canvas, we create a THREE.BoxGeometry object whose length and depth are based on the supplied arguments. Using this geometry, we create a THREE.Mesh object and use the position attribute to place the mesh at a specific point with x, y, and z coordinates. Before we add the mesh to the scene, we add it to the self.elements array. Now we can just use the following code snippet to create a 3D maze:

var maze = new Maze(scene, 17, 100, 100);
maze.generate();
maze.draw();

As you can see, we've also changed the input arguments.
These arguments now define the scene to which the maze should be added and the size of the maze. Every time you refresh, you'll see a newly generated random maze. Now that we've got our generated maze, the next step is to add the object that we'll move through the maze.

Animating the cube

Using the controls at the top-right corner of the example, you can move the cube around. What you'll see is that the cube rotates around its edges, not around its center. In this section, we'll show you how to create that effect. Let's first look at the default rotation, which is along an object's central axis, and the translation behavior of Three.js.

The standard Three.js rotation behavior

Let's first look at the properties you can set on THREE.Mesh:

position: the position of an object, relative to the position of its parent. In all our examples so far, the parent is THREE.Scene.
rotation: the rotation of THREE.Mesh around its own x, y, or z axis.
scale: scales the object along its own x, y, and z axes.
translateX(amount): moves the object by the specified amount over the x axis.
translateY(amount): moves the object by the specified amount over the y axis.
translateZ(amount): moves the object by the specified amount over the z axis.

If we want to rotate a mesh around one of its own axes, we can just call the following line of code:

plane.rotation.x = -0.5 * Math.PI;

We've used this to rotate the ground area from its default vertical orientation to a horizontal one. It is important to know that this rotation is done around the object's own internal axis, not the x, y, or z axis of the scene. So, if you chain a number of rotations one after another, you have to keep track of the orientation of your mesh to make sure you get the required effect. Another point to note is that rotation is done around the center of the object, in this case the center of the cube.

If we look at the effect we want to accomplish, we run into the following two problems. First, we don't want to rotate around the center of the object; we want to rotate around one of its edges to create a walking-like animation. Second, if we use the default rotation behavior, we have to continuously keep track of our orientation, since we're rotating around our own internal axis. In the next section, we'll explain how you can solve these problems by using matrix-based transformations.

Creating an edge rotation using matrix-based transformation

If we want to perform edge rotations, we have to take the following few steps. First, since we want to rotate around an edge, we have to change the center point of the object to the edge we want to rotate around. Second, since we don't want to keep track of all the rotations we've done, we'll need to make sure that after each rotation the vertices of the cube represent the correct position. Finally, after we've rotated around the edge, we have to do the inverse of the first step, to make sure the center point of the object is back in the center of the cube, ready for the next step.

So, the first thing we need to do is change the center point of the cube. The approach we use is to offset the position of all individual vertices and then change the position of the cube in the opposite direction.
The following example will allow us to make a step to the right-hand side:

cubeGeometry.applyMatrix(new THREE.Matrix4().makeTranslation(0, width / 2, width / 2));
cube.position.y += -width / 2;
cube.position.z += -width / 2;

With the cubeGeometry.applyMatrix function, we can change the position of the individual vertices of our geometry. In this example, we create a translation (using makeTranslation) that offsets all the y and z coordinates by half the width of the cube. The result looks as if the cube moved a little to the right and up, but the actual center of the cube is now positioned at one of its lower edges. Next, we use the cube.position property to put the cube back on the ground plane, since the individual vertices were offset by the makeTranslation call.

Now that the edge of the object is positioned correctly, we can rotate the object. For rotation, we could use the standard rotation property, but then we would have to constantly keep track of the orientation of our cube. So, for rotations, we once again use a matrix transformation on the vertices of our cube:

cube.geometry.applyMatrix(new THREE.Matrix4().makeRotationX(amount));

As you can see, we use the makeRotationX function, which changes the position of our vertices. Now we can easily rotate our cube without having to worry about its orientation.

The final step we need to take is to reset the cube to its original position; taking into account that we've moved a step to the right, we can prepare for the next step:

cube.position.y += width / 2;   // the inverse of the first offset, plus the width of the step
cube.position.z += -width / 2;
cubeGeometry.applyMatrix(new THREE.Matrix4().makeTranslation(0, -width / 2, width / 2));

As you can see, this is the inverse of the first step; we've added the width of the cube to position.y and subtracted the width from the second argument of the translation to compensate for the step to the right-hand side we've taken. If we use the preceding code snippet, we will only see the result of the step to the right.

Summary

In this article, we have seen how to create a maze and animate a cube.

Resources for Article:

Further resources on this subject:
Working with the Basic Components That Make Up a Three.js Scene [article]
3D Websites [article]
Rich Internet Application (RIA) – Canvas [article]