
Communicating with Servers

Packt
02 Sep 2013
24 min read
Creating an HTTP GET request to fetch JSON

One of the basic means of retrieving information from the server is HTTP GET. In a RESTful design, this method should be used only for reading data, so GET calls should never change server state. This may not hold for every possible case; for example, if we have a view counter on a certain resource, is that a real change? If we follow the definition literally then yes, it is a change, but it is far too insignificant to be taken into account.

Opening a web page in a browser performs a GET request, but we often want a scripted way of retrieving data. This is usually done to achieve Asynchronous JavaScript and XML (AJAX), allowing data to be reloaded without a complete page reload. Despite the name, the use of XML is not required, and these days JSON is the format of choice. A combination of JavaScript and the XMLHttpRequest object provides a method for exchanging data asynchronously, and in this recipe we are going to read JSON from the server using plain JavaScript and jQuery.

Why use plain JavaScript rather than jQuery directly? We strongly believe that jQuery simplifies the DOM API, but it is not always available to us, and we also need to know the underlying code behind asynchronous data transfer in order to fully grasp how applications work.

Getting ready

The server will be implemented using Node.js. In this example, for simplicity, we will use restify (http://mcavage.github.io/node-restify/), a Node.js module for the creation of correct REST web services.

How to do it...

Let's perform the following steps.

In order to include restify in our project, run the following command in the root directory of our server-side scripts:

    npm install restify

After adding the dependency, we can proceed to creating the server code. We create a server.js file that will be run by Node.js, and at the beginning of it we require restify:

    var restify = require('restify');

With this restify object, we can now create a server object and add handlers for GET methods:

    var server = restify.createServer();
    server.get('hi', respond);
    server.get('hi/:index', respond);

Both get handlers call back to a function named respond, so we can now define this function that will return the JSON data. We create a sample JavaScript object called hello, and if the request carries an index parameter (that is, it was routed through the 'hi/:index' handler), we return only the matching element. Note that the parameterized branch returns early, so only one response is ever sent:

    function respond(req, res, next) {
      console.log("Got HTTP " + req.method + " on " + req.url + " responding");
      var hello = [{
        'id': '0',
        'hello': 'world'
      }, {
        'id': '1',
        'say': 'what'
      }];
      if (req.params.index) {
        var found = hello[req.params.index];
        if (found) {
          res.send(found);
        } else {
          res.status(404);
          res.send();
        }
        addHeaders(req, res);
        return next();
      }
      res.send(hello);
      addHeaders(req, res);
      return next();
    }

The addHeaders function that we call before returning adds headers that enable access to resources served from a different domain or a different server port:

    function addHeaders(req, res) {
      res.header("Access-Control-Allow-Origin", "*");
      res.header("Access-Control-Allow-Headers", "X-Requested-With");
    }

The definition of these headers and what they mean will be discussed later in this article. For now, let's just say they enable access to the resources from a browser using AJAX.
At the end, we add a block of code that sets the server to listen on port 8080:

    server.listen(8080, function() {
      console.log('%s listening at %s', server.name, server.url);
    });

To start the server from the command line, we type the following command:

    node server.js

If everything went as it should, we get a message in the log:

    restify listening at http://0.0.0.0:8080

We can then test it by accessing the URL we defined directly from the browser: http://localhost:8080/hi.

Now we can proceed with the client-side HTML and JavaScript. We will implement two ways of reading data from the server, one using the standard XMLHttpRequest and the other using jQuery.get(). Note that not all features are fully compatible with all browsers.

We create a simple page with two div elements, one with the ID data and another with the ID say. These elements will be used as placeholders to load data from the server into:

    Hello <div id="data">loading</div>
    <hr/>
    Say <div id="say">No</div>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
    <script src="example.js"></script>
    <script src="exampleJQuery.js"></script>

In the example.js file, we define a function called getData that creates an AJAX call to a given url and calls back if the request succeeds:

    function getData(url, onSuccess) {
      var request = new XMLHttpRequest();
      request.open("GET", url, true);
      request.onload = function() {
        if (request.status === 200) {
          console.log(request);
          onSuccess(request.response);
        }
      };
      request.send(null);
    }

After that, we could call the function directly, but in order to demonstrate that the call happens after the page is loaded, we call it after a timeout of three seconds:

    setTimeout(
      function() {
        getData(
          'http://localhost:8080/hi',
          function(response) {
            console.log('finished getting data');
            var div = document.getElementById('data');
            var data = JSON.parse(response);
            div.innerHTML = data[0].hello;
          });
      }, 3000);

The jQuery version is a lot cleaner, as the complexity that comes with the standard DOM API and event handling is reduced substantially:

    (function() {
      $.getJSON('http://localhost:8080/hi/1', function(data) {
        $('#say').text(data.say);
      });
    }());

How it works...

At the beginning, we installed the dependency using npm install restify; this is sufficient to have it working, but npm also has a more expressive way of specifying dependencies. We can add a file called package.json, a packaging format that is mainly used for publishing details of Node.js applications. In our case, we can define package.json with the following code:

    {
      "name": "ch8-tip1-http-get-example",
      "description": "example on http get",
      "dependencies": ["restify"],
      "author": "Mite Mitreski",
      "main": "html5dasc",
      "version": "0.0.1"
    }

If we have a file like this, npm automatically handles the installation of dependencies when npm install is called from the command line in the directory where package.json is placed.

Restify has simple routing where functions are mapped to appropriate methods for a given URL. An HTTP GET request for '/hi' is mapped with server.get('hi', theCallback), where theCallback is executed and a response should be returned. When we have a parameterized resource, for example 'hi/:index', the value associated with :index is available under req.params. For example, in a request to '/hi/john', we can access the john value simply as req.params.index. Additionally, the value for index is automatically URL-decoded before it is passed to our handler.
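If you prefer the command line over the browser, the routes can also be exercised with curl; assuming the server above is running, the output should look roughly like the following (exact formatting may differ):

    $ curl http://localhost:8080/hi
    [{"id":"0","hello":"world"},{"id":"1","say":"what"}]

    $ curl http://localhost:8080/hi/1
    {"id":"1","say":"what"}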
One other notable part of the request handlers in restify is the next() function that we call at the end. In our case it does not make much difference, but in general we are responsible for calling it if we want the next handler function in the chain to be called. For exceptional circumstances, there is also an option to call next() with an error object, triggering custom responses.

When it comes to the client-side code, XMLHttpRequest is the mechanism behind the async calls, and by calling request.open("GET", url, true) with the last parameter set to true, we get truly asynchronous execution. Now you might be wondering why this parameter is here; isn't the call already done after loading the page? That is true, but if the parameter were set to false, the execution of the request would be blocking, or to put it in layman's terms, the script would pause until we get a response. This might look like a small detail, but it can have a huge impact on performance.

The jQuery part is pretty straightforward; there is a function that accepts a URL value of the resource, an optional data object, and a success function that gets called after successfully getting a response:

    jQuery.getJSON( url [, data ] [, success(data, textStatus, jqXHR) ] )

When we open index.htm, the server should log something like the following:

    Got HTTP GET on /hi/1 responding
    Got HTTP GET on /hi responding

One request is from the jQuery code and the other from the plain JavaScript.

There's more...

XMLHttpRequest Level 2 is one of the newer improvements added to browsers; although it is not part of HTML5, it is still a significant change. There are several features in the Level 2 changes, mostly to enable working with files and data streams, but there is one simplification we already used. Previously, we would have had to use onreadystatechange and go through all of the states, and if the readyState was 4, which is equal to DONE, we could read the data:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'someurl', true);
    xhr.onreadystatechange = function(e) {
      if (this.readyState == 4 && this.status == 200) {
        // response is loaded
      }
    }

In a Level 2 request, however, we can use request.onload = function() {} directly, without checking states. The possible states are as follows:

    Value  State             Description
    0      UNSENT            open() has not been called yet
    1      OPENED            open() has been called
    2      HEADERS_RECEIVED  send() has been called and the headers are available
    3      LOADING           the response body is being downloaded
    4      DONE              the operation is complete

One other thing to note is that XMLHttpRequest Level 2 is supported in all major browsers and IE 10; the older XMLHttpRequest has a different way of instantiation on older versions of IE (older than IE 7), where we access it through an ActiveX object via new ActiveXObject("Msxml2.XMLHTTP.6.0");.

Creating a request with custom headers

HTTP headers are a part of the request being sent to the server. Many of them give information about the client's user agent setup and configuration, which is sometimes the basis for deciding how the requested resources are served. Several of them, such as ETag, Expires, and If-Modified-Since, are closely related to caching, while others, such as DNT, which stands for "Do Not Track" (http://www.w3.org/2011/tracking-protection/drafts/tracking-dnt.html), can be quite controversial. In this recipe, we will take a look at using a custom X-Myapp header in our server and client-side code.

Getting ready

The server will be implemented using Node.js.
In this example, again for simplicity, we will use restify (http://mcavage.github.io/node-restify/). Also, monitoring the console in your browser and on the server is crucial in order to understand what happens in the background.

How to do it...

We can start by defining the server-side dependencies in the package.json file:

    {
      "name": "ch8-tip2-custom-headers",
      "dependencies": ["restify"],
      "main": "html5dasc",
      "version": "0.0.1"
    }

After that, we can call npm install from the command line, which automatically retrieves restify and places it in a node_modules folder created in the root directory of the project. We can then proceed to creating the server-side code in a server.js file, where we set the server to listen on port 8080 and add a route handler for 'hi', plus one for every other path when the request method is HTTP OPTIONS:

    var restify = require('restify');
    var server = restify.createServer();

    server.get('hi', addHeaders, respond);
    server.opts(/.*/, addHeaders, function (req, res, next) {
      console.log("Got HTTP " + req.method + " on " + req.url + " with headers\n");
      res.send(200);
      return next();
    });

    server.listen(8080, function() {
      console.log('%s listening at %s', server.name, server.url);
    });

In most cases, the documentation should be enough when we write applications built on restify, but sometimes it is a good idea to take a look at the source code as well. It can be found at https://github.com/mcavage/node-restify/.

One thing to notice is that we can have multiple chained handlers; in this case, we have addHeaders before the others. In order for the next handler in the chain to be reached, next() should be called:

    function addHeaders(req, res, next) {
      res.setHeader("Access-Control-Allow-Origin", "*");
      res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, X-Myapp');
      res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
      res.setHeader('Access-Control-Expose-Headers', 'X-Myapp, X-Requested-With');
      return next();
    };

The addHeaders function adds access control options in order to enable cross-origin resource sharing. Cross-origin resource sharing (CORS) defines a way in which the browser and server can interact to determine whether a cross-origin request should be allowed. It is more flexible than forbidding cross-origin requests outright, yet more secure than simply allowing all of them.

After this, we can create the handler function that returns a JSON response containing the headers the server received and a hello world kind of object:

    function respond(req, res, next) {
      console.log("Got HTTP " + req.method + " on " + req.url + " with headers\n");
      console.log("Request: ", req.headers);
      var hello = [{
        'id': '0',
        'hello': 'world',
        'headers': req.headers
      }];
      res.send(hello);
      console.log('Response:\n', res.headers());
      return next();
    }

We additionally log the request and response headers to the server console in order to see what happens in the background. To make that background concrete, the next block sketches roughly what the preflight negotiation for our setup looks like on the wire.
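This is an illustrative trace, not captured output; the exact header set varies by browser, but with the addHeaders function above, the exchange should look approximately like this:

    OPTIONS /hi HTTP/1.1
    Host: localhost:8080
    Origin: http://localhost:8000
    Access-Control-Request-Method: GET
    Access-Control-Request-Headers: x-myapp

    HTTP/1.1 200 OK
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Headers: X-Requested-With, X-Myapp
    Access-Control-Allow-Methods: GET, OPTIONS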
For the client-side code, we want both a plain "vanilla" JavaScript approach and a jQuery method, so we include example.js and exampleJQuery.js, as well as a few div elements that we will use for displaying data retrieved from the server:

    Hi <div id="data">loading</div>
    <hr/>
    Headers list from the request:
    <div id="headers"></div>
    <hr/>
    Data from jQuery: <div id="dataReceived">loading</div>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
    <script src="example.js"></script>
    <script src="exampleJQuery.js"></script>

A simple way to add headers is to call setRequestHeader on the XMLHttpRequest object after the call to open():

    function getData(url, onSuccess) {
      var request = new XMLHttpRequest();
      request.open("GET", url, true);
      request.setRequestHeader("X-Myapp", "super");
      request.setRequestHeader("X-Myapp", "awesome");
      request.onload = function() {
        if (request.status === 200) {
          onSuccess(request.response);
        }
      };
      request.send(null);
    }

XMLHttpRequest automatically sets headers such as Content-Length, Referer, and User-Agent, and does not allow you to change them from JavaScript. A more complete list of these headers and the reasoning behind this can be found in the W3C documentation at http://www.w3.org/TR/XMLHttpRequest/#the-setrequestheader%28%29-method.

To print out the results, we add a function that appends each of the header keys and values to an unordered list:

    getData(
      'http://localhost:8080/hi',
      function(response) {
        console.log('finished getting data');
        var data = JSON.parse(response);
        document.getElementById('data').innerHTML = data[0].hello;
        var headers = data[0].headers,
            headersList = "<ul>";
        for (var key in headers) {
          headersList += '<li><b>' + key + '</b>: ' + headers[key] + '</li>';
        }
        headersList += "</ul>";
        document.getElementById('headers').innerHTML = headersList;
      });

When this gets executed, a list of all the request headers should be displayed on the page, including our custom x-myapp:

    host: localhost:8080
    connection: keep-alive
    origin: http://localhost:8000
    x-myapp: super, awesome
    user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.27 (KHTML, like Gecko) Chrome/26.0.1386.0 Safari/537.27

The jQuery approach is far simpler; we can use the beforeSend hook to call a function that sets the 'x-myapp' header. When we receive the response, we write it to the element with the ID dataReceived:

    $.ajax({
      url: 'http://localhost:8080/hi',
      beforeSend: function (xhr) {
        xhr.setRequestHeader('x-myapp', 'this was easy');
      },
      success: function (data) {
        $('#dataReceived').text(data[0].headers['x-myapp']);
      }
    });

The output from the jQuery example is the data contained in the x-myapp header:

    Data from jQuery: this was easy

How it works...

You may have noticed that on the server side we added a route with a handler for the HTTP OPTIONS method, but we never explicitly made a call to it. If we take a look at the server log, there should be something like the following output:

    Got HTTP OPTIONS on /hi with headers
    Got HTTP GET on /hi with headers

This happens because the browser first issues a preflight request, which in a way is the browser's question whether there is permission to make the "real" request. Once permission has been received, the original GET request happens. If the OPTIONS response is cached, the browser will not issue any extra preflight calls for subsequent requests.

The setRequestHeader function of XMLHttpRequest actually appends each value to a comma-separated list of values.
As we called the function two times, the value for the header is as follows:

    'x-myapp': 'super, awesome'

There's more...

For most use cases, we do not need custom headers to be part of our logic, but there are plenty of APIs that make good use of them. For example, many server-side technologies add the X-Powered-By header, which contains some meta information, such as JBoss 6 or PHP/5.3.0. Another example is Google Cloud Storage, where, among other headers, there are x-goog-meta-prefixed headers such as x-goog-meta-project-name and x-goog-meta-project-manager.

Versioning your API

We do not always arrive at the best solution in the first implementation. An API can be extended up to a certain point, but beyond that it needs to undergo structural changes. By then we may already have users that depend on the current version, so we need a way to serve different representation versions of the same resource. Once a module has users, its API cannot be changed at will.

One way to resolve this issue is so-called URL versioning, where we simply add a prefix. For example, if the old URL was http://example.com/rest/employees, the new one could be http://example.com/rest/v1/employees, or, under a subdomain, it could be http://v1.example.com/rest/employee. This approach only works if you have direct control over all the servers and clients. Otherwise, you need a way of handling fallback to older versions.

In this recipe, we are going to implement so-called "semantic versioning" (http://semver.org/), using HTTP headers to specify accepted versions.

Getting ready

The server will be implemented using Node.js. In this example, we will use restify (http://mcavage.github.io/node-restify/) for the server-side logic, and we will monitor the requests to understand what is sent.

How to do it...

Let's perform the following steps.

We need to define the dependencies first, and after installing restify, we can proceed to the creation of the server code. The main difference from the previous examples is the use of the Accept-Version header. restify has built-in handling for this header using versioned routes. After creating the server object, we can set which methods get called for which version:

    server.get({ path: "hi", version: '2.1.1' }, addHeaders, helloV2, logReqRes);
    server.get({ path: "hi", version: '1.1.1' }, addHeaders, helloV1, logReqRes);

We also need the handler for HTTP OPTIONS, as we are using cross-origin resource sharing and the browser needs to do the additional request in order to get permissions:

    server.opts(/.*/, addHeaders, logReqRes, function (req, res, next) {
      res.send(200);
      return next();
    });

The handlers for Version 1 and Version 2 will return different objects so that we can easily notice the difference between the API calls. In the general case, the resource should be the same, but it can have structural changes between versions.
For Version 1, we can have the following:

    function helloV1(req, res, next) {
      var hello = [{
        'id': '0',
        'hello': 'grumpy old data',
        'headers': req.headers
      }];
      res.send(hello);
      return next();
    }

As for Version 2, we have the following:

    function helloV2(req, res, next) {
      var hello = [{
        'id': '0',
        'awesome-new-feature': {
          'hello': 'awesomeness'
        },
        'headers': req.headers
      }];
      res.send(hello);
      return next();
    }

One other thing we must do is add the CORS headers in order to allow the accept-version header, so in the routes we included addHeaders, which should look something like the following:

    function addHeaders(req, res, next) {
      res.setHeader("Access-Control-Allow-Origin", "*");
      res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, accept-version');
      res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
      res.setHeader('Access-Control-Expose-Headers', 'X-Requested-With, accept-version');
      return next();
    };

Note that you should not forget the call to next(), which invokes the next function in the route chain.

For simplicity, we will only implement the client side in jQuery, so we create a simple HTML document where we include the necessary JavaScript dependencies:

    Old api: <div id="data">loading</div>
    <hr/>
    New one: <div id="dataNew"> </div>
    <hr/>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
    <script src="exampleJQuery.js"></script>

In the exampleJQuery.js file, we make two AJAX calls to our REST API, one set to use Version 1 and the other to use Version 2:

    $.ajax({
      url: 'http://localhost:8080/hi',
      type: 'GET',
      dataType: 'json',
      success: function (data) {
        $('#data').text(data[0].hello);
      },
      beforeSend: function (xhr) {
        xhr.setRequestHeader('accept-version', '~1');
      }
    });

    $.ajax({
      url: 'http://localhost:8080/hi',
      type: 'GET',
      dataType: 'json',
      success: function (data) {
        $('#dataNew').text(data[0]['awesome-new-feature'].hello);
      },
      beforeSend: function (xhr) {
        xhr.setRequestHeader('accept-version', '~2');
      }
    });

Notice that the accept-version header contains the values ~1 and ~2. These designate that all semantic versions such as 1.1.0, 1.1.1, and 1.2.1 will be matched by ~1, and similarly for ~2. At the end, we should get output like the following text:

    Old api: grumpy old data
    New one: awesomeness

How it works...

Versioned routes are a built-in feature of restify that works through the accept-version header. In our example, we used versions ~1 and ~2, but what happens if we don't specify a version? restify makes the choice for us: the request is treated in the same manner as if the client had sent a * version, and the first matching route defined in our code is used. There is also an option to set up routes that match multiple versions by passing a list of versions for a certain handler:

    server.get({ path: 'hi', version: ['1.1.0', '1.1.1', '1.2.1'] }, sendOld);

The reason this type of versioning is very suitable for constantly growing applications is that, as the API changes, clients can stick with their version of the API without any additional effort or changes in client-side development; no application updates are forced on them. On the other hand, if a client is sure that its application will work on newer API versions, it can simply change the request headers.

There's more...

Versioning can also be implemented using custom content types prefixed with vnd, for example, application/vnd.mycompany.user-v1. An example of this is Google Earth's KML content type, which is defined as application/vnd.google-earth.kml+xml. Notice that the content type can be in two parts; we could have application/vnd.mycompany-v1+json, where the second part is the format of the response. A minimal sketch of this idea follows.
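The following is our own illustration of the idea, not restify's versioned-route mechanism: a single handler that dispatches on the Accept header. The payloads are the ones from this recipe; the handler name is made up:

    function helloByAccept(req, res, next) {
      // Illustrative only: choose a representation based on the Accept header.
      var accept = req.headers['accept'] || '';
      if (accept.indexOf('application/vnd.mycompany.user-v1+json') !== -1) {
        res.send([{ 'id': '0', 'hello': 'grumpy old data' }]);
      } else {
        res.send([{ 'id': '0', 'awesome-new-feature': { 'hello': 'awesomeness' } }]);
      }
      return next();
    }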
Fetching JSON data with JSONP

JSONP, or JSON with padding, is a mechanism for making cross-domain requests by taking advantage of the <script> tag. The AJAX transport is done by simply setting the src attribute of a script element, or adding the element itself if not present. The browser performs an HTTP request to download the URL specified, and that request is not subject to the same-origin policy, meaning that we can use it to get data from servers that are not under our control. In this recipe, we will create a simple JSONP request, and a simple server to back it up.

Getting ready

We will make a simplified implementation of the server we used in previous examples, so we need Node.js and restify (http://mcavage.github.io/node-restify/) installed, either via a package.json definition or a simple npm install.

How to do it...

First, we create a simple route handler that returns a JSON object:

    function respond(req, res, next) {
      console.log("Got HTTP " + req.method + " on " + req.url + " responding");
      var hello = [{
        'id': '0',
        'what': 'hi there stranger'
      }];
      res.send(hello);
      return next();
    }

We could roll our own version that wraps the response in a JavaScript function with a given name, but in order to enable JSONP with restify, we can simply enable the bundled plugin. This is done by specifying the plugin to be used:

    var server = restify.createServer();
    server.use(restify.jsonp());
    server.get('hi', respond);

After this, we just set the server to listen on port 8080:

    server.listen(8080, function() {
      console.log('%s listening at %s', server.name, server.url);
    });

The built-in plugin checks the request's query string for parameters called callback or jsonp, and if one is found, the result will be JSONP with the function name taken from that parameter's value. For example, in our case, if we open the browser at http://localhost:8080/hi, we get the following:

    [{"id":"0","what":"hi there stranger"}]

If we access the same URL with the callback or jsonp parameter set, such as http://localhost:8080/hi?callback=great, we receive the same data wrapped with that function name:

    great([{"id":"0","what":"hi there stranger"}]);

This is where the P in JSONP, which stands for padding, comes into the picture. A hand-rolled variant of what the plugin does for us is sketched below.
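For illustration only, here is a minimal hand-rolled equivalent using nothing but Node's core modules; the port 8081 and the overall structure are our own choices, not restify internals:

    var http = require('http');
    var url = require('url');

    http.createServer(function (req, res) {
      // Parse the query string and build the same JSON payload as this recipe.
      var query = url.parse(req.url, true).query;
      var payload = JSON.stringify([{ id: '0', what: 'hi there stranger' }]);
      var cb = query.callback || query.jsonp;
      if (cb) {
        // Pad the JSON with the requested function name and serve it as script.
        res.writeHead(200, { 'Content-Type': 'application/javascript' });
        res.end(cb + '(' + payload + ');');
      } else {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(payload);
      }
    }).listen(8081);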
So, what we need to do next is create an HTML file where we show the data from the server, including two scripts, one for the pure JavaScript approach and another for the jQuery way:

    <b>Hello far away server: </b>
    <div id="data">loading</div>
    <hr/>
    <div id="oneMoreTime">...</div>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
    <script src="example.js"></script>
    <script src="exampleJQuery.js"></script>

We can proceed with the creation of example.js, where we create two functions; one serves as the callback upon receiving the data, and the other creates a script element whose src is set to http://localhost:8080/hi?callback=cool.run:

    var cool = (function() {
      var module = {};

      module.run = function(data) {
        document.getElementById('data').innerHTML = data[0].what;
      };

      module.addElement = function() {
        var script = document.createElement('script');
        script.src = 'http://localhost:8080/hi?callback=cool.run';
        document.getElementById('data').appendChild(script);
        return true;
      };

      return module;
    }());

Afterwards, we only need to call the function that adds the element:

    cool.addElement();

This should read the data from the server and show a result similar to the following:

    Hello far away server: hi there stranger

Because cool is built by an immediately invoked function expression, we can call addElement on it directly. The jQuery example is a lot simpler; we can set the dataType to "jsonp" and everything else is the same as any other AJAX call, at least from the API point of view:

    $.ajax({
      type: "GET",
      dataType: "jsonp",
      url: 'http://localhost:8080/hi',
      success: function(obj) {
        $('#oneMoreTime').text(obj[0].what);
      }
    });

We can use the standard success callback to handle the data received from the server, and we don't have to specify the callback parameter in the request; jQuery automatically appends one to the URL and delegates the call to the success callback.

How it works...

The first large leap we make here is trusting the source of the data: the result from the server is evaluated as JavaScript as soon as it is downloaded. There have been some efforts to define a safer JSONP at http://json-p.org/, but they are far from widespread. The transport itself is an HTTP GET, which adds another major limitation to usability: Hypermedia as the Engine of Application State (HATEOAS), among other things, prescribes the use of the other HTTP methods for create, update, and delete operations, making JSONP unsuitable for those use cases.

Another interesting point is how jQuery delegates the call to the success callback. In order to achieve this, a unique function name is created and sent as the callback parameter, for example:

    /hi?callback=jQuery182031846177391707897_1359599143721&_=1359599143727

This function later delegates back to the appropriate handler of jQuery.ajax.

There's more...

With jQuery, we can also pin the callback function name, and stop jQuery from appending its own callback parameter, when the server expects a fixed function. This is done using the following configuration:

    jsonp: false,
    jsonpCallback: "myCallback"

Finally, remember that with JSONP no XMLHttpRequest is made, so we cannot expect the XHR-specific callbacks used with regular AJAX calls to be executed, or their parameters to be filled in; it is a very common mistake to expect just that. More on this can be found in the jQuery documentation at http://api.jquery.com/category/ajax/.

Deployment and Post Deployment

Packt
17 Nov 2014
30 min read
In this article by Shalabh Aggarwal, the author of Flask Framework Cookbook, we will talk about various application-deployment techniques, followed by some monitoring tools that are used post-deployment.

Deploying an application, and managing it after deployment, is as important as developing it. There can be various ways of deploying an application, and choosing the best way depends on the requirements. Deploying an application correctly is very important from the points of view of both security and performance. There are multiple ways of monitoring an application after deployment; some are paid and others are free to use. Using them again depends on the requirements and the features they offer. Each tool and technique has its own set of features. For example, adding too much monitoring can prove to be an extra overhead for the application and for the developers as well. Similarly, missing out on monitoring can lead to undetected user errors and overall user dissatisfaction. Hence, we should choose the tools wisely, and they will ease our lives to the maximum. Among the post-deployment monitoring tools, we will discuss Pingdom and New Relic. Sentry is another tool that can prove to be the most beneficial of all from a developer's perspective.

Deploying with Apache

First, we will learn how to deploy a Flask application with Apache, which is, unarguably, the most popular HTTP server. For Python web applications, we will use mod_wsgi, which implements a simple Apache module that can host any Python application that supports the WSGI interface. Remember that mod_wsgi is not the same as Apache and needs to be installed separately.

Getting ready

We will start with our catalog application and make appropriate changes to it to make it deployable using the Apache HTTP server. First, we should make our application installable so that our application and all its libraries are on the Python load path. This can be done using a setup.py script. There will be a few changes to the script as per this application. The major changes are mentioned here:

    packages=[
        'my_app',
        'my_app.catalog',
    ],
    include_package_data=True,
    zip_safe=False,

First, we mentioned all the packages that need to be installed as part of our application. Each of these needs to have an __init__.py file. The zip_safe flag tells the installer not to install this application as a ZIP file. The include_package_data statement reads from a MANIFEST.in file in the same folder and includes any package data mentioned there. Our MANIFEST.in file looks like:

    recursive-include my_app/templates *
    recursive-include my_app/static *
    recursive-include my_app/translations *

Now, just install the application using the following command:

    $ python setup.py install

Installing mod_wsgi is usually OS-specific. Installing it on a Debian-based distribution should be as easy as using the packaging tool, that is, apt or aptitude. For details, refer to https://code.google.com/p/modwsgi/wiki/InstallationInstructions and https://github.com/GrahamDumpleton/mod_wsgi.
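To put the fragment above in context, a complete setup.py might look roughly like the following; the project name, version, and dependency list are illustrative, not taken from the book:

    from setuptools import setup

    setup(
        name='flask-catalog',          # illustrative project name
        version='0.1',                 # illustrative version
        packages=[
            'my_app',
            'my_app.catalog',
        ],
        include_package_data=True,     # pulls in files listed in MANIFEST.in
        zip_safe=False,                # install unpacked, not as a zipped egg
        install_requires=['Flask'],    # assumed runtime dependency
    )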
How to do it…

We need to create some more files, the first one being app.wsgi, which loads our application as a WSGI application:

    activate_this = '<Path to virtualenv>/bin/activate_this.py'
    execfile(activate_this, dict(__file__=activate_this))

    from my_app import app as application
    import sys, logging
    logging.basicConfig(stream=sys.stderr)

As we perform all our installations inside virtualenv, we need to activate the environment before our application is loaded. In the case of system-wide installations, the first two statements are not needed. Then, we need to import our app object as application, which is used as the application being served. The last two lines are optional, as they just stream the output to the standard logger, which is disabled by mod_wsgi by default. The app object needs to be imported as application because mod_wsgi expects the application keyword.

Next comes a config file that will be used by the Apache HTTP server to serve our application correctly from specific locations. The file is named apache_wsgi.conf:

    <VirtualHost *>
        WSGIScriptAlias / <Path to application>/flask_catalog_deployment/app.wsgi
        <Directory <Path to application>/flask_catalog_deployment>
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

The preceding code is the Apache configuration that tells the HTTP server about the directories from which the application has to be loaded. The final step is to add the apache_wsgi.conf file to apache2/httpd.conf, so that our application is loaded when the server runs:

    Include <Path to application>/flask_catalog_deployment/apache_wsgi.conf

How it works…

Let's restart the Apache server service using the following command:

    $ sudo apachectl restart

Open http://127.0.0.1/ in the browser to see the application's home page. Any errors that come up can be seen at /var/log/apache2/error_log (this path can differ depending on the OS).

There's more…

After all this, it is possible that the product images uploaded as part of product creation do not work. For this, we should make a small modification to our application's configuration:

    app.config['UPLOAD_FOLDER'] = '<Some static absolute path>/flask_test_uploads'

We opted for a static path because we do not want it to change every time the application is modified or installed. Now, we will include the path chosen in the preceding code in apache_wsgi.conf:

    Alias /static/uploads/ "<Some static absolute path>/flask_test_uploads/"
    <Directory "<Some static absolute path>/flask_test_uploads">
        Order allow,deny
        Options Indexes
        Allow from all
        IndexOptions FancyIndexing
    </Directory>

After this, install the application and restart apachectl.

See also

http://httpd.apache.org/
https://code.google.com/p/modwsgi/
http://wsgi.readthedocs.org/en/latest/
https://pythonhosted.org/setuptools/setuptools.html#setting-the-zip-safe-flag

Deploying with uWSGI and Nginx

For those who are already aware of the usefulness of uWSGI and Nginx, there is not much to explain. uWSGI is a protocol as well as an application server, and provides a complete stack to build hosting services. Nginx is a reverse proxy and HTTP server that is very lightweight and capable of handling virtually unlimited requests. Nginx works seamlessly with uWSGI and provides many under-the-hood optimizations for better performance.

Getting ready

We will use our application from the last recipe, Deploying with Apache, along with the same app.wsgi, setup.py, and MANIFEST.in files. Other changes made to the application's configuration in the last recipe will apply to this recipe as well.
Disable any other HTTP servers that might be running, such as Apache and so on.

How to do it…

First, we need to install uWSGI and Nginx. On Debian-based distributions such as Ubuntu, they can be easily installed using the following commands:

    # sudo apt-get install nginx
    # sudo apt-get install uwsgi

You can also install uWSGI inside a virtualenv using the pip install uwsgi command. Again, these are OS-specific, so refer to the respective documentation as per the OS used.

Make sure that you have an apps-enabled folder for uWSGI, where we will keep our application-specific uWSGI configuration files, and a sites-enabled folder for Nginx, where we will keep our site-specific configuration files. Usually, these are already present in most installations in the /etc/ folder. If not, refer to the OS-specific documentation to figure out the same.

Next, we will create a file named uwsgi.ini in our application:

    [uwsgi]
    http-socket = :9090
    plugin = python
    wsgi-file = <Path to application>/flask_catalog_deployment/app.wsgi
    processes = 3

To test whether uWSGI is working as expected, run the following command:

    $ uwsgi --ini uwsgi.ini

The preceding file and command are equivalent to running the following command:

    $ uwsgi --http-socket :9090 --plugin python --wsgi-file app.wsgi

Now, point your browser to http://127.0.0.1:9090/; this should open up the home page of the application.

Create a soft link of this file to the apps-enabled folder mentioned earlier using the following command:

    $ ln -s <path/to/uwsgi.ini> <path/to/apps-enabled>

Before moving ahead, edit the preceding file to replace http-socket with socket. This changes the protocol from HTTP to uWSGI (read more about it at http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html).

Now, create a new file called nginx-wsgi.conf. This contains the Nginx configuration needed to serve our application and the static content:

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9090;
    }
    location /static/uploads/ {
        alias <Some static absolute path>/flask_test_uploads/;
    }

In the preceding block, uwsgi_pass specifies the uWSGI server that needs to be mapped to the specified location.

Create a soft link of this file to the sites-enabled folder mentioned earlier using the following command:

    $ ln -s <path/to/nginx-wsgi.conf> <path/to/sites-enabled>

Edit the nginx.conf file (usually found at /etc/nginx/nginx.conf) to add the following line inside the first server block, before the last }:

    include <path/to/sites-enabled>/*;

After all of this, reload the Nginx server using the following command:

    $ sudo nginx -s reload

Point your browser to http://127.0.0.1/ to see the application being served via Nginx and uWSGI.

The preceding instructions can vary depending on the OS being used, and different versions of the same OS can also affect the paths and commands used. Different versions of these packages can also have some variations in usage. Refer to the documentation links provided in the See also section.
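To recap how the two location blocks above fit together once included in nginx.conf, here is an illustrative, minimal server block; the listen and server_name values and the absolute path are placeholders of our own, not from the book:

    server {
        listen 80;
        server_name localhost;

        # Forward application traffic to uWSGI over the uwsgi protocol.
        location / {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:9090;
        }

        # Serve uploaded files directly from disk.
        location /static/uploads/ {
            alias /srv/flask_test_uploads/;
        }
    }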
See also

Refer to http://uwsgi-docs.readthedocs.org/en/latest/ for more information on uWSGI.
Refer to http://nginx.com/ for more information on Nginx.
There is a good article by DigitalOcean on this topic; I advise you to go through it for a better understanding. It is available at https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-applications-using-uwsgi-web-server-with-nginx.
For insight into the difference between Apache and Nginx, the article by Anturis at https://anturis.com/blog/nginx-vs-apache/ is pretty good.

Deploying with Gunicorn and Supervisor

Gunicorn is a WSGI HTTP server for UNIX. It is very simple to implement, ultra light, and fairly speedy. Its simplicity lies in its broad compatibility with various web frameworks. Supervisor is a monitoring tool that controls various child processes and handles the starting/restarting of these child processes when they exit abruptly for some reason. It can be extended to control processes via the XML-RPC API over remote locations without logging in to the server (we won't discuss this here, as it is out of the scope of this book).

One thing to remember is that these tools can be used along with the tools mentioned in the previous recipes, such as using Nginx as a proxy server. This is left for you to try on your own.

Getting ready

We will start with the installation of both packages, that is, gunicorn and supervisor. Both can be installed directly using pip:

    $ pip install gunicorn
    $ pip install supervisor

How to do it…

To check whether the gunicorn package works as expected, just run the following command from inside our application folder:

    $ gunicorn -w 4 -b 127.0.0.1:8000 my_app:app

After this, point your browser to http://127.0.0.1:8000/ to see the application's home page.

Now, we need to do the same using Supervisor, so that the application runs as a daemon controlled by Supervisor itself rather than by human intervention. First of all, we need a Supervisor configuration file. This can be achieved by running the following command from virtualenv. Supervisor, by default, looks for an etc folder that has a file named supervisord.conf. In system-wide installations, this folder is /etc/; in virtualenv, it looks for an etc folder in virtualenv first and then falls back to /etc/:

    $ echo_supervisord_conf > etc/supervisord.conf

The echo_supervisord_conf program is provided by Supervisor; it prints a sample config file to the location specified. The preceding command creates a file named supervisord.conf in the etc folder. Add the following block to this file:

    [program:flask_catalog]
    command=<path/to/virtualenv>/bin/gunicorn -w 4 -b 127.0.0.1:8000 my_app:app
    directory=<path/to/virtualenv>/flask_catalog_deployment
    ; run as a relevant non-root user
    user=someuser
    autostart=true
    autorestart=true
    stdout_logfile=/tmp/app.log
    stderr_logfile=/tmp/error.log

Note that you should never run applications as the root user. That is a huge security flaw in itself, as an application that crashes or is compromised can then harm the OS.

How it works…

Now, run the following commands:

    $ supervisord
    $ supervisorctl status
    flask_catalog   RUNNING   pid 40466, uptime 0:00:03

The first command invokes the supervisord server, and the next one gives the status of all the child processes. The tools discussed in this recipe can be coupled with Nginx acting as a reverse proxy server. I suggest that you try it yourself.

Every time you make a change to your application and wish to restart Gunicorn so that it reflects the changes, run the following command:

    $ supervisorctl restart all

You can also restart specific processes instead of everything:

    $ supervisorctl restart flask_catalog
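Incidentally, the Gunicorn flags used above can also live in a config file; a minimal, illustrative equivalent (the filename is our own) would be:

    # gunicorn_conf.py -- run with: gunicorn -c gunicorn_conf.py my_app:app
    workers = 4                  # same as -w 4
    bind = '127.0.0.1:8000'      # same as -b 127.0.0.1:8000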
See also

http://gunicorn-docs.readthedocs.org/en/latest/index.html
http://supervisord.org/index.html

Deploying with Tornado

Tornado is a complete web framework and a standalone web server in itself. Here, we will use Flask to create our application, which is basically a combination of URL routing and templating, and leave the server part to Tornado. Tornado is built to hold thousands of simultaneous standing connections and makes applications very scalable.

Tornado has limitations while working with WSGI applications, so choose wisely! Read more at http://www.tornadoweb.org/en/stable/wsgi.html#running-wsgi-apps-on-tornado-servers.

Getting ready

Installing Tornado can simply be done using pip:

    $ pip install tornado

How to do it…

Next, create a file named tornado_server.py and put the following code in it:

    from tornado.wsgi import WSGIContainer
    from tornado.httpserver import HTTPServer
    from tornado.ioloop import IOLoop
    from my_app import app

    http_server = HTTPServer(WSGIContainer(app))
    http_server.listen(5000)
    IOLoop.instance().start()

Here, we created a WSGI container for our application; this container is then used to create an HTTP server, and the application is hosted on port 5000.

How it works…

Run the Python file created in the previous section using the following command:

    $ python tornado_server.py

Point your browser to http://127.0.0.1:5000/ to see the home page being served.

We can couple Tornado with Nginx (as a reverse proxy to serve static content) and Supervisor (as a process manager) for the best results. This is left for you to try on your own.

Using Fabric for deployment

Fabric is a command-line tool in Python; it streamlines the use of SSH for application deployment and system-administration tasks. As it allows the execution of shell commands on remote servers, the overall process of deployment is simplified: the whole process can be condensed into a Python file that can be run whenever needed. It therefore saves the pain of logging in to the server and manually running commands every time an update has to be made.

Getting ready

Installing Fabric can simply be done using pip:

    $ pip install fabric

We will use the application from the Deploying with Gunicorn and Supervisor recipe and create a Fabric file that performs the same process against the remote server. For simplicity, let's assume that the remote server has already been set up, all the required packages have been installed, and a virtualenv environment has been created.

How to do it…

First, we need to create a file called fabfile.py in our application, preferably at the application's root directory, that is, alongside the setup.py and run.py files. Fabric, by default, expects this filename. If we use a different filename, it will have to be specified explicitly on execution.

A basic Fabric file will look like:

    from fabric.api import sudo, cd, prefix, run

    def deploy_app():
        "Deploy to the server specified"
        root_path = '/usr/local/my_env'

        with cd(root_path):
            with prefix("source %s/bin/activate" % root_path):
                with cd('flask_catalog_deployment'):
                    run('git pull')
                    run('python setup.py install')

                sudo('bin/supervisorctl restart all')

Here, we first moved into our virtualenv, activated it, and then moved into our application. Then, the code is pulled from the Git repository, and the updated application code is installed using setup.py install. After this, we restarted the supervisor processes so that the updated application is served. Most of the commands used here are self-explanatory, except prefix, which wraps all the succeeding commands in its block with the command provided. This means that the command to activate virtualenv will run first, and then all the commands in the with block will execute with virtualenv activated. The virtualenv will be deactivated as soon as control goes out of the with block.
How it works…

To run this file, we need to provide the remote server where the script will be executed. So, the command will look something like:

    $ fab -H my.remote.server deploy_app

Here, we specified the address of the remote host where we wish to deploy, and the name of the method to be called from the fab script.

There's more…

We can also specify the remote host inside our fab script; this can be a good idea if the deployment server remains the same most of the time. To do this, add the following code to the fab script:

    from fabric.api import settings

    def deploy_app_to_server():
        "Deploy to the server hardcoded"
        with settings(host_string='my.remote.server'):
            deploy_app()

Here, we have hardcoded the host and then called the method we created earlier to start the deployment process.

S3 storage for file uploads

Amazon describes S3 as storage for the Internet, designed to make web-scale computing easier for developers. S3 provides a very simple interface via web services; this makes storage and retrieval of any amount of data very simple, at any time, from anywhere on the Internet. Until now, in our catalog application, we saw that there were issues in managing the product images uploaded as a part of the creation process. The whole headache goes away if the images are stored somewhere globally and are easily accessible from anywhere. We will use S3 for this purpose.

Getting ready

Amazon offers boto, a complete Python library that interfaces with Amazon Web Services via web services. Almost all of the AWS features can be controlled using boto. It can be installed using pip:

    $ pip install boto

How to do it…

Now, we should make some changes to our existing catalog application to accommodate support for file uploads and retrieval from S3.

First, we need to store the AWS-specific configuration to allow boto to make calls to S3. Add the following statements to the application's configuration file, that is, my_app/__init__.py:

    app.config['AWS_ACCESS_KEY'] = 'Amazon Access Key'
    app.config['AWS_SECRET_KEY'] = 'Amazon Secret Key'
    app.config['AWS_BUCKET'] = 'flask-cookbook'

Next, we need to change our views.py file:

    from boto.s3.connection import S3Connection

This is the import that we need from boto. Next, find the following two lines in create_product():

    filename = secure_filename(image.filename)
    image.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))

Replace these two lines with:

    filename = image.filename
    conn = S3Connection(
        app.config['AWS_ACCESS_KEY'], app.config['AWS_SECRET_KEY']
    )
    bucket = conn.create_bucket(app.config['AWS_BUCKET'])
    key = bucket.new_key(filename)
    key.set_contents_from_file(image)
    key.make_public()
    key.set_metadata(
        'Content-Type', 'image/' + filename.split('.')[-1].lower()
    )

The last change goes to our product.html template, where we need to change the image src path. Replace the original img src statement with the following statement:

    <img src="{{ 'https://s3.amazonaws.com/' + config['AWS_BUCKET'] + '/' + product.image_path }}"/>

How it works…

Now, run the application as usual and create a product. When the created product is rendered, the product image will take a bit of time to come up, as it is now being served from S3 (and not from a local machine). If this happens, the integration with S3 has been done successfully.
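If you want to verify the upload from a script rather than the browser, a small check along the following lines should work with boto; the key name is a made-up example, and the Python 2 style matches the rest of the book:

    from boto.s3.connection import S3Connection

    # Illustrative check: fetch the uploaded key and print its public URL.
    conn = S3Connection('Amazon Access Key', 'Amazon Secret Key')
    bucket = conn.get_bucket('flask-cookbook')
    key = bucket.get_key('example-product.jpg')  # made-up filename
    if key is not None:
        # query_auth=False yields a plain public URL (the key was made public).
        print key.generate_url(expires_in=0, query_auth=False)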
Deploying with Heroku

Heroku is a cloud application platform that provides an easy and quick way to build and deploy web applications. Heroku manages the servers, deployment, and related operations, while developers spend their time on developing applications. Deploying with Heroku is pretty simple with the help of the Heroku toolbelt, a bundle of tools that make deployment with Heroku a cakewalk.

Getting ready

We will proceed with the application from the previous recipe, which has S3 support for uploads.

As mentioned earlier, the first step is to download the Heroku toolbelt, which can be downloaded as per the OS from https://toolbelt.heroku.com/. Once the toolbelt is installed, a certain set of commands becomes available in the terminal; we will see them later in this recipe. It is advised that you perform the Heroku deployment from a fresh virtualenv where only the packages required by our application are installed and nothing else. This will make the deployment process faster and easier.

Now, run the following command to log in to your Heroku account and sync your machine's SSH key with the server:

    $ heroku login
    Enter your Heroku credentials.
    Email: shalabh7777@gmail.com
    Password (typing will be hidden):
    Authentication successful.

You will be prompted to create a new SSH key if one does not exist. Proceed accordingly. Remember: before all this, you need to have a Heroku account, available at https://www.heroku.com/.

How to do it…

Now, we already have an application that needs to be deployed to Heroku. First, Heroku needs to know the command that it needs to run while deploying the application. This is done in a file named Procfile:

    web: gunicorn -w 4 my_app:app

Here, we tell Heroku to run this command to run our web application. There are a lot of other configurations and commands that can go into Procfile; for more details, read the Heroku documentation.

Heroku also needs to know the dependencies that must be installed in order to successfully install and run our application. This is done via the requirements.txt file:

    Flask==0.10.1
    Flask-Restless==0.14.0
    Flask-SQLAlchemy==1.0
    Flask-WTF==0.10.0
    Jinja2==2.7.3
    MarkupSafe==0.23
    SQLAlchemy==0.9.7
    WTForms==2.0.1
    Werkzeug==0.9.6
    boto==2.32.1
    gunicorn==19.1.1
    itsdangerous==0.24
    mimerender==0.5.4
    python-dateutil==2.2
    python-geoip==1.2
    python-geoip-geolite2==2014.0207
    python-mimeparse==0.1.4
    six==1.7.3
    wsgiref==0.1.2

This file contains all the dependencies of our application, the dependencies of those dependencies, and so on. An easy way to generate this file is to use the pip freeze command:

    $ pip freeze > requirements.txt

This will create/update the requirements.txt file with all the packages installed in virtualenv.

Now, we need to create a Git repo of our application. For this, we will run the following commands:

    $ git init
    $ git add .
    $ git commit -m "First Commit"

Now, we have a Git repo with all our files added. Make sure that you have a .gitignore file in your repo, or at a global level, to prevent temporary files such as .pyc from being added to the repo.

Now, we need to create a Heroku application and push our application to Heroku:

    $ heroku create
    Creating damp-tor-6795...
    done, stack is cedar
    http://damp-tor-6795.herokuapp.com/ | git@heroku.com:damp-tor-6795.git
    Git remote heroku added

    $ git push heroku master

After the last command, a whole lot of output will get printed to the terminal, indicating all the packages being installed and, finally, the application being launched.

How it works…

After the preceding commands have successfully finished, just open the URL provided by Heroku at the end of the deployment in a browser, or run the following command:

    $ heroku open

This will open up the application's home page. Try creating a new product with an image and see the image being served from Amazon S3.

To see the logs of the application, run the following command:

    $ heroku logs

There's more…

There is a glitch with the deployment we just did. Every time we update the deployment via the git push command, the SQLite database gets overwritten. The solution is to use the Postgres setup provided by Heroku itself. I urge you to try this by yourself.

Deploying with AWS Elastic Beanstalk

In the last recipe, we saw how deployment to servers becomes easy with Heroku. Similarly, Amazon has a service named Elastic Beanstalk, which allows developers to deploy their applications to Amazon EC2 instances as easily as possible. With just a few configuration options, a Flask application can be deployed to AWS using Elastic Beanstalk in a couple of minutes.

Getting ready

We will start with our catalog application from the previous recipe, Deploying with Heroku. The only file that remains the same from that recipe is requirements.txt. The rest of the files that were added as a part of that recipe can be ignored or discarded for this one.

Now, the first thing that we need to do is download the AWS Elastic Beanstalk command-line tool library from the Amazon website (http://aws.amazon.com/code/6752709412171743). This downloads a ZIP file that needs to be unzipped and placed in a suitable place, preferably your workspace home. The path of this tool should be added to the PATH environment variable so that the commands are available throughout. This can be done via the export command, as shown:

    $ export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

This can also be added to the ~/.profile or ~/.bash_profile file using:

    export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

How to do it…

There are a few conventions that need to be followed in order to deploy using Beanstalk. Beanstalk assumes that there will be a file called application.py that contains the application object (in our case, the app object). Beanstalk treats this file as the WSGI file, and it is used for deployment. In the Deploying with Apache recipe, we had a file named app.wsgi in which we imported our app object as application, because apache/mod_wsgi needed it to be so. The same thing happens here too, because Amazon, by default, deploys using Apache behind the scenes.

The contents of this application.py file can be just a few lines, as shown here:

    from my_app import app as application
    import sys, logging
    logging.basicConfig(stream=sys.stderr)

Now, create a Git repo in the application and commit with all the files added:

    $ git init
    $ git add .
    $ git commit -m "First Commit"

Make sure that you have a .gitignore file in your repo, or at a global level, to prevent temporary files such as .pyc from being added to the repo.
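As an illustration (the exact entries depend on your tooling), such a .gitignore might contain:

    *.pyc
    __pycache__/
    *.egg-info/
    build/
    dist/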
Deploying with AWS Elastic Beanstalk
In the last recipe, we saw how deployment to servers becomes easy with Heroku. Similarly, Amazon has a service named Elastic Beanstalk, which allows developers to deploy their applications to Amazon EC2 instances as easily as possible. With just a few configuration options, a Flask application can be deployed to AWS using Elastic Beanstalk in a couple of minutes.
Getting ready
We will start with our catalog application from the previous recipe, Deploying with Heroku. The only file that remains the same from that recipe is requirements.txt. The rest of the files that were added as a part of that recipe can be ignored or discarded for this recipe. Now, the first thing that we need to do is download the AWS Elastic Beanstalk command-line tool library from the Amazon website (http://aws.amazon.com/code/6752709412171743). This will download a ZIP file that needs to be unzipped and placed in a suitable place, preferably your workspace home. The path of this tool should be added to the PATH environment so that the commands are available throughout. This can be done via the export command as shown:
$ export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/
This can also be added to the ~/.profile or ~/.bash_profile file using:
export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/
How to do it…
There are a few conventions that need to be followed in order to deploy using Beanstalk. Beanstalk assumes that there will be a file called application.py, which contains the application object (in our case, the app object). Beanstalk treats this file as the WSGI file, and this is used for deployment. In the Deploying with Apache recipe, we had a file named app.wsgi where we referred to our app object as application, because apache/mod_wsgi needed it to be so. The same thing happens here too, because Amazon, by default, deploys using Apache behind the scenes. The contents of this application.py file can be just a few lines, as shown here:
from my_app import app as application
import sys, logging
logging.basicConfig(stream = sys.stderr)
Now, create a Git repo in the application and commit with all the files added:
$ git init
$ git add .
$ git commit -m "First Commit"
Make sure that you have a .gitignore file in your repo or at a global level to prevent temporary files such as .pyc from being added to the repo. Now, we need to deploy to Elastic Beanstalk. Run the following command to do this:
$ eb init
The preceding command initializes the process for the configuration of your Elastic Beanstalk instance. It will ask for the AWS credentials, followed by a lot of other configuration options needed for the creation of the EC2 instance, which can be selected as needed. For more help on these options, refer to http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_flask.html. After this is done, run the following command to trigger the creation of the servers, followed by the deployment of the application:
$ eb start
Behind the scenes, the preceding command creates the EC2 instance (a volume), assigns an elastic IP, and then runs the following command to push our application to the newly created server for deployment:
$ git aws.push
This will take a few minutes to complete. When done, you can check the status of your application using the following command:
$ eb status --verbose
Whenever you need to update your application, just commit your changes using git and push them as follows:
$ git aws.push
How it works…
When the deployment process finishes, it gives out the application URL. Point your browser to it to see the application being served. Yet, you will find a small glitch with the application. The static content, that is, the CSS and JS code, is not being served. This is because the static path is not correctly comprehended by Beanstalk. This can be simply fixed by modifying the application's configuration on your application's monitoring/configuration page in the AWS management console. See the following screenshots to understand this better: Click on the Configuration menu item in the left-hand side menu. Notice the highlighted box in the preceding screenshot. This is what we need to change as per our application. Open Software Settings. Change the virtual path for /static/, as shown in the preceding screenshot. After this change is made, the environment created by Elastic Beanstalk will be updated automatically, although it will take a bit of time. When done, check the application again to see the static content also being served correctly.
Application monitoring with Pingdom
Pingdom is a website-monitoring tool that has the USP of notifying you as soon as your website goes down. The basic idea behind this tool is to constantly ping the website at a specific interval, say, 30 seconds. If a ping fails, it will notify you via an e-mail, SMS, tweet, or push notifications to mobile apps, informing you that your site is down. It will keep on pinging at a faster rate until the site is back up again. There are other monitoring features too, but we will limit ourselves to uptime checks in this book.
Getting ready
As Pingdom is a SaaS service, the first step will be to sign up for an account. Pingdom offers a free trial of 1 month in case you just want to try it out. The website for the service is https://www.pingdom.com. We will use the application deployed to AWS in the Deploying with AWS Elastic Beanstalk recipe to check for uptime. Here, Pingdom will send an e-mail in case the application goes down and will send an e-mail again when it is back up.
How to do it…
After successful registration, create a check for uptime. Have a look at the following screenshot: As you can see, I already added a check for the AWS instance. To create a new check, click on the ADD NEW button. Fill in the details asked by the form that comes up.
How it works…
After the check is successfully created, try to break the application by consciously making a mistake somewhere in the code and then deploying to AWS. As soon as the faulty application is deployed, you will get an e-mail notifying you of this. This e-mail will look like the following: Once the application is fixed and put back up again, the next e-mail should look like the following: You can also check how long the application has been up and the downtime instances from the Pingdom administration panel.
Application performance management and monitoring with New Relic
New Relic is an analytics product that provides near real-time operational and business analytics related to your application. It provides deep analytics on the behavior of the application from various aspects. It does the job of a profiler as well as eliminating the need to maintain extra moving parts in the application. It actually works in a scenario where our application sends data to New Relic rather than New Relic asking for statistics from our application.
Getting ready
We will use the application from the last recipe, which is deployed to AWS. The first step will be to sign up with New Relic for an account. Follow the simple signup process, and upon completion and e-mail verification, it will lead to your dashboard. Here, you will have your license key available, which we will use later to connect our application to this account. The dashboard should look like the following screenshot: Here, click on the large button named Reveal your license key.
How to do it…
Once we have the license key, we need to install the newrelic Python library:
$ pip install newrelic
Now, we need to generate a file called newrelic.ini, which will contain details regarding the license key, the name of our application, and so on. This can be done using the following command:
$ newrelic-admin generate-config LICENSE-KEY newrelic.ini
In the preceding command, replace LICENSE-KEY with the actual license key of your account. Now, we have a new file called newrelic.ini. Open and edit the file for the application name and anything else as needed. To check whether the newrelic.ini file is working successfully, run the following command:
$ newrelic-admin validate-config newrelic.ini
This will tell us whether the validation was successful or not. If not, then check the license key and its validity. Now, add the following lines at the top of the application's configuration file, that is, my_app/__init__.py in our case. Make sure that you add these lines before anything else is imported:
import newrelic.agent
newrelic.agent.initialize('newrelic.ini')
Now, we need to update the requirements.txt file. So, run the following command:
$ pip freeze > requirements.txt
After this, commit the changes and deploy the application to AWS using the following command:
$ git aws.push
How it works…
Once the application is successfully updated on AWS, it will start sending statistics to New Relic, and the dashboard will have a new application added to it. Open the application-specific page, and a whole lot of statistics will come across. It will also show which calls have taken the most amount of time and how the application is performing. You will also see multiple tabs that correspond to different types of monitoring, covering all the aspects.
Summary
In this article, we have seen the various techniques used to deploy and monitor Flask applications.
Resources for Article: Further resources on this subject: Understanding the Python regex engine [Article] Exploring Model View Controller [Article] Plotting Charts with Images and Maps [Article]
Starting Small and Growing in a Modular Way

Packt
02 Mar 2015
27 min read
This article, written by Carlo Russo, author of the book KnockoutJS Blueprints, describes how RequireJS gives us a simplified format to require many dependencies and to avoid parameter mismatch, using the CommonJS require format; for example, another way (you can use either one) to write the previous code is:
define(function(require) {
   var $ = require("jquery"),
       ko = require("knockout"),
       viewModel = {};
   $(function() {
       ko.applyBindings(viewModel);
   });
});
(For more resources related to this topic, see here.) In this way, we skip the dependencies definition, and RequireJS will add all the require('xxx') calls found in the function to the dependency list. The second way is better because it is cleaner and you cannot mismatch dependency names with named function arguments. For example, imagine you have a long list of dependencies; you add one or remove one, and you miss removing the corresponding function parameter. You now have a hard-to-find bug. And, in case you think that the r.js optimizer behaves differently, I just want to assure you that it's not so; you can use both ways without any concern regarding optimization. Just to remind you, you cannot use this form if you want to load scripts dynamically or depending on a variable's value; for example, this code will not work:
var mod = require(someCondition ? "a" : "b");
if (someCondition) {
   var a = require('a');
} else {
   var a = require('a1');
}
You can learn more about this compatibility problem at this URL: http://www.requirejs.org/docs/whyamd.html#commonjscompat. You can see more about this sugar syntax at this URL: http://www.requirejs.org/docs/whyamd.html#sugar. Now that you know the basic way to use RequireJS, let's look at the next concept.
Component binding handler
The component binding handler is one of the new features introduced in version 3.2 of KnockoutJS. Inside the documentation of KnockoutJS, we find the following explanation: Components are a powerful, clean way of organizing your UI code into self-contained, reusable chunks. They can represent individual controls/widgets, or entire sections of your application. The main idea behind their inclusion was to create full-featured, reusable components, with one or more points of extensibility. A component is a combination of HTML and JavaScript. There are cases where you can use just one of them, but normally you'll use both. You can get a first simple example about this here: http://knockoutjs.com/documentation/component-binding.html. The best way to create self-contained components is with the use of an AMD module loader, such as RequireJS; put the View Model and the template of the component inside two different files, and then you can use it from your code really easily.
Creating the bare bones of a custom module
Writing a custom module of KnockoutJS with RequireJS is a 4-step process:
Creating the JavaScript file for the View Model.
Creating the HTML file for the template of the View.
Registering the component with KnockoutJS.
Using it inside another View.
We are going to build the bases for the Search Form component, just to move forward with our project; anyway, this is the starting code we should use for each component that we write from scratch. Let's cover all of these steps.
Creating the JavaScript file for the View Model
We start with the View Model of this component.
Create a new empty file with the name BookingOnline/app/components/search.js and put this code inside it:
define(function(require) {
   var ko = require("knockout"),
       template = require("text!./search.html");

   function Search() {
   }

   return {
       viewModel: Search,
       template: template
   };
});
Here, we are creating a constructor called Search that we will fill later. We are also using the text plugin for RequireJS to get the template search.html from the current folder, into the argument template. Then, we return an object with the constructor and the template, using the format needed by KnockoutJS to use it as a component.
Creating the HTML file for the template of the View
In the View Model we required a View called search.html in the same folder. At the moment, we don't have any code to put inside the template of the View, because there is no boilerplate code needed; but we must create the file, otherwise RequireJS will break with an error. Create a new file called BookingOnline/app/components/search.html with the following content:
<div>Hello Search</div>
Registering the component with KnockoutJS
When you use components, there are two different ways to give KnockoutJS a way to find your component: using the function ko.components.register, or implementing a custom component loader. The first way is the easiest one: using the default component loader of KnockoutJS. To use it with our component you should just put the following row inside the BookingOnline/app/index.js file, just before the row $(function () {:
ko.components.register("search", {require: "components/search"});
Here, we are registering a module called search, and we are telling KnockoutJS that it will have to find all the information it needs using an AMD require for the path components/search (so it will load the file BookingOnline/app/components/search.js). You can find more information and a really good example about a custom component loader at: http://knockoutjs.com/documentation/component-loaders.html#example-1-a-component-loader-that-sets-up-naming-conventions.
Using it inside another View
Now, we can simply use the new component inside our View; put the following code inside our Index View (BookingOnline/index.html), before the script tag:
<div data-bind="component: 'search'"></div>
Here, we are using the component binding handler to use the component; another commonly used way is with custom elements. We can replace the previous row with the following one:
<search></search>
KnockoutJS will use our search component, but with a WebComponent-like code. If you want to support IE6-8 you should register the WebComponents you are going to use before the HTML parser can find them. Normally, this job is done inside the ko.components.register function call, but, if you are putting your script tag at the end of the body as we have done until now, your WebComponent will be discarded. Follow the guidelines mentioned here when you want to support IE6-8: http://knockoutjs.com/documentation/component-custom-elements.html#note-custom-elements-and-internet-explorer-6-to-8 Now, you can open your web application and you should see the text, Hello Search. We put that markup only to check whether everything was working here, so you can remove it now.
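Before moving on, note that the component binding handler can also pass parameters to the component's View Model; KnockoutJS hands the params object to the constructor as its first argument. A small sketch (the maxRooms parameter is purely hypothetical and not part of our project):
<div data-bind="component: { name: 'search', params: { maxRooms: 10 } }"></div>
Inside the component, the constructor would then be declared as function Search(params) { ... } and could read params.maxRooms to configure itself from the View that uses it.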
Writing the Search Form component
Now that we know how to create a component, and we have put in place the base of our Search Form component, we can look at the requirements for this component. A designer will review the View later, so we need to keep it simple to avoid the need for multiple changes later. From our analysis, we find that our competitors use these components:
Autocomplete field for the city
Calendar fields for check-in and check-out
Selection field for the number of rooms, number of adults and number of children, and age of children
This is a wireframe of what we should build (we got inspired by Trivago): We could do everything by ourselves, but the easiest way to realize this component is with the help of a few external plugins; we are already using jQuery, so the most obvious idea is to use jQuery UI to get the Autocomplete Widget, the Date Picker Widget, and maybe even the Button Widget.
Adding the AMD version of jQuery UI to the project
Let's start downloading the current version of jQuery UI (1.11.1); the best thing about this version is that it is one of the first versions that supports AMD natively. After reading the documentation of jQuery UI for AMD (URL: http://learn.jquery.com/jquery-ui/environments/amd/) you may think that you can get the AMD version using the download link from the home page. However, if you try that you will get just a package with only the concatenated source; for this reason, if you want the AMD source files, you will have to go directly to GitHub or use Bower. Download the package from https://github.com/jquery/jquery-ui/archive/1.11.1.zip and extract it. Every time you use an external library, remember to check the compatibility support. In jQuery UI 1.11.1, as you can see in the release notes, they removed the support for IE7; so we must decide whether we want to support IE6 and 7 by adding specific workarounds inside our code, or we want to remove the support for those two browsers. For our project, we need to put the following folders into these destinations:
jquery-ui-1.11.1/ui -> BookingOnline/app/ui
jquery-ui-1.11.1/themes/base -> BookingOnline/css/ui
We are going to apply the widgets via JavaScript, so the only remaining step to integrate jQuery UI is the insertion of the style sheet inside our application. We do this by adding the following rows to the top of our custom style sheet file (BookingOnline/css/styles.css):
@import url("ui/core.css");
@import url("ui/menu.css");
@import url("ui/autocomplete.css");
@import url("ui/button.css");
@import url("ui/datepicker.css");
@import url("ui/theme.css");
Now, we are ready to add the widgets to our web application. You can find more information about jQuery UI and AMD at: http://learn.jquery.com/jquery-ui/environments/amd/
Making the skeleton from the wireframe
We want to give the user a really nice user experience, but as the first step we can use the wireframe we put before to create a skeleton of the Search Form. Replace the entire content with a form inside the file BookingOnline/components/search.html:
<form data-bind="submit: execute"></form>
Then, we add the blocks inside the form, step by step, to realize the entire wireframe:
<div>
   <input type="text" placeholder="Enter a destination" />
   <label> Check In: <input type="text" /> </label>
   <label> Check Out: <input type="text" /> </label>
   <input type="submit" data-bind="enable: isValid" />
</div>
Here, we built the first row of the wireframe; we will bind data to each field later. We bound the execute function to the submit event (submit: execute), and a validity check to the button (enable: isValid); for now we will create them empty.
Update the View Model (search.js) by adding this code inside the constructor:
this.isValid = ko.computed(function() {
   return true;
}, this);
And add this function to the Search prototype:
Search.prototype.execute = function() { };
This is because the validity of the form will depend on the status of the destination field and of the check-in and check-out dates; we will update it later, in the next paragraphs. Now, we can continue with the wireframe, with the second block. Here, we should have a field to select the number of rooms, and a block for each room. Add the following markup inside the form, after the previous one, for the second row of the View (search.html):
<div>
   <fieldset>
     <legend>Rooms</legend>
     <label>
       Number of Room
       <select data-bind="options: rangeOfRooms,
                           value: numberOfRooms">
       </select>
     </label>
     <!-- ko foreach: rooms -->
       <fieldset>
         <legend>
           Room <span data-bind="text: roomNumber"></span>
         </legend>
       </fieldset>
     <!-- /ko -->
   </fieldset>
</div>
In this markup we are asking the user to choose between the values found inside the array rangeOfRooms, to save the selection inside a property called numberOfRooms, and to show a frame for each room of the array rooms with the room number, roomNumber. When developing, if we want to check the status of the system, the easiest way to do it is with a simple item inside a View bound to the JSON of a View Model. Put the following code inside the View (search.html):
<pre data-bind="text: ko.toJSON($data, null, 2)"></pre>
With this code, you can check the status of the system after any change directly in the printed JSON. You can find more information about ko.toJSON at http://knockoutjs.com/documentation/json-data.html Update the View Model (search.js) by adding this code inside the constructor:
this.rooms = ko.observableArray([]);
this.numberOfRooms = ko.computed({
   read: function() {
     return this.rooms().length;
   },
   write: function(value) {
     var previousValue = this.rooms().length;
     if (value > previousValue) {
       for (var i = previousValue; i < value; i++) {
         this.rooms.push(new Room(i + 1));
       }
     } else {
       this.rooms().splice(value);
       this.rooms.valueHasMutated();
     }
   },
   owner: this
});
Here, we are creating the array of rooms, and a property to update the array properly. If the new value is bigger than the previous value, it adds the missing items to the array using the constructor Room; otherwise, it removes the excess items from the array. To get this code working we have to create a module, Room, and we have to require it here; update the require block in this way:
   var ko = require("knockout"),
       template = require("text!./search.html"),
       Room = require("room");
Also, add this property to the Search prototype:
Search.prototype.rangeOfRooms = ko.utils.range(1, 10);
Here, we are asking KnockoutJS for an array with the values from the given range. ko.utils.range is a useful method to get an array of integers. Internally, it simply makes an array from the first parameter to the second one; but if you use it inside a computed field and the parameters are observable, it re-evaluates and updates the returning array. Now, we have to create the View Model of the Room module.
Create a new file BookingOnline/app/room.js with the following starting code:
define(function(require) {
   var ko = require("knockout");

   function Room(roomNumber) {
     this.roomNumber = roomNumber;
   }

   return Room;
});
Now, our web application should appear like so: As you can see, we now have a fieldset for each room, so we can work on the template of the single room. Here, you can also see in action the previous tip about the pre field with the JSON data. With KnockoutJS 3.2 it is harder to decide when it's better to use a normal template or a component. The rule of thumb is to identify the degree of encapsulation you want to manage: use the component when you want a self-enclosed black box, or the template if you want to manage the View Model directly. What we want to show for each room is:
Room number
Number of adults
Number of children
Age of each child
We can update the Room View Model (room.js) by adding this code into the constructor:
this.numberOfAdults = ko.observable(2);
this.ageOfChildren = ko.observableArray([]);
this.numberOfChildren = ko.computed({
   read: function() {
     return this.ageOfChildren().length;
   },
   write: function(value) {
     var previousValue = this.ageOfChildren().length;
     if (value > previousValue) {
       for (var i = previousValue; i < value; i++) {
         this.ageOfChildren.push(ko.observable(0));
       }
     } else {
       this.ageOfChildren().splice(value);
       this.ageOfChildren.valueHasMutated();
     }
   },
   owner: this
});
this.hasChildren = ko.computed(function() {
   return this.numberOfChildren() > 0;
}, this);
We used the same logic we used before for mapping between the room count and the count property, this time to hold an array with the ages of the children. We also created a hasChildren property to know whether we have to show the box for the age of children inside the View. We have to add, as we have done before for the Search View Model, a few properties to the Room prototype:
Room.prototype.rangeOfAdults = ko.utils.range(1, 10);
Room.prototype.rangeOfChildren = ko.utils.range(0, 10);
Room.prototype.rangeOfAge = ko.utils.range(0, 17);
These are the ranges we show inside the relative select. Now, as the last step, we have to put the template for the room in search.html; add this code inside the fieldset tag, after the legend tag (as you can see here, with the external markup):
     <fieldset>
       <legend>
         Room <span data-bind="text: roomNumber"></span>
       </legend>
       <label> Number of adults
         <select data-bind="options: rangeOfAdults,
                            value: numberOfAdults"></select>
       </label>
       <label> Number of children
         <select data-bind="options: rangeOfChildren,
                             value: numberOfChildren"></select>
       </label>
       <fieldset data-bind="visible: hasChildren">
         <legend>Age of children</legend>
         <!-- ko foreach: ageOfChildren -->
           <select data-bind="options: $parent.rangeOfAge,
                               value: $rawData"></select>
         <!-- /ko -->
       </fieldset>
     </fieldset>
     <!-- /ko -->
Here, we are using the properties we have just defined. We are using rangeOfAge from $parent because inside foreach we changed context, and the property, rangeOfAge, is inside the Room context. Why did I use $rawData to bind the value of the age of the children instead of $data? The reason is that ageOfChildren is an array of observables without any container.
If you use $data, KnockoutJS will unwrap the observable, making it one-way bound; but if you use $rawData, you will skip the unwrapping and get the two-way data binding we need here. In fact, if we used the one-way data binding our model wouldn't get updated at all. If you really don't like that the fieldset for children goes to the next row when it appears, you can change the fieldset by adding a class, like this:
<fieldset class="inline" data-bind="visible: hasChildren">
Now, your application should appear as follows: Now that we have a really nice starting form, we can update the three main fields to use the jQuery UI Widgets.
Realizing an Autocomplete field for the destination
As soon as we start to write the code for this field we face the first problem: how can we get the data from the backend? Our team told us that we don't have to care about the backend, so we speak to the backend team to find out how to get the data. After ten minutes we get three files with the code for all the calls to the backend; all we have to do is to download these files (we already got them with the Starting Package, to avoid another download), and use the function getDestinationByTerm inside the module, services/rest. Before writing the code for the field let's think about which behavior we want for it:
When you type three or more letters, it will ask the server for the list of items
Each occurrence of the field's text inside each item should be bold
When you select an item, a new button should appear to clear the selection
If the currently selected item and the text inside the field are different when the focus exits from the field, the field should be cleared
The data should be taken using the function, getDestinationByTerm, inside the module, services/rest
The documentation of KnockoutJS also explains how to create custom binding handlers in the context of RequireJS.
The what and why about binding handlers
All the bindings we use inside our Views are based on the KnockoutJS default binding handlers. The idea behind a binding handler is that you should put all the code that manages the DOM inside a component different from the View Model. Other than this, the binding handler should be realized with reusability in mind, so it's always better not to hard-code application logic inside it. The KnockoutJS documentation about standard bindings is already really good, and you can find many explanations about their inner workings in the Appendix, Binding Handler. When you make a custom binding handler it is important to remember that: it is your job to clean up after yourself; you should register event handling inside the init function; and you should use the update function to update the DOM depending on the changes of the observables.
This is the standard boilerplate code when you use RequireJS:
define(function(require) {
   var ko = require("knockout"),
       $ = require("jquery");

   ko.bindingHandlers.customBindingHandler = {
     init: function(element, valueAccessor,
                     allBindingsAccessor, data, context) {
       /* Code for the initialization… */
       ko.utils.domNodeDisposal.addDisposeCallback(element,
         function () { /* Cleaning code … */ });
     },
     update: function (element, valueAccessor) {
       /* Code for the update of the DOM… */
     }
   };
});
And inside the View Model module you should require this module, as follows:
require('binding-handlers/customBindingHandler');
ko.utils.domNodeDisposal is a list of callbacks to be executed when the element is removed from the DOM; it's necessary because it's where you have to put the code to destroy the widgets, or remove the event handlers.
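To see the boilerplate in action before tackling the Autocomplete widget, here is a minimal, self-contained example of a custom handler: a hypothetical fadeVisible binding (not part of our project) that shows or hides an element with an animation whenever a boolean observable changes:
define(function(require) {
   var ko = require("knockout"),
       $ = require("jquery");

   ko.bindingHandlers.fadeVisible = {
     init: function(element, valueAccessor) {
       // Set the initial state without any animation
       $(element).toggle(ko.utils.unwrapObservable(valueAccessor()));
     },
     update: function(element, valueAccessor) {
       // Runs again every time the bound observable changes
       if (ko.utils.unwrapObservable(valueAccessor()))
         $(element).fadeIn();
       else
         $(element).fadeOut();
     }
   };
});
You would then use it in a View as <div data-bind="fadeVisible: isLoading">…</div>; no disposal callback is needed here because the handler creates no widget and registers no event handler.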
Binding handler for the jQuery Autocomplete widget
So, now we can write our binding handler. We will define a binding handler named autocomplete, which takes the observable to put the found value into. We will also define two custom bindings, without any logic, to work as placeholders for the parameters we will send to the main binding handler. Our binding handler should:
Get the value for the autoCompleteOptions and autoCompleteEvents optional data bindings
Apply the Autocomplete Widget to the item using the options of the previous step
Register all the event listeners
Register the disposal of the Widget
We also should ensure that if the observable gets cleared, the input field gets cleared too. So, this is the code of the binding handler to put inside BookingOnline/app/binding-handlers/autocomplete.js (I put comments between the code to make it easier to understand):
define(function(require) {
   var ko = require("knockout"),
       $ = require("jquery"),
       autocomplete = require("ui/autocomplete");

   ko.bindingHandlers.autoComplete = {
     init: function(element, valueAccessor, allBindingsAccessor, data, context) {
Here, we are giving the name autoComplete to the new binding handler, and we are also loading the Autocomplete Widget of jQuery UI:
       var value = ko.utils.unwrapObservable(valueAccessor()),
           allBindings = ko.utils.unwrapObservable(allBindingsAccessor()),
           options = allBindings.autoCompleteOptions || {},
           events = allBindings.autoCompleteEvents || {},
           $element = $(element);
Then, we take the data from the binding for the main parameter, and for the optional binding handlers; we also put the current element into a jQuery container:
       autocomplete(options, $element);
       if (options._renderItem) {
         var widget = $element.autocomplete("instance");
         widget._renderItem = options._renderItem;
       }
       for (var event in events) {
         ko.utils.registerEventHandler(element, event, events[event]);
       }
Now we can apply the Autocomplete Widget to the field. If you are questioning why we used ko.utils.registerEventHandler here, the answer is: to show you this function. If you look at the source, you can see that under the hood it uses $.bind if jQuery is registered; so in our case we could simply use $.bind or $.on without any problem. But I wanted to show you this function because sometimes you use KnockoutJS without jQuery, and you can use it to support event handling in every supported browser. The source code of the function _renderItem is (looking at the file ui/autocomplete.js):
_renderItem: function( ul, item ) {
   return $( "<li>" ).text( item.label ).appendTo( ul );
},
As you can see, for security reasons, it uses the function text to avoid any possible code injection. It is important that you know that you should do data validation each time you get data from an external source and put it in the page. In this case, the source of data is already secured (because we manage it), so we override the normal behavior, to also show the HTML tag for the bold part of the text. In the last three rows we put a loop to check for events and we register them. The standard way to register for events is with the event binding handler. The only reason you should use a custom helper is to give the developer of the View a way to register events more than once. Then, we add to the init function the disposal code:
// handle disposal
ko.utils.domNodeDisposal.addDisposeCallback(element, function() {
   $element.autocomplete("destroy");
});
Here, we use the destroy function of the widget. It's really important to clean up after the use of any jQuery UI Widget or you'll create a really bad memory leak; it's not a big problem with simple applications, but it will be a really big problem if you realize an SPA. Now, we can add the update function:
     },
     update: function(element, valueAccessor) {
       var value = valueAccessor(),
           $element = $(element),
           data = value();
       if (!data)
         $element.val("");
     }
   };
});
Here, we read the value of the observable, and clean the field if the observable is empty. The update function is executed as a computed observable, so we must be sure that we subscribe to the observables required inside. So, pay attention if you put conditional code before the subscription, because your update function might not be called anymore. Now that the binding is ready, we should require it inside our form; update the View search.html by modifying the following row:
<input type="text" placeholder="Enter a destination" />
Into this:
<input type="text" placeholder="Enter a destination"
       data-bind="autoComplete: destination,
                  autoCompleteEvents: destination.events,
                  autoCompleteOptions: destination.options" />
If you try the application you will not see any error; the reason is that KnockoutJS ignores any data binding not registered inside the ko.bindingHandlers object, and we didn't require the binding handler autocomplete module. So, the last step to get everything working is the update of the View Model of the component; add these rows at the top of search.js, with the other require(…) rows:
       Room = require("room"),
       rest = require("services/rest");
require("binding-handlers/autocomplete");
We need a reference to our new binding handler, and a reference to the rest object to use it as the source of data.
Now, we must declare the properties we used inside our data binding; add all these properties to the constructor as shown in the following code:
this.destination = ko.observable();
this.destination.options = {
   minLength: 3,
   source: rest.getDestinationByTerm,
   select: function(event, data) {
     this.destination(data.item);
   }.bind(this),
   _renderItem: function(ul, item) {
     return $("<li>").append(item.label).appendTo(ul);
   }
};
this.destination.events = {
   blur: function(event) {
     if (this.destination() && (event.currentTarget.value !==
                                 this.destination().value)) {
       this.destination(undefined);
     }
   }.bind(this)
};
Here, we are defining the container (destination) for the data selected inside the field, an object (destination.options) with any property we want to pass to the Autocomplete Widget (you can check all the documentation at: http://api.jqueryui.com/autocomplete/), and an object (destination.events) with any event we want to apply to the field. Here, we are clearing the field if the text inside the field and the content of the saved data (inside destination) are different. Have you noticed .bind(this) in the previous code? You can check by yourself that the value of this inside these functions is the input field. As you can see, in our code we put references to the destination property of this, so we have to update the context to be the object itself; the easiest way to do this is with a simple call to the bind function.
Summary
In this article, we have seen some functionalities of KnockoutJS (core). The application we realized was simple enough, but we used it to learn better how to use components and custom binding handlers. If you think we put too much code into such a small project, think about the differences you have seen between the first and the second component: the more component and binding handler code you write, the less you will have to write in the future. The most important point about components and custom binding handlers is that you have to realize them with future reuse in mind; the more good code you write, the better it will be for you later. The core point of this article was AMD and RequireJS; how to use them inside a KnockoutJS project, and why you should do it. Resources for Article: Further resources on this subject: Components [article] Web Application Testing [article] Top features of KnockoutJS [article]
Moodle 2.0 Science: Monitoring Your Students' Progress

Packt
01 Apr 2011
7 min read
Moodle is a really useful tool for helping teachers to monitor the progress of their students. As any teacher knows, this can be a challenge, so having everything in one place is very useful. We'll look at how you can monitor progress with an example.
Checking usage and completion of tasks
For you to be able to help your users learn, it goes without saying that they have to complete the tasks you set! Quite a common question is "how do I know if my users are looking at the content I make for them?" There are a variety of ways to monitor this, which vary depending on whether you are looking at a resource or an activity.
Tracking usage of course materials
Quite often you might set your students a task to go on to your Moodle course and read a resource that you have uploaded or follow a link to another website. While you can't know for sure that they have read the material, it is possible to check that they have displayed it on their screen. To check if users have viewed a resource, we have completion tracking. To use this feature, your administrator must enable it for your whole site and you need to turn it on in the student progress section of the course settings. This means that now you can easily see a list of users who have looked at the resource. On the pupil view, there are boxes next to items that require completion. If a teacher has specified certain conditions that need to be met, the box will automatically fill with a tick once they have been met. Users can also use this to manually track their progress towards completion if there are no criteria set for a particular task by ticking a shaded box themselves. This is shown in the following screenshot:
Preparation for course completion reports
The course completion report will show you which activities or resources your learners have used. To demonstrate this, first you need to change some of the settings on your resources and activities, and ask your administrator to enable it in the site settings.
Completion settings for resources
Let's go back to a resource we uploaded in the first topic "Manufacture of magnesium sulfate" and edit it. Click on Turn editing on, which has the icon of the hand holding the pen. When the updating file dialog comes up, scroll right down to the bottom where it says Activity completion. Here you have a number of settings, as shown in the next screenshot: We're going to use the setting Show activity as complete when conditions are met. If you're happy letting your users decide to declare when they have completed an activity, you can use the setting Students can manually mark the activity as completed. This would be useful for a self-review, for pupils building a portfolio, or just to get them to take more responsibility for their learning. Once you've done this, choose the conditions that need to be met. As this is a resource, check the box next to Require view. These conditions vary depending on the nature of the activity. If you want to, you can set a date when you expect the activity to be completed. This is just to help organize your completion report and is not shared with the users.
Completion settings for activities
Different activities have their own settings that you can set to decide when an activity is completed by your users. We'll go through each of these below.
Forum activity completion settings
You can set up activity completion for forums. In the introduction to this forum, our learners were asked to answer the most recent unanswered question and then post a question of their own.
Let's use activity completion to make sure that they do this. In the same way, go to update the forum and scroll down to the activity completion settings at the bottom. You'll notice that there are a lot more options than for a resource. The activity completion settings that we'll choose are Required discussions and Require replies. Both of these will be set to one. This means that your students will need to start at least one discussion and provide at least one reply. Don't forget to set the completion tracking setting in the top drop-down box. This is what the settings will look like:
Quiz activity completion
For quizzes (and assignments and lessons), there are two options for activity completion. You can either require your users to view the quiz or require a grade. Let's go back to the motion quiz we set up and make it require students to receive a grade to complete this activity. Here are the settings:
Chat activity completion
For a chat activity, the only completion option is for users to manually check the completed boxes. Once you've gone through and set the activity completion settings, you will be able to see which of your activities your users have completed.
Completion tracking for your whole course
Now that you have set up your activities and resources to be tracked, you need to define, at a course level, which activities need to be finished for course completion. From the settings block on the left-hand side, click on the link Completion tracking. This is where you decide on the criteria for course completion. We want our users to complete all of the activities chosen, so in the first box choose All for the aggregation method. If there are prerequisites for your course, you can set them here. In the activities completed box, check all of the activities you want your users to complete and specify the completion dates, if any, and passing grades. All the settings can be changed at a later date, if you wish.
Course completion reports
You can now set up the course completion reports. The link can be found in the navigation block on the left-hand side: Once you click on the course completion reports link, you should see something like the following: The grayed out boxes with ticks are for activities that users can manually choose completion for. So as you can see, it would be quite easy to identify which users haven't completed particular tasks. From here, you can click a user's name and send them a reminder via a message. You can also export this data if you wish.
Course reports
There are three different types of course reports: activity report, view course logs, and participation report. You can use them to monitor your users in slightly different ways.
Activity report
For the activity report, you can see a simple overview of the number of views for each activity. This could be useful if you want to see if one activity is more popular than another or if an activity is not being viewed a lot.
View course logs
This report shows detailed usage across the whole course. Now that we are using completion reports, you would only need to use this type of log if you wanted to check when a particular user accessed a task.
Participation report
This report gives you a customizable overview of each activity, listed by user. You could use this type of report to see if the users have viewed or posted to an activity or resource and then send messages directly to multiple users.
Customizing your Template Using Joomla! 1.5

Packt
02 Jul 2010
6 min read
(Read more interesting articles on Joomla! 1.5 here.)
Customizing the breadcrumb
The larger your website gets, the more important it is to make use of Joomla!'s breadcrumb feature.
Getting ready
To start redefining your breadcrumb's style, open the template.css file for your template; use the rhuk_milkyway template for this demonstration. This means that your CSS file will be located in the templates/rhuk_milkyway/css directory of your Joomla! installation. If you visit a page other than the home page in your Joomla! website, you'll be able to see the breadcrumb: As you can see, the rhuk_milkyway template defines the style for the breadcrumb in the template.css file:
span.pathway {
 display: block;
 margin: 0 20px;
 height: 16px;
 line-height: 16px;
 overflow: hidden;
}
The HTML that defines the breadcrumb (for the Features page) is as shown:
<div id="pathway">
 <span class="breadcrumbs pathway">
 <a href="http://example.com/" class="pathway">Home</a>
 <img src="/templates/rhuk_milkyway/images/arrow.png" alt="" />
 Features
 </span>
</div>
How to do it...
You can customize the breadcrumb by changing the CSS, altering the color and size of the breadcrumb's content:
span.pathway {
 color: #666;
 font-size: 90%;
 display: block;
 margin: 0 20px;
 height: 16px;
 line-height: 16px;
 overflow: hidden;
}
Once the altered CSS file has been uploaded, you can see your changes: The next step to customizing your breadcrumb is to alter the image used for the separator arrows, located at templates/rhuk_milkyway/images/arrow.png. You'll replace this image with your own new one (which has been enlarged in this image to make it easier to view). Once uploaded, your new breadcrumb looks a little more fitting for your website:
How it works...
By targeting specific ids and classes with CSS and changing an image in the images directory of our template, we can subtly change our template to distinguish it from others without a great deal of work.
See also
Styling the search module
Styling pagination
Styling pagination
Some content in your Joomla! website may run over multiple pages (for example, some search results). By styling pagination you can again help to distinguish your Joomla! template from others.
Getting ready
Open your template's primary stylesheet; generally, this will be called template.css, and is located in the templates/rhuk_milkyway/css directory if we are using the rhuk_milkyway template (as we are for this demonstration). It is also worth bearing in mind the typical structure of the pagination feature within the HTML. We can find this by searching for a common word such as "the" or "Joomla!" on our website:
<span class="pagination">
 <span>&laquo;</span>
 <span>Start</span>
 <span>Prev</span>
 <strong><span>1</span></strong>
 <strong><a href="index.php?searchword=Joomla!&amp;searchphrase=all&amp;Itemid=1&amp;option=com_search&amp;limitstart=20" title="2">2</a></strong>
 <strong><a href="index.php?searchword=Joomla!&amp;searchphrase=all&amp;Itemid=1&amp;option=com_search&amp;limitstart=40" title="3">3</a></strong>
 <a href="index.php?searchword=Joomla!&amp;searchphrase=all&amp;Itemid=1&amp;option=com_search&amp;limitstart=20" title="Next">Next</a>
 <a href="index.php?searchword=Joomla!&amp;searchphrase=all&amp;Itemid=1&amp;option=com_search&amp;limitstart=40" title="End">End</a>
 <span>&raquo;</span>
</span>
Our primary interest in the previous markup is the .pagination class assigned to the <span> element that contains the pagination feature's content.
By default, the pagination (as seen on the search results page) looks like this:
How to do it...
Now that you are aware of the relevant class to style, you can add it to your template's stylesheet, with the aim of making the pagination less obtrusive within the surrounding content of your pages:
.pagination {
 color: #666;
 font-size: 90%
}
.pagination a {
 color: #F07 !important /* pink */
}
Once you've uploaded the newer stylesheet, you'll be able to see the new pagination style, which will appear smaller than before, and with pink-colored links.
Producing more semantic markup for pagination
As you can see above, the HTML that Joomla! currently generates for the pagination feature is quite verbose: unnecessarily long and untidy. We'll change our template's pagination.php file to use more semantic (meaningful) HTML for this feature by adding each item to a list item within an unordered list element (<ul>). Open the pagination.php file and you will see four PHP functions (assuming that you are looking within the rhuk_milkyway template), but the function which is of interest to us is the pagination_list_render PHP function. Currently, the code for this function looks like this:
function pagination_list_render($list)
{
 // Initialize variables
 $html = "<span class=\"pagination\">";
 $html .= '<span>&laquo;</span>'.$list['start']['data'];
 $html .= $list['previous']['data'];
 foreach( $list['pages'] as $page )
 {
   if($page['data']['active']) {
     $html .= '<strong>';
   }
   $html .= $page['data'];
   if($page['data']['active']) {
     $html .= '</strong>';
   }
 }
 $html .= $list['next']['data'];
 $html .= $list['end']['data'];
 $html .= '<span>&raquo;</span>';
 $html .= "</span>";
 return $html;
}
You can see that Joomla! builds up the HTML to insert into the page by using the $html PHP variable. All you need to change is the HTML you can see:
function pagination_list_render($list)
{
 // Initialize variables
 $html = "<ul class=\"pagination\">";
 $html .= '<li class="page-previous">&laquo;</li>' . '<li>' . $list['start']['data'] . '</li>';
 $html .= '<li>' . $list['previous']['data'] . '</li>';
 foreach( $list['pages'] as $page )
 {
   if($page['data']['active']) {
     $html .= '<li>';
   }
   $html .= '<strong class="active">' . $page['data'] . '</strong>';
   if($page['data']['active']) {
     $html .= '</li>';
   }
 }
 $html .= '<li>' . $list['next']['data'] . '</li>';
 $html .= '<li>' . $list['end']['data'] . '</li>';
 $html .= '<li class="page-next">&raquo;</li>';
 $html .= "</ul>";
 return $html;
}
If you now upload the pagination.php file and refresh the page, you'll see that the previous style that you had defined only partially styles the newer HTML: If you add the following CSS to your template's template.css file, everything will be styled as you intended before:
ul.pagination {
 list-style-type: none
}
ul.pagination li {
 display: inline
}
Once uploaded, your new pagination is complete:
Access Control in PHP5 CMS - Part 2

Packt
21 Oct 2009
17 min read
Framework Solution
The implementation of access control falls into three classes. One is the class that is asked questions about who can do what. Closely associated with this is another class that caches general information applicable to all users. It is made a separate class to aid implementation of the split of the cache between general and user-specific data. The third class handles administration operations. Before looking at the classes, though, let's figure out the database design.
Database for RBAC
All that is required to implement basic RBAC is two tables. A third table is required to extend to a hierarchical model. An optional extra table can be implemented to hold role descriptions. Thinking back to the design considerations, the first need is for a way to record the operations that can be done on the subjects, that is, the permissions. They are the targets for our access control system. You'll recall that a permission consists of an action and a subject, where a subject is defined by a type and an identifier. For ease of handling, a simple auto-increment ID number is added. But we also need a couple of other things. To make our RBAC system general, it is important to be able to control not only the actual permissions, but also who can grant those permissions, and whether they can grant that right to others. So an extra control field is added, with one bit for each of those three possibilities. It therefore becomes possible to grant the right to access something with or without the ability to pass on that right. The other extra data item that is useful is a "system" flag. It is used to make some permissions incapable of deletion. Although not a logical requirement, this is certainly a practical requirement. We want to give administrators a lot of power over the configuration of access rights, but at the same time, we want to avoid any catastrophes. The sort of thing that would be highly undesirable would be for the top level administrator to remove all of their own rights to the system. In practice, most systems will have a critical central structure of rights, which should not be altered even by the highest administrator. So now the permissions table can be seen as shown in the following screenshot: Note that the character strings for role, action, and subject_type are given generous lengths of 60, which should be more than adequate. The subject ID will often be quite short, but to avoid constraining generality, it is made a text field, so that the RBAC system can still handle very complex identifiers, if required. Of course, there will be some performance penalties if this field is very long, but it is better to have a design trade-off than a limitation. If we restricted the subject ID to being a number, then more complex identifiers would be a special case. This would destroy the generality of our scheme, and might ultimately reduce overall efficiency. In addition to the auto-increment primary key ID, two indices are created, as shown in the following screenshot. They involve overhead during update operations but are likely to speed up access operations. Since far more accesses will typically be made than updates, this makes sense. If for some reason an index does not give a benefit, it is always possible to drop it. Note that the index on the subject ID has to be constrained in length to avoid breaking limits on key size. The value chosen is a compromise between efficiency through short keys, and efficiency through the use of fine-grained keys; the SQL sketch that follows shows one way this table might be defined.
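Since the screenshots are not reproduced here, the following SQL is a hedged reconstruction of the permissions table based solely on the fields and indices described above; the exact column names, the index prefix length of 100, and the defaults are assumptions, not the definitive Aliro schema:
CREATE TABLE `#__permissions` (
  `id` INT NOT NULL AUTO_INCREMENT,              -- added purely for ease of handling
  `role` VARCHAR(60) NOT NULL,
  `action` VARCHAR(60) NOT NULL,
  `subject_type` VARCHAR(60) NOT NULL,
  `subject_id` TEXT NOT NULL,                    -- free text, so complex identifiers remain possible
  `control` TINYINT NOT NULL DEFAULT 0,          -- one bit each: hold, grant, and pass on the grant
  `system` TINYINT NOT NULL DEFAULT 0,           -- protects critical permissions from deletion
  PRIMARY KEY (`id`),
  KEY `role_action` (`role`, `action`),
  KEY `subject` (`subject_type`, `subject_id`(100))  -- prefix keeps the key within size limits
);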
In a heavily used system, it would be worth reviewing the chosen figure carefully, and perhaps modifying it in the light of studies into actual data. The other main database table is even simpler, and holds information about the assignment of accessors to roles. Again, an auto-increment ID is added for convenience. Apart from the ID, the only fields required are the role, the accessor type, and the accessor ID. This time a single index, additional to the primary key, is sufficient. The assignment table is shown in the following screenshot, and its index is shown in the screenshot after that: Adding hierarchy to RBAC requires only a very simple table, where each row contains two fields: a role, and an implied role. Both fields constitute the primary key, neither field on its own being necessarily unique. An index is not required for efficiency, since the volume of hierarchy information is assumed to be small, and whenever it is needed, the whole table is read. But it is still a good principle to have a primary key, and it also guarantees that there will not be redundant entries. For the example given earlier, a typical entry might have consultant as the role, and doctor as the implied role. At present, Aliro implements hierarchy only for backwards compatibility, but it is a relatively easy development to make hierarchical relationships generally available. Optionally, an extra table can be used to hold a description of the roles in use. This has no functional purpose, and is simply an option to aid administrators of the system. The table should have the role as its primary key. As it does not affect the functionality of the RBAC at all, no further detail is given here. With the database design settled, let's look at the classes. The simplest is the administration class, so we'll start there.
Administering RBAC
The administration of the system could be done by writing directly to the database, since that is what most of the operations involve. There are strong reasons not to do so. Although the operations are simple, it is vital that they be handled correctly. It is generally a poor principle to allow access to the mechanisms of a system rather than providing an interface through class methods. The latter approach ideally allows the creation of a robust interface that changes relatively infrequently, while details of implementation can be modified without affecting the rest of the system. The administration class is kept separate from the classes handling questions about access because for most CMS requests, administration will not be needed, and the administration class will not load at all. As a central service, the class is implemented as a standard singleton, but it is not cached because information generally needs to be written immediately to the database. In fact, the administration class frequently requests the authorization cache class to clear its cache so that the changes in the database can be effective immediately. The class starts off:
class aliroAuthorisationAdmin {
    private static $instance = __CLASS__;
    private $handler = null;
    private $authoriser = null;
    private $database = null;

    private function __construct() {
        $this->handler =& aliroAuthoriserCache::getInstance();
        $this->authoriser =& aliroAuthoriser::getInstance();
        $this->database = aliroCoreDatabase::getInstance();
    }

    private function __clone() {
        // Enforce singleton
    }

    public static function getInstance() {
        return is_object(self::$instance) ? self::$instance : (self::$instance = new self::$instance());
    }

    private function doSQL($sql, $clear=false) {
        $this->database->doSQL($sql);
        if ($clear) $this->clearCache();
    }

    private function clearCache() {
        $this->handler->clearCache();
    }
Apart from the instance property that is used to implement the singleton pattern, the other private properties are related objects that are acquired in the constructor to help other methods. Getting an instance operates in the usual fashion for a singleton, with the private constructor and clone methods enforcing access solely via getInstance. The doSQL method also simplifies other methods by combining a call to the database with an optional clearing of the cache through the class's clearCache method. Clearly the latter is simple enough that it could be eliminated. But it is better to have the method in place so that if changes were made to the implementation such that different actions were needed when any relevant cache is to be cleared, the changes would be isolated to the clearCache method. Next we have a couple of useful methods that simply refer to one of the other RBAC classes:
public function getAllRoles($addSpecial=false) {
    return $this->authoriser->getAllRoles($addSpecial);
}

public function getTranslatedRole($role) {
    return $this->authoriser->getTranslatedRole($role);
}
Again, these are provided so as to simplify the future evolution of the code, so that implementation details are concentrated in easily identified locations. The general idea of getAllRoles is obvious from the name, and the parameter determines whether the special roles such as visitor, registered, and nobody will be included. Since those roles are built into the system in English, it would be useful to be able to get local translations for them. So the method getTranslatedRole will return a translation for any of the special roles; for other roles it will return the parameter unchanged, since roles are created dynamically as text strings, and will therefore normally be in a local language from the outset. Now we are ready to look at the first meaty method:
public function permittedRoles ($action, $subject_type, $subject_id) {
    $nonspecific = true;
    foreach ($this->permissionHolders ($subject_type, $subject_id) as $possible) {
        if ('*' == $possible->action OR $action == $possible->action) {
            $result[$possible->role] = $this->getTranslatedRole ($possible->role);
            if ('*' != $possible->subject_type AND '*' != $possible->subject_id) $nonspecific = false;
        }
    }
    if (!isset($result)) {
        if ($nonspecific) $result = array('Visitor' => $this->getTranslatedRole('Visitor'));
        else return array();
    }
    return $result;
}

private function &permissionHolders ($subject_type, $subject_id) {
    $sql = "SELECT DISTINCT role, action, control, subject_type, subject_id FROM #__permissions";
    if ($subject_type != '*') $where[] = "(subject_type='$subject_type' OR subject_type='*')";
    if ($subject_id != '*') $where[] = "(subject_id='$subject_id' OR subject_id='*')";
    if (isset($where)) $sql .= " WHERE ".implode(' AND ', $where);
    return $this->database->doSQLget($sql);
}
Any code that is providing an RBAC administration function for some part of the CMS is likely to want to know what roles already have a particular permission, so as to show this to the administrator in preparation for any changes. The private method permissionHolders uses the parameters to create a SQL statement that will obtain the minimum relevant permission entries. This is complicated by the fact that in most contexts, an asterisk can be used as a wild card. The public method permittedRoles uses the private method to obtain relevant database rows from the permissions table. These are checked against the action parameter to see which of them are relevant. If there are no results, or if none of the results refer specifically to the subject, without the use of wild cards, then it is assumed that all visitors can access the subject, so the special role of visitor is added to the results.
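As a hedged illustration of how this might be called from elsewhere in the CMS (the action and subject values here are invented for the example, not taken from Aliro):
// Ask which roles may edit article 42; the result maps role => translated name
$admin = aliroAuthorisationAdmin::getInstance();
$roles = $admin->permittedRoles('edit', 'article', '42');
foreach ($roles as $role => $label) echo "$label may edit this article\n";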
The public method permittedRoles uses the private method to obtain relevant database rows from the permissions table. These are checked against the action parameter to see which of them are relevant. If there are no results, or if none of the results refer specifically to the subject, without the use of wild cards, then it is assumed that all visitors can access the subject, so the special role of visitor is added to the results.

When actual permission is to be granted, we need the following methods:

public function permit ($role, $control, $action, $subject_type, $subject_id) {
    $sql = $this->permitSQL($role, $control, $action, $subject_type, $subject_id);
    $this->doSQL($sql, true);
}

private function permitSQL ($role, $control, $action, $subject_type, $subject_id) {
    $this->database->setQuery("SELECT id FROM #__permissions WHERE role='$role' AND action='$action' AND subject_type='$subject_type' AND subject_id='$subject_id'");
    $id = $this->database->loadResult();
    if ($id) return "UPDATE #__permissions SET control=$control WHERE id=$id";
    else return "INSERT INTO #__permissions (role, control, action, subject_type, subject_id) VALUES ('$role', '$control', '$action', '$subject_type', '$subject_id')";
}

The public method permit grants permission to a role. The control bits are set in the parameter $control. The action is part of the permission, and the subject of the action is identified by the subject type and identity parameters. Most of the work is done by the private method that generates the SQL; it is kept separate so that it can be used by other methods. Once the SQL is obtained, it can be passed to the database, and since it will normally result in changes, the option to clear the cache is set.

The SQL generated depends on whether there is already a permission with the same parameters, in which case only the control bits are updated. Otherwise an insertion occurs. The reason for having to do a SELECT first, and then decide on INSERT or UPDATE, is that the index on the relevant fields is not guaranteed to be unique, and also because the subject ID is allowed to be much longer than can be included within an index. It is therefore not possible to use ON DUPLICATE KEY UPDATE.

Wherever possible, it aids efficiency to use the MySQL option ON DUPLICATE KEY UPDATE. This is added to the end of an INSERT statement, and if the INSERT fails by virtue of the key already existing in the table, then the alternative actions that follow ON DUPLICATE KEY UPDATE are carried out. They consist of one or more assignments, separated by commas, just as in an UPDATE statement. No WHERE is permitted, since the condition for the assignments is already determined by the duplicate key situation.
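As a sketch of the syntax, with invented values, and only applicable if the permission fields formed a unique key (in Aliro they do not):

INSERT INTO #__permissions (role, control, action, subject_type, subject_id)
VALUES ('editor', 3, 'edit', 'article', '42')
ON DUPLICATE KEY UPDATE control=3;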
A simple method allows deletion of all permissions for a particular action and subject:

public function dropPermissions ($action, $subject_type, $subject_id) {
    $sql = "DELETE FROM #__permissions WHERE action='$action' AND subject_type='$subject_type' AND subject_id='$subject_id' AND system=0";
    $this->doSQL($sql, true);
}

The final set of methods relates to assigning accessors to roles. Two of them reflect the obvious needs: to remove all roles from an accessor (possibly preparatory to assigning new roles), and to grant a role to an accessor. Where the need is to assign a whole set of roles, it is better to have a method especially for the purpose. Partly this is convenient, but it also provides an extra operation, minimization of the set of roles. The methods are:

public function assign ($role, $access_type, $access_id, $clear=true) {
    if ($this->handler->barredRole($role)) return false;
    $this->database->setQuery("SELECT id FROM #__assignments WHERE role='$role' AND access_type='$access_type' AND access_id='$access_id'");
    if ($this->database->loadResult()) return true;
    $sql = "INSERT INTO #__assignments (role, access_type, access_id) VALUES ('$role', '$access_type', '$access_id')";
    $this->doSQL($sql, $clear);
    return true;
}

public function assignRoleSet ($roleset, $access_type, $access_id) {
    $this->dropAccess ($access_type, $access_id);
    $roleset = $this->authoriser->minimizeRoleSet($roleset);
    foreach ($roleset as $role) $this->assign ($role, $access_type, $access_id, false);
    $this->clearCache();
}

public function dropAccess ($access_type, $access_id) {
    $sql = "DELETE FROM #__assignments WHERE access_type='$access_type' AND access_id='$access_id'";
    $this->doSQL($sql, true);
}

The method assign links a role to an accessor. It checks for barred roles first; these are simply the special roles discussed earlier, which cannot be allocated to any accessor. As with the permitSQL method, it is not possible to use ON DUPLICATE KEY UPDATE because the full length of the accessor ID is not part of an index, so again the existence of an assignment is checked first. If the role assignment is already in the database, there is nothing to do. Otherwise a row is inserted, and the cache is cleared. Getting rid of all role assignments for an accessor is a simple database deletion, and is implemented in the dropAccess method.

The higher level method assignRoleSet uses dropAccess to clear out any existing assignments. The call to the authorizer object to minimize the role set reflects the implementation of a hierarchical model. Once there is a hierarchy, it is possible for one role to imply another, as consultant implied doctor in our earlier example. This means that a role set may contain redundancy. For example, someone who has been allocated the role of consultant does not need to be allocated the role of doctor. The minimizeRoleSet method weeds out any roles that are superfluous. Once that has been done, each role is dealt with using the assign method, with the clearing of the cache saved until the very end.
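To make the flow concrete, here is a hedged usage sketch. The accessor type aUser matches the type used by the cache code in the next section, but the role names and user ID are invented:

$admin = aliroAuthorisationAdmin::getInstance();
// 'consultant' implies 'doctor' in the earlier example, so minimizeRoleSet
// discards 'doctor' and only 'consultant' is stored for this accessor
$admin->assignRoleSet(array('consultant', 'doctor'), 'aUser', '99');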
It contains this code:

protected function __construct() {
    // Making private enforces singleton
    $database = aliroCoreDatabase::getInstance();
    $database->setQuery("SELECT role, implied FROM #__role_link"
        ." UNION SELECT DISTINCT role, role AS implied FROM #__assignments"
        ." UNION SELECT DISTINCT role, role AS implied FROM #__permissions");
    $links = $database->loadObjectList();
    if ($links) foreach ($links as $link) {
        $this->all_roles[$link->role] = $link->role;
        $this->linked_roles[$link->role][$link->implied] = 1;
        foreach ($this->linked_roles as $role=>$impliedarray) {
            foreach ($impliedarray as $implied=>$marker) {
                if ($implied == $link->role OR $implied == $link->implied) {
                    $this->linked_roles[$role][$link->implied] = 1;
                    if (isset($this->linked_roles[$link->implied])) {
                        foreach ($this->linked_roles[$link->implied] as $more=>$marker) {
                            $this->linked_roles[$role][$more] = 1;
                        }
                    }
                }
            }
        }
    }
    $database->setQuery("SELECT role, access_id FROM #__assignments"
        ." WHERE access_type = 'aUser' AND (access_id = '*' OR access_id = '0')");
    $user_roles = $database->loadObjectList();
    if ($user_roles) foreach ($user_roles as $role) {
        $this->user_roles[$role->access_id][$role->role] = 1;
    }
    if (!isset($this->user_roles['0'])) $this->user_roles['0'] = array();
    if (isset($this->user_roles['*'])) {
        $this->user_roles['0'] = array_merge($this->user_roles['0'], $this->user_roles['*']);
    }
}

All possible roles are derived by a UNION of selections from the permissions, assignments, and linked roles database tables. The union operation has overheads, so that alone is one reason for favoring the use of a cache. The processing of linked roles is also complex, and therefore worth running as infrequently as possible.

Rather than working through the code in detail, it is more useful to describe what it is doing. The concept is much simpler than the detail! If we take an example from the backwards compatibility features of Aliro, there is a role hierarchy that includes the role Publisher, which implies membership of the role Editor. The role Editor also implies membership of the role Author. In the general case, it is unreasonable to expect the administrator to figure out the implied relationships. In this case, it is clear that the role Publisher must also imply membership of the role Author. But these linked relationships can plainly become quite complex. The code in the constructor therefore assumes that only the least number of connections have been entered into the database, and it figures out all the implications.
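A minimal sketch of the outcome for that example, assuming each role also appears in at least one assignment or permission (which adds the self-referencing entries via the UNION):

// Stored links, the least the administrator needs to enter:
//   Publisher -> Editor, Editor -> Author
// After the closure loop, $this->linked_roles holds:
array(
    'Publisher' => array('Publisher' => 1, 'Editor' => 1, 'Author' => 1),
    'Editor'    => array('Editor' => 1, 'Author' => 1),
    'Author'    => array('Author' => 1),
)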
The other operation where the code is less than transparent is the setting of the user_roles property. The Aliro RBAC system permits the use of wild cards for the specification of identities within accessor or subject types. An asterisk indicates any identity. For accessors whose accessor type is user, another wild card available is zero, meaning any user who is logged in and is not an unregistered visitor. Given the relatively small number of role assignments of this kind, it saves a good deal of processing if all of them are cached. Hence the user_roles processing is done in the constructor.

The other methods in the cache class are simple enough to be mentioned rather than given in detail. They include the actual implementations of getAllRoles, with its option to include the special roles; getTranslatedRole, which provides a local translation for a role if it turns out to be one of the special ones; and barredRole, which tests whether the passed role is in the special group, and may therefore not be assigned to an accessor.
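As a sketch only, barredRole could be as simple as the following; the property holding the special role names is invented here, not taken from the Aliro source:

public function barredRole ($role) {
    // The special roles (Visitor, Registered, nobody and the like)
    // may never be assigned to an accessor
    return in_array($role, $this->special_roles);
}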
PHP Web 2.0 Mashup Projects: Your Own Video Jukebox: Part 2
Packt, 19 Feb 2010
Parsing With PEAR

If we were to start mashing up right now, between XSPF, YouTube's XML response, and RSS, we would have to create three different parsers to handle all three response formats. We would have to comb through the documentation and create flexible parsers for all three formats. If the XML response for any of these formats changes, we would also be responsible for changing our parser code. This isn't a difficult task, but we should be aware that someone else has already done the work for us. Someone else has already dissected the XML code. To save time, we can leverage this work for our mashup.

We used PEAR earlier, in Chapter 1, to help with XML-RPC parsing. For this project, we will once again use PEAR to save us the trouble of writing parsers for the three XML formats we will encounter. We will take a look at three packages for our mashup. File_XSPF is a package for extracting and setting up XSPF playlists. Services_YouTube is a web services package that was created specifically for handling the YouTube API for us. Finally, XML_RSS is a package for working with RSS feeds. For this project, it works out well that there are three specific packages that fit our XML and RSS formats. If you need to work with an XML format that does not have a specific PEAR package, you can use the XML_Unserializer package, which takes arbitrary XML and turns it into PHP data structures.

Is PEAR Right For You?
Before we start installing PEAR packages, we should consider whether it is even feasible to use them for a project. PEAR packages are installed with a command line package manager that is included with every core installation of PHP. In order for you to install PEAR packages, you need to have administrative access to the server. If you are in a shared hosting environment and your hosting company is stingy, or if you are in a strict corporate environment where getting a server change is more hassle than it is worth, PEAR installation may not be allowed. You could get around this by downloading the PEAR files and installing them in your web documents directory. However, you will then have to manage package dependencies and package updates by yourself. This hassle may be more trouble than it's worth, and you may be better off writing your own code to handle the functionality. On the other hand, PEAR packages are often a great time saver. The purpose of the packages is to either simplify tedious tasks, or interface with complex systems. The PEAR developer has done the difficult work for you already. Moreover, as they are written in PHP and not C, like a PHP extension would be, a competent PHP developer should be able to read the code for documentation if it is lacking. Finally, one key benefit of many packages, including the ones we will be looking at, is that they are object-oriented representations of whatever they are interfacing. Values can be extracted by simply calling an object's properties, and complex connections can be ignited by a simple function call. This helps keep our code cleaner and modular. Whether the benefits of PEAR outweigh the potential obstacles depends on your specific situation.

Package Installation and Usage

Just like when we installed the XML-RPC package, we will use the install binary to install our three packages. If you recall, to install a package you simply type install on the command line, followed by the name of the package. In this case, though, we need to set a few more flags to force the installer to grab dependencies and code in beta status.
To install File_XSPF, switch to the root user of the machine and use this command:

[Blossom:~] shuchow# /usr/local/php5/bin/pear install -f --alldeps File_XSPF

This command will download the package. The --alldeps flag tells PEAR to also check for required dependencies and install them if necessary. The progress and outcome of the downloads will be reported. Use a similar command for Services_YouTube:

[Blossom:~] shuchow# /usr/local/php5/bin/pear install -f --alldeps Services_YouTube

Usually, you will not need the -f flag. By default, PEAR downloads the latest stable release of a package. The -f (force) flag forces PEAR to download the most current version, regardless of its release state. As of this writing, File_XSPF and Services_YouTube do not have stable releases, only beta and alpha respectively. Therefore, we must use -f to grab and install these packages. Otherwise, PEAR will complain that the latest version is not available. If the package you want to download is in release state, you will not need the -f flag. This is the case with XML_RSS, which has a stable version available:

[Blossom:~] shuchow# /usr/local/php5/bin/pear install --alldeps XML_RSS

After this, sending a list-all command to PEAR will show the three new packages along with the packages you had before. PEAR packages are basically self-contained PHP files that PEAR installs into your PHP includes directory. The includes directory is a directive in your php.ini file. Navigate to this directory to see the PEAR packages' source files. To use a PEAR package, you will need to include the package's source file at the top of your code. Consult the package's documentation on how to include the main package file. For example, File_XSPF is activated by including a file named XSPF.php. PEAR places XSPF.php in a directory named File, and that directory is inside your includes directory.

<?php
require_once 'File/XSPF.php';
//File_XSPF is now available.

File_XSPF

The documentation for the latest version of File_XSPF is located at http://pear.php.net/package/File_XSPF/docs/latest/File_XSPF/File_XSPF.html. The package is simple to use. The heart of the package is an object called XSPF. You instantiate and use this object to interact with a playlist. It has methods to retrieve and modify values from a playlist, as well as utility methods to load a playlist into memory, write a playlist from memory to a file, and convert an XSPF file to other formats.

Getting information from a playlist consists of two straightforward steps. First, the location of the XSPF file is passed to the XSPF object's parse method. This loads the file into memory. After the file is loaded, you can use the object's various getter methods to extract values from the list. Most of the XSPF getter methods are related to getting metadata about the playlist itself. To get information about the tracks in the playlist, use the getTracks method. This method will return an array of XSPF_Track objects. Each track in the playlist is represented as an XSPF_Track object in this array. You can then use the XSPF_Track object's methods to grab information about the individual tracks.

We can grab a playlist from Last.fm to illustrate how this works. The web service has a playlist of a member's most played songs. Named Top Tracks, the playlist is located at http://ws.audioscrobbler.com/1.0/user/USERNAME/toptracks.xspf, where USERNAME is the name of the Last.fm user that you want to query. This page is named XSPFPEARTest.php in the examples.
It uses File_XSPF to display my top tracks playlist from Last.fm.

<?php
require_once 'File/XSPF.php';

$xspfObj =& new File_XSPF();
//Load the playlist into the XSPF object.
$xspfObj->parseFile('http://ws.audioscrobbler.com/1.0/user/ShuTheMoody/toptracks.xspf');
//Get all tracks in the playlist.
$tracks = $xspfObj->getTracks();
?>

This first section creates the XSPF object and loads the playlist. First, we bring the File_XSPF package into the script. Then, we instantiate the object. The parseFile method is used to load an XSPF file across a network. This ties the playlist to the XSPF object. We then use the getTracks method to transform the songs on the playlist into XSPF_Track objects.

<html>
<head>
  <title>Shu Chow's Last.fm Top Tracks</title>
</head>
<body>
  Title: <?= $xspfObj->getTitle() ?><br />
  Created By: <?= $xspfObj->getCreator() ?>

Next, we prepare to display the playlist. Before we do that, we extract some information about the playlist. The XSPF object's getTitle method returns the XSPF file's title element. getCreator returns the creator element of the file.

<?php foreach ($tracks as $track) { ?>
  <p>
    Title: <?= $track->getTitle() ?><br />
    Artist: <?= $track->getCreator() ?><br />
  </p>
<?php } ?>
</body>
</html>

Finally, we loop through the tracks array. We assign the array's elements, which are XSPF_Track objects, to the $track variable. XSPF_Track also has getTitle and getCreator methods. Unlike XSPF's methods of the same names, getTitle returns the title of the track, and getCreator returns the track's artist. Running this file in your web browser will return a list populated with data from Last.fm.

Services_YouTube

Services_YouTube works in a manner very similar to File_XSPF. Like File_XSPF, it is an object-oriented abstraction layer on top of a more complicated system. In this case, the system is the YouTube API. Using Services_YouTube is a lot like using File_XSPF. Include the package in your code, instantiate a Services_YouTube object, and use this object's methods to interact with the service. The official documentation for the latest release of Services_YouTube is located at http://pear.php.net/package/Services_YouTube/docs/latest/. The package also contains online working examples at http://pear.php.net/manual/en/package.webservices.services-youtube.php.

Many of the methods deal with getting members' information, like their profiles and the videos they've uploaded. A smaller, but very important, subset is used to query YouTube for videos. We will use this subset in our mashup. To get a list of videos that have been tagged with a specific tag, use the object's listByTag method. listByTag will query the YouTube service and store the XML response in memory. It does not return an array of video objects we can directly manage, but with one additional function call we can achieve this. From there, we can loop through an array of videos similar to what we did for XSPF tracks. The example file YouTubePearTest.php illustrates this process.

<?php
require_once 'Services/YouTube.php';

$dev_id = 'Your YouTube Developer ID';
$tag = 'Social Distortion';

$youtube = new Services_YouTube($dev_id, array('usesCache' => true));
$videos = $youtube->listByTag($tag);
?>

First, we load the Services_YouTube file into our script. As YouTube's web service requires a Developer ID, we store that information in a local variable. After that, we place the tag we want to search for in another local variable named $tag.
In this example, we are going to check out which videos YouTube has for one of the greatest bands of all time, Social Distortion. Services_YouTube's constructor takes this Developer ID and uses it whenever it queries the YouTube web service. The constructor can take an array of options as a parameter. One of the options is to use a local cache of the queries. It is considered good practice to use a cache, so as not to slam the YouTube server and run up your request quota. Another option is to specify either REST or XML-RPC as the protocol via the driver key in the options array. By default, Services_YouTube uses REST. Unless you have a burning requirement to use XML-RPC, you can leave it as is. Once instantiated, you can call listByTag to get the response from YouTube. listByTag takes only one parameter: the tag of our desire. Services_YouTube now has the results from YouTube. We can begin the display of the results.

<html>
<head>
  <title>Social Distortion Videos</title>
</head>
<body>
  <h1>YouTube Query Results for Social Distortion</h1>

Next, we will loop through the videos. In order to get an array of video objects, we first need to parse the XML response. We do that using Services_YouTube's xpath method, which uses the powerful XPath query language to go through the XML and convert it into PHP objects. We pass the XPath query into the method, which will give us an array of useful objects. We will take a closer look at XPath and XPath queries later in another project. For now, trust that the query //video will return an array of video objects that we can examine. Within the loop, we display each video's title, a thumbnail image of the video, and a hyperlink to the video itself.

<?php foreach ($videos->xpath('//video') as $i => $video) { ?>
  <p>
    Title: <?= $video->title ?><br />
    <img src='<?= $video->thumbnail_url ?>' alt='<?= $video->title ?>' /><br />
    <a href='<?= $video->url ?>'>URL</a>
  </p>
<?php } ?>
</body>
</html>

Running this query in our web browser will give us a results page of videos that match the search term we submitted.

XML_RSS

Like the other PEAR extensions, XML_RSS changes something very complex, RSS, into something very simple and easy to use: PHP objects. The complete documentation for this package is at http://pear.php.net/package/XML_RSS/docs/XML_RSS. There is a small difference in basic philosophy between XML_RSS and the other two packages. Services_YouTube and File_XSPF take information from whatever we're interested in, and place it into PHP object properties. For example, File_XSPF loads track names into a Track object, and you use a getTitle() getter method to get the title of the track. In Services_YouTube it's the same principle, but the properties are public, and so there are no getter methods. You access the video's properties directly in the video object. In XML_RSS, the values we're interested in are stored in associative arrays. The available methods in this package get the arrays, then you manipulate them directly. It's a small difference, but you should be aware of it in case you want to look at the code. It also means that you will have to check the documentation of the package to see which array keys are available to you.

Let's take a look at how this works in an example. The file is named RSSPEARTest.php in the example code. One of Audioscrobbler's feeds gives us an RSS file of songs that a user recently played. The feed isn't always populated, because after a few hours, songs that were played aren't considered recent.
In other words, songs will eventually drop off the feed simply because they are too old. Therefore, it's best to use this feed for a heavy user of Last.fm. RJ is a good example to use. He seems to always be listening to something. We'll grab his feed from Audioscrobbler:

<?php
include ("XML/RSS.php");

$rss =& new XML_RSS("http://ws.audioscrobbler.com/1.0/user/RJ/recenttracks.rss");
$rss->parse();

We start off by including the module and creating an XML_RSS object. XML_RSS is where all of the array get methods reside, and is the heart of this package. Its constructor takes one variable: the path to the RSS file. At instantiation, the package loads the RSS file into memory. parse() is the method that actually does the RSS parsing. After this, the get methods will return data about the feed. Needless to say, parse() must be called before you do anything constructive with the file.

$channelInfo = $rss->getChannelInfo();
?>

The package's getChannelInfo() method returns an array that holds information about the metadata, the channel, of the file. This array holds the title, description, and link elements of the RSS file. Each of these elements is stored in the array with the same key name as the element.

<?= '<?xml version="1.0" encoding="UTF-8" ?>' ?>

The data that comes back will be UTF-8 encoded. Therefore, we need to force the page into UTF-8 encoding mode. This line outputs the XML declaration at the top of the web page in order to ensure proper rendering. Putting a regular <?xml declaration in the page will trigger the PHP engine to parse the declaration. However, PHP will not recognize the code and will halt the page with an error.

<html>
<head>
  <title><?= $channelInfo['title'] ?></title>
</head>
<body>
  <h1><?= $channelInfo['description'] ?></h1>

Here we begin the actual output of the page. We start by using the array returned from getChannelInfo() to output the title and description elements of the feed.

<ol>
<?php foreach ($rss->getItems() as $item) { ?>
  <li>
    <?= $item['title'] ?>: <a href="<?= $item['link'] ?>"><?= $item['link'] ?></a>
  </li>
<?php } ?>
</ol>

Next, we start outputting the items in the RSS file. We use getItems() to grab information about the items in the RSS. The return is an array that we loop through with a foreach statement. Here, we are extracting each item's title and link elements. We show the title, and then create a hyperlink to the song's page on Last.fm. The description and pubDate elements in the RSS are also available to us in getItems's returned array.

Link to User: <a href="<?= $channelInfo['link'] ?>"><?= $channelInfo['link'] ?></a>
</body>
</html>

Finally, we use the channel's link property to create a hyperlink to the user's Last.fm page before we close off the page's body and html tags.

Using More Elements
In this example, the available elements in the channel and item arrays are a bit limited. getChannelInfo() returns an array that only has the title, description, and link properties. The array from getItems() only has title, description, link, and pubDate properties. This is because we are using the latest release version of XML_RSS. At the time of writing this book, it is version 0.9.2. The later versions of XML_RSS, currently in beta, handle many more elements. Elements in RSS 2.0, like category and author, are available. To upgrade to a beta version of XML_RSS, use the command pear upgrade -f XML_RSS on the command line. The -f flag is the same flag we used to force the beta and alpha installations of Services_YouTube and File_XSPF.
Alternatively, you can install the beta version of XML_RSS at the outset using the same -f flag. If we run this page in our web browser, we can see the successful results of our work.

At this point, we know how to use the Audioscrobbler feeds to get information. The majority of the feeds are in either XSPF or RSS format. We know generally how the YouTube API works. Most importantly, we know how to use the respective PEAR packages to extract information from each web service. It's time to start coding our application.

Mashing Up

If you haven't already, you should, at the very least, create a YouTube account and sign up for a developer key. You should also create a Last.fm account, install the client software, and start listening to some music on your computer. This will personalize the video jukebox to your music tastes. All examples here will assume that you are using your own YouTube key. I will use my own Last.fm account for the examples. As the feeds are open and free, you can use the same feeds if you choose not to create a Last.fm account.

Mashup Architecture

There are obviously many ways in which we can set up our application. However, we're going to keep functionality fairly simple. The interface will be a framed web page. The top pane is the navigation pane. It will be for the song selection. The bottom section is the content pane, and will display and play the video. In the navigation pane, we will create a select menu with all of our songs. The value and label for each option will be the artist name, followed by a dash, followed by the name of the song (for example, "April Smith - Bright White Jackets"). Providing both pieces of information will help YouTube narrow down the selection. When the user selects a song and pushes a "Go" button, the application will load the content page into the content pane. This form will pass the artist and song information to the content page via a GET parameter. The content page will use this GET parameter to query YouTube. The page will pull up the first, most relevant result from its list of videos and display it.

Main Page

The main page is named jukebox.html in the example code. This is our frameset page. It will be quite simple. All it will do is define the frameset that we will use.

<html>
<head>
<title>My Video Jukebox</title>
</head>
<frameset rows="10%,90%">
  <frame src="navigation.php" name="Navigation" />
  <frame src="" name="Content" />
</frameset>
</html>

This code defines our page. It has two frame rows. The navigation section, named Navigation, is 10% of the height, and the content section, named Content, is the remaining 90%. When first loaded, the mashup will load the list of songs in the navigation page and nothing else.
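To make the navigation pane concrete, here is a hedged sketch of the form navigation.php might render; the content page name (video.php) and the song data are placeholders, not taken from the original code:

<form action="video.php" method="get" target="Content">
  <select name="song">
    <option value="April Smith - Bright White Jackets">April Smith - Bright White Jackets</option>
    <!-- one option per track pulled from the Audioscrobbler feed -->
  </select>
  <input type="submit" value="Go" />
</form>

Because the form's target is the Content frame, submitting it loads the content page into the bottom pane, with the selected song available there in $_GET['song'].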
Creating New Types of Plone Portlets
Packt, 15 Oct 2009
(For more resources on Plone, see here.)

Plone makes it easy to create new types of portlets that include custom programming logic for your site. There are several ways to create custom portlets, but the simplest way to get started is to use the add-on product collective.portlet.tal, which provides a new type of portlet, called a TAL Portlet. This portlet allows you to write simple bits of code using Zope's TAL templating language. Let's walk through a quick example of building a custom TAL portlet, which will show a randomly-selected news item from your site.

Installing collective.portlet.tal

Before you can add a TAL portlet, you must download the product from Plone.org/products and install the add-on product collective.portlet.tal on your site. The best way to do this is to modify your buildout.cfg file. Add collective.portlet.tal to the eggs and zcml sections of your buildout. Here's a code snippet with the changes made to it:

[buildout]
...
eggs =
    ...
    collective.portlet.tal

[instance]
recipe = plone.recipe.zope2instance
...
zcml =
    collective.portlet.tal

Once you've made these changes, re-run buildout by issuing the following command:

$ ./bin/buildout

Once you've added the product to your buildout, visit Site Setup and choose Add/Remove Products to install collective.portlet.tal in your site. Finally, add a few news items to your site so that we have something for our new TAL portlet to find.

Adding a simple TAL portlet

With the collective.portlet.tal product in place, the following can happen: Navigate to your Plone site. Choose Manage Portlets in the right column. From the Add portlet... drop-down list, choose TAL Portlet. You'll see an empty text box in which you can enter a title. We will specify Featured News Item as our title. We'll soon see the code needed to feature a random one of our site's published news items.

In addition to the Title text box, you'll also see an HTML text area titled TAL code. Conveniently, this comes pre-populated with some boilerplate HTML and TAL code. Skim this, so that you get a feel for how this looks and what the common HTML structure is like for a portlet in Plone. As an immediate experiment, we will find the following snippet of code:

<dd class="portletItem odd">
  Body text
</dd>

We will modify this, slightly, to:

<dd class="portletItem odd">
  Is this thing on?
</dd>

Click on Save and navigate through the site, and you should see your first TAL portlet in action. Of course, there's nothing in this example that couldn't be accomplished with a static text portlet. So let's navigate back to the Featured News Item portlet and make it a bit more interesting and dynamic.
Update the code in your TAL Portlet to include the following:

<dl class="portlet portlet${portlet_type_name}"
    tal:define="newsitems python:context.portal_catalog(portal_type='News Item', review_state='published');"
    tal:condition="newsitems">
  <dt class="portletHeader">
    <span class="portletTopLeft"></span>
    <span>
      Featured News Item
    </span>
    <span class="portletTopRight"></span>
  </dt>
  <dd class="portletItem odd"
      tal:define="random_newsitem python:random.choice(newsitems)">
    <a tal:content="random_newsitem/Title"
       href="[replaced by random news item link]"
       title="[replaced by random news item title]"
       tal:attributes="href random_newsitem/getURL; title random_newsitem/Title">[replaced by random news item title]</a>
  </dd>
  <dd class="portletFooter">
    <span class="portletBottomLeft"></span>
    <span>
      <a href="http://example.com/news">More news...</a>
    </span>
    <span class="portletBottomRight"></span>
  </dd>
</dl>

Now, let's go into more detail on a few of these sections, so that you understand what's happening. If at any point you need more context, try reading the excellent ZPT reference manual at http://plone.org/documentation/tutorial/zpt.
Integrating Websphere eXtreme Scale Data Grid with Relational Database: Part 1
Packt, 18 Nov 2009
As stated above, there are three compelling reasons to integrate with a database backend. First, reporting tools do not have good data grid integration. CrystalReports and other reporting tools don't work with data grids right now. Loading data from a data grid into a data warehouse with existing tools isn't possible either. The second reason we want to use a database with a data grid is when we have an extremely large data set. A data grid stores data in memory. Though much cheaper than in the past, system memory is still much more expensive than a typical magnetic hard disk. When dealing with extremely large data sets, we want to structure our data so that the most frequently used data is in the cache and less frequently used data is on the disk. The third compelling reason to use a database with a data grid is that our application may need to work with legacy applications that have been using relational databases for years. Our application may need to provide more data to them, or operate on data already in the legacy database in order to stay ahead of a processing load.

In this article, we will explore some of the good and not-so-good uses of an in-memory data grid. We'll also look at integrating Websphere eXtreme Scale with relational databases.

You're going where?

Somewhere along the way, we all learned that software consists of algorithms and data. CPUs load instructions from our compiled algorithms, and those instructions operate on bits representing our data. The closer our data lives to the CPU, the faster our algorithms can use it. On the x86 CPU, the registers are the closest we can store data to the instructions executed by the CPU. CPU registers are also the smallest and most expensive data storage location. The amount of data storable in registers is fixed, because the number and size of CPU registers is fixed. Typically, we don't directly interact with registers, because their correct usage is important to our application's performance. We let the compiler writers handle translating our algorithms into machine code. The machine code knows better than we do, and will use register storage far more effectively than we will most of the time.

Less expensive, and about an order of magnitude slower, we have the Level 1 cache on a CPU (see below). The Level 1 cache holds significantly more data than the combined storage capacity of the CPU registers. Reading data from the Level 1 cache, and copying it to a register, is still very fast. The Level 1 cache on my laptop has two 32K instruction caches, and two 32K data caches. Still less expensive, and another order of magnitude slower, is the Level 2 cache. The Level 2 cache is typically much larger than the Level 1 cache. I have 4MB of Level 2 cache on my laptop. We still won't fit the contents of the Library of Congress into that 4MB, but 4MB isn't a bad amount of data to keep near the CPU.

Up another level, we come to the main system memory. Consumer level PCs come with 4GB RAM. A low-end server won't have any less than 8GB. At this point, we can safely store a large chunk of data, if not all of the data, used by an application. Once the application exits, its data is unloaded from the main memory, and all of the data is lost. In fact, once our data is evicted from any storage at or below this level, it is lost. Our data is ephemeral unless it is put onto some secondary storage. The unit of measurement for accessing data in a register, in the Level 1 or Level 2 cache, or in main memory is the nanosecond.
Getting to secondary storage, we jump up an SI prefix to the microsecond. Accessing data in the secondary storage cache is on the order of microseconds. If the data is not in cache, the access time is on the order of milliseconds. Accessing data on a hard drive platter is one million times slower than accessing that same data in main memory, and one billion times slower than accessing that data in a register. However, secondary storage is very cheap, and holds millions of times more than primary storage. Data stored in secondary storage is durable. It doesn't disappear when the computer is reset after a crash. Our operations teams comfortably build secondary storage silos to store petabytes of data. We typically build our applications so the application server interacts with some relational database management system that sits in front of that storage silo. The network hop to communicate with the RDBMS is on the order of microseconds on a fast network, and milliseconds otherwise.

Sharing data between applications has been done with the disk + network + database approach for a long time. It's become the traditional way to build applications: a load balancer in front, and application servers or batch processes constantly communicating with a database to store data for the next process that needs it. As we see with computer architecture, we insert data where it fits. We squeeze it as close to the CPU as possible for better performance. If a data segment doesn't fit in one level, we keep squeezing what fits into each higher storage level. That leaves us with a lot of unused memory and disk space in an application deployment. Storing data in memory is preferable to storing it on a hard drive, but memory segmentation in a deployment has made it difficult to store useful amounts of data at a few milliseconds' distance. We just use a massive, but slow, database instead.

Where does an IMDG fit?

We've used ObjectGrid to store all of our data so far. This diagram should look pretty familiar by now: Because we're only using the ObjectGrid APIs, our data is stored in memory. It is not persisted to disk. If our ObjectGrid servers crash, then our data is in jeopardy (we haven't covered replication yet). One way to get our data into a persistent store is to mark up our classes with some ORM framework like JPA. We could use the JPA API to persist, update, and remove our objects from a database after we perform the same operations on them using the ObjectMap or Entity APIs. The onus would be on the application developer to keep both cache and database in sync. If you had to take this approach, then a lot of the effort would be wasted. Fortunately, Websphere eXtreme Scale provides functionality to integrate with an ORM framework, or any data store, through Loaders. A Loader is a BackingMap plugin that tells ObjectGrid how to transform an object into the desired output form. Typically, we'll use a Loader with an ORM specification like JPA. Websphere eXtreme Scale comes with a few different Loaders out of the box, but we can always write our own. A Loader works in the background, transforming operations on objects into some output, whether it's file output or SQL queries. A Loader plugs into a BackingMap in an ObjectGrid server instance, or in a local ObjectGrid instance. A Loader does not plug into a client-side BackingMap, though we can override Loader settings on a client-side BackingMap. While the Loader runs in the background, we interact with an ObjectGrid instance.
We use the ObjectMap API for objects with zero or simple relationships, and the Entity API for objects with more complex relationships. The Loader handles all of the details of transforming an object into something that can integrate with external data stores.

Why is storing our data in a database so important? Haven't we seen how much faster Websphere eXtreme Scale is than an RDBMS? Shouldn't all of our data be stored in memory? An in-memory data grid is good for certain things. There are plenty of things that a traditional RDBMS is good at that any IMDG just doesn't support. An obvious issue is that memory is significantly more expensive than hard drives. 8GB of server grade memory costs thousands of dollars. 8GB of server grade disk space costs pennies. Even though the disk is slower than memory, we can store a lot more data on it. An IMDG shines where a sizeable portion of frequently-changing data can be cached so that all clients see the same data. The IMDG provides orders of magnitude better latency and read and write speeds than any RDBMS. But we need to be aware that, for large data sets, an entire data set may not fit in a typical IMDG. If we focus on the frequently-changing data that must be available to all clients, then using the IMDG makes sense.

Imagine a deployment with 10 servers, each with 64GB of memory. Let's say that of the 64GB, we can use 50GB for ObjectGrid. For a 1TB data set, we can store 50% of it in cache. That's great! As the data set grows to 5TB, we can fit 10% in cache. That's not as good as 50%, but if it is the 10% of the data that is accessed most frequently, then we come out ahead. If that 10% of data has a lot of writes to it, then we come out ahead. Websphere eXtreme Scale gives us predictable, dynamic, and linear scalability. When our data set grows to 100TB, and the IMDG holds only 0.5% of the total data set, we can add more nodes to the IMDG and increase the total percentage of cacheable data (see below).

It's important to note that this predictable scalability is immensely valuable. Predictable scalability makes capacity planning easier. It makes hardware procurement easier, because you know what you need. Linear scalability provides a graceful way to grow a deployment as usage and data grow. You can rest easy knowing the limits of your application when it's using an IMDG. The IMDG also acts as a shock absorber in front of a database. We're going to explore some of the reasons why an IMDG makes a good shock absorber with the Loader functionality. There are plenty of other situations, some that we have already covered, where an IMDG is the correct tool for the job. There are also plenty of situations where an IMDG just doesn't fit. A traditional RDBMS has thousands of man-years of research, implementation tuning, and bug fixing already put into it. An RDBMS is well-understood and is easy to use in application development. There are standard APIs for interacting with one in almost any language.
Running ad hoc queries in the first place is more difficult. Even building an ad hoc query runner that interacts with an IMDG is of limited usefulness. An RDBMS is a wonderful cross-platform data store. Websphere eXtreme Scale is written in Java and only deals with Java objects. A simple way for an organization to share data between applications is in a plaintext database. We have standard APIs for database access in nearly every programming language. As long as we use the supported database driver and API, we will get the results as we expect, including ORM frameworks from other platforms like .NET and Rails. We could go on and on about why an RDBMS needs to be in place, but I think the point is clear. It's something we still need to make our software as useful as possible.
Inkscape FAQs
Packt, 31 Jan 2011
Have you got questions on Inkscape you want answered? You've come to the right place. Whether you're new to the web design software or there are a couple of issues puzzling you, we've put together this FAQ to answer some of the most common Inkscape queries.

What is Inkscape?

Inkscape is a free, open source program that creates vector-based graphics that can be used in web, print, and screen design, as well as interface and logo creation, and material cutting. Its capabilities are similar to those of commercial products such as Adobe Illustrator, Macromedia Freehand, and CorelDraw, and it can be used for any number of practical purposes. It is software for web designers who want to add attractive visual elements to their websites.

What license is Inkscape released under?

Inkscape is a free, open source program developed by a group of volunteers under the GNU General Public License (GPL). You not only get a free download, but can use the program to create items with it and freely distribute them, modify the program itself, and share that modified program with others.

What platforms does Inkscape run on?

Inkscape is available for download for the Windows, Macintosh, Linux, or Solaris operating systems.

Where can you download Inkscape from?

Go to the official Inkscape website and download the appropriate version of the software for your computer.

How do you run Inkscape on the Mac OS X operating system?

On the Mac OS X operating system, Inkscape typically runs under X11, an implementation of the X Window System software that makes it possible to run X11-based applications in Mac OS X. The X11 application has shipped with Mac OS X since version 10.5. When you open Inkscape on a Mac, it will first open X11 and run Inkscape within that program. Loss of some shortcut key options will occur, but all functionality is present using menus and toolbars.

Is the X11 application a part of the Mac OS X operating system?

Yes, if you have Mac OS X version 10.5 or above. If you have a previous version of the Mac OS X operating system, you can download the X11 application package 2.4.0 or greater from this website: http://xquartz.macosforge.org/trac/wiki/X112.4.0.

What is the interface of Inkscape like?

The Inkscape interface is based on the GNOME UI standard, which uses visual cues and feedback for any icons. For example: Hovering your mouse over any icon displays a pop-up description of the icon. If an icon has a dark gray border, it is active and can be used. If an icon is grayed out, it is not currently available to use with the current selection. All icons that are in execution mode (or busy) are covered by a dark shadow. This signifies that the application is busy and won't respond to any edit request. There is a Notification Display on the main screen that displays dynamic help messages with key shortcuts and basic information on how to use the Inkscape software in its current state, or based on what objects and tools are selected. Within the main screen there is the main menu, a command, snap, and status bar, tool controls, and a palette bar.

What are Paths?

Paths have no pre-defined lengths or widths. They are arbitrary in nature and come in three basic types: open paths (which have two ends), closed paths (which have no ends, like a circle), and compound paths (which combine two or more open and/or closed paths). In Inkscape there are a few ways we can make paths: the Pencil (Freehand), Bezier (Pen), and Calligraphy tools, all of which are found in the tool box.
Paths can also be created by converting a regular shape or text object into paths.

What shapes can be created in Inkscape?

Inkscape can also create shapes that are part of the SVG standard. These are: rectangles and squares, 3D boxes, circles, ellipses, and arcs, stars, polygons, and spirals. To create any of these shapes, see the following screenshot. Select (click) the shape tool icon in the tool box, and then draw the shape on the canvas by clicking, holding, and dragging until the shape is the size you want.

What is slicing?

Slicing is a term used to describe breaking an image created in a graphics program into pieces, so that it can be re-assembled in HTML to create a web page. To do this, we'll use the Web Slicer extension: from the main menu select Extensions | Web | Slicer | Create a slicer rectangle.
Making a Better Form using JavaScript
Packt, 29 Jul 2010
(For more resources on Joomla!, see here.)

But enough chat for now, work is awaiting us!

Send the form using jQuery AJAX

This is not going to be as hard as it may first seem, thanks to the powerful jQuery features. What steps do we need to take to achieve AJAX form sending? First, open our default_tmpl.php file. Here we are going to add an ID to our button, and change it a bit, from this:

<input type="submit" name="send" value="Send" class="sc_button"/>

to this:

<input type="button" name="send" value="Send" class="sc_button" id="send_button"/>

Apart from adding the ID, we change its type from submit to button. And with this our form is prepared. We need a new file, a js one this time, to keep things organized. So we are going to create a js folder, and place a littlecontact.js file in it, and we will have the following path:

modules/mod_littlecontact/js/littlecontact.js

As always, we will also include this file in the mod_littlecontact.xml file, like this:

<filename>js/littlecontact.js</filename>

Before adding our code to the littlecontact.js file, we are going to add it to the header section of our site. We will do this in the mod_littlecontact.php file, as follows:

require_once(dirname(__FILE__).DS.'helper.php');
$document =& JFactory::getDocument();
$document->addScript(JURI::root(true).'modules'.DS.'mod_littlecontact'.DS.'js'.DS.'littlecontact.js');
JHTML::stylesheet('styles.css','modules/mod_littlecontact/css/');

I've highlighted the changes we need to make; first we get an instance of the global document object. Then we use the addScript method to add our script file to the header section. We use JURI::root(true) to create a correct path. So now in our header, if we check the source code, we will see:

<script type="text/javascript" src="/modules/mod_littlecontact/js/littlecontact.js"></script>

If instead of using JURI::root(true) we had used JURI::root(), our source code would look like the following:

<script type="text/javascript" src="http://wayofthewebninja.com/modules/mod_littlecontact/js/littlecontact.js"></script>

You can find more information about the JURI::root method at: http://docs.joomla.org/JURI/root

We are now ready to start working on our littlecontact.js file:

jQuery(document).ready(function($){
    $('#send_button').click(function() {
        $.post("index.php", $("#sc_form").serialize());
    });
});

It is a little piece of code; let's take a look at it. First we use the ready function, so all of our code is executed when the DOM is ready:

jQuery(document).ready(function($){

Then we add the click method to the #send_button button. This method will have a function inside with some more code. This time we are using the post method:

$.post("index.php", $("#sc_form").serialize());

The post method will send a request to a page, defined in the first parameter, using the HTTP POST request method. In the second parameter we can find the data we are sending to the page. We could pass an array with some data, but instead we are using the serialize method on our form, with ID sc_form. The serialize method will read our form and prepare a string for sending the data. And that's all; our form will be sent without our visitors even noticing. Go ahead and try it! Also, you could take a look at the following two pages:

http://api.jquery.com/jQuery.post/
http://api.jquery.com/serialize/

Here you can find some good information about these two functions. After you have taken a look at these pages, come back here, and we will continue.
Well, sending the form without page reloading is OK; it will save our visitors some time. But we need our visitors to notice that something is happening and, most importantly, that the message has been sent. We will now work on these two things. First of all we are going to place a message, so our readers will know that the form is being sent. This is going to be quite easy too. First we are going to add some markup to our default_tmpl.php, as follows:

<?php defined('_JEXEC') or die('Direct Access to this location is not allowed.'); ?>
<div id="littlecontact">
  . . .
  <div id="sending_message" class="hidden_div">
    <br/><br/><br/>
    <h1>Your message is being sent, <br/>wait a bit.</h1>
  </div>
  <div id="message_sent" class="hidden_div">
    <br/><br/><br/>
    <h1>Your message has been sent. <br/>Thanks for contacting us.</h1>
    <br/><br/><br/>
    <a href="index.php" class="message_link" id="message_back">Back to the form</a>
  </div>
</div>

We have added two DIVs here: sending_message and message_sent. These two will help us show some messages to our visitors. With the messages prepared, we need some CSS styles, and we will define these in our module's styles.css file:

#littlecontact {
    position: relative;
}
#sending_message, #message_sent {
    height: 235px;
    width: 284px;
    position: absolute;
    z-index: 100;
    background-color: #5B5751;
    top: 0;
    text-align: center;
}
.hidden_div {
    visibility: hidden;
    display: none;
}
.show_div {
    visibility: visible;
    display: block;
}
a.message_link:link, a.message_link:visited {
    color: #ffffff;
    text-decoration: none;
}
a.message_link:hover {
    text-decoration: underline;
}

Don't worry about writing all this code; you can find it in the code bundle, so copy it from there. Going back to the code, these are just simple CSS styles, and some of the most important ones are the hidden_div and show_div classes. These will be used to show or hide the messages. Ready to go to the JavaScript code? We will now return to our littlecontact.js file and modify it a bit:

jQuery(document).ready(function($){
    $('#send_button').click(function() {
        $.post("index.php", $("#sc_form").serialize(), show_ok);
        $("#sending_message").removeClass("hidden_div");
    });

    $("#message_back").click(function(e){
        e.preventDefault();
        $("#message_sent").addClass("hidden_div");
        $("#sending_message").addClass("hidden_div");
    });

    function show_ok(){
        $("#sending_message").addClass("hidden_div");
        $("#message_sent").removeClass("hidden_div");
        $("input:text").val('');
        $("textarea").val('');
    }
});

Seems a lot? Don't worry, we will take a step-by-step look at it. If we look at our previously added click function, we can see a new line, as follows:

$("#sending_message").removeClass("hidden_div");

This will search for our sending_message DIV and remove the hidden_div class. This way the DIV will be visible, and we will see a screen similar to the following screenshot: A nice message tells our visitors that the e-mail is being sent at that very moment. But we don't do only that. If we take a closer look at our previous post method, we will see a change, as follows:

$.post("index.php", $("#sc_form").serialize(), show_ok);

A new third parameter! This is a callback function, which will be executed when the request succeeds and our e-mail has been sent. But what is inside this show_ok function?
Its contents are as follows:

function show_ok(){
  $("#sending_message").addClass("hidden_div");
  $("#message_sent").removeClass("hidden_div");
  $("input:text").val('');
  $("textarea").val('');
}

First we add the hidden_div class to sending_message, so this sending message is not seen any more. Instead, we remove the hidden_div class from our message_sent DIV, so our visitors will see this new message:

But we are also emptying our text inputs and textarea fields:

$("input:text").val('');
$("textarea").val('');

So when visitors return to the form they are presented with a fresh one, just in case they have forgotten something and want to send a new e-mail. Hey, who knows!

Our last step is to enable a back link, so that readers can return to the form:

$("#message_back").click(function(e){
  e.preventDefault();
  $("#message_sent").addClass("hidden_div");
  $("#sending_message").addClass("hidden_div");
});

First we target the link using its ID, and then we bind a click function to it. The next step is to prevent the default event for the link. This is why the link won't behave as a link, and won't try to load a page. Instead of loading or reloading a page, we continue with our code, hiding both DIVs, so the form is visible again.

That's it! It has not been that hard, has it? Now would be a great moment to take a look at the code bundle, see the code, read it, and try it by yourself. Or alternatively, keep reading a bit more if you want!

Tips and tricks

Take a look at the site http://www.ajaxload.info/. There you will be able to generate some loader GIF images. These will act like the typical hourglass mouse cursor, telling the users that something is happening. Maybe you would like to use one of those instead of only using text. Give it a try!

Validating form fields using jQuery—why validate?

Ah, validating forms, so entertaining. It's just the kind of task everyone always wants to do. Well, maybe a bit less than others. But it's something that needs to be done. Why? Just to ensure that we are receiving the proper data, or even that we are receiving data at all.

Ideally we would use JavaScript validation on the client side, and PHP validation on the server side. Server-side validation is essential, so a user turning off JavaScript still gets his/her contents validated. JavaScript validation will save us the effort of having to send all the data to the server, and then come back with the errors.

We are going to use a bit of JavaScript to try to validate our form. This process is going to be quite simple too, as our form is very small. We will be doing all of our work in our littlecontact.js file. Remember our $('#send_button').click function?
It looked like this:

$('#send_button').click(function() {
  $.post("index.php", $("#sc_form").serialize(), show_ok);
  $("#sending_message").removeClass("hidden_div");
});

Now, with some modifications, it will look more or less as follows:

$('#send_button').click(function() {
  // First we do some validation,
  // just to know that we have some data
  alerts = '';
  if($("input[name=your_name]").val() == ''){
    alerts += "We need your name\n";
  }
  if($("textarea[name=your_question]").val().length < 5){
    alerts += "We need a message of at least 5 characters length\n";
  }
  if(alerts != ''){
    alert(alerts);
  }else{
    $.post("index.php", $("#sc_form").serialize(), show_ok);
    $("#sending_message").removeClass("hidden_div");
  }
});

First, we define a new variable to put all the messages in:

alerts = '';

Then we check our form fields (first the text input):

if($("input[name=your_name]").val() == '')

As you can see, with jQuery we can select the input with a name equal to your_name and check whether its value is empty. The textarea check is very similar:

if($("textarea[name=your_question]").val().length < 5)

But here we are also checking that the length of the value is at least five characters. After each one of these validations, if the check fails, we add a message to the alerts variable. Later, we check whether that variable is empty. If it's not empty, it means that some of the checks have failed, and then we show the alerts to our visitors:

alert(alerts);

This will raise a typical alert message, much like the following screenshot:

Informative, but not really nice. But thinking about it, we already have the jQuery UI library available, thanks to our SC jQuery Joomla! plugin. Why not use that plugin to show a better message? Let's do it. First we need to make some changes in the default_tmpl.php file:

<div id="alerts" title="Errors found in the form" style="display: none;"></div>

We have added a new DIV, with an ID equal to alerts, and with an informative title. Now that our markup is ready, some changes are also necessary in our littlecontact.js JavaScript file. For example, we are going to change our alert messages from the following:

alerts += "- We need your name\n";
...
alerts += "- We need a message of at least 5 characters length\n";

To the following:

alerts += "- We need your name<br/>";
...
alerts += "- We need a message of at least 5 characters length<br/>";

Why are we doing this? It is because we will show HTML in our dialog, instead of just text. How are we going to show the dialog? Quite easily, by changing the following line:

alert(alerts);

To this:

$("#alerts").html(alerts).dialog();

What are we doing here? First, we select our newly created DIV, with ID alerts, and then we use the html method, passing the variable alerts as its parameter. This will fill our DIV with the content of the alerts variable. Chained to it we find the dialog method. This is a jQuery UI method that will create a dialog box, as we can see in the following screenshot:

Better than our previous alert message, isn't it? Also notice that this dialog matches the style of all our jQuery UI elements, like the login dialog and the tabs module. If we were to change the style in the SC jQuery Joomla! plugin, the style of the dialog would also change. If you want to know more about the jQuery UI dialog method, check the following page:

http://jqueryui.com/demos/dialog/

Summary

In this article we saw how to make a better form using JavaScript, send the form using jQuery AJAX, validate form fields using jQuery, and why it is important to validate.
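One closing note on the last point of the summary: as we said earlier, server-side validation is essential, and the JavaScript checks above do not replace it. The following is a minimal sketch, assuming Joomla! 1.5's JRequest API, of how the same two checks could look in PHP before the module sends the e-mail. The variable names mirror our form fields, but treat this as an illustration rather than part of the module's published code:

// Hypothetical server-side counterpart to our JavaScript checks
$your_name = JRequest::getVar('your_name', '', 'post', 'string');
$your_question = JRequest::getVar('your_question', '', 'post', 'string');

$errors = array();
if (trim($your_name) == '') {
    $errors[] = 'We need your name';
}
if (strlen(trim($your_question)) < 5) {
    $errors[] = 'We need a message of at least 5 characters length';
}
if (count($errors) == 0) {
    // All checks passed; it is now safe to build and send the e-mail
}

This way, even a visitor with JavaScript turned off cannot send us an empty form.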
Well that's it for now. This is just a small example; now don't you think it would be great to give it a try?

Further resources on this subject:
Removing Unnecessary jQuery Loads [article]
The Basics of Joomla! Module Creation and Creating a "Send us a question" Module [article]
Aperture in Action

Packt
06 Sep 2013
14 min read
Controlling clipped highlights

The problem of clipped highlights is a very common issue that a photographer will often have to deal with. Digital cameras only have limited dynamic range, so clipping becomes an issue, especially with high-contrast scenes. However, if you shoot RAW, then your camera will often record more highlight information than is visible in the image. You may already be familiar with recovering highlights by using the recovery slider in Aperture, but there are actually a couple of other ways that you can bring this information back into range. The three main methods of controlling lost highlights in Aperture are:

Using the recovery slider
Using curves
Using shadows and highlights

For many cases, using the recovery slider will be good enough, but the recovery slider has its limitations. Sometimes it still leaves your highlights looking too bright, or it doesn't give you the look you wish to achieve. The other two methods mentioned give you more control over the process of recovery.

If you use a Curves adjustment, you can control the way the highlight rolls off, and you can reduce the artificial look that clipped highlights can give your image, even if technically the highlight is still clipped. A Highlights & Shadows adjustment is also useful because it gives a different look compared to the one that you get when using the recovery slider. It works in a slightly different way, and includes more of the brighter tones of your image when making its calculations. The Highlights & Shadows adjustment has the added advantage of being able to be brushed in.

So, how do you know which one to use? Consider taking a three-stepped approach. If the first step doesn't work, move on to the second, and so on. Eventually, it will become second nature, and you'll know which way will be the best by just looking at the photograph.

Step 1

Use the recovery slider. Drag the slider up until any clipped areas of the image start to reappear. Only drag the slider until the clipped areas have been recovered, and then stop. You may find that if your highlights are completely clipped, you may need to drag the slider all the way to the right, as per the following screenshot:

For most clipped highlight issues, this will probably be enough. If you want to see what's going on, add a Curves adjustment and set the Range field to the Extended range. You don't have to make any adjustments at this point, but the histogram in the Curves adjustment will now show you how much image data is being clipped, and how much data you can actually recover.

Real world example

In the following screenshot, the highlights on the right-hand edge of the plant pot have been completely blown out:

If we zoom in, you will be able to see the problem in more detail. As you can see, all the image information has been lost from the intricate edge of this cast iron plant pot. Luckily this image had been shot in RAW, and the highlights are easily recovered. In this case, all that was necessary was the use of the recovery slider. It was dragged upward until it reached a value of around 1.1, and this brought most of the detail back into the visible range.

As you can see from the preceding image, the detail has been recovered nicely and there are no more clipped highlights. The following screenshot is the finished image after the use of the recovery slider:

Step 2

If the recovery slider brought the highlights back into range, but they are still too bright, then try the Highlights & Shadows adjustment.
This will allow you to bring the highlights down even further. If you find that it is affecting the rest of your image, you can use brushes to limit the highlight adjustment to just the area you want to recover.

You may find that with the Highlights & Shadows adjustment, if you drag the sliders too far the image will start to look flat and washed out. In this case, the mid-contrast slider can add some contrast back into the image. You should use the mid-contrast slider carefully though, as too much can create an unnatural image with too much contrast.

Step 3

If the previous steps haven't addressed the problem to your satisfaction, or if the highlight areas are still clipped, you can add a roll off to your Curves adjustment. The following is a quick refresher on what to do:

Add a Curves adjustment, if you haven't already added one.
From the pop-up range menu at the bottom of the Curves adjustment, set the range to Extended.
Drag the white point of the Curves slider till it encompasses all the image information.
Create a roll off on the right-hand side of the curve, so it looks something like the following screenshot:

If you're comfortable with curves, you can skip directly to step 3 and just use a Curves adjustment, but for better results, you should combine the preceding differing methods to best suit your image.

Real world example

In the following screenshot (of yours truly), the photo was taken under poor lighting conditions, and there is a badly blown out highlight on the forehead:

Before we fix the highlights, however, the first thing that we need to do is to fix the overall white balance, which is quite poor. In this case, the easiest way to fix this problem is to use Aperture's clever skin tone white-balance adjustment. On the White Balance adjustment brick, set the mode to Skin Tone from the pop-up menu. Now, select the color picker and pick an area of skin tone in the image. This will set the white balance to a more acceptable color. (You can tweak it more if it's not right, but this usually gives satisfactory results.)

The next step is to try and fix the clipped highlight. Let's use the three-step approach that we discussed earlier. We will start by using the recovery slider. In this case, the slider was brought all the way up, but the result wasn't enough and leaves an unsightly highlight, as you can see in the following screenshot:

The next step is to try the Highlights & Shadows adjustment. The highlights slider was brought up to the mid-point, and while this helped, it still didn't fix the overall problem. The highlights are still quite ugly, as you can see in the following screenshot:

Finally, a Curves adjustment was added and a gentle roll off was applied to the highlight portion of the curve. While the burned out highlight isn't completely gone, there is no longer a harsh edge to it. The result is a much better image than the original, with a more natural-looking highlight, as shown in the following screenshot:

Finishing touches

To take this image further, the face was brightened using another Curves adjustment, and the curves adjustment was brushed in over the facial area. A vignette was also added. Finally, a skin softening brush was used over the harsh shadow on the nose, and over the edges of the halo on the forehead, just to soften it even further. The result is a much better (and now useable) image than the one we started with.

Fixing blown out skies

Another common problem one often encounters with digital images is blown out skies.
Sometimes it can be a result of the image being clipped beyond the dynamic range of the camera, whereas other times the day may simply have been overcast and there is no detail there to begin with. There are situations when the sky is too bright and you just need to bring the brightness down to better match the rest of the scene; that is easily fixed. But what if there is no detail there to recover in the first place? That scenario is what we are going to look at next: what to do when the sky is completely gone and there's nothing left to recover.

There are options open to you in this case. The first is pretty obvious: leave it as it is. However, you might have an image that is nicely lit otherwise, and all that's ruining it is a flat washed-out sky. What would add a nice balance to an image in such a scenario is some subtle blue in the sky, even if it's just a small amount. Luckily, this is fairly easy to achieve in Aperture. Perform the following steps:

Try the steps outlined in the previous section to bring clipped highlights back into range. Sometimes simply using the recovery slider will bring clipped skies back into the visible range, depending on the capabilities of your camera. In order for the rest of this trick to work, your highlights must be in the visible range.

If you have already made any enhancements using the Enhance brick and you want to preserve those, add another Enhance brick by choosing Add New Enhance adjustment from the cog pop-up on the side of the interface.

If the Tint controls aren't visible on the Enhance brick, click on the little arrow beside the word Tint to reveal the Tint controls.

Using the right-hand Tint control (the one with the white eyedropper under it), adjust the control until it adds some blue back to the sky.

If this is adding too much blue to other areas of your image, then brush the Enhance adjustment in by choosing Brush Enhance In from the cog pop-up menu.

Real world example

In this example, the sky has been completely blown out and has lost most of its color detail. The first thing to try is to see whether any detail can be recovered by using the recovery slider. In this case, some of the sky was recovered, but a lot of it was still burned out. There is simply no more information to recover.

The next step is to use the tint adjustment as outlined in the preceding instructions. This puts some color back in the sky so that it looks more natural. A small adjustment of the Highlights & Shadows also helps bring the sky back into range.

Finishing touches

While the sky has now been recovered, there is still a bit of work to be done. To brighten up the rest of the image, a Curves adjustment was added, and the upper part of the curve was brought up, while the shadows were brought down to add some contrast. The following is the Curves adjustment that was used:

Finally, to reduce the large lens flare in the center of the image, I added a Color adjustment and reduced the saturation and brightness of the various colors in the flare. I then painted the Color adjustment in over the flare, and this reduced its impact on the image. This is the same technique that can be used for getting rid of color fringing, which will be discussed later in this article. The following screenshot is the final result:

Removing objects from a scene

One of the myths about photo workflow applications such as Aperture is that they're not good for pixel-level manipulations.
People will generally switch over to something such as Photoshop if they need to do more complex operations, such as cloning out an object. However, Aperture's retouch tool is surprisingly powerful. If you need to remove small distracting objects from a scene, it works really well. The following is an example of a shot that was entirely corrected in Aperture:

It is not really practical to give step-by-step instructions for using the tool because every situation is different, so instead, what follows is a series of tips on how best to use the retouch function:

To remove complex objects you will have to switch back and forth between the cloning and healing modes. Don't expect to do everything entirely in one mode or the other.

To remove long lines, such as the telegraph wires in the preceding example, start with the healing tool. Use it till you get close to the edge of an object in the scene that you want to keep. Then switch to the cloning tool to fix the areas close to the kept object.

The healing tool can go a bit haywire near the edges of the frame, or the edges of another object, so it's often best to use the clone tool near the edges.

Remember when using the clone tool that you need to keep changing your clone source so as to avoid leaving repetitive patterns in the cloned area. To change your source area, hold down the Option key, and click on the area of the image that you want to clone from.

Sometimes doing a few smaller strokes works better than one long, big stroke.

You can only have one retouch adjustment, but each stroke is stored separately within it. You can delete individual strokes, but only in the reverse order in which they were created. You can't delete the first stroke and keep the following ones if, for example, you have 10 other strokes.

It is worth taking the time to experiment with the retouch tool. Once you get the hang of this feature, you will save yourself a lot of time by not having to jump to another piece of software to do basic (or even advanced) cloning and healing.

Fixing dust spots on multiple images

A common use for the retouch tool is removing sensor dust spots from an image. If your camera's sensor has become dirty, which is surprisingly common, you may find spots of dust creeping onto your images. These are typically found when shooting at higher f-stops (narrower apertures), such as f/11 or higher, and they manifest as round dark blobs. Dust spots are usually most visible in bright areas of solid color, such as skies.

The big problem with dust spots is that once your sensor has dust on it, it will record that dust in the same place in every image. Luckily, Aperture's tools make it pretty easy to remove those dust spots, and once you've removed them from one image, it's pretty simple to remove them from all your images. To remove dust spots on multiple images, perform the following steps:

Start by locating the image in your batch where the dust spots are most visible.

Zoom in to 1:1 view (100 percent zoom), and press X on your keyboard to activate the retouch tool.

Switch the retouch tool to healing mode and decrease the size of your brush till it is just bigger than the dust spot. Make sure there is some softness on the brush.

Click once over the spot to get rid of it. You should try to click on it rather than paint when it comes to dust spots, as you want the least amount of area retouched as possible.
Scan through your image when viewing at 1:1, and repeat the preceding process until you have removed all the dust spots.

Close the retouch tool's HUD to drop the tool.

Zoom back out.

Select the lift tool from the Aperture interface (it's at the bottom of the main window).

In the Lift and Stamp HUD, delete everything except the Retouch adjustment in the Adjustments submenu. To do this, select all the items except the retouch entry, and press the delete (or backspace) key.

Select another image or group of images in your batch, and press the Stamp Selected Images button on the Lift and Stamp HUD.

Your retouch settings will be copied to all your images, and because the dust spots don't move between shots, the dust should be removed from all of them.
Creating an image gallery

Packt
30 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

Before we get started, we need to find a handful of images that we can use for the gallery. Find four to five images to use for the gallery and put them in the images folder.

How to do it...

Add the following links to the images to the index.html file:

<a class="fancybox" href="images/waterfall.png">Waterfall</a>
<a class="fancybox" href="images/frozen-lake.png">Frozen Lake</a>
<a class="fancybox" href="images/road-in-forest.png">Road in Forest</a>
<a class="fancybox" href="images/boston.png">Boston</a>

The anchor tags no longer have an ID, but a class. It is important that they all have the same class so that Fancybox knows about them.

Change our call to the Fancybox plugin in the scripts.js file to use the class that all of the links have, instead of the show-fancybox ID:

$(function() {
  // Using the fancybox class instead of the show-fancybox ID
  $('.fancybox').fancybox();
});

Fancybox will now work on all of the images, but they will not be part of the same gallery. To make images part of a gallery, we use the rel attribute of the anchor tags. Add rel="gallery" to all of the anchor tags, shown as follows:

<a class="fancybox" rel="gallery" href="images/waterfall.png">Waterfall</a>
<a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
<a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
<a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>

Now that we have added rel="gallery" to each of our anchor tags, you should see left and right arrows when you hover over the left-hand side or right-hand side of Fancybox. These arrows allow you to navigate between images, as shown in the following screenshot:

How it works...

Fancybox determines that an image is part of a gallery using the rel attribute of the anchor tags. The order of the images is based on the order of the anchor tags on the page. This is important so that the slideshow order is exactly the same as a gallery of thumbnails, without any additional work on our end.

We changed the ID of our single image to a class for the gallery because we wanted to call Fancybox on all of the links instead of just one. If we wanted to add more image links to the page, it would just be a matter of adding more anchor tags with the proper href values and the same class.

There's more...

So, what else can we do with the gallery functionality of Fancybox? Let's take a look at some of the other things that we could do with the gallery that we have currently.

Captions and thumbnails

All of the functionalities that we discussed for single images apply to galleries as well. So, if we wanted to add a thumbnail, it would just be a matter of adding an img tag inside the anchor tag instead of the text. If we wanted to add a caption, we can do so by adding the title attribute (for example, title="Waterfall") to our anchor tags.

Showing slideshow from one link

Let's say that we wanted to have just one link to open our gallery slideshow. This can be easily achieved by hiding the other links via CSS, with the help of the following steps:

We start by adding this style tag to the <head> tag, just under the <script> tag for our scripts.js file, shown as follows:

<style type="text/css">
  .hidden {
    display: none;
  }
</style>

Now, we update the HTML file so that all but one of our anchor tags have the hidden class. Next, when we reload the page, we will see only one link.
When you click on the link, you should still be able to navigate through the gallery just as if all of the links were on the page.

<a class="fancybox" rel="gallery" href="images/waterfall.png">Image Gallery</a>
<div class="hidden">
  <a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
  <a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
  <a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>
</div>

Summary

In this article we saw that Fancybox provides very strong image handling functionalities. We also saw how an image gallery is created with Fancybox. We can also display images as thumbnails and display the images as a slideshow using just one link.

Resources for Article:

Further resources on this subject:
Getting started with your first jQuery plugin [Article]
OpenCart Themes: Styling Effects of jQuery Plugins [Article]
The Basics of WordPress and jQuery Plugin [Article]
Vaadin Portlets in Liferay User Interface Development

Packt
01 Dec 2010
1 min read
Vaadin portlets are developed with the Vaadin framework. The Vaadin framework can also be used to develop standalone web applications. Liferay portal supports Vaadin portlets. In this section, we will write a Vaadin portlet for Liferay portal using the Vaadin Eclipse plugin.

Required software

Install the following software for the development environment, if it is not already there:

Eclipse Java EE IDE
Liferay portal 6.x.x with Tomcat 6.0.x

Configuring Tomcat 6.0 in Eclipse

If you have not already done so, configure Tomcat 6.0 in Eclipse as follows:

Start Eclipse.
Click on Window | Preferences.
Expand Server.
Click on Runtime Environment.
Click on Add....
Select Apache Tomcat v6.0.
Click on Next.
Click on Browse and open the tomcat-6.0.x directory.
Click on Finish.

Installing the Vaadin Eclipse plugin

You can automatically create a Vaadin portlet prototype for Liferay portal with the Vaadin Eclipse plugin. Here is how you can install it, assuming that Eclipse is open:

Click on Help.
Select Install New Software....
Click on Add....
Input Name: Vaadin, Location: http://vaadin.com/eclipse.
Click on OK.
Click on Finish.

The Vaadin Eclipse plugin will be installed. It will take several minutes.
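Once the plugin is installed, the new-project wizard generates an Application class for the portlet prototype. To give a feel for what such a class does, the following is a minimal sketch in the Vaadin 6 style of that era; the class name, window caption, and label text are our own illustrative choices, not necessarily what the wizard produces:

import com.vaadin.Application;
import com.vaadin.ui.Label;
import com.vaadin.ui.Window;

// A minimal Vaadin application: every Vaadin portlet or standalone
// web application starts from an Application subclass like this one.
public class LittlePortletApplication extends Application {

    @Override
    public void init() {
        // The main window is the root of the user interface
        Window mainWindow = new Window("Little Portlet");
        mainWindow.addComponent(new Label("Hello from a Vaadin portlet!"));
        setMainWindow(mainWindow);
    }
}

The same class can run as a standalone web application or as a portlet; what differs is the deployment configuration that Liferay reads.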
Getting Started with jQuery

Packt
18 Jun 2010
3 min read
(For more resources on jQuery, see here.)

jQuery - How it works

To understand how jQuery can ease web client (JavaScript based) development, one has to understand two aspects of jQuery. They are:

Functionalities
Modules

Understanding the functionalities/services provided by jQuery will tell you what jQuery provides, and understanding the modules that constitute jQuery will tell you how to access the services provided by jQuery. Here are the details.

Functionalities

The functionalities provided by jQuery can be classified into the following:

Selection
Attributes handling
Element manipulation
Ajax
Callbacks
Event handling

Among the listed functionalities, selection, element manipulation, and event handling make common tasks easily implementable, or even trivial.

Selection

Using this functionality one can select one or multiple HTML elements. The raw JavaScript equivalent of the selection functionality is:

document.getElementById('<element id>')

or

document.getElementsByTagName('<tag name>')

Attributes handling

One of the most common tasks in JavaScript is to change the value of an attribute of a tag. The conventional way is to use getElementById to get the element and then use an index to get to the required attribute. jQuery eases this by using the selection and attributes handling functionalities in conjunction.

Element manipulation

There are scenarios where the values of tags need to be modified. One such scenario is rewriting the text of a <p> tag based on a selection from a combo box. That is where the element manipulation functionality of jQuery comes in handy. Using element manipulation or DOM scripting, as it is popularly known, one can not only access a tag but also perform manipulations such as appending child tags to multiple occurrences of a specific tag, without using a for loop.

Ajax

Ajax is the concept and implementation that brought the usefulness of JavaScript to the fore. However, it also brought complexities and the boilerplate code required for using Ajax to its full potential. The Ajax-related functionalities of jQuery encapsulate away the boilerplate code and let one concentrate on the result of the Ajax call. The main point to keep in mind is that encapsulation of the setup code does not mean that one cannot access the Ajax-related events. jQuery takes care of that too, and one can register for the Ajax events and handle them.

Callbacks

There are many scenarios in web development where you want to initiate another task on the basis of the completion of one task. An example of such a scenario involves animation. If you want to execute a task after the completion of an animation, you will need a callback function. The core of jQuery is implemented in such a way that most of the API supports callbacks.

Event handling

One of the main aspects of JavaScript and its relationship with HTML is that the events triggered by form elements can be handled using JavaScript. However, when multiple elements and multiple events come into the picture, the code complexity becomes very hard to handle. The core of jQuery is geared towards handling events in such a way that complexity can be kept at manageable levels.

Now that we have discussed the main functionalities of jQuery, let us move on to the main modules of jQuery and how the functionalities map onto the modules.
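Before moving on to the modules, here is a small sketch that ties several of the functionalities described above together; the element IDs and the URL are illustrative, not part of any particular application:

jQuery(document).ready(function($) {
  // Selection and event handling: react to clicks on a button
  $('#load_button').click(function() {
    // Ajax with a callback: fetch a fragment and act once it arrives
    $.get('snippet.html', function(data) {
      // Element manipulation: rewrite the contents of the target
      $('#target').html(data);
      // Attributes handling: change an attribute on the same element
      $('#target').attr('title', 'Content loaded');
    });
  });
});

Each commented line corresponds to one of the functionality groups above, which gives a good sense of how much boilerplate jQuery removes from everyday tasks.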