
How-To Tutorials - Web Development

1802 Articles

Understanding Web-based Applications and Other Multimedia Forms

Packt
20 Nov 2013
5 min read
(For more resources related to this topic, see here.)

However, we will not look at blogs, wikis, or social networking sites that are usually referred to as web-based reference tools. Moodle already has these, so instead we will take a look at web applications that allow the easy creation, collaboration, and sharing of multimedia elements, such as interactive floor planners, online maps, timelines, and many other applications that are very easy to use and that support different learning styles. Usually, I use Moodle as a school operating system and web apps as its social applications, to illustrate what I believe can be a very powerful way of using Moodle and the web for learning. Designing meaningful activities in Moodle gives students the opportunity to express their creativity by using these tools, and to reflect on the produced multimedia artifacts with both peers and teacher. However, we have to keep in mind some issues of e-safety, backups, and licensing when using these online tools, which are usually associated with online communities. After all, our students will be using them, and they will therefore be exposed to some risks.

Creating dynamic charts using Google Drive (Spreadsheets)

Assigning tasks in our Moodle course will require students to use a tool like Google Spreadsheets to present their plans to colleagues in a visual way. Google Drive (http://drive.google.com) provides a set of online productivity tools that work on web standards and recreate a typical Office suite. We can make documents, spreadsheets, presentations, drawings, or forms. To use Google Drive, we will need a Google account. After creating our account and logging in to Google Drive, we can organize the files displayed on the right side of the screen, add them to folders, tag them, search (of course, it's Google!), collaborate (imagine a wiki spreadsheet), export to several formats (including the usual formats for Office documents from Microsoft, OpenOffice, or Adobe PDF), and publish these documents online.

We will start by creating a new spreadsheet to make a budget for a music studio which will be built during the music course, by navigating to CREATE | Spreadsheet.

Insert a chart

As in any spreadsheet application, we can add a title by double-clicking on Untitled spreadsheet, and then we add some equipment and costs to the cells. After populating our table with values and selecting all of them, we should click on the Insert chart button. The Start tab will show up in the Chart Editor pop up, as shown in the following screenshot. If we click on the Charts tab, we can pick from a list of available charts. Let's pick one of the pie charts. In the Customize tab, we can add a title to the chart and change its appearance. When everything is done, we can click on the Insert button, and the chart previewed in the Customize tab will be added to the spreadsheet.

Publish

If we click on the chart, a square will be displayed on the upper-right corner, and if we click on the drop-down arrow, we see a Publish chart... option, which can be used to publish the chart. When we click on this option, we will be presented with two ways of embedding the chart: the first as an interactive chart, and the second as an image. Both change dynamically if we change the values or the chart in Google Drive. We should use the image code to put the chart on a Moodle forum.
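Google Drive generates the embed code for us; we only need to paste it into the HTML view of a Moodle forum post. The following is just an illustrative sketch: the src URL is a placeholder, as the real one is generated by Google for each published chart.

    <img src="PASTE_THE_GENERATED_CHART_IMAGE_URL_HERE" alt="Music studio budget chart" />

Because the image is served by Google, the chart in the forum post updates when the spreadsheet values change.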
Share, comment, and collaborate

Google Drive has options for sharing our spreadsheet and allowing other people to comment on it and change it. On the upper-right corner of each opened document, there are two buttons for that: Comments and Share. To add collaborators to our spreadsheet, we have to click on the Share button, add their contacts (for example, e-mail addresses) in the Invite people: field, click on the Share & save button, and hit Done. If a collaborator is working on the same spreadsheet at the same time we are, we can see it below the Comments and Share buttons, as shown in the following screenshot. If we click on the arrow next to 1 other viewer, we can chat directly with the collaborator as we edit the spreadsheet collaboratively. Remember that this can be quite useful in distance courses that have collaborative tasks assigned to groups.

Creating a shared folder using Google Drive

We can also use the sharing functionality to share documents with collaborators (there is 15 GB of space for that). In the main Google Drive page, we can create a folder by navigating to Create | Folder. We are then required to give it a name, and the folder will be shown in the files and folders explorer in Google Drive. To share it with someone, we need to right-click the folder and choose the Share... option. Then, just like the process of sharing a spreadsheet we saw previously, we just need to add our collaborators' contacts (for example, e-mail addresses) in the Invite people: field, click on Share & save, and hit Done. The invited people will receive an e-mail to add the shared folder to their Google Drive (they need a Google account for this), and that is it. Everything we add to this folder is automatically synced with everyone. This includes all the Google Drive documents, PDFs, and all the files uploaded to this folder, and it is an easy way to share multimedia projects between a group of people working on the same project.


Foundations

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Installation

If you do not have node installed, visit http://nodejs.org/download/. There is also an installation guide on the node GitHub repository wiki if you prefer not to or cannot use an installer: https://github.com/joyent/node/wiki/Installation.

Let's install Express globally:

    npm install -g express

If you have downloaded the source code, install its dependencies by running this command:

    npm install

Testing Express with Mocha and SuperTest

Now that we have Express installed and our package.json file in place, we can begin to drive out our application with a test-first approach. We will now install two modules to assist us: mocha and supertest.

Mocha is a testing framework for node; it's flexible, has good async support, and allows you to run tests in both a TDD and a BDD style. It can also be used on both the client and server side. Let's install Mocha with the following command:

    npm install -g mocha --save-dev

SuperTest is an integration testing framework that will allow us to easily write tests against a RESTful HTTP server. Let's install SuperTest:

    npm install supertest --save-dev

Continuous testing with Mocha

One of the great things about working with a dynamic language, and one of the things that has drawn me to node, is the ability to easily do test-driven development and continuous testing. Simply run Mocha with the -w watch switch and Mocha will respond when changes to our codebase are made, automatically rerunning the tests:

    mocha -w
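For context, a first test of the /heartbeat route written with Mocha and SuperTest might look like the following sketch. The require paths are assumptions based on the layout used later in this article (the Express app is exported from ./lib/express/index.js), and the file would live at ./test/heartbeat.js:

    // ./test/heartbeat.js (sketch, assuming the module layout used in this article)
    var request = require('supertest')
      , app = require('../lib/express');

    describe('vision heartbeat api', function(){
      describe('when requesting resource /heartbeat', function(){
        it('should respond with 200', function(done){
          request(app)
            .get('/heartbeat')
            .expect('Content-Type', /json/)
            .expect(200, done);
        });
      });
    });

Running mocha (or mocha -w) from the project root picks up the files in ./test by default.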
Extracting routes

Express supports multiple options for application structure. Extracting elements of an Express application into separate files is one option; a good candidate for this is routes. Let's extract our route heartbeat into ./lib/routes/heartbeat.js; the following listing simply exports the route as a function called index:

    exports.index = function(req, res){
      res.json(200, 'OK');
    };

Let's make a change to our Express server, removing the anonymous function we pass to app.get for our route and replacing it with a call to the function in the following listing. We import the route heartbeat and pass in a callback function, heartbeat.index:

    var express = require('express')
      , http = require('http')
      , config = require('../configuration')
      , heartbeat = require('../routes/heartbeat')
      , app = express();

    app.set('port', config.get('express:port'));
    app.get('/heartbeat', heartbeat.index);
    http.createServer(app).listen(app.get('port'));

    module.exports = app;

404 handling middleware

In order to handle a 404 Not Found response, let's add a 404 not found middleware. Let's write a test, ./test/heartbeat.js; the content type returned should be JSON and the status code expected should be 404 Not Found:

    describe('vision heartbeat api', function(){
      describe('when requesting resource /missing', function(){
        it('should respond with 404', function(done){
          request(app)
            .get('/missing')
            .expect('Content-Type', /json/)
            .expect(404, done);
        })
      });
    });

Now, add the following middleware to ./lib/middleware/notFound.js. Here we export a function called index and call res.json, which returns a 404 status code and the message Not Found. The next parameter is not called, as our 404 middleware ends the request by returning a response. Calling next would call the next middleware in our Express stack; we do not have any more middleware after this one, as it's customary to add error middleware and 404 middleware as the last middleware in your server:

    exports.index = function(req, res, next){
      res.json(404, 'Not Found.');
    };

Now add the 404 not found middleware to ./lib/express/index.js:

    var express = require('express')
      , http = require('http')
      , config = require('../configuration')
      , heartbeat = require('../routes/heartbeat')
      , notFound = require('../middleware/notFound')
      , app = express();

    app.set('port', config.get('express:port'));
    app.get('/heartbeat', heartbeat.index);
    app.use(notFound.index);
    http.createServer(app).listen(app.get('port'));

    module.exports = app;

Logging middleware

Express comes with a logger middleware via Connect; it's very useful for debugging an Express application. Let's add it to our Express server ./lib/express/index.js:

    var express = require('express')
      , http = require('http')
      , config = require('../configuration')
      , heartbeat = require('../routes/heartbeat')
      , notFound = require('../middleware/notFound')
      , app = express();

    app.set('port', config.get('express:port'));
    app.use(express.logger({ immediate: true, format: 'dev' }));
    app.get('/heartbeat', heartbeat.index);
    app.use(notFound.index);
    http.createServer(app).listen(app.get('port'));

    module.exports = app;

The immediate option will write a log line on request instead of on response. The dev option provides concise output colored by the response status. The logger middleware is placed high in the Express stack in order to log all requests.

Logging with Winston

We will now add logging to our application using Winston; let's install Winston:

    npm install winston --save

The 404 middleware will need to log 404 not found, so let's create a simple logger module, ./lib/logger/index.js; the details of our logger will be configured with Nconf. We import Winston and the configuration modules. We define our Logger function, which constructs and returns a file logger—winston.transports.File—that we configure using values from our config. We default the logger's maximum size to 1 MB, with a maximum of three rotating files. We instantiate the Logger function, returning it as a singleton.

    var winston = require('winston')
      , config = require('../configuration');

    function Logger(){
      return winston.add(winston.transports.File, {
        filename: config.get('logger:filename'),
        maxsize: 1048576,
        maxFiles: 3,
        level: config.get('logger:level')
      });
    }

    module.exports = new Logger();

Let's add the Logger configuration details to our config files ./config/development.json and ./config/test.json:

    {
      "express": {
        "port": 3000
      },
      "logger" : {
        "filename": "logs/run.log",
        "level": "silly"
      }
    }

Let's alter the ./lib/middleware/notFound.js middleware to log errors. We import our logger and log an error message via logger when a 404 Not Found response is thrown:

    var logger = require("../logger");

    exports.index = function(req, res, next){
      logger.error('Not Found');
      res.json(404, 'Not Found');
    };

Summary

This article has shown in detail, with all the commands, how Node.js is installed along with Express. The testing of Express with Mocha and SuperTest was shown in detail, and logging was added to the application with middleware and Winston.
Resources for Article: Further resources on this subject: Spring Roo 1.1: Working with Roo-generated Web Applications [Article], Building tiny Web-applications in Ruby using Sinatra [Article], Develop PHP Web Applications with NetBeans, VirtualBox and Turnkey LAMP Appliance [Article]


Styling the Forms

Packt
20 Nov 2013
8 min read
(For more resources related to this topic, see here.)

CSS3 for web forms

CSS3 brings us many new possibilities and allows styling that makes better web forms. CSS3 gives us a number of new ways to create an impact with our form designs, with quite a few important changes. HTML5 introduced useful new form elements such as sliders and spinners alongside old elements such as textbox and textarea, and we can make them all look really cool with our innovation and CSS3. Using CSS3, we can turn an old and boring form into a modern, cool, and eye-catching one. CSS3 is completely backwards compatible, so we will not have to change our existing form designs, and browsers will continue to support CSS2.

CSS3 forms can be split up into modules. Some of the most important CSS3 modules are:

Selectors (with pseudo-selectors)
Backgrounds and Borders
Text (with Text Effects)
Fonts
Gradients

Styling of forms always varies with the requirements and the innovation of the web designer or developer. In this article, we will look at those CSS3 properties with which we can style our forms and give them a rich and elegant look. Some of the new properties of CSS3 required vendor prefixes, which were used frequently as they helped browsers to read the code. In general, it is no longer needed to use them for some of the properties, such as border-radius, but they come into action when the browser doesn't interpret the code. A list of the vendor prefixes for the major browsers is given as follows:

-moz-: Firefox
-webkit-: WebKit browsers such as Safari and Chrome
-o-: Opera
-ms-: Internet Explorer

Before we start styling the form, let us have a quick revision of the form modules for a better understanding and styling of the forms.

Selectors and pseudo-selectors

Selectors are patterns used to select the elements which we want to style. A selector can contain one or more simple selectors separated by combinators. The CSS3 Selectors module introduces three new attribute selectors; they are grouped together under the heading Substring Matching Attribute Selectors. These new selectors are as follows:

[att^=val]: The "begins with" selector
[att$=val]: The "ends with" selector
[att*=val]: The "contains" selector

The first of these new selectors, which we will refer to as the "begins with" selector, allows the selection of elements where a specified attribute (for example, the href attribute of a hyperlink) begins with a specified string (for example, http://, https://, or mailto:). In the same way, the other two new selectors, which we will refer to as the "ends with" and "contains" selectors, allow the selection of elements where a specified attribute either ends with or contains a specified string, respectively.

A CSS pseudo-class is an additional keyword added to a selector that specifies a special state of the element to be selected. For example, :hover will apply a style when the user hovers over the element specified by the selector. Pseudo-classes, along with pseudo-elements, apply a style to an element not only in relation to the content of the document tree, but also in relation to external factors like the history of the navigator, such as :visited, and the status of its content, such as :checked, on some form elements. The new pseudo-classes are as follows:

:last-child: It is used to match an element that is the last child element of its parent element.
:first-child: It is used to match an element that is the first child element of its parent element.
:checked: It is used to match elements such as radio buttons or checkboxes which are checked.
:first-of-type: It is used to match the first child element of the specified element type.
:last-of-type: It is used to match the last child element of the specified element type.
:nth-last-of-type(N): It is used to match the Nth child element from the last of the specified element type.
:only-child: It is used to match an element if it's the only child element of its parent.
:only-of-type: It is used to match an element that is the only child element of its type.
:root: It is used to match the element that is the root element of the document.
:empty: It is used to match elements that have no children.
:target: It is used to match the current active element that is the target of an identifier in the document's URL.
:enabled: It is used to match user interface elements that are enabled.
:nth-child(N): It is used to match every Nth child element of the parent.
:nth-of-type(N): It is used to match every Nth child element of its type within the parent.
:disabled: It is used to match user interface elements that are disabled.
:not(S): It is used to match elements that aren't matched by the specified selector.
:nth-last-child(N): It is used to match elements based on their position within the parent element's list of child elements, counting from the last child.
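As a quick, hedged illustration of how the new attribute selectors and pseudo-classes can be combined when styling a form (the class names, colors, and markup structure below are invented for the example):

    /* "begins with": color external links inside the form */
    form a[href^="http://"] { color: #0077cc; }

    /* "ends with": mark links to PDF documents */
    form a[href$=".pdf"] { border-bottom: 1px dotted #888; }

    /* "contains": highlight any input whose name contains "email" */
    input[name*="email"] { background: #fffbe6; }

    /* pseudo-classes: emphasize checked options and grey out disabled fields */
    input[type="checkbox"]:checked + label { font-weight: bold; }
    input:disabled { background: #ddd; color: #999; }

    /* zebra-stripe alternate rows of a fieldset */
    fieldset p:nth-child(2n) { background: #f4f4f4; }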
Backgrounds

CSS3 contains several new background properties; moreover, some changes have also been made to the previous background properties, which allow greater control over the background of an element. The new background properties are as follows.

The background-clip property

The background-clip property is used to determine the allowable area for the background image. If there is no background image, this property has only visual effects, such as when the border has transparent or partially opaque regions; otherwise, the border covers up the difference.

Syntax

The syntax for the background-clip property is as follows:

    background-clip: no-clip / border-box / padding-box / content-box;

Values

The values for the background-clip property are as follows:

border-box: With this, the background extends to the outside edge of the border
padding-box: With this, no background is drawn below the border
content-box: With this, the background is painted within the content box; only the area the content covers is painted
no-clip: This is the default value, same as border-box

The background-origin property

The background-origin property specifies the positioning of the background image or color with respect to the background-position property. This property has no effect if the background-attachment property for the background image is fixed.

Syntax

The following is the syntax for the background-origin property:

    background-origin: border-box / padding-box / content-box;

Values

The values for the background-origin property are as follows:

border-box: With this, the background extends to the outside edge of the border
padding-box: By using this, no background is drawn below the border
content-box: With this, the background is painted within the content box

The background-size property

The background-size property specifies the size of the background image. If this property is not specified, the original size of the image will be displayed.

Syntax

The following is the syntax for the background-size property:

    background-size: length / percentage / cover / contain;

Values

The values for the background-size property are as follows:

length: This specifies the height and width of the background image. No negative values are allowed.
percentage: This specifies the height and width of the background image as a percentage of the parent element.
cover: This specifies that the background image should be as large as possible so that the background area is completely covered.
contain: This specifies that the image should be scaled to the largest size such that both its width and height fit inside the content area.
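To see how these new properties work together, here is a small, hedged sketch of a form panel style; the image path, colors, and sizes are placeholders:

    .form-panel {
      border: 10px dashed #336699;
      padding: 20px;
      background-image: url(images/paper-texture.png); /* placeholder image */
      background-repeat: no-repeat;
      background-clip: padding-box;    /* do not paint the background under the border */
      background-origin: content-box;  /* position the image relative to the content box */
      background-size: cover;          /* scale the image to cover the whole background area */
    }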
Apart from adding new properties, CSS3 has also enhanced some old background properties, as follows.

The background-color property

If the underlying layer of the background image of the element cannot be used, we can specify a fallback color in addition to specifying a background color. We can implement this by adding a forward slash before the fallback color:

    background-color: red / blue;

The background-repeat property

In CSS2, when an image was repeated, it often got cut off at the end of the element. CSS3 introduces new values with which we can fix this problem:

space: By using this value, an equal amount of space is applied between the image tiles until they fill the element
round: By using this value, the image is scaled down until the tiles fit the element

The background-attachment property

With the new possible value of local, we can now set the background to scroll when the element's content is scrolled. This comes into action with elements that can scroll. For example:

    body{background-image:url('example.gif');background-repeat:no-repeat;background-attachment:fixed;}

CSS3 also allows web designers and developers to have multiple background images, using nothing but a simple comma-separated list. For example:

    background-image: url(abc.png), url(xyz.png);

Summary

In this article, we learned about the basics of CSS3 and the modules into which we can categorize CSS3 for forms, such as backgrounds. Using this, we can improve the look and feel of a form, making it more effective and attractive.

Resources for Article: Further resources on this subject: HTML5 Canvas [Article], Building HTML5 Pages from Scratch [Article], HTML5 Presentations - creating our initial presentation [Article]


Issues and Wikis in GitLab

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Issues

The built-in features for issue tracking and documentation will be very beneficial to you, especially if you're working on extensive software projects: ones with many components, or those that need to be supported in multiple versions at once, for example, stable, testing, and unstable. In this article, we will have a closer look at the formats that are supported for issues and wiki pages (in particular, Markdown), the elements that can be referenced from within these, and how issues can be organized. Furthermore, we will go through the process of assigning issues to team members and keeping documentation in wiki pages, which can also be edited locally. Lastly, we will see how the RSS feeds generated by GitLab can keep your team in a closer loop around the projects they work on.

The metadata covered in this article may seem trivial, but many famous software projects have gained traction due to their extensive and well-written documentation, which initially was done by core developers. It enables your users to do the same with their projects, even if only internally; it opens up a much more efficient collaboration.

GitLab-flavored Markdown

GitLab comes with a Markdown formatting parser that is fairly similar to GitHub's, which makes it very easy to adapt and migrate. Many standalone editors also support this format, such as Mou (http://mouapp.com/) for Mac or MarkdownPad (http://markdownpad.com/) for Windows. On Linux, editors with a split view, such as ReText (http://sourceforge.net/projects/retext/) or the more Zen-writing UberWriter (http://uberwriter.wolfvollprecht.de/), are available. For the popular Vim editor, multiple Markdown plugins are also up for grabs on a number of GitHub repositories; one of them is Vim Markdown (https://github.com/tpope/vim-markdown) by Tim Pope. Lastly, I'd like to mention that you don't need a dedicated editor for Markdown, because Markdown files are plain text files. The mentioned editors simply enhance the view through syntax highlighting and preview modes.

About Markdown

Markdown was originally written by John Gruber and has since evolved into various flavors. The intention of this very lightweight markup language is to have a source that is easy to edit and can be transformed into meaningful HTML to be displayed on the Web. Different variations of Markdown have made it into a majority of very successful software projects as the default language; readme files, documentation, and even blogging engines adopt it. In Markdown, text styles can be applied, links placed, and images inserted. If Markdown, by default, does not support what you are currently trying to do, you can insert plain HTML, which will not be altered by the Markdown parser.

Referring to elements inside GitLab

When working with source code, it can be important to refer to a line of code, a file, or other things when discussing something. Because many development teams are nowadays spread throughout the world, GitLab adapts to that and makes it easy to refer to and reference many things directly from comments, wiki pages, or issues. Some things like files or lines can be referenced via links, because GitLab has unique links to the branches of a repository; others are more directly accessible. The following items (basically, prefixed strings or IDs) can be referenced through shortcodes:

commit messages
comments
wall posts
issues
merge requests
milestones
wiki pages

To reference items, use the following shortcodes inside any field that supports Markdown or RDoc on the web interface:

@foo for team members
#123 for issues
!123 for merge requests
$123 for snippets
1234567 for commits
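As a hedged illustration of how these shortcodes combine with ordinary GitLab-flavored Markdown, an issue description or comment could look something like the following sketch (the username, branch name, and IDs are invented for the example):

    This regression was introduced in commit 1234567 and is related to issue #42.

    **Steps to reproduce**

    1. Check out the `testing` branch
    2. Run the importer against an empty database

    @anna, could you take a look? A fix is being prepared in merge request !17.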
Issues, knowing what needs to be done

An issue is a text message of variable length describing a bug in the code, an improvement to be made, or something else that should be done or discussed. By commenting on the issue, developers or project leaders can respond to this request or statement. The meta information attached to an issue can be very valuable to the team, because developers can be assigned to an issue, and it can be tagged or labeled with keywords that describe the content or area to which it belongs. Furthermore, you can also set a milestone in which this fix or feature should be included. In the following screenshot, you can see the interface for issues:

Creating issues

By navigating to the Issues tab of a repository in the web interface, you can easily create new issues. Their title should be brief and precise, because a more elaborate description area is available. The description area supports GitLab-flavored Markdown, as mentioned previously. Upon creation, you can choose a milestone and a user to assign the issue to, but you can also leave these fields unset, possibly to let your developers themselves choose what they want to work on and when. Before they begin their work, they can assign the issues to themselves. In the following screenshot, you can see what the issue creation form looks like:

Working with labels or tags

Labels are tags used to organize issues by topic and severity. Creating labels is as easy as inserting them, separated by a comma, into the respective field while creating an issue. Currently, in Version 5.2, certain keywords trigger a certain background color on the label. Labels like critical or bug turn red, feature turns green, and other labels are blue by default. The following screenshot shows what a list of labeled features looks like:

After the creation of a label, it will be listed under the Labels tab within the Issues page, with a link that lists all the issues that carry the same label. Filtering by label, assigned user, or milestone is also possible from the list of issues within each project's overview.

Summary

In this article, we have had a look at the project management side of things. You can now make use of the built-in possibilities to distribute tasks across team members through issues, keep track of the things that still need to be done, or enable observers to point out bugs.

Resources for Article: Further resources on this subject: Using Gerrit with GitHub [Article], The architecture of JavaScriptMVC [Article], Using the OSGi Bundle Repository in OSGi and Apache Felix 3.0 [Article]


Configuring oVirt

Packt
19 Nov 2013
9 min read
(For more resources related to this topic, see here.)

Configuring the NFS storage

NFS storage is a fairly common type of storage that is quite easy to set up and run, even without special equipment. You can take a server with large disks and create an NFS directory on it. But despite the apparent simplicity of NFS, the setup should be done with attention to detail. Once you have made sure that the NFS directory is suitable for use, go to the procedure of connecting the storage to the data center. The following options are displayed after you click on the Configure Storage dialog box, in which we specify the basic storage configuration:

Name and Data Center: It is used to specify a name and the target data center for the storage
Domain Function/Storage Type: It is used to choose the data function and the NFS type
Use Host: It is used to enter the host that will make the initial connection to the storage and will be in the role of SPM
Export Path: It is used to enter the storage server name and the path of the exported directory
Advanced Parameters: It provides additional connection options, such as NFS version, number of retransmissions, and timeout, which are recommended to be changed only in exceptional cases

Fill in the required storage settings and click on the OK button; this will start the process of connecting the storage. The following image shows the New Storage dialog box with the connecting NFS storage:
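The article assumes the NFS directory has already been prepared on the storage server. Purely as a hedged sketch (the path, network, and export options are examples; consult the oVirt documentation for the exact requirements, including ownership of the directory by the vdsm user), the export on the NFS server might look like this in /etc/exports, followed by running exportfs -r to re-export:

    # /etc/exports on the NFS server (illustrative values only)
    /srv/ovirt/data    192.168.1.0/24(rw,sync,no_subtree_check)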
Configuring the iSCSI storage

This section will explain how to connect iSCSI storage to a data center with the storage type set to iSCSI. You can skip this section if you do not use iSCSI storage. iSCSI is a technology for building a SAN (Storage Area Network). A key feature of this technology is the transmission of SCSI commands over IP networks, which means block data is transferred over IP. By using IP networks, data transfer can take place over long distances and through network equipment such as routers and switches. These features make iSCSI technology a good fit for constructing low-cost SANs. oVirt supports iSCSI, and iSCSI storages can be connected to oVirt data centers.

Now begin the process of connecting the storage to the data center. After you click on the Configure Storage dialog box, in which you specify the basic storage configuration, the following options are displayed:

Name and Data Center: It is used to specify the name and the target data center.
Domain Function/Storage Type: It is used to specify the domain function and storage type; in this case, the data function and the iSCSI type.
Use Host: It is used to specify the host to which the storage (SPM) will be attached.

The following options are present in the search box for iSCSI targets:

Address and Port: It is used to specify the address and port of the storage server that contains the iSCSI target
User Authentication: Enable this checkbox if authentication is to be used on the iSCSI target
CHAP username and password: It is used to specify the username and password for authentication

Click on the Discover button and oVirt Engine connects to the specified server to search for iSCSI targets. In the resulting list, select the designated targets and click on the Login button to authenticate. Upon successful completion of the authentication, the target's LUNs will be displayed; check the LUN to use and click on OK to start the connection to the data center. The new storage will automatically connect to the data center. If it does not, select the storage from the list and click on the Attach button in the detail pane, where we choose a target data center.

Configuring the Fibre Channel storage

If you have selected Fibre Channel when creating the data center, we should create a Fibre Channel storage domain. oVirt supports Fibre Channel storage based on multiple preconfigured Logical Unit Numbers (LUNs). Skip this section if you do not use Fibre Channel equipment. Begin the process of connecting the storage to the data center. Open the Guide Me wizard and click on the Configure Storage dialog box, where you specify the basic storage configuration:

Name and Data Center: It is used to specify the name and data center
Domain Function/Storage Type: Here we need to specify the data function and the Fibre Channel type
Use Host: It specifies the address of the virtualization host that will act as the SPM

In the area below, the list of LUNs is displayed; enable the Add LUN checkbox on the selected LUN to use it as Fibre Channel data storage. Click on the OK button and this will start the process of connecting the storage to the data center. In the Storage tab, in the list of storages, we can see the created Fibre Channel storage. During the connection process its status will change, and at the end the new storage will be activated and connected to the data center. The connection process can also be seen in the event pane. The following screenshot shows the New Storage dialog box with the Fibre Channel storage type:
Configuring the GlusterFS storage

GlusterFS is a distributed, parallel, and linearly scalable filesystem. GlusterFS can combine data storages that are located on different servers into a parallel network filesystem. GlusterFS's potential is very large, so its developers directed their efforts towards the implementation and support of GlusterFS in oVirt (GlusterFS documentation is available at http://www.gluster.org/community/documentation/index.php/Main_Page). oVirt 3.3 has a complete data center with the GlusterFS type of storage.

Configuring the GlusterFS volume

Before attempting to connect GlusterFS storage to the data center, we need to create the volume. The procedure for creating a GlusterFS volume is common to all versions. Select the Volumes tab in the resource pane and click on Create Volume. In the window that opens, fill in the volume settings:

Data Center: It is used to specify the data center that will be attached to the GlusterFS storage.
Volume Cluster: It is used to specify the name of the cluster that will be created.
Name: It is used to specify a name for the new volume.
Type: It is used to specify the type of the GlusterFS volume. There are seven types of volume available to choose from, implementing various strategies for placing data on the filesystem. The base types are Distribute, Replicate, and Stripe, plus combinations of these types: Distributed Replicate, Distributed Stripe, Striped Replicate, and Distributed Striped Replicate (additional info can be found at http://gluster.org/community/documentation/index.php/GlusterFS_Concepts).
Bricks: With this button, a list of bricks is collected for the volume. A brick is a separate piece from which the volume will be built. These bricks are distributed across the hosts. As a brick uses a separate directory, it should be placed on a separate partition.
Access Protocols: It defines the basic access protocols that can be used to gain access to the volume:
Gluster: It is the native protocol for access to GlusterFS volumes, enabled by default.
NFS: It is an access protocol based on NFS.
CIFS: It is an access protocol based on CIFS.
Allow Access From: It allows us to enter a comma-separated list of IP addresses or hostnames, or * for all hosts, that are allowed to access the GlusterFS volume.
Optimize for oVirt Store: Enabling this checkbox will enable extended options for the created volume.

The following screenshot shows the Create Volume dialog box. Fill in the parameters, click on the Bricks button, and go to the new window to add new bricks with the following properties:

Volume Type: This is used to change the previously selected type of the GlusterFS volume
Server: It is used to specify a separate server that will export the GlusterFS brick
Brick Directory: It is used to specify the directory to use

Specify the server and directory and click on Add. Depending on the type of volume, specify multiple bricks. After completing the list of bricks, click on the OK button to add the volume and return to the menu. Click on the OK button to create the GlusterFS volume with the specified parameters. The following screenshot shows the Add Bricks dialog box.

Now that we have a GlusterFS volume, we select it from the list and click on Start.

Configuring the GlusterFS storage

oVirt 3.3 has support for creating data centers with the GlusterFS storage type. The GlusterFS storage type requires a preconfigured data center. A pre-created cluster should be present inside the data center, and the Gluster service must be enabled. Go to the Storage section in the resource pane and click on New Domain. In the dialog box that opens, fill in the details of our storage as follows:

Name and Data Center: It is used to specify the name and data center
Domain Function/Storage Type: It is used to specify the data function and the GlusterFS type
Use Host: It is used to specify the host that will connect to the SPM
Path: It is used to specify the path to the location in the format hostname:volume_name
VFS Type: Leave it as glusterfs and leave Mount Option blank

Click on the OK button; this will start the process of creating the storage domain. The created storage automatically connects to the specified data center. If not, select the created storage in the list, and in the subtab named Data Center in the detail pane, click on the Attach button and choose our data center. After you click on OK, the process of connecting the storage to the data center starts. The following screenshot shows the New Storage dialog box with the GlusterFS storage type.

Summary

In this article, we learned how to configure NFS, iSCSI, Fibre Channel, and GlusterFS storage.

Resources for Article: Further resources on this subject: Tips and Tricks on Microsoft Application Virtualization 4.6 [Article], VMware View 5 Desktop Virtualization [Article], Qmail Quickstarter: Virtualization [Article]


Setting Up WooCommerce

Packt
19 Nov 2013
6 min read
(For more resources related to this topic, see here.)

So, you're already familiar with WordPress and know how to use plugins, widgets, and themes? Your next step is to expand your existing WordPress website or blog with an online store? In that case you've come to the right place! WooCommerce is a versatile plugin for WordPress that makes it possible for everyone with a little WordPress knowledge to start their own online store. In case you are not familiar with WordPress at all, this book is not the first one you should read. No worries though, WordPress isn't that hard to learn, there are tons of online possibilities to learn about WordPress very quickly, and you can also turn to one of the many printed books on WordPress that are available.

These are the topics we'll be covering in this article:

Installing and activating WooCommerce
Learning everything about setting up WooCommerce correctly

Preparing for takeoff

Before we start, remember that it's only possible to install your own plugins if you're working in your own WordPress installation. This means that users running a website on WordPress.com will not be able to follow along; it's simply impossible in that environment to install plugins yourself. Although installing WooCommerce on top of WordPress isn't difficult, we highly recommend that you set up a test environment first. Without going too much into depth, this is what you need to do:

Create a backup copy of your complete WordPress environment using FTP. Alternatively, use a plugin to store a copy in your Dropbox folder automatically. There are tons of solutions available, just pick your own favorite. UpDraftPlus is one of the possibilities and delivers a complete backup solution: http://wordpress.org/plugins/updraftplus/.
Don't forget to back up your WordPress database as well. You may do this using a tool like phpMyAdmin and create an export from there. But also in this case, there are plugins that make life easier. The UpDraftPlus plugin mentioned previously can perform this task as well.
Once your backups are complete, install XAMPP on a local (Windows) machine; it can be downloaded from http://www.apachefriends.org. Although XAMPP is available for Mac users, MAMP is a widely used alternative for this group. MAMP can be downloaded from http://www.mamp.info/en/index.html.
Restore your WordPress backup on your test server and start following the remaining part of this book in your new test environment.
Alternatively, install a copy of your WordPress website as a temporary subdomain at your hosting provider. For instance, if my website is http://www.example.com, I could easily create a copy of my site at http://test.example.com. Possibilities may vary, depending on the package you have with your hosting provider.

If in your situation it isn't needed to add WooCommerce to an existing WordPress site, of course you may also start from scratch. Just install WordPress on a local test server or install it at your hosting provider. To keep our instructions in this book as clear as possible, we did just that: we created a fresh installation of WordPress Version 3.6. Next, you see a screenshot of our fresh WordPress installation.

Are these short instructions just too much for you at this moment? Do you need a more detailed step-by-step guide to create a test environment for your WordPress website? Look at the following tutorials:

For Mac OS X users: http://wpmu.org/local-wordpresstest-environment-mamp-osx/
For Windows users: http://www.thegeekscope.com/howto-copy-a-live-wordpress-website-to-local-indowsenvironment/

More tutorials will also be available on our website: http://www.joomblocks.com. Don't forget to sign up for the free newsletter, which will bring you even more news and tutorials on WordPress, WooCommerce, and other open source software solutions!

Once ready, we'll be able to take the next step and install the WooCommerce plugin. Let's take a look at our WordPress backend. In our situation we can open this by browsing to http://localhost/wp36/wp-admin. Depending on the choices you made previously for your test environment, your URL could be different. Well, this should all be pretty familiar to you already. Again, your situation might look different, depending on your theme or the number of plugins already active on your website.

Installing WooCommerce

Installing a plugin is a fairly simple task:

Click on Plugins in the menu on the left and click on Add New.
Next, simply enter woocommerce in the Search field and click on Search Plugins.
Verify that the correct plugin is shown on top and click on Install Now.
Confirm the warning message that appears by clicking on OK.
Click on Activate Plugin.

Note that in the following screenshot, we're installing Version 2.0.13 of WooCommerce. New versions follow rather quickly, so you might already see a higher version number. WooCommerce needs a number of specific WordPress pages, which it will automatically set up for you. Just click on the Install WooCommerce Pages button and make sure not to forget this step!

In our example project, we're installing the English version of WooCommerce, but you might need a different language. By default, WooCommerce is already delivered in a number of languages, which means the installation will automatically follow the language of your WordPress installation. If you need something else, just browse through the plugin directory on WordPress.org to find any additional translations.

Once we have created the necessary pages, the WooCommerce welcome screen will appear and you will see that a new menu item has been added to the main menu on the left. Meanwhile, the plugin created the necessary pages, which you can access by clicking on Pages in the menu on the left. Note that if you open a page that was automatically created by WooCommerce, you'll only see a shortcode, which is used to call the needed functionality. Do not delete the shortcodes, or WooCommerce might stop working. However, it's still possible to add your own content before or after the shortcode on these pages.
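To give an idea of what such a page contains, the cart page in WooCommerce 2.0 holds little more than a single shortcode; treat the snippet below only as an illustration, as the exact pages and shortcodes can differ between versions:

    [woocommerce_cart]

The checkout and account pages are built the same way, with shortcodes such as [woocommerce_checkout] and [woocommerce_my_account], and your own text can sit above or below the shortcode.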
WooCommerce also added some widgets to your WordPress dashboard, giving an overview of the latest product and sales statistics. At this moment these are, of course, still empty.

Summary

In this article, we learned about the basics of WooCommerce and how to install it. We also learned that WooCommerce is a free but versatile plugin for WordPress that you can use to easily set up your own online store.

Resources for Article: Further resources on this subject: Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article], Increasing sales with Brainshark slideshows/documents [Article], Implementing OpenCart Modules [Article]

Icinga Object Configuration

Packt
18 Nov 2013
9 min read
(For more resources related to this topic, see here.)

A localhost monitoring setup

Let us take a close look at our current setup, which we created for monitoring a localhost. Icinga by default comes with an object configuration for a localhost. The object configuration files are inside /etc/icinga/objects for default installations.

    $ ls /etc/icinga/objects
    commands.cfg   notifications.cfg  templates.cfg
    contacts.cfg   printer.cfg        timeperiods.cfg
    localhost.cfg  switch.cfg         windows.cfg

There are several configuration files with object definitions. Together, these object definitions define the monitoring setup for monitoring some services on a localhost. Let's first look at localhost.cfg, which has most of the relevant configuration. We have a host definition:

    define host{
      use        linux-server
      host_name  localhost
      alias      localhost
      address    127.0.0.1
    }

The preceding object block defines one object, that is, the host that we want to monitor, with details such as the hostname, an alias for the host, and the address of the server—which is optional, but is useful when you don't have a DNS record for the hostname. We have a localhost host object defined in Icinga with the preceding object configuration. The localhost.cfg file also has a hostgroup defined, which is as follows:

    define hostgroup {
      hostgroup_name  linux-servers
      alias           Linux Servers
      members         localhost   ; host_name of the host object
    }

The preceding object defines a hostgroup with only one member, localhost, which we will extend later to include more hosts. The members directive specifies the host members of the hostgroup. The value of this directive refers to the value of the host_name directive in the host definitions and can be a comma-separated list of several hostnames. There is also a directive called hostgroups in the host object, where you can give a comma-separated list of names of the hostgroups that we want the host to be part of. For example, in this case, we could have omitted the members directive in the hostgroup definition and specified a hostgroups directive with the value linux-servers in the localhost host definition. At this point, we have a localhost host and a linux-servers hostgroup, and localhost is a member of linux-servers. This is illustrated in the following figure.

Going further into localhost.cfg, we have a number of service object definitions. Each of these definitions indicates a service on the localhost that we want to monitor, via the host_name directive.

    define service {
      use                  local-service
      host_name            localhost
      service_description  PING
      check_command        check_ping!100.0,20%!500.0,60%
    }

This is one of the service definitions. The object defines a PING service check that monitors reachability. The host_name directive specifies the host that this service check should be associated with, which in this case is localhost. Again, the value of the host_name directive here should reflect the value of the host_name directive defined in the host object definition. So, we have a PING service check defined for the localhost, which is illustrated by the following figure.

There are several such service definitions placed on the localhost. Each service has a check_command directive that specifies the command for monitoring that service. Note that the exclamation marks in the check_command values are the command argument separators. So, cmd!foo!bar indicates that the command is cmd, with foo as its first argument and bar as the second.
It is important to remember that the check_ping part of check_command in the preceding example does not refer to the check_ping executable that is in /usr/lib64/nagios/plugins/check_ping for most installations; it refers to the Icinga object of type command. In our setup, all command object definitions are inside commands.cfg. The commands.cfg file has the command object definition for check_ping:

    define command {
      command_name  check_ping
      command_line  $USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5
    }

The check_command value in the PING service definition refers to the preceding command object, which indicates the exact command to be executed for performing the service check. $USER1$ is a user-defined Icinga macro. Macros in Icinga are like variables that can be used in various object definitions to wrap data inside these variables. Some macros are predefined, while some are user defined. These user macros are usually defined in /etc/icinga/resources.cfg:

    $USER1$=/usr/lib64/nagios/plugins

So replace the $USER1$ macro with its value, and execute:

    $ value/of/USER1/check_ping --help

This command will print the usual usage string with all the command-line options available. $ARG1$ and $ARG2$ in the command definition are macros referring to the arguments passed in the check_command value in the service definition, which are 100.0,20% and 500.0,60% respectively for the PING service definition. We will come to this later. As noted earlier, the status of the service is determined by the exit code of the command that is specified in the command_line directive in the command definition.

We have many such service definitions for a localhost in localhost.cfg, such as Root Partition (monitors disk space), Total Processes, Current Load, and HTTP, along with command definitions in commands.cfg for the check_commands of each of these service definitions. So, we have a host definition for localhost, a hostgroup definition linux-servers having localhost as its member, several service check definitions for localhost with check commands, and the command definitions specifying the exact command with arguments to execute for the checks. This is illustrated with the example Ping check in the following figure. This completes the basic understanding of how our localhost monitoring is built up from plain-text configuration.
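The hostgroup above will be extended with more hosts later; purely as a hedged sketch of what that extension could look like (the hostname and address are invented and not part of the default setup), an additional host and the widened hostgroup might be written as:

    define host {
      use        linux-server
      host_name  webserver01        ; example name
      alias      Example web server
      address    192.0.2.10         ; example address
    }

    define hostgroup {
      hostgroup_name  linux-servers
      alias           Linux Servers
      members         localhost,webserver01
    }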
Notifications

We would, as is the point of having monitoring systems, like to get alerted when something actually goes down. We don't want to keep watching the Icinga web interface, waiting for something to go down. Icinga provides a very generic and flexible way of sending out alerts. We can have any alerting script triggered when something goes wrong, which in turn may run commands for sending e-mails, SMS, Jabber messages, Twitter tweets, or practically anything that can be done from within a script. The default localhost monitoring setup has an e-mail alerting configuration.

The way these notifications work is that we define contact objects, in which we give the contact name, e-mail addresses, pager numbers, and other necessary details. These contact names are specified in the host/service templates or in the objects themselves. So, when Icinga detects that a host/service has gone down, it will use this contact object to pass the contact details to the alerting script. The contact object definition also has the host_notification_commands and service_notification_commands directives. These directives specify the command objects that should be used to send out the notifications for that particular contact. The former directive is used when the host goes down, and the latter is used when a service goes down. The respective command objects are then looked up and the value of their command_line directive is executed. This command object is of the same type as the one we looked at previously for executing checks; the same object type is also used to define notification commands. We can also define contact groups and specify them in the host/service object definitions to alert a group of contacts at the same time, or we can give a comma-separated list of contact names instead of a contact group.

Let's have a look at the notification configuration of our current setup. The host/service template objects have the admins contact group specified, whose definition is in contacts.cfg:

    define contactgroup {
      contactgroup_name  admins
      alias              Icinga Administrators
      members            icingaadmin
    }

The group has the icingaadmin member contact, which is again defined in the same file:

    define contact {
      contact_name  icingaadmin
      use           generic-contact
      alias         Icinga Admin
      email         your@email.com
    }

The contacts.cfg file has your e-mail address. The contact object inherits the generic-contact template contact object:

    define contact{
      name                           generic-contact
      service_notification_period    24x7
      host_notification_period       24x7
      service_notification_options   w,u,c,r,f,s
      host_notification_options      d,u,r,f,s
      service_notification_commands  notify-service-by-email
      host_notification_commands     notify-host-by-email
      register                       0
    }

This template object has the host_notification_commands and service_notification_commands directives defined as notify-host-by-email and notify-service-by-email respectively. These are commands similar to what we use in service definitions. These commands are defined in commands.cfg:

    define command {
      command_name  notify-host-by-email
      command_line  /usr/bin/printf "%b" "***** Icinga *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
    }

    define command {
      command_name  notify-service-by-email
      command_line  /usr/bin/printf "%b" "***** Icinga *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
    }

These commands are eventually executed to send out e-mail notifications to the supplied e-mail addresses. Notice that command_line uses the /bin/mail command to send e-mails, which is why we need a working SMTP server setup. Similarly, we could use any command/script path to send out custom alerts, such as SMS and Jabber, and we could also change the above e-mail commands to adjust the content format to suit our requirements.
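Purely as a hedged sketch of such a custom alert, a pager/SMS notification command could be defined along the following lines; the send-sms.sh script is an invented placeholder for whatever gateway script is available in your environment:

    define command {
      command_name  notify-host-by-sms
      command_line  /usr/local/bin/send-sms.sh $CONTACTPAGER$ "Host $HOSTNAME$ is $HOSTSTATE$"
    }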
The following figure illustrates the contact and notification configuration, and the correlation between hosts/services and contacts/notification commands is shown below it.

Summary

In this article, we analyzed our current configuration for the Icinga setup that monitors a localhost. We can replicate this to monitor a number of other servers using the desired service checks. We also looked at how the alerting configuration works to send out notifications when something goes down.

Resources for Article: Further resources on this subject: Troubleshooting Nagios 3.0 [Article], Notifications and Events in Nagios 3.0 - part 1 [Article], BackTrack 4: Target Scoping [Article]


CSS3 Animation

Packt
18 Nov 2013
7 min read
(For more resources related to this topic, see here.) The websites we see today are complex and complicated. By complex and complicated, we are referring to the development of these websites and not the webpage itself. We see animations and complex features. Prior to HTML5 and CSS3, JavaScript was used extensively for this purpose. HTML was incorrectly used for styling when it was meant to define the structural markup of the page. However, with the advent of CSS, it is good practice to use HTML for markup and CSS for styling. CSS3 brings along transforms, transitions, and animation features that make it easier to develop impressive effects. A transition lets us view the change from one state to another, but when it comes to multiple states, animation is the solution. Let's discuss the various properties of CSS3 animations and then incorporate all of them in code to understand them better. @keyframes The points at which the transition should take place can be defined using the @keyframes rule. As of now, we need to add a vendor prefix to @keyframes as it is still in its development state; in the future, when it is accepted as a standard, the vendor prefix will no longer be needed. We can use percentages or the from and to keywords to define the change from one CSS state to another. animation-name We need to apply the animation to an element. This property enables us to do so by referencing the animation name defined in the keyframes rule. It cannot be a standalone property and has to be used in conjunction with other animation properties. animation-duration Using this property, we can define the duration of the animation. If we set animation-duration to 5 seconds, the changes between the defined CSS states will be completed within 5 seconds. animation-delay Similar to the delay property in transitions, this property delays the animation by the time period specified. animation-timing-function Similar to the transition timing function, this property decides the speed of the animation. It behaves the same way as the transition timing function that we have seen earlier. animation-iteration-count We can decide the number of iterations carried out in the animation phase using this property. Setting this property to infinite means that the animation will never stop. animation-direction We can decide the direction of the animation using this property. We can use values such as reverse and alternate to define the direction of the element to be animated. animation-play-state Using this property, we can determine whether the animation is running or paused. Now that we have had a look at these properties, we will incorporate some of them in code to understand the functionality in a better way. Hence, to gain practical insight, let's look at the following code. 
<!DOCTYPE html> <html> <head> <style> body { background:#000; color:#fff; } #trigger { width:100px; height:100px; position:absolute; top:50%; margin:-50px 0 0 -50px; left:50%; background: black; border-radius:50px; /*set the animation*/ /*[animation name] [animation duration] [animation timing function] [animation delay] [animation iterations count] [animation direction]*/ animation: glowness 5s linear 0s 5 alternate; -moz-animation: glowness 5s linear 0s 5 alternate; /* Firefox */ -webkit-animation: glowness 5s linear 0s 5 alternate; /* Safari and Chrome */ -o-animation: glowness 5s linear 0s 5 alternate; /* Opera */ -ms-animation: glowness 5s linear 0s 5 alternate; /* IE10 */ } #trigger:hover { animation-play-state: paused; -moz-animation-play-state: paused; -webkit-animation-play-state: paused; -o-animation-play-state: paused; -ms-animation-play-state: paused; } /*animation keyframes*/ @keyframes glowness { 0% {box-shadow: 0 0 80px orange;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } @-moz-keyframes glowness /* Firefox */ { 0% {box-shadow: 0 0 80px orange;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } @-webkit-keyframes glowness /* Safari and Chrome */ { 0% {box-shadow: 0 0 80px orange;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } @-o-keyframes glowness /* Opera */ { 0% {box-shadow: 0 0 80px orange;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } @-ms-keyframes glowness /* IE10 */ { 0% {box-shadow: 0 0 20px green;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } </style> <!-- jQuery is required by the event-handling script below --> <script src="https://code.jquery.com/jquery-1.10.2.min.js"></script> <script> $(function() { // bind the handlers once the DOM is ready // animation started (buggy on firefox) $('#trigger').on('animationstart mozanimationstart webkitAnimationStart oAnimationStart msanimationstart', function() { $('p').html('animation started'); }); // animation paused $('#trigger').on('mouseover', function() { $('p').html('animation paused'); }); // animation re-started $('#trigger').on('mouseout', function() { $('p').html('animation re-started'); }); // animation ended $('#trigger').on('animationend mozanimationend webkitAnimationEnd oAnimationEnd msanimationend', function() { $('p').html('animation ended'); }); // iteration count var i = 0; $('#trigger').on('animationiteration mozanimationiteration webkitAnimationIteration oAnimationIteration msanimationiteration', function() { i++; $('p').html('animation iteration=' + i); }); }); </script> </head> <body> <div id="trigger"></div> <p></p> </body> </html> The output of the code on execution would be as follows: We have used -webkit as the prefix in this example as we are executing the code in Google Chrome. Please use the -moz prefix for Firefox and -o- for Opera. Comments are added in the code so that we can understand it easily. Apart from HTML5 and CSS3, we have used a bit of jQuery; it is loaded from a CDN, the handlers are bound once the DOM is ready, and the status messages are written into the empty <p> element in the body. Let's go through the animation part of the code to understand it better. In the CSS3 styles, we have mentioned the animation direction as alternate, as a result of which the animation runs in the opposite direction after the first iteration. We have used the hover property. In this code, whenever we hover over the object, the animation is paused. 
We have also defined the glowness keyframes for the object, specifying how the colors change and a box-shadow attribute for each step of the animation. We have defined a <script> tag in which we have included the JavaScript and jQuery code. The element with the trigger ID is the animated object, and we bind the event handlers to it using jQuery's on() method. We have used the mouseover and mouseout events, which fire when the user moves the mouse pointer over an element and out of an element respectively. We have used those events in conjunction with the start, end, and pausing of the animation. Therefore, we can create complex animations using CSS3. Coding is an art that gets better with practice; we need to implement these features ourselves in order to learn the subtle nuances of HTML5 and CSS3. However, we are just on the shore; the sea of knowledge lies far beyond. In this article, we have covered a lot of HTML5 and CSS3 features. Instead of wading through loads of theory, the concepts in this article are explained in a practical manner using code samples to demonstrate the new features of HTML5 and CSS3. The code samples are such that you can copy the code (the entire code is written instead of code snippets) and execute it for better understanding. Transition, transformation, and animation are also explained in a lucid manner, and there is a gradual increase in the difficulty level throughout the article. By the end of the book, you will be thoroughly acquainted with HTML5 and CSS3, enabling you to design a web page using the included code samples with ease. Click on the following link to have a look at the book: http://www.packtpub.com/html5-and-css3-for-transition-transformation-animation/book Summary This article has discussed how HTML5 and CSS3 features can be used in websites. There is a detailed discussion of the animation features offered by CSS3. Resources for Article: Further resources on this subject: Mobiles First – How and Why [Article] Creating an Animated Gauge with CSS3 [Article] HTML5 Canvas [Article]


Getting Started with Ansible

Packt
18 Nov 2013
8 min read
(For more resources related to this topic, see here.) First steps with Ansible Ansible modules take arguments in key-value pairs that look similar to key=value, perform a job on the remote server, and return information about the job as JSON. The key-value pairs tell the module what to do when it is invoked. The data returned from the module lets Ansible know if anything changed or if any variables should be changed or set afterwards. Modules are usually run within playbooks as this lets you chain many together, but they can also be used on the command line. Previously, we used the ping command to check that Ansible had been correctly set up and was able to access the configured node. The ping module only checks that the core of Ansible is able to run on the remote machine but effectively does nothing. A slightly more useful module is called setup. This module connects to the configured node, gathers data about the system, and then returns those values. This isn't particularly handy for us while running from the command line; however, in a playbook you can use the gathered values later in other modules. To run Ansible from the command line, you need to pass two things, though usually three. First is a host pattern to match the machines that you want to apply the module to. Second, you need to provide the name of the module that you wish to run, and optionally any arguments that you wish to give to the module. For the host pattern, you can use a group name, a machine name, a glob, or a tilde (~) followed by a regular expression matching hostnames; to match all machines, you can use either the word all or simply *. To run the setup module on one of your nodes, you need the following command line: $ ansible machinename -u root -k -m setup The setup module will then connect to the machine and give you a number of useful facts back. All the facts provided by the setup module itself are prepended with ansible_ to differentiate them from variables. The following is a table of the most common values you will use, example values, and a short description of the fields:

Field                          Example                       Description
ansible_architecture           x86_64                        The architecture of the managed machine
ansible_distribution           CentOS                        The Linux or Unix distribution on the managed machine
ansible_distribution_version   6.3                           The version of the preceding distribution
ansible_domain                 example.com                   The domain name part of the server's hostname
ansible_fqdn                   machinename.example.com       The fully qualified domain name of the managed machine
ansible_interfaces             ["lo", "eth0"]                A list of all the interfaces the machine has, including the loopback interface
ansible_kernel                 2.6.32-279.el6.x86_64         The kernel version installed on the managed machine
ansible_memtotal_mb            996                           The total memory in megabytes available on the managed machine
ansible_processor_count        1                             The total number of CPUs available on the managed machine
ansible_virtualization_role    guest                         Whether the machine is a guest or a host machine
ansible_virtualization_type    kvm                           The type of virtualization setup on the managed machine

These variables are gathered using Python from the host system; if you have facter or ohai installed on the remote node, the setup module will execute them and return their data as well. As with other facts, ohai facts are prepended with ohai_ and facter facts with facter_. While the setup module doesn't appear to be too useful on the command line, once you start writing playbooks, it will come into its own. 
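As a small taste of what is to come, the following minimal playbook sketch shows one way the gathered facts could be referenced later by other modules; the webservers group name, the file path, and the playbook itself are illustrative assumptions rather than part of the original example:

# facts-demo.yml -- a hypothetical playbook; run with: ansible-playbook facts-demo.yml -u root -k
- hosts: webservers
  tasks:
    # the setup module runs implicitly at the start, so ansible_* facts are available here
    - name: create a scratch directory only on 64-bit machines
      file: path=/tmp/scratch state=directory mode=0700 owner=root
      when: ansible_architecture == "x86_64"

    - name: report which distribution each node runs
      debug: msg="Running {{ ansible_distribution }} {{ ansible_distribution_version }}"

We will return to playbooks properly later; for now it is enough to see that every ansible_ fact from the preceding table can be referenced by name.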
If all the modules in Ansible did as little as the setup and ping modules, we would not be able to change anything on the remote machine. Almost all of the other modules that Ansible provides, such as the file module, allow us to actually configure the remote machine. The file module can be called with a single path argument; this will cause it to return information about the file in question. If you give it more arguments, it will try to alter the file's attributes and tell you if it has changed anything. Ansible modules will almost always tell you if they have changed anything, which becomes more important when you are writing playbooks. You can call the file module, as shown in the following command, to see details about /etc/fstab: $ ansible machinename -u root -k -m file -a 'path=/etc/fstab' The preceding command should elicit a response like the following code: machinename | success >> { "changed": false, "group": "root", "mode": "0644", "owner": "root", "path": "/etc/fstab", "size": 779, "state": "file" } Or like the following command to create a new test directory in /tmp: $ ansible machinename -u root -k -m file -a 'path=/tmp/test state=directory mode=0700 owner=root' The preceding command should return something like the following code: machinename | success >> { "changed": true, "group": "root", "mode": "0700", "owner": "root", "path": "/tmp/test", "size": 4096, "state": "directory" } The second command will have the changed variable set to true, if the directory doesn't exist or has different attributes. When run a second time, the value of changed should be false indicating that no changes were required. There are several modules that accept similar arguments to the file module, and one such example is the copy module. The copy module takes a file on the controller machine, copies it to the managed machine, and sets the attributes as required. For example, to copy the /etc/fstab file to /tmp on the managed machine, you will use the following command: $ ansible machinename -u root -k -m copy -a 'src=/etc/fstab dest=/tmp/fstab mode=0700 owner=root' The preceding command, when run the first time, should return something like the following code: machinename | success >> { "changed": true, "dest": "/tmp/fstab", "group": "root", "md5sum": "fe9304aa7b683f58609ec7d3ee9eea2f", "mode": "0700", "owner": "root", "size": 637, "src": "/root/.ansible/tmp/ansible-1374060150.96- 77605185106940/source", "state": "file" } There is also a module called command that will run any arbitrary command on the managed machine. This lets you run arbitrary commands, such as a preprovided installer or a self-written script; it is also useful for rebooting machines. Please note that this module does not run the command within the shell, so you cannot perform redirection, use pipes, expand shell variables, or background commands. Ansible modules strive to prevent changes being made when they are not required. This is referred to as idempotency and can make running commands against multiple servers much faster. Unfortunately, Ansible cannot know whether your command has changed anything or not, so you have to give it some help to keep it idempotent. You can do this either via the creates or the removes argument. If you give a creates argument, the command will not be run if the filename argument exists. The opposite is true of the removes argument; if the filename exists, the command will be run. 
You run the command as follows: $ ansible machinename -m command -a 'rm -rf /tmp/testing removes=/tmp/testing' If there is no file or directory named /tmp/testing, the command output will indicate that it was skipped, as follows: machinename | skipped Otherwise, if the file did exist, it will look as follows: machinename | success | rc=0 >> Often it is better to use another module in place of the command module. Other modules offer more options and can better capture the problem domain they work in. For example, it would be much less work for Ansible and also the person writing the configurations to use the file module in this instance, since the file module will recursively delete something if the state is set to absent. So, this command would be equivalent to the following command: $ ansible machinename -m file -a 'path=/tmp/testing state=absent' If you need to use features usually available in a shell while running your command, you will need the shell module. This way you can use redirection, pipes, or job backgrounding. You can pick which shell to use with the executable argument. Note that the shell module supports the creates argument but does not support the removes argument. You can use the shell module as follows: $ ansible machinename -m shell -a '/opt/fancyapp/bin/installer.sh > /var/log/fancyappinstall.log creates=/var/log/fancyappinstall.log' Summary In this article, we have covered which installation type to choose, installing Ansible, and how to build an inventory file to reflect your environment. After this, we saw how to use Ansible modules in an ad hoc style for simple tasks. Finally, we discussed how to learn which modules are available on your system and how to use the command line to get instructions for using a module. Resources for Article: Further resources on this subject: Configuring Manage Out to DirectAccess Clients [Article] Creating and configuring a basic mobile application [Article] Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [Article]


Derivatives Pricing

Packt
18 Nov 2013
10 min read
(For more resources related to this topic, see here.) Derivatives are financial instruments which derive their value from (or are dependent on) the value of another product, called the underlying. The three basic types of derivatives are forward and futures contracts, swaps, and options. In this article we will focus on the latter class and show how basic option pricing models and some related problems can be handled in R. We will start with an overview of how to use the continuous Black-Scholes model and the binomial Cox-Ross-Rubinstein model in R, and then we will proceed with discussing the connection between these models. Furthermore, with the help of calculating and plotting the Greeks, we will show how to analyze the most important types of market risks that options involve. Finally, we will discuss what implied volatility means and will illustrate this phenomenon by plotting the volatility smile with the help of real market data. The most important characteristic of options compared to futures or swaps is that you cannot be sure whether the transaction (buying or selling the underlying) will take place or not. This feature makes option pricing more complex and requires all models to make assumptions regarding the future price movements of the underlying product. The two models we are covering here differ in these assumptions: the Black-Scholes model works with a continuous process while the Cox-Ross-Rubinstein model works with a discrete stochastic process. However, the remaining assumptions are very similar and we will see that the results are close too. The Black-Scholes model The assumptions of the Black-Scholes model (Black and Scholes, 1973, see also Merton, 1973) are as follows: The price of the underlying asset (S) follows geometric Brownian motion: dS = μS dt + σS dW. Here μ (drift) and σ (volatility) are constant parameters and W is a standard Wiener process. The market is arbitrage-free. The underlying is a stock paying no dividends. Buying and (short) selling the underlying asset is possible in any (even fractional) amount. There are no transaction costs. The short-term interest rate (r) is known and constant over time. The main result of the model is that under these assumptions, the price of a European call option (c) has a closed form: c = S N(d1) - X e^(-r(T-t)) N(d2), where d1 = [ln(S/X) + (r + σ²/2)(T-t)] / (σ√(T-t)) and d2 = d1 - σ√(T-t). Here X is the strike price, T-t is the time to maturity of the option, and N denotes the cumulative distribution function of the standard normal distribution. The equation giving the price of the option is usually referred to as the Black-Scholes formula. It is easy to see from put-call parity that the price of a European put option (p) with the same parameters is given by: p = X e^(-r(T-t)) N(-d2) - S N(-d1). Now consider a call and put option on a Google stock in June 2013 with a maturity of September 2013 (that is, with 3 months of time to maturity). Let us assume that the current price of the underlying stock is USD 900, the strike price is USD 950, the volatility of Google is 22%, and the risk-free rate is 2%. We will calculate the value of the call option with the GBSOption function from the fOptions package. Beyond the parameters already discussed, we also have to set the cost of carry (b); in the original Black-Scholes model (with the underlying paying no dividends), it equals the risk-free rate. 
> library(fOptions) > GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02, + sigma = 0.22, b = 0.02) Title: Black Scholes Option Valuation Call: GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22) Parameters: Value: TypeFlag c S 900 X 950 Time 0.25 r 0.02 b 0.02 sigma 0.22 Option Price: 21.79275 Description: Tue Jun 25 12:54:41 2013 This prolonged output returns the passed parameters with the result just below the Option Price label. Setting the TypeFlag to p would compute the price of the put option, and now we are only interested in the result (found in the price slot—see the str of the object for more details) without the textual output: > GBSOption(TypeFlag = "p", S = 900, X = 950, Time = 1/4, r = 0.02, sigma = 0.22, b = 0.02)@price [1] 67.05461 We also have the choice to compute the preceding values with a more user-friendly calculator provided by the GUIDE package. Running the blackscholes() function would trigger a modal window with a form where we can enter the same parameters. Please note that the function uses the dividend yield instead of cost of carry, which is zero in this case. The Cox-Ross-Rubinstein model The Cox-Ross-Rubinstein (CRR) model (Cox, Ross and Rubinstein, 1979) assumes that the price of the underlying asset follows a discrete binomial process. The price might go up or down in each period and hence changes according to a binomial tree illustrated in the following plot, where u and d are fixed multipliers measuring the price changes when it goes up and down. The important feature of the CRR model is that u = 1/d and the tree is recombining; that is, the price after two periods will be the same if it first goes up and then goes down or vice versa, as shown in the following figure: To build a binomial tree, first we have to decide how many steps we are modeling (n); that is, how many steps the time to maturity of the option will be divided into. Alternatively, we can determine the length of one time step ∆t (measured in years) on the tree: ∆t = (T-t)/n. If we know the volatility (σ) of the underlying, the parameter u is determined according to the following formula: u = e^(σ√∆t). And consequently: d = 1/u = e^(-σ√∆t). When pricing an option in a binomial model, we need to determine the tree of the underlying until the maturity of the option. Then, having all the possible prices at maturity, we can calculate the corresponding possible option values, simply given by the following formulas: max(ST - X, 0) for a call and max(X - ST, 0) for a put, where ST is the underlying price at maturity. To determine the option price with the binomial model, in each node we have to calculate the expected value of the next two possible option values and then discount it. The problem is that it is not trivial what expected return to use for discounting. The trick is that we are calculating the expected value with a hypothetic probability, which enables us to discount with the risk-free rate. This probability is called the risk-neutral probability (pn) and can be determined as follows: pn = (e^(r∆t) - d) / (u - d). The interpretation of the risk-neutral probability is quite plausible: if the probability that the underlying price goes up from any of the nodes was pn, then the expected return of the underlying would be the risk-free rate. Consequently, an expected value calculated with pn can be discounted by r, and the price of the option in any node of the tree is determined as: g = e^(-r∆t) [pn·gu + (1 - pn)·gd]. In the preceding formula, g is the price of an option in general (it may be a call or a put as well) in a given node, while gu and gd are the values of this derivative in the two possible nodes one period later. 
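To make these building blocks concrete before calling the packaged functions, a short R sketch can compute them directly; this snippet is only an illustration using the same parameters that we plug into the fOptions functions below (σ = 22%, r = 2%, T-t = 0.25 years, and n = 3 steps):

# a minimal sketch of the CRR building blocks; not part of the fOptions package
sigma <- 0.22
r     <- 0.02
Time  <- 1/4
n     <- 3
dt <- Time / n                      # length of one time step
u  <- exp(sigma * sqrt(dt))         # up multiplier
d  <- 1 / u                         # down multiplier
p  <- (exp(r * dt) - d) / (u - d)   # risk-neutral probability
round(c(dt = dt, u = u, d = d, p = p), 4)

The last line simply prints the four values, so they can be checked against the tree produced by BinomialTreeOption later.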
To demonstrate the CRR model in R, we will use the same parameters as in the case of the Black-Scholes formula. Hence, S=900, X=950, σ=22%, r=2%, b=2%, T-t=0.25. We also have to set n, the number of time steps on the binomial tree. For illustrative purposes, we will work with a 3-period model: > CRRBinomialTreeOption(TypeFlag = "ce", S = 900, X = 950, + Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)@price [1] 20.33618 > CRRBinomialTreeOption(TypeFlag = "pe", S = 900, X = 950, + Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)@price [1] 65.59803 It is worth observing that the option prices obtained from the binomial model are close to (but not exactly the same as) the Black-Scholes prices calculated earlier. Apart from the final result, that is, the current price of the option, we might be interested in the whole option tree as well: > CRRTree <- BinomialTreeOption(TypeFlag = "ce", S = 900, X = 950, + Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3) > BinomialTreePlot(CRRTree, dy = 1, xlab = "Time steps", + ylab = "Number of up steps", xlim = c(0,4)) > title(main = "Call Option Tree") Here, we first computed a matrix with BinomialTreeOption using the given parameters and saved the result in CRRTree, which was then passed to the plot function with labels for both the x and y axes and the limits of the x axis set from 0 to 4, as shown in the following figure. The y-axis (number of up steps) shows how many times the underlying price has gone up in total. Down steps are defined as negative up steps. The European put option can be shown similarly by changing the TypeFlag to pe in the previous code: Connection between the two models After applying the two basic option pricing models, we give some theoretical background to them. We do not aim to give a detailed mathematical derivation, but we intend to emphasize (and then illustrate in R) the similarities of the two approaches. The financial idea behind the continuous and the binomial option pricing is the same: if we manage to hedge the option perfectly by holding the appropriate quantity of the underlying asset, it means we created a risk-free portfolio. Since the market is supposed to be arbitrage-free, the yield of a risk-free portfolio must equal the risk-free rate. One important observation is that the correct hedging ratio is to hold delta units of the underlying asset per option. Hence, the ratio is the partial derivative (or its discrete correspondent in the binomial model) of the option value with respect to the underlying price. This partial derivative is called the delta of the option. Another interesting connection between the two models is that the delta-hedging strategy and the related arbitrage-free argument yield the same pricing principle: the value of the derivative is the risk-neutral expected value of its future possible values, discounted by the risk-free rate. This principle is easily tractable on the binomial tree where we calculated the discounted expected values node by node; however, the continuous model has the same logic as well, even if the expected value is mathematically more complicated to compute. This is the reason why we gave only the final result of this argument, which was the Black-Scholes formula. Now we know that the two models have the same pricing principles and ideas (delta-hedging and risk-neutral valuation), but we also observed that their numerical results are not equal. The reason is that the stochastic processes assumed to describe the price movements of the underlying asset are not identical. 
Nevertheless, they are very similar; if we determine the value of u and d from the volatility parameter as we did in The Cox-Ross-Rubinstein model section, the binomial process approximates the geometric Brownian motion. Consequently, the option price of the binomial model converges to that of the Black-Scholes model if we increase the number of time steps (or equivalently, decrease the length of the steps). To illustrate this relationship, we will compute the option price in the binomial model with increasing numbers of time steps. In the following figure, we compare the results with the Black-Scholes price of the option: The plot was generated by a loop running n from 1 to 200 to compute CRRBinomialTreeOption with fixed parameters: > prices <- sapply(1:200, function(n) { + CRRBinomialTreeOption(TypeFlag = "ce", S = 900, X = 950, + Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = n)@price + }) Now the prices variable holds 200 computed values: > str(prices) num [1:200] 26.9 24.9 20.3 23.9 20.4... Let us also compute the option price with the generalized Black-Scholes formula: > price <- GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02, sigma = 0.22, b = 0.02)@price And show the prices in a joint plot with the GBS option rendered in red: > plot(1:200, prices, type='l', xlab = 'Number of steps', + ylab = 'Prices') > abline(h = price, col ='red') > legend("bottomright", legend = c('CRR-price', 'BS-price'), + col = c('black', 'red'), pch = 19)

RESTful Web Services – Server-Sent Events (SSE)

Packt
15 Nov 2013
5 min read
Getting started Generally, the flow of web services is initiated by the client sending a request for a resource to the server. This is the traditional way of consuming web services. Traditional Flow Here, the browser or Jersey client initiates the request for data from the server, and the server provides a response along with the data. The server may not have new data to return every time the client requests the resource. This becomes difficult in an application where real-time data needs to be shown: even when there is no new data on the server, the client needs to check for it every time. Nowadays, there is a requirement for the server to send data without a client request. For this to happen, the client and server need to stay connected so that the server can push data to the client; this is why the mechanism is termed Server-Sent Events. With Server-Sent Events, the connection created initially between the client and server is not released after the request. The server maintains the connection and pushes data to the respective client when required. Server-Sent Event Flow In the Server-Sent Event Flow diagram, when a browser or a Jersey client initiates a request to establish a connection with the server using EventSource, the server, which is always listening for new connections, opens a new connection and maintains it in a queue. How connections are maintained depends upon the implementation of the business logic. SSE creates a single unidirectional connection, so only one connection is established between the client and server. After the connection is successfully established, the client listens for new events from the server. Whenever a new event occurs on the server side, the server will broadcast the event, along with its data, to the open HTTP connections. In modern browsers that support HTML5, the onmessage method of EventSource is responsible for handling new events received from the server; whereas, in the case of Jersey clients, we have the onEvent method of EventSource, which handles new events from the server. Implementing Server-Sent Events (SSE) To use SSE, we need to register SseFeature on both the client and server sides. By doing so, both sides know how to serialize and deserialize event data sent over the network. SSE: Internal Working In the SSE: Internal Working diagram, we assume that the client and server are connected. When any new event is generated, the server creates an OutboundEvent instance that is responsible for the chunked output, which in turn holds the data in a serialized format. OutboundEventWriter is responsible for serializing the data on the server side. We need to specify the media type of the data in OutboundEvent; there is no restriction to specific media types. On the client side, InboundEvent is responsible for handling the incoming data from the server. Here, InboundEvent receives the chunked input that contains the serialized data, and the data is deserialized using InboundEventReader. Using SseBroadcaster, we are able to broadcast events to multiple clients that are connected to the server. 
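Before turning to the server-side example, here is a minimal sketch of what the consuming side can look like in a browser; the /services/sseEvents URL is an assumption pieced together from the @ApplicationPath and @Path annotations used below (the root resource's own @Path may add another segment), and the logging is purely illustrative:

<script>
  // open a persistent SSE connection to the resource defined later in this article
  var source = new EventSource('/services/sseEvents');

  // onmessage fires for every event pushed by the server
  source.onmessage = function (event) {
    console.log('received: ' + event.data);
  };

  // the browser retries automatically after a dropped connection
  source.onerror = function () {
    console.log('connection interrupted');
  };
</script>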
Let's look at the example, which shows how to create SSE web services and broadcast the events: @ApplicationPath("services") public class SSEApplication extends ResourceConfig { public SSEApplication() { super(SSEResource.class, SseFeature.class); } } Here, we registered the SseFeature module and the SSEResource root-resource class with the server. private static final SseBroadcaster BROADCASTER = new SseBroadcaster(); …… @GET @Path("sseEvents") @Produces(SseFeature.SERVER_SENT_EVENTS) public EventOutput getConnection() { final EventOutput eventOutput = new EventOutput(); BROADCASTER.add(eventOutput); return eventOutput; } …… In the SSEResource root class, we need to create a resource method that allows clients to establish a connection that persists. Here, we register the connection with the BROADCASTER instance of the SseBroadcaster class. EventOutput manages a specific client connection, and SseBroadcaster is simply responsible for accommodating a group of EventOutput instances; that is, the client connections. …… @POST @Consumes(MediaType.APPLICATION_FORM_URLENCODED) public void post(@FormParam("name") String name) { BROADCASTER .broadcast(new OutboundEvent.Builder() .data(String.class, name) .build()); } …… When the POST method is invoked, we create a new event and broadcast it to the clients registered with the BROADCASTER instance. The OutboundEvent is constructed through its Builder, whose data() method is initialized with a specific type and the actual data; we can provide any media type for the data. By calling the build() method, the data is serialized internally with the OutboundEventWriter class. When broadcast(OutboundEvent) is called, SseBroadcaster internally pushes the data to all registered EventOutput instances; that is, to all clients connected to the broadcaster. At times, the client and server have been connected and, after some time, the client gets disconnected. In this case, SseBroadcaster automatically handles the client connection; that is, it determines whether the connection needs to be maintained. When any client connection is closed, the broadcaster detects the closed EventOutput and frees the connection and the resources held by that EventOutput. Summary Thus we learned the difference between the traditional web service flow and the SSE web service flow. We also covered how to create SSE web services and implement the Jersey client in order to consume the SSE using different programmatic models. Useful Links: Setting up the most Popular Journal Articles in your Personalized Community in Liferay Portal Understanding WebSockets and Server-sent Events in Detail RESS - The idea and the Controversies


Application Performance

Packt
15 Nov 2013
8 min read
(For more resources related to this topic, see here.) Data sizing The cost of abstractions in terms of data size plays an important role. For example, whether or not a data element can fit into a processor cache line depends directly upon its size. On a Linux system, we can find out the cache line size and other parameters by inspecting the values in the files under /sys/devices/system/cpu/cpu0/cache/. Another concern we generally find with data sizing is how much data we are holding in the heap at a time, as GC has direct consequences on the application's performance. While processing data, often we do not really need all the data we hold on to. Consider the example of generating a summary report of sold items for a certain period (several months) of time. Once the summary data for a subperiod (a month) is computed, we do not need the item details anymore; hence, it's better to remove the unwanted data while we add the summaries. This is shown in the following example: (defn summarize [daily-data] ; daily-data is a map (let [s (items-summary (:items daily-data))] (-> daily-data (select-keys [:digest :invoices]) ; we keep only the required key/val pairs (assoc :summary s)))) ;; now inside report generation code (->> (fetch-items period-from period-to :interval-day) (map summarize) generate-report) Had we not used select-keys in the preceding summarize function, it would have returned the summary along with all the other existing keys in the map, including the item details we no longer need. Now, such a thing is often combined with lazy sequences. So, for this scheme to work, it is important not to hold on to the head of the lazy sequence. Reduced serialization An I/O channel is a common source of latency. The perils of over-serialization cannot be overstated. Whether we read or write data from a data source over an I/O channel, all of that data needs to be prepared, encoded, serialized, de-serialized, and parsed before being worked on. It is better for every step to have less data involved in order to lower the overhead. Where there is no I/O involved, such as in-process communication, it generally makes no sense to serialize. A common example of over-serialization is encountered while working with SQL databases. Often, there are common SQL query functions that fetch all columns of a table or a relation—they are called by various functions that implement the business logic. Fetching data that we do not need is wasteful and detrimental to the performance for the same reason that we discussed in the preceding paragraph. While it may seem more work to write one SQL statement and one database query function for each use case, it pays off with better performance. Code that uses NoSQL databases is also subject to this anti-pattern—we have to take care to fetch only what we need even though it may lead to additional code. There's a pitfall to be aware of when reducing serialization. Often, some information needs to be inferred in the absence of the serialized data. In such cases where some of the serialization is dropped so that we can infer other information, we must compare the cost of inference versus the serialization overhead. The comparison may not necessarily be done per operation, but rather on the whole. Then, we can consider the resources we can allocate in order to achieve capacities for various parts of our systems. Chunking to reduce memory pressure What happens when we slurp a text file regardless of its size? The contents of the entire file will sit in the JVM heap. 
If the file is larger than the JVM heap capacity, the JVM will terminate by throwing OutOfMemoryError. If the file is large but not large enough to force the JVM into an OOM error, it leaves relatively less JVM heap space for other operations in the application to continue. A similar situation takes place when we carry out any operation disregarding the JVM heap capacity. Fortunately, this can be fixed by reading data in chunks and processing them before reading further. Sizing for file/network operations Let us take the example of a data ingestion process where a semi-automated job uploads large Comma Separated Values (CSV) files via the File Transfer Protocol (FTP) to a file server, and another automated job, which is written in Clojure, runs periodically to detect the arrival of files via the Network File System (NFS). After detecting a new file, the Clojure program processes the file, updates the result in a database, and archives the file. The program detects and processes several files concurrently. The size of the CSV files is not known in advance, but the format is predefined. As per the preceding description, one potential problem is that since there could be multiple files being processed concurrently, how do we distribute the JVM heap among the concurrent file-processing jobs? Another issue could be that the operating system imposes a limit on how many files can be opened at a time; on Unix-like systems, you can use the ulimit command to extend the limit. We cannot arbitrarily slurp the CSV file contents—we must limit each job to a certain amount of memory and also limit the number of jobs that can run concurrently. At the same time, we cannot read a very small number of rows from a file at a time because this may impact performance. (def ^:const K 1024) ;; create the buffered reader using a custom 128K buffer size (-> filename java.io.FileInputStream. java.io.InputStreamReader. (java.io.BufferedReader. (* K 128))) Fortunately, we can specify the buffer size when reading from a file or even from a network stream so as to tune the memory usage and performance as appropriate. In the preceding code example, we explicitly set the buffer size of the reader to facilitate the same. Sizing for JDBC query results Java's interface standard for SQL databases, JDBC (which is technically not an acronym), supports fetch-size for fetching query results via JDBC drivers. The default fetch size depends on the JDBC driver. Most JDBC drivers keep a low default value so as to avoid high memory usage and attain internal performance optimization. A notable exception to this norm is the MySQL JDBC driver that completely fetches and stores all rows in memory by default. (require '[clojure.java.jdbc :as jdbc]) ;; using prepare-statement directly (we rarely use it directly, shown just for demo) (with-open [stmt (jdbc/prepare-statement conn sql :fetch-size 1000 :max-rows 9000) rset (resultset-seq (.executeQuery stmt))] (vec rset)) ;; using query (jdbc/query db [{:fetch-size 1000} "SELECT empno FROM emp WHERE country=?" 1]) When using the Clojure Contrib library java.jdbc (https://github.com/clojure/java.jdbc as of Version 0.3.0), the fetch size can be set while preparing a statement as shown in the preceding example. The fetch size does not guarantee proportional latency; however, it can be used safely for memory sizing. We must test any performance-impacting latency changes due to fetch size at different loads and use cases for the particular database and JDBC driver. 
Besides fetch-size, we can also pass the max-rows argument to limit the maximum number of rows to be returned by a query. However, this implies that the extra rows will be truncated from the result, not that the database will internally limit the number of rows it realizes. Resource pooling There are several types of resources on the JVM that are rather expensive to initialize. Examples are HTTP connections, execution threads, JDBC connections, and so on. The Java API recognizes such resources and has built-in support for creating a pool of some of those resources so that the consumer code borrows a resource from a pool when required and at the end of the job simply returns it to the pool. Java's thread pools and JDBC data sources are prominent examples. The idea is to preserve the initialized objects for reuse. Even when Java does not support pooling of a resource type directly, you can always create a pool abstraction around custom expensive resources. The pooling technique is common in I/O activities, but it can be equally applicable to non-I/O purposes where the initialization cost is high. Summary Designing an application for performance should be based on the use cases and patterns of anticipated system load and behavior. Measuring performance is extremely important to guide optimization in the process. Fortunately, there are several well-known optimization patterns to tap into, such as resource pooling and data sizing. In this article, we analyzed performance optimization using these patterns. Resources for Article: Further resources on this subject: Improving Performance with Parallel Programming [Article] Debugging Java Programs using JDB [Article] IBM WebSphere Application Server Security: A Threefold View [Article]


FuelPHP

Packt
15 Nov 2013
11 min read
(For more resources related to this topic, see here.) Since it is community-driven, everyone is in an equal position to spot bugs, provide fixes, or add new features to the framework. This has led to the creation of features such as the new temporal ORM (Object Relation Mapper), which is a first for any PHP-based ORM. This also means that everyone can help build tools that make development easier, more straightforward, and quicker. The framework is lightweight and allows developers to load only what they need. It takes a configuration over convention approach: instead of enforcing conventions, FuelPHP treats them as recommendations and best practices. This allows new developers to jump onto a project and get up to speed more quickly. It also helps when we want to find extra team members for projects. A brief history of FuelPHP FuelPHP started out with the goal of adopting the best practices from other frameworks to form a thoroughly modern starting point, which makes full use of PHP Version 5.3 features, such as namespaces. It has little in the way of legacy and compatibility issues that can affect older frameworks. The framework was started in the year 2010 by Dan Horrigan. He was joined by Phil Sturgeon, Jelmer Schreuder, Harro Verton, and Frank de Jonge. FuelPHP was a break from other frameworks such as CodeIgniter, which was basically still a PHP 4 framework. This break allowed for the creation of a more modern framework for PHP 5.3, and brought together decades of experience of other languages and frameworks, such as Ruby on Rails and Kohana. After a period of community development and testing, Version 1.0 of the FuelPHP framework was released in July 2011. This marked a version ready for use on production sites and the start of the growth of the community. The community provides periodic releases (at the time of writing, it is up to Version 1.7) with a clear roadmap (http://fuelphp.com/roadmap) of features to be added. This also includes a good guide of progress made to date. The development of FuelPHP is an open process and all the code is hosted on GitHub at https://github.com/fuel/fuel, and the main core packages can be found in other repositories on the Fuel GitHub account—a full list of these can be found at https://github.com/fuel/. Features of FuelPHP Using bespoke PHP or a custom-developed framework could give you greater performance, but FuelPHP provides many features, documentation, and a great community. The following sections describe some of the most useful features. (H)MVC Although FuelPHP is a Model-View-Controller (MVC) framework, it was built to support the HMVC variant of MVC. Hierarchical Model-View-Controller (HMVC) is a way of separating logic and then reusing the controller logic in multiple places. This means that when a web page is generated using a theme or a template section, it can be split into multiple sections or widgets. Using this approach, it is possible to reuse components or functionality throughout a project or in multiple projects. In addition to the usual MVC structure, FuelPHP allows the use of presentation modules (ViewModels). These are a powerful layer that sits between the controller and the views, allowing for a smaller controller while still separating the view logic from both the controller and the views. If this isn't enough, FuelPHP also supports a router-based approach where you can directly route to a closure. This then deals with the execution of the input URI. 
Modular and extendable The core of FuelPHP has been designed so that it can be extended without the need for changing any code in the core. It introduces the notion of packages, which are self-contained pieces of functionality that can be shared between projects and people. Like the core, in the new versions of FuelPHP, these can be installed via the Composer tool. Just like packages, functionality can also be divided into modules. For example, a full user-authentication module can be created to handle user actions, such as registration. Modules can include both logic and views, and they can be shared between projects. The main difference between packages and modules is that packages can be extensions of the core functionality and they are not routable, while modules are routable. Security Everyone wants their applications to be as secure as possible; to this end, FuelPHP handles some of the basics for you. Views in FuelPHP will encode all the output to ensure that it's secure and is capable of avoiding Cross-site scripting (XSS) attacks. This behavior can be overridden, or the output can be cleaned by the included htmLawed library. The framework also supports Cross-site request forgery (CSRF) prevention with tokens, input filtering, and the query builder, which tries to help in preventing SQL injection attacks. PHPSecLib is used to offer some of the security features in the framework. Oil – the power of the command line If you are familiar with CakePHP, the Zend Framework, or Ruby on Rails, then you will be comfortable with FuelPHP Oil. It is the command-line utility at the heart of FuelPHP—designed to speed up development and efficiency. It also helps with testing and debugging. Although not essential, it proves indispensable during development. Oil provides a quick way for code generation, scaffolding, running database migrations, debugging, and cron-like tasks for background operations. It can also be used for custom tasks and background processes. Oil is a package and can be found at https://github.com/fuel/oil. ORM FuelPHP also comes with an Object Relation Mapper (ORM) package that helps in working with various databases through an object-oriented approach. It is relatively lightweight and is not supposed to replace the more complex ORMs such as Doctrine or Propel. The ORM also supports data relations such as belongs-to, has-one, has-many, and many-to-many relationships. Another nice feature is cascading deletions; in this case, the ORM will delete all the data associated with a single entry. The ORM package is available separately from FuelPHP and is hosted on GitHub at https://github.com/fuel/orm. Base controller classes and model classes FuelPHP includes several classes to give a head start on projects. These include controllers that help with templates, one for constructing RESTful APIs, and another that combines both templates and RESTful APIs. On the model side, base classes include CRUD (Create, Read, Update, and Delete) operations. There is a model for soft deletion of records, one for nested sets, and lastly a temporal model. This is an easy way of keeping revisions of data. The authentication package The authentication framework gives a good basis for user authentication and login functionality. It can be extended using drivers for new authentication methods. Some of the basics such as groups, basic ACL functions, and password hashing can be handled directly in the authentication framework. 
Although the authentication package is included when installing FuelPHP, it can be upgraded separately from the rest of the application. The code can be obtained from https://github.com/fuel/auth. Template parsers The parser package makes it even easier to separate logic from views instead of embedding basic PHP into the views. FuelPHP supports many template languages, such as Twig, Markdown, Smarty, and HTML Abstraction Markup Language (Haml). Documentation Although not particularly a feature of the actual framework, the documentation for FuelPHP is one of the best available. It is kept up-to-date for each release and can be found at http://fuelphp.com/docs/. What to look forward to in Version 2.0 Although this book focuses on FuelPHP 1.6 and newer, it is worth looking forward to the next major release of the framework. It brings significant improvements but also makes some changes to the way the framework functions. Global scope and moving to dependency injection One of the nice features of FuelPHP is the global scope that allows easy static syntax and instances when needed. One of the biggest changes in Version 2 is the move away from static syntax and instances. The framework used the Multiton design pattern, rather than the Singleton design pattern. Now, the majority of Multitons will be replaced with the Dependency Injection Container (DiC) design pattern, but this depends on the class in question. The reason for the changes is to allow the unit testing of core files and to dynamically swap and/or extend other classes depending upon the needs of the application. The move to dependency injection will allow all the core functionality to be tested in isolation. Before detailing the next feature, let's run through the design patterns in more detail. Singleton Ensures that a class only has a single instance and provides a global point of access to it. The thinking is that a single instance of a class or object can be more efficient, but it can add unnecessary restrictions to classes that may be better served using a different design pattern. Multiton This is similar to the singleton pattern but expands upon it to include a way of managing a map of named instances as key-value pairs. So instead of having a single instance of a class or object, this design pattern ensures that there is a single instance for each key-value pair. Often the multiton is known as a registry of singletons. Dependency injection container This design pattern aims to remove hard-coded dependencies and make it possible to change them either at run time or compile time. One example is ensuring that variables have default values while also allowing them to be overridden, and allowing other objects to be passed to a class for manipulation. It allows for mock objects to be used whilst testing functionality. Coding standards One of the far-reaching changes will be the difference in coding standards. FuelPHP Version 2.0 will now conform to both PSR-0 and PSR-1. This allows a more standard auto-loading mechanism and the ability to use Composer. Although Composer compatibility was introduced in Version 1.5, this move to PSR is for better consistency. It means that the method names will follow the camelCase convention rather than the current snake_case method names. Although a simple change, this is likely to have a large effect on existing projects and APIs. With a similar move of other PHP frameworks to a more standardized coding standard, there will be more opportunities to re-use functionality from other frameworks. 
Package management and modularization Package management for other languages and frameworks such as Ruby and Ruby on Rails has made sharing pieces of code and functionality easy and commonplace. The PHP world is much larger and this same sharing of functionality is not as common. PHP Extension and Application Repository (PEAR) was a precursor of most package managers. It is a framework and distribution system for re-usable PHP components. Although infinitely useful, it is not as widely supported by the more popular PHP frameworks. Starting with FuelPHP 1.6 and leading into FuelPHP 2.0, dependency management will be possible through Composer (http://getcomposer.org). This deals with not only single packages, but also their dependencies. It allows projects to be consistently set up with known versions of the libraries required by each project. This helps not only with development, but also with the testability and maintainability of the project. It also protects against API changes. The core of FuelPHP and other modules will be installed via Composer and there will be a gradual migration of some Version 1 packages. Backwards compatibility A legacy package will be released for FuelPHP that will provide aliases for the changed function names as part of the change in the coding standards. It will also allow the current use of static function calling to continue working, while allowing for a better ability to unit test the core functionality. Speed boosts Although slower during the initial alpha phases, Version 2.0 is shaping up to be faster than Version 1.0. Currently, the beta version (at the time of writing) is 7 percent faster while requiring 8 percent less memory. This might not sound much, but it can equate to a large saving if running a large website over multiple servers. These figures may get better in the final release of Version 2.0 after the remaining optimizations are complete. Summary We now know a little more about the history of FuelPHP and some of the useful features such as ORM, authentication, modules, (H)MVC, and Oil (the command-line interface). We have also listed the following useful links, including the official API documentation (http://fuelphp.com/docs/) and the FuelPHP home page (http://fuelphp.com). This article also touched upon some of the new features and changes due in Version 2.0 of FuelPHP. Resources for Article: Further resources on this subject: Installing PHP-Nuke [Article] Installing phpMyAdmin [Article] Integrating phpList 2 with Drupal [Article]
Quick start – creating your first template

Packt
12 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Preparing the project
To get started, create a file named index.html and add the following boilerplate code:

<!DOCTYPE HTML>
<html>
<head>
    <title>Handlebars Quickstart</title>
    <script src="handlebars.js"></script>
</head>
<body>
    <script>
        var src = "<h1>Hello {{name}}</h1>";
        var template = Handlebars.compile(src);
        var output = template({name: "Tom"});
        document.body.innerHTML += output;
    </script>
</body>
</html>

This is a good example to start with, as it demonstrates the minimum amount of code you need to write to get a template on screen. We start by writing the template itself: just a pair of header tags with a greeting message inside. If you remember from the introduction, a Handlebars tag is a reference to some external data wrapped between two pairs of curly braces, and it signifies a dynamic point in the page where Handlebars will insert some information. Here we just want a property called "name" to be inserted at this point, which we will set in a moment.

Once you have the template, the next step is where the magic begins: Handlebars' compile function will process the template's source and generate a JavaScript function to output the result. In other words, Handlebars creates a function that accepts some data and returns the final string with all the placeholders replaced. For our quick template from the preceding paragraph, the generated function behaves something like the following code:

var template = function (data) {
    return "<h1>Hello " + data.name + "</h1>";
}

Then, every time the template is called with data, the resulting string is passed back. It is, of course, a bit more complex than this, and Handlebars performs some escaping and other checks for you, but the basic idea of what the compile function generates remains the same. So, with our template function created, we call it by passing in some data (in this case the name Tom), take the output, and append it to the body. After opening this page in a browser, you should see something like the following screenshot:

With the basics out of the way, let's take a look at helpers.

Block helpers
Helpers can be called in the same way as the data placeholder was called from the template. The difference between them is that a data placeholder just takes a static string or number and inserts it into the template's output. Helpers, on the other hand, are functions that first compute something, and the result is then placed into the output instead. You can think of helpers as a more dynamic form of placeholders.

There are two types of helpers in Handlebars: tag helpers, which work like regular functions, and block helpers, which have an added, nested template to manipulate. Handlebars comes with a series of built-in block helpers that allow you to perform basic logic in your templates. One of the most commonly used block helpers is the each helper, which allows you to run a section of template once per item in an array. Let's take a look at it in action.

It is going to be too messy to keep placing the templates in JavaScript strings as we did in the first example, so we will place the next one in its own script tag and pull it in. The reason we use a script tag is that we don't want the template to show up on the page itself; by placing it in a script tag and setting the type to something the browser doesn't understand, it will simply be ignored.
So, right on top of the script block that we just wrote, add the following code:

<script id="quickstart" type="template/handlebars">
    <h1>Hello {{name}}</h1>
    <ul>
        {{#each messages}}
        <li><b>{{from}}</b>: {{text}}</li>
        {{/each}}
    </ul>
</script>

We give the script tag an id so we can access it later, and we give it an arbitrary type so that the browser doesn't try to parse it as JavaScript. Inside it, we start with the same template code as before, and then we add an each block to cycle through a list of messages and print each one out in a list element. The next step is to replace the script block underneath with new code that pulls the template in from here:

<script>
    var src = document.getElementById('quickstart').innerHTML;
    var template = Handlebars.compile(src);
    var output = template({
        name: "Tom",
        messages: [
            { from: "John", text: "Demo Message" },
            { from: "Bob", text: "Something Else" },
            { from: "John", text: "Second Post" }
        ]
    });
    document.body.innerHTML += output;
</script>

We start by pulling the template from the script block we added in the previous paragraph using standard JavaScript; next, we compile it as before and run the template, this time with the added "messages" array. Running this in your browser will give you something like the following:

You may have picked up on this already, but it's worth mentioning that inside the each block the context changes from the global data object passed into the template to the specific array element; because of this, we are able to access its properties directly. These first few steps have been simple, but along the way we have covered loading templates from script tags, and the syntax for both standard placeholders and block helpers in your templates.

Summary
Thus, we have learned how to create a template in this article.

Resources for Article:
Further resources on this subject:
Working with JavaScript in Drupal 6: Part 1 [Article]
Using JavaScript and jQuery in Drupal Themes [Article]
Basics of Exception Handling Mechanism in JavaScript Testing [Article]
Introduction to a WordPress application's frontend

Packt
12 Nov 2013
7 min read
(For more resources related to this topic, see here.)

Basic file structure of a WordPress theme
As WordPress developers, you should have a fairly good idea of the default file structure of WordPress themes. Let's have a brief introduction to the default files before identifying their usage in web applications. Think about a typical web application layout where we have a common header, footer, and content area. In WordPress, the content area is mainly populated by pages or posts. The design and the content for pages are provided through the page.php template, while the content for posts is provided through one of the following templates:

index.php
archive.php
category.php
single.php

Basically, most of these post-related file types are developed to cater to the typical functionality of blogging systems, and hence can be omitted in the context of web applications. Since custom posts are widely used in application development, we need to focus more on templates such as single-{post_type} and archive-{post_type} than on category.php, archive.php, and tag.php (a short sketch of this naming convention appears a little further down).

Even though the default themes contain a number of files providing default features, the style.css and index.php files alone are enough to implement a WordPress theme; complex web application themes are possible with a standalone index.php file. In normal circumstances, WordPress sites have a blog built on posts, and all the remaining content of the site is provided through pages. When referring to pages, the first thing that comes to mind is static content. But WordPress is a fully functional CMS, and hence page content can be highly dynamic. Therefore, we can provide complex application screens by using various techniques on pages. Let's continue our exploration by understanding the theme file execution hierarchy.

Understanding template execution hierarchy
WordPress has quite an extensive template execution hierarchy compared to general web application frameworks. However, most of these templates will be of minor importance in the context of web applications, so here we illustrate only the important template files. The complete template execution hierarchy can be found at: http://hub.packtpub.com/wp-content/uploads/2013/11/Template_Hierarchy.png

An example of the template execution hierarchy is shown in the following diagram. Once the Initial Request is made, WordPress looks for one of the main starting templates as illustrated in the preceding screenshot. It's obvious that most of the starting templates, such as the front page, comments popup, and index pages, are specifically designed for content management systems. In the context of web applications, we need to focus on both singular and archive pages, as most of the functionality depends on those templates. Let's identify the functionality of the main template files in the context of web applications:

Archive pages: These are used to provide summarized listings of data as a grid.
Single posts: These are used to provide detailed information about existing data in the system.
Singular pages: These are used for any type of dynamic content associated with the application. Generally, we can use pages for form submissions, dynamic data display, and custom layouts.

Let's dig deeper into the template execution hierarchy on the Singular Page path, as illustrated in the following diagram. A Singular Page is divided into two paths that contain posts or pages, and a Static Page is served through either Custom or Default page templates.
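Before looking at the page side of that split in more detail, here is a hedged sketch of how the post-type-specific templates mentioned earlier come into play. The 'book' post type and the myapp_ prefix are hypothetical names chosen for illustration; register_post_type() and add_action() are standard WordPress functions, and the file names follow the single-{post_type} and archive-{post_type} convention.

<?php
// Hypothetical example: register a 'book' custom post type from a plugin or
// the theme's functions.php file.
function myapp_register_book_post_type()
{
    register_post_type('book', array(
        'labels'      => array('name' => 'Books', 'singular_name' => 'Book'),
        'public'      => true,
        'has_archive' => true, // enables the archive (listing) view
    ));
}
add_action('init', 'myapp_register_book_post_type');

// With this in place, WordPress looks for these theme files first:
//   single-book.php  -> detailed view of a single book
//   archive-book.php -> summarized grid/listing of books
// and falls back to single.php and archive.php when they do not exist.

This mapping is why web application themes tend to rely on post-type-specific templates rather than the blog-oriented category.php and tag.php files.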
In general, we use Default page templates for loading website pages; WordPress looks for a page template matching the page's slug or ID before executing the default page.php file. In most scenarios, web application layouts will take the other route of Custom page templates, where we create a unique template file inside the theme for each layout and define it as a page template using a code comment. We can create a new custom page template by creating a new PHP file inside the theme folder and using the Template Name definition in a code comment, as illustrated here:

<?php
/**
* Template Name: My Custom Template
*/
?>

To the right of the preceding diagram, we have the Single Post Page, which is divided into three paths called Blog Post, Custom Post, and Attachment Post. Both Attachment Posts and Blog Posts are designed for blogs and hence will not be used frequently in web applications. However, the Custom Post template will have a major impact on application layouts. As with a Static Page, a Custom Post looks for post-type-specific templates before falling back to the default single.php file. The execution hierarchy of an Archive Page is similar in nature, as it looks for post-type-specific archive templates before reverting to the default archive.php file. Now that we have had a brief introduction to the template loading process used by WordPress, in the next section we are going to look at the template loading process of a typical web development framework to identify the differences.

Template execution process of web application frameworks
Most stable web application frameworks use a flat and straightforward template execution process compared to the extensive process used by WordPress. These frameworks don't come with built-in templates, and hence each and every template is created from scratch. Consider the following diagram of a typical template execution process. In this process, the Initial Request always comes to the index.php file, which is similar to the process used by WordPress or any other framework. It then looks for custom routes defined within the framework. It's possible to use custom routes within a WordPress context as well, even though they are not generally used for websites or blogs. Finally, the Initial Request looks for the direct template file located in the templates section of the framework. As you can see, the process of a normal framework has very limited depth and uses specialized templates. Keep in mind that the index.php file referred to here is the main starting point of the application, not a template file; in WordPress, there is also a specific template file named index.php located inside the theme folder. Managing templates in a typical application framework is a relatively easy task compared to the extensive template hierarchy used by WordPress. In web applications, it's ideal to keep the template hierarchy as flat as possible, with specific templates targeted towards each screen. In general, WordPress developers tend to add custom functionality and features by using specific templates within the hierarchy. Having multiple templates for a single screen and identifying their order of execution can be difficult in large-scale applications, and hence should be avoided wherever possible.

Web application layout creation techniques
As we move into developing web applications, the logic and screens become more complex, resulting in the need for custom templates beyond the conventional ones; a sketch of one such page template follows below.
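To make the idea of custom templates beyond the conventional ones concrete, here is a hedged sketch of what such an application screen might look like as a custom page template. The template name, file contents, and the 'booking' post type are hypothetical; get_header(), get_footer(), WP_Query, and the other calls are standard WordPress functions, but the exact markup and query depend on your application.

<?php
/**
* Template Name: Studio Booking
*
* A hypothetical custom page template acting as an application screen.
*/
get_header();
?>

<div class="booking-screen">
    <h1><?php the_title(); ?></h1>

    <?php
    // Application logic can run here: form handling, custom queries, and so on.
    $bookings = new WP_Query(array(
        'post_type'      => 'booking', // hypothetical custom post type
        'posts_per_page' => 10,
    ));

    while ($bookings->have_posts())
    {
        $bookings->the_post();
        echo '<p>' . esc_html(get_the_title()) . '</p>';
    }
    wp_reset_postdata();
    ?>
</div>

<?php get_footer(); ?>

Once this file is saved inside the active theme, the template becomes selectable from the Page Attributes box when editing a page, which is how a particular page gets tied to this layout.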
There is a wide range of techniques for putting such functionality into WordPress code. Each of these techniques has its own pros and cons, and choosing the appropriate one is vital to avoiding potential bottlenecks in large-scale applications. Here is a list of techniques for creating dynamic content within WordPress applications:

Static pages with shortcodes
Page templates
Custom templates with custom routing

Summary
In this article, we learned about the basic file structure of a WordPress theme, the template execution hierarchy, and the template execution process. We also learned about the different techniques for creating web application layouts.

Resources for Article:
Further resources on this subject:
Customizing WordPress Settings for SEO [Article]
Getting Started with WordPress 3 [Article]
Dynamic Menus in WordPress [Article]