
How-To Tutorials - CMS and E-Commerce

830 Articles

Understanding Express Routes

Packt
10 Jul 2013
10 min read
What are Routes?

Routes are URL schemas that describe the interfaces for making requests to your web app. A route combines an HTTP request method (also known as an HTTP verb) with a path pattern to define a URL in your app. Each route has an associated route handler, which does the job of performing any action in the app and sending the HTTP response. Any request to the server that matches a route definition is routed to the associated route handler. Route handlers are middleware functions, which can send the HTTP response or pass the request on to the next middleware in line. They may be defined in the app file or loaded via a Node module.

A quick introduction to HTTP verbs

The HTTP protocol recommends various methods of making requests to a web server; these methods are known as HTTP verbs. You may already be familiar with the GET and POST methods; there are more of them, which you will learn about shortly. By default, Express supports the following HTTP request methods, which allow us to define flexible and powerful routes in the app: GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS, CONNECT, PATCH, M-SEARCH, NOTIFY, SUBSCRIBE, and UNSUBSCRIBE.

GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS, CONNECT, and PATCH are part of the Hypertext Transfer Protocol (HTTP) specification as drafted by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). M-SEARCH, NOTIFY, SUBSCRIBE, and UNSUBSCRIBE are specified by the UPnP Forum. There are some obscure HTTP verbs, such as LINK, UNLINK, and PURGE, that are currently not supported by Express and the underlying Node HTTP library.

Routes in Express are defined using methods named after the HTTP verbs, on an instance of an Express application: app.get(), app.post(), app.put(), and so on. We will learn more about defining routes in a later section.
Even though a total of 13 HTTP verbs are supported by Express, you need not use all of them in your app. In fact, for a basic website, only GET and POST are likely to be used.

Revisiting the router middleware

This article would be incomplete without revisiting the router middleware. The router middleware is very special middleware: while other Express middleware is inherited from Connect, router is implemented by Express itself. This middleware is solely responsible for empowering Express with Sinatra-like routes. Connect-inherited middleware is referred to from the express object (express.favicon(), express.bodyParser(), and so on); the router middleware is referred to from the instance of the Express app (app.router). To ensure predictability and stability, we should explicitly add router to the middleware stack:

```javascript
app.use(app.router);
```

The router middleware is a middleware system of its own: the route definitions form the middleware in this stack. This means a matching route can respond with an HTTP response and end the request flow, or pass the request on to the next middleware in line. This fact will become clearer as we work through some examples in the upcoming sections. Though we won't be working with the router middleware directly, it is responsible for running the whole routing show in the background. Without the router middleware, there can be no routes and no routing in Express.

Defining routes for the app

We know what routes and route handler callback functions look like. Here is an example to refresh your memory:

```javascript
app.get('/', function(req, res) {
  res.send('welcome');
});
```

Routes in Express are created using methods named after HTTP verbs. For instance, in the previous example, we created a route to handle GET requests to the root of the website. You have a corresponding method on the app object for each of the HTTP verbs listed earlier.
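The idea that route handlers are just middleware in a stack can be sketched in a few lines of plain JavaScript. This is a simplified model for illustration only, not Express's actual implementation; the `run` helper and the hard-coded stack are invented for the example:

```javascript
// A minimal middleware stack: each function either ends the
// request or hands it to the next function via next().
const stack = [
  (req, res, next) => { next(); },               // pass-through middleware
  (req, res, next) => {                          // a "route": handles only /
    if (req.url === '/') res.end('welcome'); else next();
  },
  (req, res, next) => { res.end('not found'); }, // fallback handler
];

function run(url) {
  let body;
  const req = { url };
  const res = { end: (text) => { body = text; } };
  let i = 0;
  const next = () => stack[i++](req, res, next);
  next();
  return body;
}

console.log(run('/'));      // 'welcome'
console.log(run('/other')); // 'not found'
```

A matching handler that ends the response stops the chain, while one that calls next() lets later handlers see the request, which is the same contract Express route handlers follow.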
Let's create a sample application to see whether all the HTTP verbs are actually available as methods on the app object:

```javascript
var http = require('http');
var express = require('express');
var app = express();

// Include the router middleware
app.use(app.router);

// GET request to the root URL
app.get('/', function(req, res) {
  res.send('/ GET OK');
});

// POST request to the root URL
app.post('/', function(req, res) {
  res.send('/ POST OK');
});

// PUT request to the root URL
app.put('/', function(req, res) {
  res.send('/ PUT OK');
});

// PATCH request to the root URL
app.patch('/', function(req, res) {
  res.send('/ PATCH OK');
});

// DELETE request to the root URL
app.delete('/', function(req, res) {
  res.send('/ DELETE OK');
});

// OPTIONS request to the root URL
app.options('/', function(req, res) {
  res.send('/ OPTIONS OK');
});

// M-SEARCH request to the root URL
app['m-search']('/', function(req, res) {
  res.send('/ M-SEARCH OK');
});

// NOTIFY request to the root URL
app.notify('/', function(req, res) {
  res.send('/ NOTIFY OK');
});

// SUBSCRIBE request to the root URL
app.subscribe('/', function(req, res) {
  res.send('/ SUBSCRIBE OK');
});

// UNSUBSCRIBE request to the root URL
app.unsubscribe('/', function(req, res) {
  res.send('/ UNSUBSCRIBE OK');
});

// Start the server
http.createServer(app).listen(3000, function() {
  console.log('App started');
});
```

We did not include the HEAD method in this example, because it is best left to the underlying HTTP API, which already handles it. You can override it if you want to, but it is not recommended to mess with it, because the protocol will be broken unless you implement it exactly as specified. The browser address bar isn't capable of making any type of request except GET requests, so to test these routes we will have to use HTML forms or specialized tools. Let's use Postman, a Google Chrome plugin for making customized requests to the server.

We learned that route definition methods are based on HTTP verbs.
Actually, that's not completely true: there is a method called app.all() that is not based on an HTTP verb. It is an Express-specific method for listening to requests to a route using any request method:

```javascript
app.all('/', function(req, res, next) {
  res.set('X-Catch-All', 'true');
  next();
});
```

Place this route at the top of the route definitions in the previous example. Restart the server and load the home page. Using a browser debugger tool, you can examine the HTTP response header added to all requests made to the home page, as shown in the following screenshot. Something similar can be achieved using middleware, but the app.all() method makes it a lot easier when the requirement is route-specific.

Route identifiers

So far we have been dealing exclusively with the root URL (/) of the app. Let's find out how to define routes for other parts of the app. Routes are defined only for the request path; GET query parameters are not, and cannot be, included in route definitions. Route identifiers can be strings or regular expression objects. String-based routes are created by passing a string pattern as the first argument of the routing method. They support a limited pattern-matching capability. The following example demonstrates how to create string-based routes:

```javascript
// Will match /abcd
app.get('/abcd', function(req, res) {
  res.send('abcd');
});

// Will match /acd
app.get('/ab?cd', function(req, res) {
  res.send('ab?cd');
});

// Will match /abbcd
app.get('/ab+cd', function(req, res) {
  res.send('ab+cd');
});

// Will match /abxyzcd
app.get('/ab*cd', function(req, res) {
  res.send('ab*cd');
});

// Will match /abe and /abcde
app.get('/ab(cd)?e', function(req, res) {
  res.send('ab(cd)?e');
});
```

The characters ?, +, *, and () are subsets of their regular expression counterparts. The hyphen (-) and the dot (.) are interpreted literally by string-based route identifiers.
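To see why ?, +, *, and () behave as subsets of their regular-expression counterparts, here is a rough sketch of how such string patterns could be compiled to regular expressions. The `compile` helper is invented for illustration and is not Express's real implementation:

```javascript
// '*' becomes a greedy wildcard; '?', '+', and '()' already carry
// their regular-expression meaning and pass through unchanged.
function compile(pattern) {
  return new RegExp('^' + pattern.replace(/\*/g, '.*') + '$');
}

console.log(compile('/ab?cd').test('/acd'));      // true
console.log(compile('/ab+cd').test('/abbcd'));    // true
console.log(compile('/ab*cd').test('/abxyzcd'));  // true
console.log(compile('/ab(cd)?e').test('/abe'));   // true
console.log(compile('/ab(cd)?e').test('/abcde')); // true
```

Each string pattern from the previous example matches exactly the paths the comments claim, because only the * needs rewriting before the string can serve as a regular expression.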
There is another set of string-based route identifiers, used to specify named placeholders in the request path. Take a look at the following example:

```javascript
app.get('/user/:id', function(req, res) {
  res.send('user id: ' + req.params.id);
});

app.get('/country/:country/state/:state', function(req, res) {
  res.send(req.params.country + ', ' + req.params.state);
});
```

The value of each named placeholder is available in the req.params object, in a property of the same name. Named placeholders can also be used with special characters for interesting and useful effects, as shown here:

```javascript
app.get('/route/:from-:to', function(req, res) {
  res.send(req.params.from + ' to ' + req.params.to);
});

app.get('/file/:name.:ext', function(req, res) {
  res.send(req.params.name + '.' + req.params.ext.toLowerCase());
});
```

The pattern-matching capability of routes can also be used with named placeholders. In the following example, we define a route that makes the format parameter optional:

```javascript
app.get('/feed/:format?', function(req, res) {
  if (req.params.format) {
    res.send('format: ' + req.params.format);
  } else {
    res.send('default format');
  }
});
```

Routes can be defined as regular expressions too. While not the most straightforward approach, regular expression routes help you create very flexible and powerful route patterns. Regular expression routes are defined by passing a regular expression object as the first parameter to the routing method. Do not quote the regular expression object, or you will get unexpected results. Using regular expressions to create routes is best understood by looking at some examples. The following route will match pineapple, redapple, redaple, and aaple, but not apple or apples:

```javascript
app.get(/.+app?le$/, function(req, res) {
  res.send('/.+app?le$/');
});
```

The following route will match anything with an a in the route name:

```javascript
app.get(/a/, function(req, res) {
  res.send('/a/');
});
```

You will mostly be using string-based routes in a general web app.
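Named placeholders can likewise be modeled with a capture-group substitution. The `extractParams` helper below is a hypothetical illustration of the idea behind req.params, written for this article rather than taken from Express:

```javascript
// Replace each :name with a capture group, remember the names,
// then map the captured values back onto an object.
function extractParams(route, path) {
  const names = [];
  const source = '^' + route.replace(/:([A-Za-z]+)/g, (_, name) => {
    names.push(name);
    return '([^/.-]+)';  // stop at '/', '.', and '-' so ':from-:to' works
  }) + '$';
  const match = path.match(new RegExp(source));
  if (!match) return null;
  const params = {};
  names.forEach((name, i) => { params[name] = match[i + 1]; });
  return params;
}

console.log(extractParams('/user/:id', '/user/42'));
// { id: '42' }
console.log(extractParams('/route/:from-:to', '/route/LAX-SFO'));
// { from: 'LAX', to: 'SFO' }
```

Because the placeholder stops at separator characters, a single path segment such as LAX-SFO can feed two different placeholders, which is exactly the effect used in the /route/:from-:to example above.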
Use regular expression-based routes only when absolutely necessary; while powerful, they can often be hard to debug and maintain.

Order of route precedence

As in any middleware system, the route that is defined first takes precedence over other matching routes, so the ordering of routes is crucial to the behavior of an app. Let's review this fact with some examples. In the following case, http://localhost:3000/abcd will always print "abcd", even though the next route also matches the pattern:

```javascript
app.get('/abcd', function(req, res) {
  res.send('abcd');
});

app.get('/abc*', function(req, res) {
  res.send('abc*');
});
```

Reversing the order will make it print "abc*":

```javascript
app.get('/abc*', function(req, res) {
  res.send('abc*');
});

app.get('/abcd', function(req, res) {
  res.send('abcd');
});
```

The earlier matching route need not always gobble up the request; we can make it pass the request on to the next handler if we want to. In the following example, even though the order remains the same, it will print "abcd" this time, with a little modification to the code. Route handler functions accept a third parameter, commonly named next, which refers to the next middleware in line. We will learn more about it in the next section:

```javascript
app.get('/abc*', function(req, res, next) {
  // If the request path is /abcd, don't handle it
  if (req.path == '/abcd') {
    next();
  } else {
    res.send('abc*');
  }
});

app.get('/abcd', function(req, res) {
  res.send('abcd');
});
```

So bear in mind that the order of route definition is very important in Express. Forgetting this will cause your app to behave unpredictably. We will learn more about this behavior in the examples in the next section.
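The first-match-wins behavior, including the next() fall-through, can be condensed into a small dispatch loop. This is a toy model written for this article, supporting only exact patterns and a trailing *:

```javascript
const routes = [];
function get(pattern, handler) { routes.push({ pattern, handler }); }

// Try routes in definition order; a handler may answer or call next().
function dispatch(path) {
  let body;
  const res = { send: (text) => { body = text; } };
  let i = 0;
  function next() {
    while (i < routes.length) {
      const { pattern, handler } = routes[i++];
      const matches = pattern === path ||
        (pattern.endsWith('*') && path.startsWith(pattern.slice(0, -1)));
      if (matches) { handler({ path }, res, next); return; }
    }
  }
  next();
  return body;
}

get('/abc*', (req, res, next) => {
  if (req.path === '/abcd') next();  // don't handle /abcd here
  else res.send('abc*');
});
get('/abcd', (req, res) => res.send('abcd'));

console.log(dispatch('/abcd')); // 'abcd'
console.log(dispatch('/abcx')); // 'abc*'
```

Even though /abc* is registered first, /abcd reaches the second route because the first handler explicitly passes it on, mirroring the Express example above.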


Ext JS 4: Working with Tree and Form Components

Packt
11 Jan 2012
6 min read
Tree panel

The tree component is much simplified in Ext JS 4. Like the grid, it is a subclass of Ext.panel.Table, which means we can add most of the grid's functionality to the tree as well. Let's start by declaring a simple tree in Ext JS 3:

```javascript
new Ext.tree.TreePanel({
  renderTo: 'tree-example',
  title: 'Simple Tree',
  width: 200,
  rootVisible: false,
  root: new Ext.tree.AsyncTreeNode({
    expanded: true,
    children: [
      { text: "Menu Option 1", leaf: true },
      { text: "Menu Option 2", expanded: true, children: [
        { text: "Sub Menu Option 2.1", leaf: true },
        { text: "Sub Menu Option 2.2", leaf: true }
      ]},
      { text: "Menu Option 3", leaf: true }
    ]
  })
});
```

Now, let's see how to declare the same tree in Ext JS 4:

```javascript
Ext.create('Ext.tree.Panel', {
  title: 'Simple Tree',
  width: 200,
  store: Ext.create('Ext.data.TreeStore', {
    root: {
      expanded: true,
      children: [
        { text: "Menu Option 1", leaf: true },
        { text: "Menu Option 2", expanded: true, children: [
          { text: "Sub Menu Option 2.1", leaf: true },
          { text: "Sub Menu Option 2.2", leaf: true }
        ]},
        { text: "Menu Option 3", leaf: true }
      ]
    }
  }),
  rootVisible: false,
  renderTo: 'tree-example'
});
```

In Ext JS 4, we still have the title and width properties and the div where the tree is going to be rendered, plus a store config. The store config is a new element for the tree. If we run both pieces of code, we get the same output tree.

If we take a look at the data package, we will see three files related to trees: NodeInterface, Tree, and TreeStore. NodeInterface applies a set of methods to the prototype of a record to decorate it with a Node API. The Tree class is used as a container for a series of nodes, and TreeStore is a store implementation used by a tree. The good thing about having TreeStore is that we can use its features, such as proxy and reader, just as we do for any other store in Ext JS 4.

Drag-and-drop and sorting

The drag-and-drop feature is very useful for rearranging the order of the nodes in the tree.
Adding the drag-and-drop feature is very simple; we just need to add the following code to the tree declaration:

```javascript
Ext.create('Ext.tree.Panel', {
  store: store,
  viewConfig: {
    plugins: { ptype: 'treeviewdragdrop' }
  }
  // other properties
});
```

And how do we handle drag-and-drop in the store? In the same way we handled the editing plugin on the grid: using a Writer:

```javascript
var store = Ext.create('Ext.data.TreeStore', {
  proxy: {
    type: 'ajax',
    api: {
      read: '../data/drag-drop.json',
      create: 'create.php'
    }
  },
  writer: {
    type: 'json',
    writeAllFields: true,
    encode: false
  },
  autoSync: true
});
```

In the earlier versions of Ext JS 4, the autoSync config option does not work. Another way of synchronizing the store with the server is adding a listener to the store instead of using the autoSync config option, as follows:

```javascript
listeners: {
  move: function(node, oldParent, newParent, index, options) {
    this.sync();
  }
}
```

And to add the sorting feature to the tree, we simply need to configure the sorters property in the TreeStore, as follows:

```javascript
Ext.create('Ext.data.TreeStore', {
  folderSort: true,
  sorters: [{
    property: 'text',
    direction: 'ASC'
  }]
});
```

Check tree

To implement a check tree, we simply need to make a few changes to the data that we apply to the tree: add a property called checked to each node, with a true or false value. A value of true indicates that the node is checked, and false that it is not.
For this example, we will use the following JSON code:

```javascript
[{
  "text": "Cartesian",
  "cls": "folder",
  "expanded": true,
  "children": [{
    "text": "Bar",
    "leaf": true,
    "checked": true
  }, {
    "text": "Column",
    "leaf": true,
    "checked": true
  }, {
    "text": "Line",
    "leaf": true,
    "checked": false
  }]
}, {
  "text": "Gauge",
  "leaf": true,
  "checked": false
}, {
  "text": "Pie",
  "leaf": true,
  "checked": true
}]
```

And, as we can see, the rest of the code is the same as that for a simple tree:

```javascript
var store = Ext.create('Ext.data.TreeStore', {
  proxy: {
    type: 'ajax',
    url: 'data/check-nodes.json'
  },
  sorters: [{
    property: 'leaf',
    direction: 'ASC'
  }, {
    property: 'text',
    direction: 'ASC'
  }]
});

Ext.create('Ext.tree.Panel', {
  store: store,
  rootVisible: false,
  useArrows: true,
  frame: true,
  title: 'Charts I have studied',
  renderTo: 'tree-example',
  width: 200,
  height: 250
});
```

The preceding code outputs a tree whose nodes carry checkboxes.

Tree grid

In Ext JS 3, the Tree Grid component was an extension, part of the ux package. In Ext JS 4, this component is part of the native API and is no longer an extension. To implement a tree grid, we use the tree component as well; the only difference is that we declare some columns inside the tree. This is the benefit of Tree being a subclass of Ext.panel.Table, the same superclass as Grid. First, we will declare a Model and a Store to represent the data we are going to display in the tree grid:

```javascript
Ext.define('Book', {
  extend: 'Ext.data.Model',
  fields: [
    { name: 'book', type: 'string' },
    { name: 'pages', type: 'string' }
  ]
});

var store = Ext.create('Ext.data.TreeStore', {
  model: 'Book',
  proxy: {
    type: 'ajax',
    url: 'data/treegrid.json'
  },
  folderSort: true
});
```

So far there is no news: we declared the variable store as for any other grid, except that this one is a TreeStore.
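As an aside, the check-tree JSON shown earlier is plain data, so ordinary JavaScript can walk it. The `checkedLeaves` helper below is an illustration written for this article and is not part of Ext JS:

```javascript
// The same node structure used by the check tree above.
const data = [{
  text: 'Cartesian', expanded: true, children: [
    { text: 'Bar', leaf: true, checked: true },
    { text: 'Column', leaf: true, checked: true },
    { text: 'Line', leaf: true, checked: false }
  ]
}, { text: 'Gauge', leaf: true, checked: false },
   { text: 'Pie', leaf: true, checked: true }];

// Recursively collect the text of every checked leaf node.
function checkedLeaves(nodes) {
  return nodes.flatMap((node) =>
    node.children ? checkedLeaves(node.children)
                  : (node.checked ? [node.text] : []));
}

console.log(checkedLeaves(data)); // [ 'Bar', 'Column', 'Pie' ]
```

This is the same shape of traversal a TreeStore performs internally when it decorates each record with the Node API.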
The code to implement the Tree Grid component is declared as follows:

```javascript
Ext.create('Ext.tree.Panel', {
  title: 'Books',
  width: 500,
  height: 300,
  renderTo: Ext.getBody(),
  collapsible: true,
  useArrows: true,
  rootVisible: false,
  store: store,
  multiSelect: true,
  singleExpand: true,
  columns: [{
    xtype: 'treecolumn',
    text: 'Book',
    flex: 2,
    sortable: true,
    dataIndex: 'book'
  }, {
    text: 'Pages',
    flex: 1,
    dataIndex: 'pages',
    sortable: true
  }]
});
```

The most important part is the columns declaration. The columns property is an array of Ext.grid.column.Column objects, as we declare in a grid. The only thing we have to pay attention to is the column type of the first column, treecolumn: this tells the panel which column renders the tree nodes. Note that each column's dataIndex must match a field of the Model, in this case book and pages. We also configured some other properties: collapsible is a Boolean property that, if set to true, allows us to collapse and expand the panel; useArrows is also a Boolean property, indicating whether the arrow (expand/collapse) icons will be visible in the tree; rootVisible indicates whether we want to display the root of the tree; singleExpand indicates whether we want to expand only a single node at a time; and multiSelect indicates whether we want to allow selecting more than one node at once. The preceding code outputs the tree grid.


Content in Drupal: Frequently Asked Questions (FAQ)

Packt
24 Sep 2009
5 min read
What is content in the context of Drupal?

We can certainly say that 'content' is any material that makes up the web page, be it Drupal-generated content, such as the banner and buttons, or user content, such as the text of a blog. Within Drupal, however, 'content' has narrower parameters. When you create a story in Drupal, it is stored in a database as a node and assigned a node ID (nid). Some would say that, with respect to Drupal, content is limited to objects (stories, and so on) that can receive comments created by users and are assigned a node ID. Others say that it is any object in Drupal that can appear on a page. These technical discussions can cause your eyes to glaze over. It would seem that the latter definition makes the most sense; however, there is one additional factor to consider, and that is the layout of the Drupal admin functions. Drupal provides admin functions for creating and maintaining content, and these functions list only those objects that receive a node ID. Other objects, such as blocks, are created and maintained elsewhere.

What are the types of content in Drupal?

The following content types ship with Drupal by default:

Blog entry: A blog, or weblog, is an author-specific content type that is used as a journal or diary, among other things, by individuals. In Drupal, each blog writer can, depending on the site's settings and their permissions, add attachments, HTML, or PHP code to their blog. A good example of a blog can be found at http://googleblog.blogspot.com/, which demonstrates an interesting use of the blog content format.

Book page: A book is an organized set of book page types (actually, any type can be used nowadays) intended for collaborative authoring. Book pages may be added by different people to make up one single book, which can then be structured into chapters and pages, or whatever structure is most appropriate, provided it is hierarchical. Because pretty much any data type can be added to a book, there is plenty of scope for exciting content (think of narrated or visual content complementing dynamic book pages, created with PHP and Flash animations, to create a truly unique Internet-based book; the possibilities are endless!). A good example of a book is the documentation provided for developers on the Drupal site, found at http://drupal.org/node/316. This has been built up over time by a number of different authors. You will notice that if you have the Book module enabled, an additional outline tab is presented above all/most of the site's posts. Clicking on this tab allows you to add that post to a book; in this way, books can be built up from content posted to the site.

Forum topic: Forum topics are the building blocks of forums. Forums can consist only of forum topics and their comments, unlike books, which can consist of pretty much any content type. Information in forums is categorized in a hierarchical structure, and they are extremely useful for hosting discussions as well as community-based support and learning. Forums are abundant on the Internet, and you can also visit the Drupal forums to get a feel for how they operate.

Page: The page type is meant to allow you to add basic, run-of-the-mill web pages that can be found on any site. About us or Terms of use pages are good candidates for the page type, although you can spruce these up with a bit of dynamic content and HTML. Just look on any website to see examples of such pages.

What about comments?

Comments are not the same as the other node types discussed in the previous list. While there may be exceptions, the terms 'node' and 'content' are synonymous with respect to Drupal.
While, technically, comments are content, consider the fact that one cannot create a comment without first having another node to add the comment to. Instead, you tack comments onto other content types, and they are very popular as a means to stimulate discussion among users. You can see comments in action by logging into the Drupal forums, http://drupal.org/forum, and posting or viewing comments on the various topics there.

How do you work with content types?

It is possible to specify some default behavior for each of the content types. To do this, go to Content types under Content management. Each content type has a set of editable configuration parameters, so to get a good idea of how they work, click on edit in the Book page row. The edit page is broken up into four sections:

- Identification: Allows you to specify the human-readable name and the name used internally by Drupal for the associated content type, as well as to add a description to be displayed on the content creation page.
- Submission form settings: Allows you to set the field names for the title and body (leaving the body blank removes the field entirely), as well as specify the minimum number of words required to make the posting valid. It is also possible to add submission guidelines or notes to aid users posting this content type.
- Workflow settings: Allows you to set default publishing options and multilingual support, and to specify whether or not to allow file attachments.
- Comment settings: Allows you to specify default comment settings, such as read or read/write, whether or not comments are allowed, whether they appear expanded or collapsed, in which order, and how many, among other things.


Installing and Using Openfire

Packt
21 Oct 2009
6 min read
The Openfire instant messaging server is very easy to install. In fact, it's totally newbie-proof. So much so that, unlike with other complex server software, even if you've never set up Openfire before, you'll be able to get it up and running on your first try. If you're sceptical, by the time we are done with this short article, we'll have ourselves a fully functional Openfire server that will register users and connect with clients.

Preparing Your System

Openfire is a cross-platform server and can be installed under Linux, Solaris, Mac, or Windows operating system environments. Openfire reserves its enormity for its users. When it comes to system requirements, Openfire is very suave: a perfect gentleman with very moderate demands. You don't need to spend much time preparing your system for installing Openfire. Just pick the environment you're comfortable with, Windows or one of the popular Linux distributions such as Fedora, Debian, or Ubuntu, and you're good to go. You don't have to run around getting obscure libraries or worry about mismatched versions. But like any hard-working gentleman, Openfire has a thing for caffeine, so make sure you have Java on your system. No need to run to the kitchen; this isn't the Java in the cupboard. Openfire is written in the Java programming language, so it needs a Java Runtime Environment (JRE) installed on your system. A JRE creates a simple (breathable, so to say) environment for Java applications to live and function in. It's available as a free download and is very easy to install. If you're installing under Windows, just skip to the "Installing Under Windows" section later in the article.

Linux Users Get Your Cuppa!

Sun's Java Runtime Environment is available as a free download from Sun's website (http://www.java.com/en/download/linux_manual.jsp), or it can be installed from your distribution's software management repositories.
Users of RPM-based systems can safely skip this section, because the Openfire installer for their distribution already includes a JRE. On the other hand, users of Debian-based systems such as Ubuntu will have to install the JRE before installing Openfire. Thanks to the popular apt-get package management system, there isn't much to installing the JRE. Because Sun's JRE isn't free and open source software, most Linux distributions make the JRE package available in their non-free tree. If the following command doesn't work, check out the detailed installation instructions for your specific distribution at https://jdk-distros.dev.java.net. Open a console and issue the following command:

```
$ sudo apt-get install sun-java6-jre
```

Now the apt-get system will automatically fetch, install, and activate the JRE for you!

Meet The Protagonists

This article is about making sure that you have no trouble installing one file. This one file is the Openfire installer, and it is available in multiple flavors. The four flavors we're concerned with aren't as exotic as Baskin Robbins' 31 flavors, but that doesn't make the decision any easier. The Openfire project releases several installers. The four flavors we're concerned with are:

- openfire-3.5.2-1.i386.rpm: RPM package for Fedora Linux and other RPM-based variants
- openfire_3.5.2_all.deb: DEB package for Debian, Ubuntu Linux, and their derivatives
- openfire_3_5_2.tar.gz: Compressed "tarball" archive that will work on any Linux distribution
- openfire_3_5_2.exe: Openfire installer for Windows

We'll cover installing Openfire from all of these files, so that you may use Openfire on your favorite Linux distribution or within Windows. Just to reiterate: the Windows installer and the RPM Linux installer both bundle the JRE, while the other versions do not.

The Actual Install-Bit

Alright, so you have the Java JRE set up and you've downloaded the Openfire installer.
In this section, we'll install the Openfire server from the various versions we discussed in the last section. Let's first install from the source tarball. The first step when dealing with a .tar.gz source archive is to extract the files. Let's extract ours under /tmp and then move the extracted directory under /opt:

```
# tar zxvf openfire_3_5_2.tar.gz
# mv openfire /opt
```

Now we'll create a non-privileged user and group for running Openfire:

```
# groupadd openfire
# useradd -d /opt/openfire -g openfire openfire
```

Next, we'll change ownership of the openfire/ directory to the newly created user and group:

```
# chown -R openfire:openfire /opt/openfire
```

Believe it or not, that's it! You've just installed the Openfire server. Surprised? Get ready for more: it gets even simpler if you install using the precompiled RPM or DEB binaries. In the case of the RPM, Openfire is installed under /opt/openfire, and in the case of the DEB file, Openfire resides under /etc/openfire. On RPM-based systems such as Fedora and its derivatives (as root), use:

```
# rpm -ivh openfire-3.5.2-1.i386.rpm
```

On DEB-based systems such as Debian, Ubuntu, and so on, use:

```
$ sudo dpkg -i openfire_3.5.2_all.deb
```

Voila! You're done. Now, who thought my "installing Openfire is totally newbie-proof" comment was an exaggeration?

Running Openfire on Linux/Unix

So, we now have Openfire on our favourite Linux distribution, whichever distribution that may be. Now it's time to fire it up and get going. Depending on how you installed Openfire, the procedure to start it varies a little. If you've installed Openfire from the RPM or DEB, you'll be pleased to know that the Openfire developers have already done most of the hard work for you. These binaries contain some custom handling for RedHat/Debian-like environments.
You can start and stop Openfire just like any other service on your system:

```
# /etc/init.d/openfire start
Starting Openfire:
```

You can also view the other available options:

```
# /etc/init.d/openfire
Usage /etc/init.d/openfire {start|stop|restart|status|condrestart|reload}
```

On the other hand, if you've installed Openfire using the .tar.gz archive, you can start and stop Openfire using the bin/openfire script in your Openfire installation directory. First, change to the user that owns the /opt/openfire directory:

```
# su - openfire
# cd /opt/openfire/bin/
# ./openfire start
Starting Openfire
```

And now you have Openfire up and running! If you are using a firewall, which you most probably are, make sure to forward traffic on ports 5222 and 5223 (for SSL), which clients use for connecting to the Openfire server. Also forward traffic on port 7777 for file transfer. Linux users can skip the next section on installing Openfire under Windows and move directly to the section that discusses the preliminary Openfire setup.


Play Framework: Introduction to Writing Modules

Packt
28 Jul 2011
11 min read
Play Framework Cookbook

In order to get to know more modules, you should not hesitate to take a closer look at the steadily increasing number of modules available on the Play framework modules page at http://www.playframework.org/modules. When beginning to understand modules, you should not start with modules implementing a persistence layer, as they are often the more complex ones. To clear up some confusion, you should be aware of the definition of two terms used throughout the article, as these two words with an almost identical meaning come up most of the time. The first word is module and the second is plugin. A module is the little application which serves your main application, whereas a plugin is a piece of Java code which connects to the plugin mechanism inside Play.

Creating and using your own module

Before you can implement your own functionality in a module, you should know how to create and build a module. This recipe takes a look at the module's structure and should give you a good start. The source code of the example is available at examples/chapter5/module-intro.

How to do it...

It is pretty easy to create a new module. Go into any directory and enter the following:

```
play new-module firstmodule
```

This creates a directory called firstmodule and copies a set of predefined files into it. By copying these files, you create a package and make this module ready to use in other Play applications. Now you can run play build-module and your module is built. The build step implies compiling your Java code, creating a JAR file from it, and packing a complete ZIP archive of all the data in the module, which includes Java libraries, documentation, and all configuration files. This archive can be found in the dist/ directory of the module after building it. You can just press Return on the command line when you are asked for the required Play framework version for this module.
Now it is simple to include the created module in any Play framework application. Just put the following in the conf/dependencies.yml file of your application. Do not put this in your module!

require:
    - play
    - customModules -> firstmodule

repositories:
    - playCustomModules:
        type: local
        artifact: "/absolute/path/to/firstmodule/"
        contains:
            - customModules -> *

The next step is to run play deps. This should show you the inclusion of your module. You can check whether the modules/ directory of your application now includes a file modules/firstmodule, whose content is the absolute path of your module directory. In this example it would be /path/to/firstmodule. To check whether you are able to use your module, enter the following:

play firstmodule:hello

This should return Hello in the last line. In case you are wondering where this comes from, it is part of the commands.py file in your module, which was automatically created when you created the module via play new-module. Alternatively, just start your Play application and check for output such as the following during application startup:

INFO ~ Module firstmodule is available (/path/to/firstmodule)

The next step is to fill the currently non-functional module with a real Java plugin, so create src/play/modules/firstmodule/MyPlugin.java:

public class MyPlugin extends PlayPlugin {
    public void onApplicationStart() {
        Logger.info("Yeeha, firstmodule started");
    }
}

You also need to create the file src/play.plugins:

1000:play.modules.firstmodule.MyPlugin

Now you need to compile the module and create a JAR from it. Build the module as shown in the preceding code by entering play build-module. After this step, there will be a lib/play-firstmodule.jar file available, which will be loaded automatically when you include the module in your application configuration file. Furthermore, when starting your application now, you will see the following entry in the application log file.
If you are running in development mode, do not forget to issue a first request to make sure all parts of the application are loaded:

INFO ~ Yeeha, firstmodule started

How it works...

After getting the most basic module to work, it is time to get to know the structure of a module. After the module has been created, the filesystem layout looks like this:

app/controllers/firstmodule
app/models/firstmodule
app/views/firstmodule
app/views/tags/firstmodule
build.xml
commands.py
conf/messages
conf/routes
lib
src/play/modules/firstmodule/MyPlugin.java
src/play.plugins

As you can see, a module basically resembles a normal Play application. There are directories for models, views, tags, and controllers, as well as a configuration directory, which can include translations or routes. Note that there should never be an application.conf file in a module. There are two more files in the root directory of the module. The build.xml file is an Ant build file, which compiles the module source and creates a JAR file out of the compiled classes; the JAR is put into the lib/ directory and named after the module. The commands.py file is a Python file, which allows you to add special command line directives, such as the play firstmodule:hello command that we just saw when executing the Play command line tool. The lib/ directory should also be used for additional JARs, as all JAR files in this directory are automatically added to the classpath when the module is loaded. Now the only missing piece is the src/ directory. It includes the source of your module, most likely the logic and the plugin source. Furthermore, it features a very important file called play.plugins. After creating the module, the file is empty. When writing Java code in the src/ directory, it should have one line consisting of two entries: one entry is the class to load as a plugin, and the other is a priority. This priority defines the order in which to load all modules of an application.
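Play collects the entries from every play.plugins file and sorts them by this numeric priority before loading them. As a rough, self-contained illustration of that ordering (this is not Play's actual loader code), the "priority:classname" entries could be parsed and sorted like this:

```java
import java.util.*;

// Illustrative only: mimics how entries from play.plugins files
// ("priority:classname", one per line) can be sorted so that lower
// priorities load first. This is not Play's actual loader code.
class PluginOrder {
    static List<String> order(List<String> entries) {
        List<String[]> parsed = new ArrayList<>();
        for (String line : entries) {
            // split into at most two parts: priority and class name
            String[] parts = line.split(":", 2);
            parsed.add(new String[] { parts[0].trim(), parts[1].trim() });
        }
        // lower numeric priority first, matching the load order described above
        parsed.sort(Comparator.comparingInt((String[] p) -> Integer.parseInt(p[0])));
        List<String> classNames = new ArrayList<>();
        for (String[] p : parsed) {
            classNames.add(p[1]);
        }
        return classNames;
    }

    public static void main(String[] args) {
        System.out.println(order(Arrays.asList(
            "1000:play.modules.firstmodule.MyPlugin",
            "100:play.CorePlugin")));
    }
}
```

Here the hypothetical entry for play.CorePlugin loads before MyPlugin because 100 is lower than 1000.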
The lower the priority, the earlier the module gets loaded. If you take a closer look at the PlayPlugin class, which MyPlugin inherits from, you will see a lot of methods that you can override. Here is a list of some of them, each with a short description:

onLoad(): This gets executed directly after the plugin has been loaded. However, this does not mean that the whole application is ready!

bind(): There are two bind() methods with different parameters. These methods allow a plugin to create a real object out of arbitrary HTTP request parameters or even the body of a request. If you return anything other than null from this method, the returned value is used as a controller parameter whenever a controller is executed.

getStatus(), getJsonStatus(): These allow you to return an arbitrary string representing the status of the plugin or statistics about its usage. You should always implement these for production-ready plugins in order to simplify monitoring.

enhance(): Performs bytecode enhancement.

rawInvocation(): This can be used to intercept any incoming request and change its logic. This is already used in the CorePlugin to intercept the @kill and @status URLs. It is also used in the DocViewerPlugin to provide all the existing documentation when in test mode.

serveStatic(): Allows for programmatically intercepting the serving of static resources. A common example can be found in the SASS module, where access to the .sass file is intercepted and the file is precompiled.

loadTemplate(): This method can be used to inject arbitrary templates into the template loader. For example, it could be used to load templates from a database instead of the filesystem.

detectChange(): This is only active in development mode. If you throw an exception in this method, the application will be reloaded.

onApplicationStart(): This is executed on application start and, if in development mode, on every reload of your application.
You should initialize stateful things here, such as connections to databases or expensive object creation. Be aware that you have to take care of thread-safe objects and method invocations yourself. For an example, you could check the DBPlugin, which initializes the database connection and its connection pool. Another example is the JPAPlugin, which initializes the persistence manager, or the JobPlugin, which uses this hook to start jobs on application start.

onApplicationReady(): This method is executed after all plugins are loaded, all classes are precompiled, and every initialization is finished. The application is now ready to serve requests.

afterApplicationStart(): This is currently almost identical to onApplicationReady().

onApplicationStop(): This method is executed during a graceful shutdown. It should be used to free resources which were opened while the plugin started. A standard example is to close network connections to the database, remove stale filesystem entries, or clear caches.

onInvocationException(): This method is executed when an exception which is not caught is thrown during controller invocation. The ValidationPlugin uses this method to inject an error cookie into the current request.

invocationFinally(): This method is executed after a controller invocation, regardless of whether an exception was thrown or not. It should be used to close request-specific resources, such as a connection which is only active during request processing.

beforeActionInvocation(): This code is executed before controller invocation. It is useful for validation, where it is used by Play as well. You could also put additional objects into the render arguments here. Several plugins also set up variables inside thread locals to make sure they are thread-safe.

onActionInvocationResult(): This method is executed when the controller action throws a result. It allows inspecting or changing the result afterwards.
You can also change the headers of a response at this point, as no data has been sent to the client yet.

onInvocationSuccess(): This method is executed upon successful execution of a complete controller method.

onRoutesLoaded(): This is executed when routes are loaded from the routes files. If you want to add some routes programmatically, do it in this method.

onEvent(): This is a poor man's listener for events, which can be sent using the postEvent() method.

onClassesChange(): This is only relevant in testing or development mode. The argument of this method is a list of freshly changed classes after a recompilation. This allows the plugin to detect whether certain resources need to be refreshed or restarted. If your application is a complete shared-nothing architecture, you should not have any problems. Test first, before implementing this method.

addTemplateExtensions(): This method allows you to add further TemplateExtension classes, which do not inherit from JavaExtensions, as those are added automatically. At the time of this writing, neither a plugin nor anything in the core Play framework made use of this, with the exception of the Scala module.

compileAll(): If the standard compiler inside Play is not sufficient to compile application classes, you can override this method. This is currently only done inside the Scala plugin and should not be necessary in regular applications.

routeRequest(): This method can be used to redirect requests programmatically. You could, for example, redirect any URL which has a certain prefix or treat POST requests differently. You have to render some result if you decide to override this method.

modelFactory(): This method allows for returning a factory object to create different model classes. This is needed primarily inside the different persistence layers. It was introduced in Play 1.1 and is currently only used by the JPA plugin and the Morphia plugin.
The model factory returned here implements a basic and generic interface for getting data, which is meant to be independent of the persistence layer. It is also used to provide more generic fixtures support.

afterFixtureLoad(): This method is executed after a Fixtures.load() method has been executed. It could be used to free or check some resources after adding batch data via fixtures.

Cleaning up after creating your module

When creating a module via play new-module, you should remove any unnecessary cruft from your new module, as most often not all of it is needed. Remove all unneeded directories or files to make understanding the module as easy as possible.

Supporting the Eclipse IDE

As play eclipsify does not currently work for modules, you need to set this up manually. A trick to get around this is to create and eclipsify a normal Play application, and then configure the build path and use "Link source" to add the src/ directory of the plugin.
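To picture one of the hooks above in use, consider the advice to implement getStatus() for monitoring. PlayPlugin itself is not available outside Play, so the following stub merely imitates the shape of a plugin that counts controller invocations via onActionInvocationResult() and reports them; the class name and message format are invented for illustration:

```java
// Illustrative stub only: PlayPlugin is not available outside Play, so this
// class just imitates the shape of a plugin that counts controller
// invocations in onActionInvocationResult() and reports the count from
// getStatus() for monitoring, as recommended above.
class StatusfulPlugin {
    private long invocationsSeen = 0;

    // In a real plugin, Play would call this after each controller action.
    void onActionInvocationResult() {
        invocationsSeen++;
    }

    // An arbitrary status string, suitable for a monitoring dashboard.
    String getStatus() {
        return "firstmodule: " + invocationsSeen + " controller invocations seen";
    }

    public static void main(String[] args) {
        StatusfulPlugin plugin = new StatusfulPlugin();
        plugin.onActionInvocationResult();
        System.out.println(plugin.getStatus());
    }
}
```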
Openfire: Effectively Managing Users

Packt
23 Oct 2009
14 min read
Despite the way it sounds, managing users isn't an all-involving activity; at least, it shouldn't be. Most system administrators tend to follow the "install it and forget it" methodology when running their servers. You can do so with Openfire as well, but with a user-centric service such as an IM server, keeping track of things isn't a bad idea. Openfire makes your job easier with its web-based admin interface. There are several things that you can set up via the web interface that will help you manage the users. You can install some plugins that will help you run and manage the server more effectively, such as the plugin for importing/exporting users, and dual-benefit plugins such as the Search plugin, which helps users find other users on the network and also lets you check up on users of the IM service. In this article, we will cover:

Searching for users
Getting email alerts via IM
Broadcasting messages to all users
Managing user clients
Importing/exporting users

Searching for Users with the Search Plugin

Irrespective of whether you have pre-populated user rosters, letting users find other users on the network is always a good idea. The Search plugin works both ways: it helps your users find each other, and also helps you, the administrator, find users and modify their settings if required. To install the plugin, head over to the Plugins tab (refer to the following screenshot). The Search plugin is automatically installed along with Openfire, and will be listed as a plugin that is already installed. It's still a good idea to restart the plugin just to make sure that everything's OK. Locate and click the icon in the Restart column that corresponds to the Search plugin. This should restart the plugin. The Search plugin has various configurable options, but by default the plugin is deployed with all of its features enabled, so your users can immediately start searching for other users.
To tweak the Search plugin options, head over to Server | Server Settings | Search Service Properties in the Openfire admin interface. From this page, you can enable or disable the service. Once enabled, users will be able to search for other users on the network from their clients. Not all clients have the search feature, but Spark, Exodus, Psi, and some others do. Even if you disable this plugin, you, the admin, will still be able to search for users from the Openfire admin interface, as described in the following section. In addition to enabling the Search option, you'll have to name it. The plugin is offered as a network "service" to the users. The Openfire server offers other services as well, including the group chat feature, which we will discuss in the Appendix. Calling the search service by its default name, search.<your-domain-name>, is a good idea. You should only change it if you have another service on your network with the same name. Finally, you'll have to select the fields users can search on. The three options available are Username, Name, and Email (refer to the previous screenshot). You can enable any of these options, or all three for a better success rate. Once you're done with setting up the options, click the Save Properties button to apply them. To use the plugin, your users will have to use their clients to query the Openfire server and then select the search service from the ones listed. This will present them with a search interface through which they'll be able to search for their peers (refer to the following screenshot) using one or more of the three options (Username, Name, Email), depending on what you have enabled.

Searching for Users from Within the Admin Interface

So we've let our users look for their peers, but how do you, the Openfire admin, look for users? You too can use your client, but it's better to do it from the admin interface, since you can tweak a user's settings from there as well.
To search for users from within the admin interface, head over to the Users/Groups tab. You'll notice an Advanced User Search option in the sidebar. When you click on this option, you'll be presented with a single text field with three checkboxes (refer to the previous screenshot). In the text field, enter the Name, Username, or Email of the user you want to find. The plugin can also handle the * wildcard character, so you can search using just a part of the user's details as well. For example, if you want to find a user "James", but don't know if his last name is spelled "Allen" or "Allan", try entering "James A*" in the search field and make sure that the Name checkbox is selected. Another example would be "* Smith", which looks for all the users with the last name "Smith". The search box is case-sensitive. So why were you looking for "James Allan", the guy with two first names? It was because his last name is in fact "Allen" and he wants to get it corrected. So you find his record with the plugin and click on his username. This brings up a summary of his properties, including his status, the groups he belongs to, when he was registered on the network, and so on. Find and click the Edit Properties button below the details, make the required changes, and click the Save Properties button.

Get Email Alerts via IM

Instant messaging is an alternate line of enterprise communication, along with electronic ones such as email and traditional ones such as the telephone. Some critical tasks require instant notification, and nothing beats IM when it comes to time-critical alerts. For example, most critical server software applications, especially the ones facing outwards on to the Internet, are configured to send an email to the admin in case of an emergency: a break-in attempt, abnormal shutdown, hardware failure, and so on. You can configure Openfire to route these messages to you as an IM, if you're online.
If you're a startup that only advertises a single info@cool-startup.com email address, which is read by all seven employees of the company, you can configure Openfire to send IMs to all of you when the VCs come calling! Setting this up isn't an issue if you have the necessary settings handy. The email alert service connects to the email server using IMAP and requires the following options:

Mail Host: The host running the email service. Example: imap.example.com.
Mail Port: The port through which Openfire listens for new email. SSL can also be used if it is enabled on your mail server. Example: 993.
Server Username: The username of the account you want to monitor. Example: info@cool-startup.com.
Server Password: The account's password.
Folder: The folder in which Openfire must look for new messages. Typically this will be the "Inbox", but if your server filters email that meets a preset criteria into a particular folder, you need to specify it here.
Check Frequency: How frequently Openfire should check the account for new email. The default value is 300000 ms, which is equal to 5 minutes.
JID of users to notify: This is where you specify the Openfire Jabber IDs (user IDs) of the users you want to notify when a new email arrives. If you need to alert multiple users, separate their JIDs with commas.

But first, head over to the Plugins tab and install the Email Listener plugin from the list of available plugins. Once you have done this, head back to the Server tab, choose the Email Listener option in the sidebar, and enter the settings in the form that pops up (refer to the following screenshot). Click the Test Settings button to let Openfire try to connect to the server using the settings provided. If the test is successful, finish off the setup procedure by clicking the Save button to save your settings. If the test fails, check the settings and make sure that the email server is up and running. You can test and hook this up with your Gmail account as well. That's it.
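The "JID of users to notify" value is just a comma-separated string. As a hypothetical illustration (this is not the Email Listener plugin's actual code), such a value could be split into clean individual JIDs like this:

```java
import java.util.*;

// Hypothetical helper: splits a comma-separated "JID of users to notify"
// value into individual, trimmed JIDs. Not the Email Listener plugin's code.
class JidList {
    static List<String> parse(String raw) {
        List<String> jids = new ArrayList<>();
        for (String part : raw.split(",")) {
            String jid = part.trim();
            // skip empty fragments caused by stray or trailing commas
            if (!jid.isEmpty()) {
                jids.add(jid);
            }
        }
        return jids;
    }

    public static void main(String[] args) {
        System.out.println(parse("admin@serverfoo, ops@serverfoo"));
    }
}
```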
Now close that email client you have running in the background, and let Openfire play secretary while you write your world domination application!

Broadcasting Messages

Since Openfire is a communication tool, it reserves the coolest tricks in the bag for that purpose. The primary purpose of Openfire remains one-to-one personal interaction and many-to-many group discussion, but it can also be used as a one-to-many broadcasting tool. This might sound familiar to you, but don't sweat, I'm not repeating myself. The one-to-many broadcasting we cover in this section is different from the Send Message tool. The Send Message tool in the web-based Openfire administration console is available only to the Openfire administrator, but the plugin we cover in this section has a much broader perspective. For one, the Broadcast plugin can be used by non-admin users, though of course you can limit access. Secondly, the Broadcast plugin can be used to send messages to a select group of users, which can grow to include everyone in the organization using Openfire. One use of the Broadcast plugin is for sending important reminders. Here are some examples:

The Chief Accounts Officer broadcasts a message to everyone in the organization reminding them to file their returns by a certain date.
The CEO broadcasts a message explaining the company's plans to merge with or acquire another company, or just to share a motivational message.
You, the Openfire administrator, use the plugin to announce system outages.
The Sales Department Head is upset because sales targets haven't been met, calls for a group meeting at 10:00 a.m. on the day after tomorrow, and informs everyone in the Sales department via the plugin.
The intern in the advertising department sends a list of his accounts to everyone in the department before returning to college and saves everyone a lot of running around, thanks to the plugin.
Setting up the Plugin

To reap the benefits of the Broadcast plugin, begin by installing it from the Available Plugins list on the Plugins tab. This plugin has a few configuration options which should be set carefully; with a misconfigured Broadcast plugin, the new guy in the purchase department could send a message of "Have you seen my stapler?" to everyone in the organization, including the CEO! The Broadcast plugin is configured via the Openfire system properties. Remember these? They are listed under the Server tab's System Properties option in the sidebar. You'll have to manually specify the settings using the following properties (refer to the following screenshot):

plugin.broadcast.serviceName: This is the name of the broadcast service. By default, the service is called "broadcast", but you can call it something else, such as "shout" or "notify".

plugin.broadcast.groupMembersAllowed: This property accepts two values: true and false. If you select the "true" option, all group members will be allowed to broadcast messages to all users in the group they belong to. If set to "false", only group admins can send messages to all members of their groups. The default value is "true".

plugin.broadcast.disableGroupPermissions: Like the previous property, this property also accepts either true or false. By selecting the "true" option, you will allow any user on the network to broadcast messages to any group; the "false" option restricts broadcasting to group members and admins. The default value is "false". As you can imagine, if you set this value to "true" and allow anyone to send broadcast messages to a group, you effectively override the restrictive value of the previous setting.

plugin.broadcast.allowedUsers: Do not forget to set this property! If it is not set, anyone on the network can send a message to everyone else on the network.
There are only a few people you'd want to have the ability to broadcast a message to everyone in the organization. This list of users who can talk to everyone should be specified with this property as a string of comma-separated JIDs. In most cases, the default options of these properties should suffice. If you don't change any variables, your service will be called "broadcast" and will allow group members to broadcast messages to their own groups and not to anyone else. You should also add the JIDs of executive members of the company (CEO, MD, and so on) to the list of users allowed to send messages to everyone in the organization.

Using the Plugin

Once you have configured the plugin, you'll have to instruct users on how to use it according to the configuration. To send a message using the Broadcast plugin, users must add a contact with a JID of the form <group>@<broadcast-service-name>.<server> (refer to the following screenshot). If the CEO wants to send a message to everyone, he has to send it to a user called all@broadcast.serverfoo, assuming that you kept the default settings and that your Openfire server is called serverfoo. Similarly, when members of the sales department want to communicate with their departmental colleagues, they have to send the message to sales@broadcast.serverfoo.

Managing User Clients

There's no dearth of IM clients. It's said that if you have ten users on your network, you'll have at least fifteen different clients. Managing users' clients is like bringing order to chaos. In this regard, you'll find that Openfire is biased towards its own IM client, Spark. But as it has all the features you'd expect from an IM client and runs on multiple platforms as well, one really can't complain. So what can you control using the client control features? Here's a snapshot:

Don't like users transferring files? Turn it off, irrespective of the IM client.
Don't like users experimenting with clients?
Restrict their options.
Don't want to manually install Spark on each and every user's desktop? Put it on the network, and send users an email with a link, along with installation and sign-in instructions.
Do users keep forgetting the intranet website address? Add it as a bookmark in their clients.
Don't let users bug you all the time asking for the always-on "hang-out" conference room. Add it as a bookmark to their clients!

Don't these features sound as if they can take some of the work off your shoulders? Sure, but you'll only truly realize how cool and useful they are when you implement them! So what are you waiting for? Head over to the Plugins tab and install the Client Control plugin. When it is installed, head over to the Server | Client Management tab. Here you'll notice several options. The first option under client management, Client Features, lets you enable or disable certain client features (refer to the following screenshot). These are:

Broadcasting: If you don't want your users to broadcast messages, disable this feature. This applies only to Spark.
File Transfer: Disabling this feature will stop your users from sharing files. This applies to all IM clients.
Avatar/VCard: You can turn off indiscriminate changes to a user's avatar or virtual visiting card by disabling this experimental feature, which only applies to Spark.
Group Chat: Don't want users to join group chat rooms? Then disable this feature, which will prevent all users from joining discussion groups, irrespective of the IM client they are using.

By default, all of these features are enabled. When you've made changes as per your requirements, remember to save the settings using the Save Settings button. Next, head over to the Permitted Clients option (refer to the following screenshot) to restrict the clients that users can employ. By default, Openfire allows all XMPP clients to connect to the server.
If you want to run a tight ship, you can limit the clients allowed by selecting the Specify Clients option button. From the nine clients listed for the three platforms supported by Openfire (Windows, Linux, and Mac), choose the clients you trust by selecting the checkbox next to them. If your client isn't listed, use the Add Other Client text box to add that client. When you've made your choices, click on the Save Settings button to save and implement the client control settings. The manually-added clients are automatically added to the list of allowed clients. If you don't trust them, why add them? The remove link next to these clients will remove them from the list of clients you trust.
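The Permitted Clients behavior just described amounts to a simple allow-list check: an empty list means every XMPP client may connect, while a non-empty list (including manually added entries) admits only the clients it names. A hypothetical sketch of that rule (not Openfire's actual code):

```java
import java.util.*;

// Hypothetical sketch (not Openfire's code) of the Permitted Clients rule:
// an empty allow-list admits every XMPP client; otherwise only listed
// clients, including manually added ones, may connect.
class ClientAllowList {
    private final Set<String> allowed = new HashSet<>();

    // corresponds to ticking a client's checkbox or using Add Other Client
    void permit(String clientName) {
        allowed.add(clientName);
    }

    boolean isPermitted(String clientName) {
        return allowed.isEmpty() || allowed.contains(clientName);
    }

    public static void main(String[] args) {
        ClientAllowList list = new ClientAllowList();
        System.out.println(list.isPermitted("AnyClient")); // empty list: all allowed
        list.permit("Spark");
        System.out.println(list.isPermitted("Spark"));
        System.out.println(list.isPermitted("AnyClient"));
    }
}
```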
Getting Started with WordPress 3

Packt
02 Feb 2011
7 min read
WordPress is available in easily downloadable formats from its website, http://wordpress.org/download/. WordPress is a free, open source application, and is released under the GNU General Public License (GPL). This means that anyone who produces a modified version of software released under the GPL is required to attach those same freedoms to his or her modified version, so that people buying or using the software may also modify and redistribute it. This way, WordPress and other software released under the GPL are kept open source.

Where to build your WordPress website

The first decision you have to make is where your blog is going to live. You have two basic options for the location where you will create your site. You can:

Use WordPress.com
Install on a server (hosted or your own)

Let's look at some of the advantages and disadvantages of each of these two choices. The advantage of using WordPress.com is that they take care of all of the technical details for you. The software is already installed; they'll upgrade it for you whenever there's an upgrade; and you're not responsible for anything else. Just manage your content! The big disadvantage is that you lose almost all of the theme and plugin control you'd have otherwise. WordPress.com will not let you upload or edit your own theme, though it will let you (for a fee) edit the CSS of any theme you use. WordPress.com will not let you upload or manage plugins at all.
Some plugins are installed by default (most notably Akismet, for spam blocking, and a fancy statistics plugin), but you can neither uninstall them nor install others. Additional features are available for a fee as well. The following table is a brief overview of the essential differences between using WordPress.com and installing WordPress on your own server:

Installation. WordPress.com: you don't have to install anything, just sign up. Your own server: install WordPress yourself, either manually or via your host's control panel (if offered).
Themes. WordPress.com: use any theme made available by WordPress.com. Your own server: use any theme available anywhere, written by anyone (including yourself).
Plugins. WordPress.com: no ability to choose or add plugins. Your own server: use any plugin available anywhere, written by anyone (including yourself).
Upgrades. WordPress.com: provides automatic upgrades. Your own server: you have to upgrade it yourself when upgrades are available.
Widgets. WordPress.com: widget availability depends on available themes. Your own server: you can widgetize any theme yourself.
Maintenance. WordPress.com: you don't have to do any maintenance. Your own server: you're responsible for the maintenance of your site.
Advertising. WordPress.com: no advertising allowed. Your own server: advertise anything.

Using WordPress.com

WordPress.com (http://wordpress.com) is a free service provided by the WordPress developers, where you can register a blog or non-blog website easily and quickly with no hassle. However, because it is a hosted service, your control over some things will be more limited than it would be if you hosted your own WordPress website. As mentioned before, WordPress.com will not let you edit or upload your own themes or plugins. Aside from this, WordPress.com is a great place to maintain your personal site if you don't need to do anything fancy with a theme. To get started, go to http://wordpress.com, which will look something like the following: To register your free website, click on the loud orange-and-white Sign up now button. You will be taken to the signup page.
In the following screenshot, I've entered my username (what I'll sign in with) and a password (note that the password measurement tool will tell you if your password is strong or weak), as well as my e-mail address. Be sure to check the Legal flotsam box and leave the Gimme a blog! radio button checked. Without it, you won't get a website. After providing this information and clicking on the Next button, WordPress will ask for other choices (Blog Domain, Blog Title, Language, and Privacy), as shown in following screenshot. You can also check if it's a private blog or not. Note that you cannot change the blog domain later! So be sure it's right. After providing this information and clicking on Signup, you will be sent to a page where you can enter some basic profile information. This page will also tell you that your account is set up, but your e-mail ID needs to be verified. Be sure to check your inbox for the e-mail with the link, and click on it. Then, you'll be truly done with the installation. Installing WordPress manually The WordPress application files can be downloaded for free if you want to do a manual installation. If you've got a website host, this process is extremely easy and requires no previous programming skills or advanced blog user experience. Some web hosts offer automatic installation through the host's online control panel. However, be a little wary of this because some hosts offer automatic installation, but they do it in a way that makes updating your WordPress difficult or awkward, or restricts your ability to have free rein with your installation in the future. Preparing the environment A good first step is to make sure you have an environment setup that is ready for WordPress. This means two things: making sure that you verify that the server meets the minimum requirements, and making sure that your database is ready. 
For WordPress to work, your web host must provide you with a server that does the following two things:

- Support PHP, which must be at least Version 4.3
- Provide you with write access to a MySQL database, which must be at least Version 4.1.2

You can find out if your host meets these two requirements by contacting your web host. If your web server meets these two basic requirements, you're ready to move on to the next step. As far as web servers go, Apache is the best choice. However, WordPress will also run on a server running Microsoft IIS (though using permalinks will be difficult, if possible at all).

Enabling mod_rewrite to use pretty permalinks

If you want to use permalinks, your server must be running Unix, and Apache's mod_rewrite option must be enabled. Apache's mod_rewrite is enabled by default in most web hosting accounts. If you are hosting your own server, you can enable mod_rewrite by modifying the Apache web server configuration file. You can check http://www.tutorio.com/tutorial/enable-mod-rewrite-on-apache to learn how to enable mod_rewrite on your web server. If you are on shared hosting, ask your system administrator to enable it for you. However, it is more likely that you already have it enabled on your hosting account.

Downloading WordPress

Once you have checked your environment, you need to download WordPress from http://wordpress.org/download/. Take a look at the following screenshot, in which the download links are available on the right side:

The .zip file is shown as a big blue button because that will be the most useful format for most people. If you are using Windows, Mac, or Linux, your computer will be able to unzip the downloaded file automatically. (The .tar.gz file is provided because some Unix users prefer it.)

A further note on location

We're going to cover installing WordPress remotely.
However, if you plan to develop themes or plugins, I suggest that you also install WordPress locally on your own computer's server. Testing and deploying themes and plugins directly to the remote server will be much more time-consuming than working locally. If you look at the screenshots I will be taking of my own WordPress installation, you'll notice that I'm working locally (for example, http://wpbook:8888/ is a local URL).

After you download the WordPress .zip file, extract the files, and you'll get a folder called wordpress. It will look like the following screenshot:
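Returning to the server requirements for a moment: if your host gives you shell access, you can quickly confirm that PHP and a MySQL client are present before starting. The sketch below is an assumption-laden convenience, not part of the installation itself; shared hosts may name the binaries differently (for example, php5) or offer no shell at all.

```shell
#!/bin/sh
# check_tool NAME -- print the first line of NAME's version banner, or a
# "not found" note. A pre-install sanity check only; your host's own
# documentation remains the authoritative source for supported versions.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        "$1" --version 2>/dev/null | head -n 1
    else
        echo "$1: not found on this server"
    fi
}

# Report on the two components WordPress needs
check_tool php
check_tool mysql
```

On a host that meets the requirements you would expect to see PHP 4.3 or later and a MySQL client 4.1.2 or later reported.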
Joomla! with Flash: Flashy Templates, Headers, Banners, and Tickers: Part 2

Packt
18 Nov 2009
4 min read
Using Flash headers

We have seen that one of the uses of Flash in Joomla! templates is as a header. By using a Flash animation in a site's header you can create some stunning effects. As we have already seen, while designing the template, we may embed the Flash animation in the header region and control the layout using an appropriate CSS stylesheet. To embed Flash animations like these, you can use the <object> </object> XHTML tag; we have seen its use in the previous section. An alternative to this is showing the Flash header at some module position. There are several extensions that can be used for showing Flash objects at a module position. We will be looking at some of them next.

Using Flexheader3

Flexheader3 is a Joomla! 1.5-compatible extension for using Flash headers in Joomla! sites. It is available for free download at http://flexheader2.andrehotzler.de/en/download/folder/208-flexheader3.html. After downloading the package, install it from the Extensions | Install/Uninstall screen in the Joomla! administration panel. Then click on Extensions | Module Manager. In the Module Manager screen, you will find the module named Flexheader3. Click on it to open the Module: [Edit] screen for the Flexheader3 module, as shown in the following screenshot:

The Details section is similar to other modules: from here you enable the module, select the module position, set the order of display, and assign the menus for which this module will be displayed. The module-specific settings are in the Parameters section. As you can see, selecting the module position is crucial for this module. Most templates don't have a position for displaying the header via a module, so you may need to create a module position for displaying a Flash header. The following section shows you how to create a module position for displaying a header.

Creating a module position

To create a module position in your template you need to edit at least two files.
Browse to the /templates directory, and click on the name of the template that you want to modify. You need to edit two files in the template folder: index.php and templateDetails.xml.

First, open the templateDetails.xml file in your text editor and find the <positions> tag. Under it, add the highlighted line so that the block looks like the following:

<positions>
<position>flexheader</position>
<position>left</position>
<position>user1</position>
...
<position>right</position>
<position>debug</position>
</positions>

Remember to type <position>flexheader</position> before the closing </positions> tag. Placing it outside the <positions> </positions> block will make the template unusable.

After modifying the templateDetails.xml file, open the index.php file in your text editor. Find the code for including a header image in that template. Generally, this is done by inserting an image using the <img src=... /> tag. If you don't find such a tag, then look for <div id="header" ... > or something similar; in such cases, CSS is used to apply the background image to the div element. Once you have found the code for showing the header image, replace it with the following code:

<jdoc:include type="modules" name="flexheader" style="RAW" />

This line of code instructs Joomla! to include the modules assigned to the flexheader position. When we assign the Flexheader3 module to this position, the contents of that module will be displayed here. Generally, this module will produce code like the following in this position:

<img src="/images/header.png" title="My header image" alt="Header image" style="width: 528px; height: 70px;" />

When the changes to index.php are made, save them. We will be configuring the module to display a Flash header in this module position.
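Before editing template files by hand, it is worth keeping backups and locating the header markup first. A small shell sketch of those two preparatory steps is shown below; the function name and the template folder used in the example call are ours, not part of Joomla! or Flexheader3.

```shell
#!/bin/sh
# prepare_template_edit DIR -- back up the two files we are about to edit
# and print the line(s) in index.php that render the header.
prepare_template_edit() {
    template_dir="$1"
    # Keep pristine copies so a mistake is easy to undo
    cp "$template_dir/index.php" "$template_dir/index.php.bak"
    cp "$template_dir/templateDetails.xml" "$template_dir/templateDetails.xml.bak"
    # The header is usually either an <img> tag or a <div id="header">
    # wrapper styled via CSS -- search for both
    grep -n -E '<img src=|id="header"' "$template_dir/index.php"
}

# Example (hypothetical template folder):
# prepare_template_edit templates/mytemplate
```

The grep output gives you the exact line numbers to replace with the jdoc:include line.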
Linux Shell Script: Monitoring Activities

Packt
28 Jan 2011
8 min read
Linux Shell Scripting Cookbook

Solve real-world shell scripting problems with over 110 simple but incredibly effective recipes:

- Master the art of crafting one-liner command sequences to perform tasks such as text processing, digging data from files, and lots more
- Practical problem-solving techniques adherent to the latest Linux platform
- Packed with easy-to-follow examples to exercise all the features of the Linux shell scripting language
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Disk usage hacks

Disk space is a limited resource. We frequently perform disk usage calculations on hard disks or other storage media to find out the free space available on the disk. When free space becomes scarce, we need to find out which large files are to be deleted or moved in order to create free space. Disk usage manipulations are commonly used in shell scripting contexts. This recipe will illustrate various commands used for disk manipulations, and problems where disk usage can be calculated with a variety of options.

Getting ready

df and du are the two significant commands used for calculating disk usage in Linux. The command df stands for disk free and du stands for disk usage. Let's see how we can use them to perform various tasks that involve disk usage calculation.

How to do it...

To find the disk space used by a file (or files), use:

$ du FILENAME1 FILENAME2 ...

For example:

$ du file.txt
4

The result is, by default, shown in blocks (typically 1024 bytes each on Linux). In order to obtain the disk usage for all files inside a directory, along with the individual disk usage for each file shown on its own line, use:

$ du -a DIRECTORY

-a outputs results for all files in the specified directory or directories recursively. Running du DIRECTORY will output a similar result, but it will show only the size consumed by subdirectories; it does not show the disk usage for each of the files.
For printing the disk usage of individual files, -a is mandatory. For example:

$ du -a test
4  test/output.txt
4  test/process_log.sh
4  test/pcpu.sh
16  test

An example of using du DIRECTORY is as follows:

$ du test
16  test

There's more...

Let's go through additional usage practices for the du command.

Displaying disk usage in KB, MB, or Blocks

By default, the disk usage command displays the total blocks used by a file. A more human-readable format expresses disk usage in standard units such as KB, MB, or GB. In order to print the disk usage in a display-friendly format, use -h as follows:

du -h FILENAME

For example:

$ du -sh test/pcpu.sh
4.0K  test/pcpu.sh
# Multiple file arguments are accepted

Or:

# du -h DIRECTORY
$ du -h hack/
16K  hack/

Finding the 10 largest size files from a given directory

Finding large files is a regular task we come across. We regularly need to delete those huge files or move them. We can easily find large files using the du and sort commands. The following one-line script achieves this task:

$ du -ak SOURCE_DIR | sort -nrk 1 | head

Here -a specifies all directories and files, so du traverses SOURCE_DIR and calculates the size of all files. The first column of the output contains the size in kilobytes, since -k is specified, and the second column contains the file or folder name. sort is used to perform a numerical sort on column 1 and reverse it. head is used to take the first 10 lines of the output. For example:

$ du -ak /home/slynux | sort -nrk 1 | head -n 4
50220 /home/slynux
43296 /home/slynux/.mozilla
43284 /home/slynux/.mozilla/firefox
43276 /home/slynux/.mozilla/firefox/8c22khxc.default

One drawback of this one-liner is that it includes directories in the result. However, when we need to find only the largest files, and not directories, we can improve the one-liner to output only large files as follows:

$ find . -type f -exec du -k {} \; | sort -nrk 1 | head

We used find to pass only regular files to du, rather than allowing du to traverse recursively by itself.

Calculating execution time for a command

While testing an application or comparing different algorithms for a given problem, the execution time taken by a program is very critical. A good algorithm should execute in the minimum amount of time. There are several situations in which we need to monitor the time taken for execution by a program. For example, while learning about sorting algorithms, how do you practically state which algorithm is faster? The answer is to calculate the execution time for the same data set. Let's see how to do it.

How to do it...

time is a command that is available on any UNIX-like operating system. You can prefix time to the command whose execution time you want to calculate, for example:

$ time COMMAND

The command will execute and its output will be shown. Along with the output, the time command appends the time taken to stderr. An example is as follows:

$ time ls
test.txt
next.txt
real    0m0.008s
user    0m0.001s
sys     0m0.003s

It shows the real, user, and system times for execution. The three different times can be defined as follows:

- Real is wall clock time—the time from start to finish of the call. This is all elapsed time, including time slices used by other processes and the time the process spends when blocked (for example, while waiting for I/O to complete).
- User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only the actual CPU time used in executing the process. Other processes, and the time the process spends when blocked, do not count towards this figure.
- Sys is the amount of CPU time spent in the kernel within the process. This means the CPU time spent in system calls within the kernel, as opposed to library code, which still runs in user space.
Like 'user time', this is only the CPU time used by the process.

An executable binary of the time command is available at /usr/bin/time, and a shell built-in named time also exists. When we run time, the shell built-in is invoked by default. The shell built-in has limited options; hence, we should use the absolute path of the executable (/usr/bin/time) for the additional functionality described next.

We can write the time statistics to a file using the -o filename option as follows:

$ /usr/bin/time -o output.txt COMMAND

The filename should always appear after the -o flag. In order to append the time statistics to a file without overwriting it, use the -a flag along with the -o option as follows:

$ /usr/bin/time -a -o output.txt COMMAND

We can also format the time outputs using format strings with the -f option. A format string consists of parameters corresponding to specific statistics, prefixed with %. The format parameters for real time, user time, and sys time are as follows:

- Real time: %e
- User time: %U
- Sys time: %S

By combining such parameters, we can create formatted output as follows:

$ /usr/bin/time -f "FORMAT STRING" COMMAND

For example:

$ /usr/bin/time -f "Time: %U" -a -o timing.log uname
Linux

Here %U is the parameter for user time. When formatted output is produced, the timing information is written to standard error, while the output of the COMMAND being timed goes to standard output as usual. We can therefore redirect the command's output using the redirection operator (>) and redirect the timing information using the stderr redirection operator (2>). For example:

$ /usr/bin/time -f "Time: %U" uname > command_output.txt 2>time.log
$ cat time.log
Time: 0.00
$ cat command_output.txt
Linux

Many details regarding a process can be collected using the time command. Important details include exit status, number of signals received, number of context switches made, and so on. Each can be displayed using a suitable format string.
The following table shows some of the interesting parameters that can be used. For example, the page size can be displayed using the %Z parameter as follows:

$ /usr/bin/time -f "Page size: %Z bytes" ls > /dev/null
Page size: 4096 bytes

Here the output of the timed command is not required, so the standard output is directed to the /dev/null device to prevent it from being written to the terminal.
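The find, du, sort, and head recipe above can be wrapped into a small reusable function. This is just a packaging of the one-liner from the recipe; the function name largest_files is ours.

```shell
#!/bin/sh
# largest_files DIR [N] -- list the N largest regular files under DIR
# (default 10), biggest first, with sizes in kilobytes.
largest_files() {
    dir="$1"
    count="${2:-10}"
    # find restricts the listing to regular files, so directories never
    # appear in the result; du -k prints each size in KB.
    find "$dir" -type f -exec du -k {} \; | sort -nrk 1 | head -n "$count"
}

# Example: largest_files /home/slynux 4
```

Because find hands each file to du individually, this is slower than a single recursive du on large trees, but it gives exactly the files-only listing the recipe describes.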
25 Useful Extensions for Drupal 7 Themers

Packt
07 Jun 2011
5 min read
Drupal 7 Themes

Create new themes for your Drupal 7 site with a clean layout and powerful CSS styling

Drupal modules

There exist within the Drupal.org site a number of modules that are relevant to your work of theming a site. Some are straightforward tools that make your standard theming tasks easier; others are extensions to Drupal functionality that enable you to do new things, or to do things from the admin interface that would normally require working with the code. The list here is not meant to be comprehensive, but it does cover the key modules that are either presently available for Drupal 7 or at least in development. Additional relevant modules exist that are not listed here because, at the time of writing, they showed no signs of providing a Drupal 7 version.

Caution

One thing to keep in mind here—some of these modules attempt to reduce complex tasks to simple GUI-based admin interfaces. While that is a wonderful and worthy effort, you should be conscious of the fact that tools of this nature can sometimes raise performance and security issues and, due to their complexity, sometimes conflict with other modules that perform at least part of the same functions. As with any new module, test it out locally first and make sure it not only does what you want, but also does not provide any unpleasant surprises.

The modules covered in this article include:

- Administration Menu
- Chaos Tool Suite
- Colorbox
- Conditional Stylesheets
- Devel
- @font-your-face
- Frontpage
- HTML5 Tools
- .mobi loader
- Mobile Theme
- Nice Menus
- Noggin
- Organic Groups
- Panels
- Semantic Views
- Skinr
- Style Guide
- Sweaver
- Taxonomy Theme
- Theme Developer
- ThemeKey
- Views
- Webform

Administration Menu

The Administration Menu was a mainstay of many Drupal sites built during the lifespan of Drupal 6.x.
With the arrival of Drupal 7, we thought it unlikely we would need the module, as the new toolbar functionality in the core accomplished a lot of the same thing. In the course of writing this, however, we installed Administration Menu and were pleasantly surprised to find that not only can you run the old-style Administration Menu, but there is now also the option to run a Toolbar-style Administration Menu, as shown in the following screenshot:

The Administration Menu Toolbar offers all the options of the default Toolbar, plus the added advantage of exposing all the menu options without having to navigate through sub-menus on the overlay. Additionally, you have fast access to clearing the cache, running cron, and disabling the Devel module (assuming you have it installed). A great little tweak to the new Drupal 7 administration interface. View the project at http://drupal.org/project/admin_menu.

Chaos Tool Suite

This module provides a collection of APIs and tools to assist developers. Though the module is required by both the Views and Panels modules, discussed elsewhere in this article, it provides other features that also make it attractive. Among the tools to help themers are the Form Wizard, which simplifies the creation of complex forms, and the Dependent widget, which allows you to set conditional field visibility on forms. The suite also includes CSS Tools to help cache and sanitize your CSS. Learn more at http://drupal.org/project/ctools.

Colorbox

The Colorbox module for Drupal provides a jQuery-based lightbox plugin. It integrates the third-party plugin of the same name (http://colorpowered.com/colorbox/). The module allows you to easily create lightboxes for images, forms, and content. The module supports the most commonly requested features, including slideshows, captions, and the preloading of images. Colorbox comes with a selection of styles, or you can create your own with CSS.
To run this module, you must first download and install the Colorbox plugin from the aforementioned URL. Visit the Colorbox Drupal module project page at http://drupal.org/project/colorbox.

Conditional Stylesheets

This module allows themers to easily address cross-browser compatibility issues with Internet Explorer. With this module installed, you can add stylesheets targeting the browser via the theme's .info file, rather than having to modify the template.php file. The module relies on the conditional comments syntax originated by Microsoft. To learn more, visit the project site at http://drupal.org/project/conditional_styles.

Devel

The Devel module is a suite of tools and utilities useful to both module and theme developers. Among the options it provides:

- Auto-generate content, menus, taxonomies, and users
- Print summaries of DB queries
- Print arrays
- Log performance
- Summaries of node access

The module is also a prerequisite for the Theme Developer module, discussed later in this article. Learn more: http://drupal.org/project/devel.

@font-your-face

@font-your-face provides an admin interface for browsing and applying web fonts to your Drupal themes. The module employs the CSS @font-face syntax and draws upon a variety of online font resources, including Google Fonts, Typekit.com, KERNEST, and others. The system automatically loads fonts from the selected sources and you can apply them to the styles you designate—without having to manually edit the stylesheets. It's easy to use and has the potential to change the way you select and use fonts on your websites. @font-your-face requires the Views module to function. Learn more at the project site: http://drupal.org/project/fontyourface.

Frontpage

This module serves a very specific purpose—it allows you to designate, from the admin interface, different front pages for anonymous and authenticated users.
Though you can accomplish the same thing through use of $classes and a bit of work, the module makes it possible for anyone to set this up without having to resort to coding. Visit the project site at http://drupal.org/project/frontpage.
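All of the modules above install the same way, and if you have Drush available you can script the download-and-enable step instead of using the admin UI. The sketch below assumes the classic drush dl / drush en commands and guards against machines where Drush is not installed; run it from the Drupal site root.

```shell
#!/bin/sh
# install_module NAME -- download and enable a contributed module with
# Drush, if Drush is present; otherwise print a note and do nothing.
install_module() {
    module="$1"
    if ! command -v drush >/dev/null 2>&1; then
        echo "drush not found; install it or use the admin UI instead"
        return 0
    fi
    # 'drush dl' fetches the project, 'drush en' enables it; -y skips prompts
    drush dl "$module" -y && drush en "$module" -y
}

# Example: install_module admin_menu
```

For a themer setting up several of these modules at once, a short loop over the module names saves a lot of clicking.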
Choosing your shipping method

Packt
19 Jun 2013
9 min read
(For more resources related to this topic, see here.)

Getting ready

To view and edit our shipping methods we must first navigate to System | Configuration | Shipping Methods. Remember, our Current Configuration Scope field is important, as shipping methods can be set on a per-website scope basis. There are many shipping methods available by default, but the main generic methods are Flat Rate, Table Rates, and Free Shipping. By default, Magento comes with the Flat Rate method enabled. We are going to start off by disabling this shipping method.

Be careful when disabling shipping methods; if we leave our Magento installation without any active shipping methods then no orders can be placed—the customer would be presented with this error in the checkout: Sorry, no quotes are available for this order at this time. Likewise, manual orders placed through the administration panel will also receive the error.

How to do it...

To disable our Flat Rate method we need to navigate to its configuration options in System | Configuration | Shipping Methods | Flat Rate, choose Enabled as No, and click on Save. The following screenshot highlights our current configuration scope and the disabled Flat Rate method:

Next we need to configure our Table Rates method: click on the Table Rates tab, set Enabled to Yes, within Title enter National Delivery, and within Method Name enter Shipping. Finally, for the Condition option select Weight vs. Destination (all the other information can be left as default, as it will not affect our pricing for this scenario).

To upload our spreadsheet for our new Table Rates method we need to first change our scope (shipping rates imported via a .csv file are always entered at a website view level). To do this we need to select Main Website (this wording can differ depending on System | Manage Stores settings) from our Current Configuration Scope field.
The following screenshot shows the change in input fields when our configuration scope has changed:

Click on the Export CSV button and we should start downloading a blank .csv file (or, if there are rates already, it will contain our active rates). Next we will populate our spreadsheet with the following information (shown in the screenshot) so that we can ship to anywhere in the USA:

After finishing our spreadsheet we can now import it: with our Current Configuration Scope field set to our website view, click on the Choose File/Browse button and upload it. Once the browser has uploaded the file, we can click on Save.

Next we are going to configure our Free Shipping method to run alongside our Table Rates method. To start, we need to switch back to our Default Config scope and then click on the Free Shipping tab. Within this tab we will set Enabled to Yes and Minimum Order Amount to 50. We can leave the other options as default.

How it works...

The following is a brief explanation of each of our main shipping methods.

Flat Rate

The Flat Rate method allows us to specify a fixed shipping charge to be applied either per item or per order. The Flat Rate method also allows us to specify a handling fee—a percentage or fixed-amount surcharge added to the flat rate fee. With this method we can also specify which countries we wish to make this shipping method applicable for (dependent solely on the customer's shipping address details). Unlike the Table Rates method, you cannot specify multiple flat rates for any given region of a country, nor can you specify flat rates individually per country.

Table Rates

The Table Rates method uses a spreadsheet of data to increase the flexibility of our shipping charges by allowing us to apply different prices to our orders depending on the criteria we specify in the spreadsheet.
Along with the liberty to specify which countries this method is applicable for, and the option to apply a handling fee, the Table Rates method also allows us to choose from a variety of shopping cart conditions. The condition we select affects the data that we can import via the spreadsheet. Inside this spreadsheet we can specify hundreds of rows of countries along with their specific states or Zip/Postal Codes. Each row has a condition such as weight (and above) and also a specific price. If a shopping cart matches the criteria entered on any of the rows, the shipping price will be taken from that row and applied to the cart.

In our example we have used Weight vs. Destination; there are two other conditions in a default Magento installation that could be used to calculate the shipping:

- Price vs. Destination: This condition takes into account the Order Subtotal (and above) amount, in whichever currency is currently set for the store
- # of Items vs. Destination: This condition calculates the shipping cost based on the # of Items (and above) within the customer's basket

Free Shipping

The Free Shipping method is one of the simplest and most commonly used of all the methods that come with a default Magento installation. One of the best ways to increase the conversion rate through your Magento store is to offer your customers free shipping, and Magento allows you to do this with its Free Shipping method. Selecting the countries that this method is applicable for and entering a minimum order amount as the criteria will enable this method in the checkout for any matching shopping cart. Unfortunately, you cannot specify regions of a country within this method (although you can still offer a free shipping solution through table rates and promotional rules).

Our configuration

As mentioned previously, the Table Rates method provides us with three types of conditions.
In our example we created a table rate spreadsheet that relies on the weight information of our products to work out the shipping price.

Magento's default Free Shipping method is one of the most popular and useful shipping methods, and its most important configuration option is Minimum Order Amount. Setting this value to 50 tells Magento that any shopping cart with a subtotal greater than $50 should offer the Free Shipping method to the customer; we can see this demonstrated in the following screenshot:

The Enabled option is a standard feature among nearly all shipping method extensions. Whenever we wish to enable or disable a shipping method, all we need to do is set it to Yes to enable it or No to disable it.

Once we have configured our Table Rates extension, Magento will take the values entered by our customer and try to match them against our imported data. In our case, if a customer has ordered a product weighing 2.5 kg and they live anywhere in the USA, they will be presented with our $6.99 price. However, a drawback of our example is that if they live outside of the USA, our shipping method will not be available.

The .csv file for our Weight vs. Destination spreadsheet is slightly different from the spreadsheets used for the other Table Rates conditions. It is therefore important to make sure that if we change our condition, we export a fresh spreadsheet with the correct column information.

One very important point when editing our shipping spreadsheets is the format of the file—programs such as Microsoft Excel sometimes save in an incompatible format. It is recommended to use the free, downloadable Open Office suite to edit any of Magento's spreadsheets, as it saves files in a compatible format. We can download Open Office at www.openoffice.org. If there is no alternative but to use Microsoft Excel, then we must ensure we save as CSV for Windows or alternatively CSV (Comma Delimited).
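To see the expected shape of the file, here is a shell sketch that writes a small Weight vs. Destination rate sheet. The rates are invented for illustration, and while the column headings shown match what Magento exports for this condition, exporting a blank CSV from your own installation remains the safest starting point.

```shell
#!/bin/sh
# Write a sample Weight vs. Destination table-rates CSV.
# Rates are made up; confirm the column headings against a CSV exported
# from your own Magento installation before importing.
cat > tablerates.csv <<'EOF'
Country,Region/State,Zip/Postal Code,Weight (and above),Shipping Price
USA,*,*,0,4.99
USA,*,*,1,6.99
USA,*,*,5,10.99
EOF

# Quick sanity check: every row should have exactly five columns
awk -F',' 'NF != 5 { print "bad row: " $0; exit 1 }' tablerates.csv
```

Note how each Weight (and above) value acts as a FROM weight: a 2.5 kg cart would match the second row ($6.99) until the 5 kg row takes over.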
A few key points when editing the Table Rates spreadsheet:

- The * (asterisk) is a wildcard—similar to saying ANY
- Weight (and above) is really a FROM weight, and sets the price UNTIL the next row with a higher value (for the matching Country, Region/State, and Zip/Postal Code)—the downside of this is that you cannot set a maximum weight limit
- The Country column takes three-letter codes—ISO 3166-1 alpha-3 codes
- The Zip/Postal Code column takes either a full USA ZIP code or a full postal code
- The Region/State column takes all two-letter state codes from the USA, or any other codes that are available in the drop-down select menus for regions on the checkout pages of Magento

One final note is that we can run as many shipping methods as we like at the same time—just as we did with our Free Shipping method and our Table Rates method.

There's more...

For more information on setting up the many shipping methods available within Magento, please see the following link: http://innoexts.com/magento-shipping-methods

We can also enable and disable shipping methods on a per-website view basis, so, for example, we could disable a shipping method for our French store.

Disabling Free Shipping for the French website

If we wanted to disable our Free Shipping method for just our French store, we could change our Current Configuration Scope field to our French website view and then perform the following steps:

Navigate to System | Configuration | Shipping Methods and click on the Free Shipping tab. Uncheck Use Default next to the Enabled option, set Enabled to No, and then click on Save Config.

We can see that Magento normally defaults all of our settings to the Default Config scope; by unchecking the Use Default checkbox we can edit our method for our chosen store view.

Summary

This article explored the differences between the Flat Rate, Table Rates, and Free Shipping methods, as well as taught us how to disable a shipping method and configure Table Rates.
Resources for Article:

Further resources on this subject:

- Magento Performance Optimization [Article]
- Magento: Exploring Themes [Article]
- Getting Started with Magento Development [Article]
Installing and Configuring Drupal Commerce

Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.)

Installing Drupal Commerce to an existing Drupal 7 website

There are two approaches to installing Drupal Commerce; this recipe covers installing Drupal Commerce on an existing Drupal 7 website.

Getting started

You will need to download Drupal Commerce from http://drupal.org/project/commerce. Download the most recent recommended release that matches your Drupal 7 website's core version:

You will also require the following modules to allow Drupal Commerce to function:

- Ctools: http://drupal.org/project/ctools
- Entity API: http://drupal.org/project/entity
- Views: http://drupal.org/project/views
- Rules: http://drupal.org/project/rules
- Address Field: http://drupal.org/project/addressfield

How to do it...

Now that you're ready, install Drupal Commerce by performing the following steps:

Install the modules that Drupal Commerce depends on, first by copying the module files into your Drupal site's modules directory, sites/all/modules.

Install Drupal Commerce's modules next, by copying the files into the sites/all/modules directory, so that they appear in the sites/all/modules/commerce directory.

Enable the newly installed Drupal Commerce module in your Drupal site's administration panel (example.com/admin/modules if you've installed Drupal at example.com), under the Modules navigation option, by ensuring the checkbox to the left-hand side of the module name is checked.

Now that Drupal Commerce is installed, a new menu option will appear in the administration navigation at the top of your screen when you are logged in as a user with administration permissions. You may need to clear the cache to see this; navigate to Configuration | Development | Performance in the administration panel to do so.

How it works...
Drupal Commerce depends on a number of other Drupal modules to function, and by installing and enabling these in your website's administration panel you're on your way to getting your Drupal Commerce store off the ground. You can also install the Drupal Commerce modules via Drush (the Drupal shell). For more information on Drush, see http://drupal.org/project/drush.

Installing Drupal Commerce with Commerce Kickstart 2

Drupal Commerce requires quite a number of modules, and doing a basic installation can be quite time-consuming, which is where Commerce Kickstart 2 comes in. It packages Drupal 7 core and all of the necessary modules. Using Commerce Kickstart 2 is a good idea if you are building a Drupal Commerce website from scratch and don't already have Drupal core installed.

Getting started

Download Commerce Kickstart 2 from its drupal.org project page at http://drupal.org/project/commerce_kickstart.

How to do it...

Once you have decompressed the Commerce Kickstart 2 files to the location you want to install Drupal Commerce in, perform the following steps:

1. Visit the given location in your web browser. For this example, it is assumed that your website is at example.com, so visit this address in your web browser. You'll see that you are presented with a welcome screen, as shown in the following screenshot:
2. Click the Let's Get Started button underneath this, and the installer moves to the next configuration option.
3. Next, your server's requirements are checked to ensure Drupal can run in this environment. In the preceding screenshot you can see some common problems when installing Drupal that prevent installation. In particular, ensure that you create the /sites/default/files directory in your Drupal installation and ensure it has permissions that allow Drupal to write to it (as this is where your website's images and files are stored). You will also need to copy the /sites/default/default.settings.php file to /sites/default/settings.php before you can start. Make sure this file is writeable by Drupal too (you'll secure it after installation is complete).
4. Once these problems have been resolved, refresh the page and you will be taken to the Set up database screen. Enter the database username, password, and database name you want to use with Drupal, and click on Save and continue.
5. The next step is the Install profile section, which can take some time as Drupal Commerce is installed for you. There's nothing for you to do here; just wait for installation to complete! You can now safely remove write permissions for the settings.php file in the /sites/default directory of your Drupal Commerce installation.
6. The next step is Configure site. Enter the name of your new store and your e-mail address here, and provide a username and password for your Drupal Commerce administrator account. Don't forget to make a note of these, as you'll need them to access your website later! Below these options, you can specify the country of your server and the default time zone. These are usually picked up from your server itself, but you may want to change them.
7. Click on the Save and continue button to progress; the next step is Configure store. Here you can set your Default store country field (if it's different from your server settings) and opt to install Drupal Commerce's demo, which includes sample content and a sample Drupal Commerce theme too.
8. Further down on this screen, you're presented with more options. By checking the Do you want to be able to translate the interface of your store? field, Drupal Commerce provides you with the ability to translate your website for customers speaking different languages (for this simple store installation, leave this set to No). Finally, you can set the Default store currency field you wish to use, and whether you want Commerce Kickstart to set up a sales tax rule for your store (select whichever is more appropriate for your store, or leave it set to No sample tax rate for now).
9. Click on Create and finish at the bottom of the screen. If you chose to install the demo store in the previous screen, you will have to wait as it is added for you.
10. There are now options to allow Drupal to check for updates automatically, and to receive e-mails about security updates. Leave these both checked to help you stay on top of keeping your Drupal website secure and up to date.
11. Wait as Commerce Kickstart installs everything Drupal Commerce requires to run.

That's it! Your Drupal Commerce store is now up and running, thanks to Commerce Kickstart 2.

How it works...

The Commerce Kickstart package includes Drupal 7 core and the Drupal Commerce module. By packaging these together, installation and initial configuration of your Drupal Commerce store is made much easier!

Creating your first product

Now that you've installed Drupal Commerce, you can start to add products to display to customers and start making money. In this recipe you will learn how to add a basic product to your Drupal Commerce store.

Getting started

Log in to your Drupal Commerce store's administration panel, and navigate to Products | Add a product. If you haven't already, navigate to Site settings | Modules and ensure that the Commerce Kickstart Menu module is enabled for your store. Note that the sample products from Commerce Kickstart's installation are displayed there.

How to do it...

To get started adding a product to your store, click on the Add product button and follow these steps:

1. Click on Product display. Product displays group multiple related product variations together for display on the frontend of your website.
2. Fill in the form that appears, entering a suitable Title, using the Body field for the product's description, as well as filling in the SKU (stock keeping unit; a unique reference for this product) and Price fields. Ensure that the Status field is set to Active. You can also optionally upload an image for the product here.
3. Optionally, you can assign the product to one of the pre-existing categories in the Product catalog tab underneath these fields, as well as give it a URL in the URL path settings tab.
4. Click on the Save product button, and you've now created a basic product in your store.
5. To view the product on the frontend of your store, you can navigate to the category listings if you imported Drupal Commerce's demo data, or else you can return to the Products menu and click on the name of the product in the Title column. You'll now see your product on the frontend of your Drupal Commerce store.

How it works...

In Drupal Commerce, a product can represent several things, listed as follows:

A single product for sale (for example, a one-size-fits-all t-shirt)
A variation of a product (for example, a medium-size t-shirt)
An item that is not necessarily a purchase as such (for example, it may represent a donation to a charity)
An intangible product which the site allows reservations for (for example, an event booking)

Product displays (for example, a blue t-shirt) are used to group product variations (for example, a medium-sized blue t-shirt and a large-sized blue t-shirt), and display them on your website to customers. So, depending on the needs of your Drupal Commerce website, products may be displayed on unique pages, or multiple products might be grouped onto one page as a product display.
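The display-versus-variation relationship can be pictured as a plain data structure. The following is an illustrative sketch only, not Drupal Commerce's actual data model or API; buildDisplay and priceRange are made-up names for the purpose of the example:

```javascript
// Illustrative sketch: a product display groups several product
// variations (each with its own SKU and price) under one catalog page.
function buildDisplay(title, variations) {
  return {
    title: title,
    variations: variations,
    // The price range shown on the catalog page spans the variations.
    priceRange: function () {
      var prices = this.variations.map(function (v) { return v.price; });
      return [Math.min.apply(null, prices), Math.max.apply(null, prices)];
    }
  };
}

var blueTshirt = buildDisplay('Blue T-shirt', [
  { sku: 'TSHIRT-BLUE-M', size: 'M', price: 15.00 },
  { sku: 'TSHIRT-BLUE-L', size: 'L', price: 17.00 }
]);

console.log(blueTshirt.priceRange()); // [15, 17]
```

On the frontend, the customer sees one "Blue T-shirt" page with a size selector, while each variation keeps its own SKU for stock and pricing purposes.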


Finding and Fixing Joomla! 1.5x Customization Problems

Packt
06 Oct 2009
12 min read
Understanding common errors

There are five main areas that cause the majority of problems for Joomla! sites. Understanding these areas, and the common problems that occur within each of them, is a very important part of fixing them and thus, our site. Even though there is a practically unlimited number of potential issues and problems that can occur, certain problems occur much more regularly than others. If we understand these main problems, we should be able to take care of many of the problems that will occur on our site without needing to resort to hiring people to fix them, or waiting for extension developers to provide support. The five areas are:

PHP code
JavaScript code
CSS/HTML code
Web server
Database

We will now look at the two most common error sources, PHP and JavaScript.

PHP code

Because PHP code is executed on the server, we usually have some control over the conditions that it is subject to. Most PHP errors originate from one of four sources:

Incorrect extension parameters
PHP code error
PHP version
Server settings

Incorrect extension parameters

It is often easy to misunderstand what the correct value for an extension parameter is, or whether a particular parameter is required or not. These misunderstandings are behind a large number of PHP "errors" that developers experience when building a site.

Diagnosis

In a well-coded extension, putting the wrong information into a parameter shouldn't result in an error, but will usually result in the extension producing strange or unexpected output, or even no output at all. In a poorly coded extension, an incorrect parameter value will probably cause an error. These errors are often easy to spot, especially in modules, because our site will output everything it processed up until the point of the error, giving our page the appearance of being cut off.
Some very minor errors may even result in the whole page, except for the error-causing extension, being output correctly, with error messages appearing in the page where the extension was supposed to appear. A critical error, however, may cause the site to crash completely and output only an error message. In extreme cases not even an error message will be output, and visitors will only see a white screen. The messages should always appear in our PHP log, though.

Fixing the problem

Incorrect extension parameters are the easiest problems to fix, and are often solved simply by going through the parameter screens for the extensions on the page with the errors, and making sure they all have correct values. If they all look correct, then we may want to try changing some parameters to see if that fixes the issue. If this still doesn't work, then we have a genuine error.

PHP code error

Extension developers aren't perfect, and even the best ones can overlook or miss small issues in the code. This is especially true with large, complex extensions, so please remember that even if an extension has a PHP code error, it may not necessarily mean that the whole extension is poorly coded.

Diagnosis

Similar to incorrect extension parameters, a PHP coding error will usually result in a cut-off page or a white screen, sometimes with an error message displayed, sometimes without. Whether an error message is displayed or not depends partly on the configuration of your server, and partly on how severe the error was. Some servers are configured to suppress the output of certain types of errors. Regardless of the screen output, all PHP errors should be output to the PHP log. So, if we get a white screen, or even get a normal screen but strange output, checking our PHP log can often help us find the problem. PHP logs can reside in different places on differently configured servers, although the log will almost always be in a directory called logs.
We may also not have direct access to the log, again depending on our server host. We should ask our web hosting company's support staff for the location of our PHP log if we can't easily find it. Some common error messages and causes are:

Parse error: parse error, unexpected T_STRING in...
This is usually caused by a missing semicolon at the end of a line, or a missing double quote (") or closing bracket ()) after we opened one. For quotes and semicolons, the problem is usually the line above the one reported in the error. For missing brackets, the error will sometimes occur at the end of the script, even though the problem code is much earlier in the script.

Parse error: syntax error, unexpected $end in...
We are most likely missing a closing brace (}) somewhere. Make sure that each opening brace ({) we have has been closed with a closing brace (}).

Parse error: syntax error, unexpected T_STRING, expecting ',' or ';' in...
There may be double quotes within double quotes. They either need to be escaped, using a backslash before the inside quote, or changed to single quotes.

Fixing the problem

Fixing a PHP code error is possible, but can be difficult depending on the extension. Usually when there is a PHP code error, it will give a brief description of the error and a line number. If nothing is being output at all, then we may need to turn error reporting up as described later. We will then go to the line specified and examine it and the lines around it to try and find our problem. If we can't find an obvious error, then it might be better to take the error back to the developer and ask them for support.

PHP version

The current version of PHP is 5.x.x, and version 6.x is expected soon, but because many older, yet still popular, applications only run on PHP version 4.x.x, it is still very common to find web hosting companies using PHP 4 on their servers.
This problem is even more unfortunate due to the fact that PHP 4 isn't even supported anymore by the PHP developers. In PHP 5, there are many new functions and features that don't exist in PHP 4. As a result, using these functions in an extension will cause it to error when run on a PHP 4 server.

Diagnosis

Diagnosing the wrong PHP version is not obvious, as it will usually result in an error about an unknown function when the extension tries to call a function that doesn't exist in the version of PHP installed on our server. Sometimes the error will not be that the function is unknown, but that the number of parameters we are sending it is incorrect, if they were changed between PHP 4 and PHP 5.

Fixing the problem

The only real way to fix the problem is to upgrade our PHP version. Some web hosts offer PHP 4 or 5 as an option, and it might be as simple as checking a box or clicking a button to turn on PHP 5. If our host doesn't offer PHP 5 at all, the only solution is to use a different extension or change our web host. This may actually be a good idea anyway, because if our host is still using an unsupported PHP version with no option to upgrade, then what other unsupported, out-of-date software is running on those servers?

Server settings

One of the most common problems encountered by site owners in regards to server settings is file permissions. Many web hosting companies run Linux, which uses a three-part permission model, on their servers. Using this model, every file can have separate permissions set for:

The user who owns the particular file
Other users in the same user group as the owner
Everyone else (in a web site situation, this is mainly the site visitors)

Each file also has three permissions that enable, or disable, certain actions on the file. These permissions are read, write, and execute. Permissions are usually expressed in one of two ways: as single characters in a file listing, or as a three-digit number.
For example, a file listing on a Linux server might look like this:

drwxr-x--- 2 auser agroup 4096 Dec 28 04:09 tmp
-rwxr-x--- 1 auser agroup  345 Sep  1 04:12 somefile.php
-rwxr--r-- 1 auser agroup  345 Sep  1 04:12 foo

The very first character to the left, a d or - in this case, indicates whether this is a directory (the d) or a file (the -). The next nine characters indicate the permissions and who they apply to. The first three belong to the file owner, the next three to those in the same group as the owner, and the final three to everyone else. The letters used are:

r - read permission
w - write permission
x - execute permission

A dash (-) indicates that this permission hasn't been given to those particular people. So in our example above, somefile.php can be read, written to, or executed by the owner (auser). It can be read or executed (but not written to) by other users in the same group (agroup) as the owner, but the file cannot be used at all by people outside the group. foo, however, can be read by people in the owner's group, and also read by everyone else, but it cannot be executed by them.

As mentioned above, permissions are also often expressed as a three-digit number. Each of the digits represents the sum of the numbers that represent the permissions granted: r = 4, w = 2, and x = 1. Adding these together gives us a number from 0-7, which indicates the permission level. So a file with a permission level of 644 would translate as:

6 = 4 + 2 = rw
4 = r
4 = r

or -rw-r--r-- in the first notation that we looked at. Most servers are set by default to one of the following:

644 -rw-r--r--
755 -rwxr-xr-x
775 -rwxrwxr-x

All of this looks fine so far. The problems start to creep in depending on how the server runs its PHP. PHP can either be set up to run as the same user who owns all the files (usually our FTP user or hosting account owner), or it can be set up to run as a different user, but in the same group as the owner.
Or it can be set up to be a completely different user and group, as illustrated here:

The ideal setup, from a convenience point of view, is the first one, where PHP is executed as the same user who owns the files. This setup should have no problems with permissions. But the ideal setup for a very security-conscious web host is the third one, since the PHP engine can't be used to hack the web site files, or the server itself. A web server with this setup, though, used to have a difficult time running a Joomla! site. It was difficult because changing the site preferences requires that files be edited by the PHP user, uploading extensions means that folders and files need to be created by the PHP user, and so on. If the PHP engine isn't even in the same group as the file owner, then it gets treated the same as any site visitor and can usually only read, and probably execute, files, but not change them. This prevents us from editing preferences or uploading new extensions.

However, if we changed the files so that the PHP engine could edit and execute them (permission 777, for example), then anyone who can see our site on the internet can potentially edit and execute our files, making our site very vulnerable to being hacked by even a novice hacker. We should never give files or directories a permission of 777 (read, write, and execute for all three user types), because it is almost guaranteed that our site will be hacked eventually as a result. If, for some reason, we need to do it for testing, or in order to install extensions, then we should change it back as soon as possible.

Diagnosis

Spotting this problem is relatively simple. If we can't edit our web site configuration, or install any extensions at all, then nine times out of ten server permissions will be the problem.
Fixing the problem

We can start by asking our web host whether they allow PHP to be run as CGI, or will install suEXEC (technical terms for running PHP as the same user who owns the files), and if so, how we set it up. If they don't allow this, then the next best option is to enable the Joomla! FTP layer in our configuration. This will force Joomla! to log into our site as the FTP user, which is almost always the same user that uploaded the site files, to edit or install files. We can enable the FTP layer by going to the Site | Global Configuration page and then clicking on the Server item in the menu below the heading. We can then enter the required information for the FTP layer on this screen. The FTP layer should only be used on Linux-based servers. More information about the FTP layer can be found in the official Joomla! documentation at http://help.joomla.org/content/view/1941/302/1/2/

If for some reason the FTP layer doesn't work, we only have two other options. We could change our web hosting provider. Or, whenever we want to install a new extension or change our configuration, we can change the permissions on our folders, perform our tasks, and then change the permissions back to their original settings.
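The three-digit permission arithmetic described earlier (r = 4, w = 2, x = 1, summed per owner/group/other) can be sketched as a small conversion routine. This is purely an illustration of the arithmetic; modeToOctal is a made-up name, not part of any Joomla! or system API:

```javascript
// Convert a nine-character symbolic mode such as "rw-r--r--" into
// its octal form: each rwx triplet maps to r=4, w=2, x=1, summed.
function modeToOctal(mode) {
  var digits = '';
  for (var i = 0; i < 9; i += 3) {
    var triplet = mode.slice(i, i + 3);
    var value = 0;
    if (triplet[0] === 'r') value += 4;
    if (triplet[1] === 'w') value += 2;
    if (triplet[2] === 'x') value += 1;
    digits += value;
  }
  return digits;
}

console.log(modeToOctal('rw-r--r--')); // "644"
console.log(modeToOctal('rwxr-xr-x')); // "755"
console.log(modeToOctal('rwxrwxr-x')); // "775"
```

Running the three default modes from the list above through the function reproduces the familiar 644, 755, and 775 values.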

Using jQuery Script for Creating Dynamic Table of Contents

Packt
21 Oct 2009
6 min read
A typical jQuery script uses a wide assortment of the methods that the library offers. Selectors, DOM manipulation, event handling, and so forth come into play as required by the task at hand. In order to make the best use of jQuery, we need to keep in mind the wide range of capabilities it provides.

A Dynamic Table of Contents

As an example of jQuery in action, we'll build a small script that will dynamically extract the headings from an HTML document and assemble them into a table of contents for that page. Our table of contents will be nestled in the top right corner of the page. We'll have it collapsed initially, but a click will expand it to full height. At the same time, we'll add a feature to the main body text: the introduction of the text on the page will not be initially loaded, but when the user clicks on the word Introduction, the introductory text will be inserted in place from another file. Before we reveal the script that performs these tasks, we should walk through the environment in which the script resides.

Obtaining jQuery

The official jQuery website (http://jquery.com/) is always the most up-to-date resource for code and news related to the library. To get started, we need a copy of jQuery, which can be downloaded right from the home page of the site. Several versions of jQuery may be available at any given moment; the latest uncompressed version will be most appropriate for us. No installation is required for jQuery. To use it, we just need to place it on our site in a public location. Since JavaScript is an interpreted language, there is no compilation or build phase to worry about. Whenever we need a page to have jQuery available, we simply refer to the file's location from the HTML document.

Setting Up the HTML Document

There are three sections to most examples of jQuery usage: the HTML document itself, CSS files to style it, and JavaScript files to act on it.
For this example, we'll use a page containing the text of a book:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xml:lang="en" lang="en">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
    <title>Doctor Dolittle</title>
    <link rel="stylesheet" href="dolittle.css" type="text/css" />
    <script src="jquery.js" type="text/javascript"></script>
    <script src="dolittle.js" type="text/javascript"></script>
  </head>
  <body>
    <div id="container">
      <h1>Doctor Dolittle</h1>
      <div class="author">by Hugh Lofting</div>
      <div id="introduction">
        <h2><a href="introduction.html">Introduction</a></h2>
      </div>
      <div id="content">
        <h2>Puddleby</h2>
        <p>ONCE upon a time, many years ago when our grandfathers
           were little children--there was a doctor; and his name was
           Dolittle-- John Dolittle, M.D. &quot;M.D.&quot; means
           that he was a proper doctor and knew a whole lot.
        </p>
        <!-- More text follows... -->
      </div>
    </div>
  </body>
</html>

The actual layout of files on the server does not matter. References from one file to another just need to be adjusted to match the organization we choose. In most examples in this book, we will use relative paths to reference files (../images/foo.png) rather than absolute paths (/images/foo.png). This will allow the code to run locally without the need for a web server. The stylesheet is loaded immediately after the standard <head> elements.
Here are the portions of the stylesheet that affect our dynamic elements:

/* -----------------------------------
   Page Table of Contents
-------------------------------------- */
#page-contents {
  position: absolute;
  text-align: left;
  top: 0;
  right: 0;
  width: 15em;
  border: 1px solid #ccc;
  border-top-width: 0;
  border-right-width: 0;
  background-color: #e3e3e3;
}
#page-contents h3 {
  margin: 0;
  padding: .25em .5em .25em 15px;
  background: url(arrow-right.gif) no-repeat 0 2px;
  font-size: 1.1em;
  cursor: pointer;
}
#page-contents h3.arrow-down {
  background-image: url(arrow-down.gif);
}
#page-contents a {
  display: block;
  font-size: 1em;
  margin: .4em 0;
  font-weight: normal;
}
#page-contents div {
  padding: .25em .5em .5em;
  display: none;
  background-color: #efefef;
}
/* -----------------------------------
   Introduction
-------------------------------------- */
.dedication {
  margin: 1em;
  text-align: center;
  border: 1px solid #555;
  padding: .5em;
}

After the stylesheet is referenced, the JavaScript files are included. It is important that the script tag for the jQuery library be placed before the tag for our custom scripts; otherwise, the jQuery framework will not be available when our code attempts to reference it.
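The dolittle.js script itself is not shown in this excerpt, so as a rough sketch of the core idea only (not the book's actual code): jQuery would collect the headings with a selector such as $('#content h2'), and the list-building step is plain string work. buildTocHtml below is an illustrative helper name of my own:

```javascript
// In the real script, jQuery might gather the headings like this:
//   var headings = $('#content h2').map(function (i) {
//     $(this).attr('id', 'toc-' + i);
//     return { id: 'toc-' + i, text: $(this).text() };
//   }).get();
//
// Assembling the #page-contents box from those headings is then
// ordinary string building:
function buildTocHtml(headings) {
  var items = headings.map(function (h) {
    return '<a href="#' + h.id + '">' + h.text + '</a>';
  });
  return '<div id="page-contents"><h3>Page Contents</h3><div>' +
         items.join('') + '</div></div>';
}

var html = buildTocHtml([{ id: 'toc-0', text: 'Puddleby' }]);
console.log(html);
```

The inner <div> starts hidden (display: none in the stylesheet above), and a click handler on the <h3> would toggle it open, matching the collapsed-by-default behavior described earlier.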

OpenID: The Ultimate Sign On

Packt
23 Oct 2009
13 min read
Introduction

How many times have you walked away from some Internet forum because you could not remember your login ID or password, and just did not want to go through the tedium of registering again? Or gone back to re-register yourself, only to forget your password the next day? Remembering all those login IDs and passwords is indeed an onerous task, and one more registration for a new site seems like one too many. We have all tried to get around these problems by jotting down passwords on pieces of paper or sticking notes to our terminal - all potentially dangerous practices that defeat the very purpose of keeping a digital identity secure.

If you had the choice of a single user ID and password combination - essentially a single digital identity - imagine how easy it might become to sign up or sign in to new sites. Suppose you could also host your own digital identity or get it hosted by third-party providers who you could change at will, or create different identity profiles for different classes of sites, or choose when your user ID with a particular site should expire; suppose you could do all this and more in a free, non-proprietary, open-standards-based, extensible, community-driven framework (whew!) with open source libraries and helpful tutorials to get you on board. You would say: "OpenID". To borrow a quote from the OpenID website openid.net: "OpenID is an open, decentralized, free framework for user-centric digital identity."

The Concept

The concept itself is not new (and there are proprietary authentication frameworks already in existence). We are all aware of reference checks or identity documents where a reliable agency is asked to vouch for your credentials. A passport or a driver's license is a familiar example. Web sites, especially those that transact business, have digital certificates provided by a reliable Certification Authority so that they can prove to you, the site visitor, that they are indeed who they claim to be.
From here, it does not require a great stretch of imagination to appreciate that an individual netizen can have his or her own digital identity based on similar principles. This is how you get the show on the road. First, you need to get yourself a personal identity based on OpenID from one of the numerous OpenID providers[1] or from one of the sites that provide an OpenID with membership. This personal identity comes in the form of a URL or URI (essentially a web address that starts with http:// or https://) that is unique to you.

When you need to sign up or sign in to a web site that accepts OpenID logins (look for the words 'OpenID' or the OpenID logo), you submit your OpenID URL. The web site then redirects you to the site of your ID provider, where you authenticate yourself with your password and optionally choose the details - such as full name, e-mail ID, or nickname, or when your login ID should expire for a particular site - that you want to share with the requesting site, and allow the authentication request to go through. You are then returned to the requesting site. That is all there is to it. You are authenticated!

The requesting site will usually ask you to associate a nickname with your OpenID. It should be possible to register with and sign in to different sites using different nicknames - one for each site - but the same OpenID. But you may not want to overdo this, lest you get into trouble trying to recall the right nickname for a particular site.

Just Enough Detail

This is not a technical how-to. For serious technical details, you can follow the excellent links in the References section. This is a basic guide to get you started with OpenID, to show you how flexible it is, and to give pointers to its technical intricacies. By the end of this article you should be able to create your own personal digital identities based on OpenID (or discover if you already have one - you just might!), and be able to use them effectively.
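At the protocol level, the redirect-and-return flow just described starts with the consumer site sending your browser to the provider's endpoint with a checkid_setup request. The following is only a rough sketch under stated assumptions: the parameter names (openid.mode, openid.identity, openid.return_to) come from the OpenID 1.1 specification, but a real consumer library also performs association and response verification, and buildCheckidSetupUrl is an illustrative name of my own, not a library function:

```javascript
// Sketch of the redirect URL a consumer site constructs when you
// submit your OpenID URL (OpenID 1.1 checkid_setup parameters).
function buildCheckidSetupUrl(serverUrl, identity, returnTo) {
  var params = [
    'openid.mode=checkid_setup',
    'openid.identity=' + encodeURIComponent(identity),
    'openid.return_to=' + encodeURIComponent(returnTo)
  ];
  return serverUrl + '?' + params.join('&');
}

var url = buildCheckidSetupUrl(
  'http://pip.verisignlabs.com/server',           // provider endpoint
  'http://johndoe.pip.verisignlabs.com/',          // your OpenID URL
  'http://www.example.com/openid/return'           // where to send you back
);
console.log(url);
```

After you approve the request on the provider's site, the provider redirects your browser back to the return_to address with a signed response, which the consumer verifies before treating you as authenticated.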
In the following sections, I have used some real web sites as examples. These are only for the purpose of illustration and in no way show any preference or endorsement.

Getting Your OpenID

The simplest and most direct way to get your personal OpenID is to go to a third-party provider. But before that, the smart thing to do would be to find out if you already have one. For instance, if you blog at wordpress.com, then http://yourblogname.wordpress.com is an OpenID already available to you. There are other sites[1], too, that automatically provide you an OpenID with membership. Yahoo! gives you an OpenID if you have an account with them, but it is not automatic; you need to sign up for it at http://openid.yahoo.com. Your OpenID at Yahoo! will be of the form https://me.yahoo.com/your-nickname.

To get your third-party hosted OpenID, we will choose Verisign Labs' Personal Identity Provider (PIP) site, http://pip.verisignlabs.com/, as an example. You are of course free to decide and choose your own provider(s). The sign-up form is a simple no-fuss affair with the minimum number of fields. (If you are tired of hearing 'third party', the reason for using the term will get clearer further on. For the purpose of this article, you, the owner of the OpenID, are the first party; the web site that wants you authenticated is the second party; the OpenID provider is the third.)

After replying to the confirmation e-mail you are ready to take on the wide world with your OpenID. If you gave your ID as 'johndoe' then you will get an OpenID like: http://johndoe.pip.verisignlabs.com. You can come back to the PIP site and update your profile; some sites request information such as full name or e-mail ID, but you are always in control of whether you want to pass this information back to them. If you choose to have just one OpenID, then this is about as much as you would ever do to sign on to any OpenID-enabled site.
You can also create multiple OpenIDs for yourself - remember what we said earlier about having multiple IDs to suit different classes of sites.

Testing Your OpenID

Now that we have our OpenID, we will test it and in the process also see how a typical OpenID-based authentication works in practice. Use the testing form[7] in the References section and enter the OpenID URL that you want tested. When you are redirected to your PIP's site (we are sticking to our Verisign example), enter your password and also choose what information you want passed back to the requesting site before clicking "Allow" to let the authentication go through. Important tip: enter your password only on the PIP's site and nowhere else! Be aware that this particular testing page may not work with all OpenIDs; that may not necessarily mean that the OpenID itself has a problem.

Step-by-Step: Use your WordPress or Verisign OpenID

For this tutorial part, we will take the example of http://www.propeller.com (a voting site, among other things), which accepts OpenID sign-ups and sign-ins. For an OpenID, we will use the URL of your WordPress blog, http://yourblogname.wordpress.com. You could also use your OpenID URL (the one you got from the Verisign example) and follow through.

On the Propeller site, go to the sign-up page. Look for the prominent OpenID logo. Type in your OpenID URL and click on the 'Verify ...' button. You are taken to the site of your PIP, where you need to authenticate yourself.

If you used your Verisign OpenID, enter your password, complete the details you want to pass back to the requesting site (remember, we are trying to sign up with Propeller), and allow the authentication to go through. You are now back at the Propeller site. Just hang in there a moment as we check the flow for a WordPress OpenID.

For a WordPress OpenID, you will get a screen instead that asks you to deliberately sign in to your WordPress account.
Once you are signed in, you will see a hyperlink that prompts you to continue with the authentication request from Propeller. Follow this link to a form that asks your permission to pass information such as your nickname and e-mail ID back to Propeller. You can change both these fields if you wish, and allow the authentication to go through.

Now you should be back at the Propeller site with a successful OpenID verification. The site will ask you to associate a nickname with your OpenID and a working e-mail address to complete your registration. This step is no different from a normal sign-up process. Check your e-mail, click on the link provided therein, get back to the Propeller site, and click another link to complete the registration process. You are automatically signed in to Propeller. Sign out for the moment so that we can see how an OpenID sign-in works.

Go to the sign-in page at Propeller. You will see a normal sign-in and an OpenID sign-in. We will use the OpenID one (of course!). Type in your OpenID URL and click on the "Sign in..." button. Complete the formalities on your PIP's site (for Verisign you will get a sign-in page; for WordPress you will need to sign in first, unless you are already signed in) and let the authentication go through. This time you are back on the Propeller site, all signed in and ready to go. Note that your nickname appears correctly because your OpenID is associated with it.

That is all there is to it. Easier done than said. Try this a couple of times and I bet it will feel easier than the remote control of your home entertainment system!

Your Custom OpenID URL

If you want a personalized OpenID URL and do not like the one provided by your PIP, you can always use delegation to get what you want. To make your blog or personal home page your OpenID URL, insert the following in the head portion (the part that falls between <head> and </head> of an HTML page) of your blog or any page that you own.
<link rel="openid.server" href="http://pip.verisignlabs.com/server" />
<link rel="openid.delegate" href="http://johndoe.pip.verisignlabs.com/" />

This will only work with pages that you completely own and whose source you control. (There is a WordPress plug-in that gives delegating capability to your wordpress.com blog, but we will not go into that here.) The first URL is your OpenID server. The second URL is your OpenID URL -- either the one you host yourself or the one provided by a third party. The requesting site discovers your OpenID and correctly authenticates you. With this approach, you can switch providers transparently. At the risk of repeating: test your new personalized URL before you start using it. Note that the 'openid.server' URL may vary depending on the PIP. To get the name of your PIP's OpenID server, use the testing service[7], which reports the correct URL to use in the "openid.server" part of your delegation markup.

Rolling Your Own

If you are paranoid about entrusting the management of your digital identity to another web site, and also have the technical smarts to match, there are ways you can become your own PIP[5][6]. If you are tech-savvy, you cannot fail to appreciate the elegance of the OpenID architecture and the way it lets control stay where it should -- with you.

Account Management -- Lite?

OpenID makes life easier for site visitors. But what about the site and the domain administrators? If administrators decide to go the OpenID way[3], it lightens their load by taking away a major part of the chore of membership administration and authentication. As a bonus, it also potentially opens up a site to the entire community of net users who have OpenIDs or are getting one.

Security and Reliability

As the wisecrack goes -- if you want complete security, you should unplug from the Internet.
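A quick technical aside before the precautions. The "discovery" step mentioned in the delegation section is entirely mechanical: the relying site fetches the page at your OpenID URL, reads the two link tags out of its head, and then redirects your browser to the provider it found there with a sign-in request. The following Python sketch illustrates both steps, using the illustrative URLs from this article; the Propeller return and trust-root paths are hypothetical, and a real site would fetch your page over HTTP and use a full OpenID library[4] (which also supports Yadis/XRDS discovery) rather than this hand-rolled version.

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class OpenIDLinkParser(HTMLParser):
    """Collect the openid.server / openid.delegate <link> tags from a page."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if a.get("rel") in ("openid.server", "openid.delegate"):
                self.links[a["rel"]] = a.get("href")

# Step 1: discovery -- parse the delegation tags out of your page's head.
# (Inline sample page; a relying site would download this from your URL.)
page = """<html><head>
<link rel="openid.server" href="http://pip.verisignlabs.com/server" />
<link rel="openid.delegate" href="http://johndoe.pip.verisignlabs.com/" />
</head><body>My home page</body></html>"""

parser = OpenIDLinkParser()
parser.feed(page)
server = parser.links["openid.server"]
identity = parser.links["openid.delegate"]

# Step 2: redirect -- send the browser to the provider with an OpenID 1.1
# checkid_setup request; the provider authenticates you interactively and
# sends your browser back to openid.return_to with a signed response.
redirect_url = server + "?" + urlencode({
    "openid.mode": "checkid_setup",
    "openid.identity": identity,
    "openid.return_to": "http://www.propeller.com/openid/return",
    "openid.trust_root": "http://www.propeller.com/",
})
print(redirect_url)
```

This is why delegation lets you switch providers transparently: the relying site never hard-codes your PIP, it simply follows whatever openid.server tag your page advertises at the moment it looks.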
On a serious note, there are some precautions you have to take while using OpenID, and they are no different from the precautions you would take for anything else associated with your identity, say your passport or your credit card. Remember to enter your password only on the Identity Provider's site and nowhere else, and be alert to phishing; this explains why WordPress asks you to log in explicitly rather than taking you directly to its authentication page. Also, never use your e-mail ID handle as your OpenID name; use a different one.

Using OpenID has its flip side, too. Getting your OpenID from a provider potentially lays your browsing habits open to tracking. You can get around this by being your own PIP, delegating from your own domain, or creating a PIP profile under an alias. There is also the possibility that your OpenID provider goes out of service or, worse, out of business. It is thus important to choose a reliable identity provider. There are sites that allow you to associate multiple OpenIDs with your account, and perhaps this can be a way forward to popularize OpenID and to allay any fears of getting locked in with a single vendor and locked out of your identity in the process.

Your Call

There are many sites today that are not OpenID-ready. There are some sites that allow only OpenID sign-ons. However, if you see the elegance of the OpenID mechanism and the convenience it provides both site administrators and members, you might agree that its time has come. Get an OpenID if you do not have one. Convince your friends to get theirs. And if you run an online community, or are a member of one, throw your weight around to ensure that your site also provides an OpenID sign-on.

References

[1] http://wiki.openid.net/OpenIDServers is a list of ID providers.
[2] http://blogs.zdnet.com/digitalID/?p=78 makes a strong case for OpenID. Read it to get a good perspective on the subject.
[3] http://www.plaxo.com/api/openid_recipe is a soup-to-nuts tutorial on how to enable your site for OpenID authentication, or migrate to OpenID from your current site-specific authentication scheme.
[4] Check out http://www.openidenabled.com/php-openid/ if you are looking for software libraries to OpenID-enable your site.
[5] http://www.intertwingly.net/blog/2007/01/03/OpenID-for-non-SuperUsers is a crisp, if intermediate-level, how-to that lets you try out new things in the OpenID space.
[6] http://siege.org/projects/phpMyID/ shows you how you can run your own (yes, your own) PIP server.
[7] http://www.openidenabled.com/resources/openid-test/checkup is a link that helps you test your OpenID. Once you have your OpenID, you can submit it to the form at this URL and get yourself authenticated to see if everything works fine. It does not seem to work with WordPress and Yahoo! OpenIDs as of this writing.
[8] http://www.openid.net is the OpenID site.