
How-To Tutorials - Web Development

jQuery Embedded in Dojo Accordion Panes

Packt
08 Oct 2009
4 min read
Basic Dojo 1.2.3 accordion

In an earlier article I used a version of the toolkit in which the accordion lived in the Widgets package. In the latest version, which I am using here, the accordion is found in dijit/layout. The code is similar to that in the earlier article: you create an accordion container and then place the accordion panes inside the container. In referencing the Dojo library, I use some references from the Dojo Toolkit 1.2.3 installed on my local IIS, and some from the AOL site (which serves the 1.0.0 script).

Listing 1: AccordionOrig.htm: A basic accordion with three panes (Dojo 1.2.3)

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Accordion Pane with jQueries</title>
<style type="text/css">
  @import "http://localhost/Dojo123/dojo123/dijit/themes/tundra/tundra.css";
  @import "http://localhost/Dojo123/dojo123/dojo/resources/dojo.css";
</style>
<script type="text/javascript"
        src="http://o.aolcdn.com/dojo/1.0.0/dojo/dojo.xd.js"
        djConfig="parseOnLoad: true"></script>
<script type="text/javascript">
  dojo.require("dojo.parser");
  dojo.require("dijit.layout.AccordionContainer");
</script>
</head>
<body class="tundra">
<div dojoType="dijit.layout.AccordionContainer" duration="200"
     style="margin-right: 30px; width: 400px; height: 400px; overflow: scroll">
  <!-- Pane 1 -->
  <div dojoType="dijit.layout.AccordionPane" selected="true" title="Page 1"
       style="color:red; overflow: scroll; background-color:#FFFF80;">
    <p>Test 1</p>
  </div>
  <!-- Pane 2 -->
  <div dojoType="dijit.layout.AccordionPane" title="Page 2"
       style="overflow: scroll; background-color:#FFFF80;">
    <p>Test 2</p>
  </div>
  <!-- Pane 3 -->
  <div dojoType="dijit.layout.AccordionPane" title="Page 3"
       style="color:magenta; overflow: scroll; background-color:#FFFF80;">
    <p>Test 3</p>
  </div>
</div>
</body>
</html>

When browsed to, this page displays the accordion shown in Figure 1. It rendered correctly in IE 6.0, Opera 9.1, Firefox 3.0.5, and Safari 3.2.1. The page did not render correctly (all panes completely open) in Google Chrome 1.0.154.43.

Figure 1

jQuery API components used in the article

jQuery 1.3, downloaded from the jQuery site, is used as the source for the script. From the API reference, only two simple components were chosen to be embedded in the panes: the Selector and the Effects. With the slideUp() effect, clicking on the code-sensitive area makes that region of the web page slide up.

H1 selector styled using jQuery

Using jQuery you can selectively apply styles to tags, IDs, and so on. In the example shown in the code that follows, the h1 tag is styled using jQuery.

Listing 2: H1SelectorJQry.htm: Tag styling with jQuery

<html>
<head></head>
<body>
<script type="text/javascript" src="http://localhost/JayQuery/jquery-1.3.min.js"></script>
<h1>jQuery inside a Dojo Accordion Pane</h1>
<script type="text/javascript">
  $(document).ready(function(){
    $("h1").css("color", "magenta");
  });
</script>
</body>
</html>

In the above, the jQuery code (inside the script tags) renders the h1 tag in the color shown in Figure 2.

Figure 2

jQuery effect: slideUp()

When browsed to, the page in Listing 3 displays a pale green 300 x 300 square corresponding to the styling of the p tag. When clicked anywhere inside this square, the square slides up and disappears. This is the slideUp() effect.
Listing 3: p_slideUp.htm: jQuery effect

<html>
<head></head>
<body>
<script type="text/javascript" src="http://localhost/JayQuery/jquery-1.3.min.js"></script>
<div><p style="width:300px; height:300px; background-color:palegreen; color:darkgreen;">Test</p></div>
<script type="text/javascript">
  $("p").click(function () {
    $(this).slideUp();
  });
</script>
</body>
</html>

This page is displayed as shown in Figure 3. When you click anywhere in the pale green area, the p region slides up.

Figure 3
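Neither listing shows the two libraries combined inside an accordion pane. A minimal sketch of that combination, assuming the same script URLs as the listings above (the pane content and the slider id are invented for illustration), could replace Pane 1 of Listing 1:

<!-- Pane 1, now hosting a jQuery-driven paragraph -->
<div dojoType="dijit.layout.AccordionPane" selected="true" title="Page 1"
     style="overflow: scroll; background-color:#FFFF80;">
  <p id="slider" style="background-color:palegreen;">Click to slide up</p>
</div>

<script type="text/javascript" src="http://localhost/JayQuery/jquery-1.3.min.js"></script>
<script type="text/javascript">
  // Wire the handler once the DOM is ready, after Dojo's parser has run.
  $(document).ready(function () {
    $("#slider").click(function () {
      $(this).slideUp(); // the Effects component from Listing 3
    });
  });
</script>

Dojo does not claim the $ symbol, so jQuery 1.3 can run alongside it here without needing noConflict().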

Google Web Toolkit 2: Creating Page Layout

Packt
24 Nov 2010
7 min read
Google Web Toolkit 2 Application Development Cookbook: over 70 simple but incredibly effective practical recipes to develop web applications using GWT with JPA, MySQL, and iReport.

Create impressive, complex browser-based web applications with GWT 2
Learn the most effective ways to create reports with parameters, variables, and subreports using iReport
Create Swing-like web-based GUIs using the Ext GWT class library
Develop applications using browser quirks, JavaScript, and HTML scriptlets from scratch
Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The layout will be as shown in the diagram below:

Creating the home page layout class

This recipe creates a panel to place the menu bar, banner, sidebars, footer, and the main application layout. Ext GWT provides several options to define the top-level layout of the application; we will use BorderLayout. We will add the actual widgets after the layout is fully defined. The subsequent recipes add the menu bar, banner, sidebars, and footer, one by one.

Getting ready

Open the Sales Processing System project.

How to do it...

Let's list the steps required to complete the task:

1. Go to File | New File.
2. Select Java from Categories, and Java Class from File Types. Click on Next.
3. Enter HomePage as the Class Name, and com.packtpub.client as the Package. Click on Finish.
4. Inherit the class ContentPanel (press Ctrl + Shift + I to import the package automatically) and add a default constructor:

package com.packtpub.client;

import com.extjs.gxt.ui.client.widget.ContentPanel;

public class HomePage extends ContentPanel {
    public HomePage() {
    }
}

Write the code of the following steps in this constructor.

5. Set the size in pixels for the content panel:

setSize(980, 630);

6. Hide the header:

setHeaderVisible(false);

7. Create a BorderLayout instance and set it for the content panel:

BorderLayout layout = new BorderLayout();
setLayout(layout);

8. Create a BorderLayoutData instance and configure it to be used for the menu bar and toolbar:

BorderLayoutData menuBarToolBarLayoutData = new BorderLayoutData(LayoutRegion.NORTH, 55);
menuBarToolBarLayoutData.setMargins(new Margins(5));

9. Create a BorderLayoutData instance and configure it to be used for the left-hand sidebar:

BorderLayoutData leftSidebarLayoutData = new BorderLayoutData(LayoutRegion.WEST, 150);
leftSidebarLayoutData.setSplit(true);
leftSidebarLayoutData.setCollapsible(true);
leftSidebarLayoutData.setMargins(new Margins(0, 5, 0, 5));

10. Create a BorderLayoutData instance and configure it to be used for the main contents, at the center:

BorderLayoutData mainContentsLayoutData = new BorderLayoutData(LayoutRegion.CENTER);
mainContentsLayoutData.setMargins(new Margins(0));

11. Create a BorderLayoutData instance and configure it to be used for the right-hand sidebar:

BorderLayoutData rightSidebarLayoutData = new BorderLayoutData(LayoutRegion.EAST, 150);
rightSidebarLayoutData.setSplit(true);
rightSidebarLayoutData.setCollapsible(true);
rightSidebarLayoutData.setMargins(new Margins(0, 5, 0, 5));

12. Create a BorderLayoutData instance and configure it to be used for the footer:

BorderLayoutData footerLayoutData = new BorderLayoutData(LayoutRegion.SOUTH, 20);
footerLayoutData.setMargins(new Margins(5));

How it works...

Let's now learn how these steps allow us to complete the task of designing the home page layout. The full page (home page) is actually a "content panel" that covers the entire area of the host page.
The content panel is a container with top and bottom components along with separate header, footer, and body sections, which makes it a perfect building block for application-oriented user interfaces. In this example, we place the banner at the top of the content panel. The body section of the content panel is further subdivided into five regions: the menu bar and toolbar at the top, two sidebars on either side, a footer at the bottom, and a large area at the center for contents such as forms and reports.

A BorderLayout instance lays out the container into five regions, namely north, south, east, west, and center. By using BorderLayout as the layout of the content panel, we get five places to add five components. BorderLayoutData is used to specify the layout parameters of each region of a container whose layout is BorderLayout. We have created five instances of BorderLayoutData, to be used in the five regions of the container.

There's more...

Now, let's talk about some general information that is relevant to this recipe.

Setting the size of the panel

The setSize method is used to set the size of a panel. Either of the two overloaded setSize methods can be used: one takes two int parameters, namely width and height; the other takes the same arguments as strings.

Showing or hiding the header in the content panel

Each content panel has a built-in header, which is visible by default. To hide the header, invoke the setHeaderVisible method with false as the argument, as shown in the preceding example.

BorderLayoutData

BorderLayoutData is used to set the layout parameters, such as margin, size, maximum size, minimum size, collapsibility, floatability, split bar, and so on, for a region in a border panel. Consider the following line of code from the example we just saw:

BorderLayoutData leftSidebarLayoutData = new BorderLayoutData(LayoutRegion.WEST, 150);

It creates a variable leftSidebarLayoutData, where the size is 150 pixels and the region is the west of the border panel. leftSidebarLayoutData.setSplit(true) sets a split bar between this region and its neighbors; the split bar allows the user to resize the region. leftSidebarLayoutData.setCollapsible(true) makes the region collapsible, that is, the user will be able to collapse and expand it. leftSidebarLayoutData.setMargins(new Margins(0, 5, 0, 5)) sets a margin where 0, 5, 0, and 5 are the top, right, bottom, and left margins, respectively.

Classes and packages

In the preceding example, the following classes from the Ext GWT library are used:

Class             Package
BorderLayout      com.extjs.gxt.ui.client.widget.layout
BorderLayoutData  com.extjs.gxt.ui.client.widget.layout
ContentPanel      com.extjs.gxt.ui.client.widget
Margins           com.extjs.gxt.ui.client.util
Style             com.extjs.gxt.ui.client

See also

The Adding the banner recipe
The Adding menus recipe
The Creating the left-hand sidebar recipe
The Creating the right-hand sidebar recipe
The Creating main content panel recipe
The Creating the footer recipe
The Using HomePage instance in EntryPoint recipe

Adding the banner

This recipe creates a method that we will use to add a banner to the content panel.

Getting ready

Place the banner image banner.png at the location web/resources/images. You can use your own image or get it from the code sample provided on the Packt Publishing website (www.packtpub.com).

How to do it...
Create the method getBanner:

public ContentPanel getBanner() {
    ContentPanel bannerPanel = new ContentPanel();
    bannerPanel.setHeaderVisible(false);
    bannerPanel.add(new Image("resources/images/banner.png"));
    return bannerPanel;
}

Call the method setTopComponent of the ContentPanel class in the default constructor:

setTopComponent(getBanner());

How it works...

The method getBanner() creates an instance bannerPanel of type ContentPanel. The bannerPanel will just show the image from the location resources/images/banner.png; that's why the header is made invisible by invoking setHeaderVisible(false). An instance of the com.google.gwt.user.client.ui.Image class, which represents the banner image, is added to the bannerPanel. In the default constructor of the HomePage class, the method setTopComponent(getBanner()) is called to set the image as the top component of the content panel.

See also

The Creating the home page layout class recipe
The Adding menus recipe
The Creating the left-hand sidebar recipe
The Creating the right-hand sidebar recipe
The Creating main content panel recipe
The Creating the footer recipe
The Using HomePage instance in EntryPoint recipe

Getting Started with Selenium Grid

Packt
23 Nov 2010
6 min read
Important preliminary points

For this section you will need Apache Ant on the machine that will run Grid instances. You can get it from http://ant.apache.org/bindownload.cgi for Windows and Mac. On Ubuntu you can simply run sudo apt-get install ant1.8, which will install all the relevant items onto your Linux machine. Visit the project site to download Selenium Grid.

Understanding Selenium Grid

Selenium Grid is a version of Selenium that allows teams to set up a number of Selenium instances and then have one central point to send your Selenium commands to. This differs from what we saw in Selenium Remote Control (RC), where we always had to say explicitly where the Selenium RC was, as well as know which browsers that Remote Control could handle. With Selenium Grid, we just ask for a specific browser, and the hub that is part of Selenium Grid routes all the Selenium commands through to the Remote Control you want.

Selenium Grid also allows us, with the help of the configuration file, to assign friendly names to the Selenium RC instances, so that when a test wants to run against Firefox on Linux, the hub will find a free instance and route all the Selenium commands from your test through to the instance registered with that environment. We can see an example of this in the next diagram. We will see how to create tests for this later in the chapter, but for now let's make sure we have all the necessary items ready for the grid.

Checking that we have the necessary items for Selenium Grid

Now that you have downloaded Selenium Grid and Ant, it is always good to run a sanity check on Selenium Grid to make sure that we are ready to go. To do this we run a simple command in a console or Command Prompt. Let's see this in action.

Time for action – doing a sanity check on Selenium Grid

1. Open a Command Prompt or console window.
2. Run the command ant sanity-check. When it is complete you should see something similar to the next screenshot:

What just happened?

We have just checked whether we have all the necessary items to run Selenium Grid. If anything Selenium relied on were missing, the sanity check script would output what was needed so that you could easily correct it. Now that everything is ready, let us start setting up the Grid.

Selenium Grid Hub

Selenium Grid works by having a central point that tests can connect to; commands are then pushed to the Selenium Remote Control instances connected to that hub. The hub has a web interface that tells you about the Selenium Remote Control instances connected to the hub, and whether they are currently in use.

Time for action – launching the hub

Now that we are ready to start working with Selenium Grid, we need to set up the Grid. This is a simple command that we run in the console or Command Prompt.

1. Open a Command Prompt or console window.
2. Run the command ant launch-hub. You should see something similar to the following screenshot:

We can see that this is running in the Command Prompt or console. We can also see the hub running from within a browser, by visiting http://nameofmachine:4444/console, where nameofmachine is the name of the machine with the hub. If it is on your own machine, you can use http://localhost:4444/console. We can see that in the next screenshot:

What just happened?

We have successfully started the Selenium Grid Hub. This is the central point of our tests and Selenium Grid instances.
We saw that when we start Selenium Grid, it shows us which items are available according to the configuration file that ships with the normal install. We then had a look at what the Grid is doing by viewing the hub in a browser, at the URL http://nameofmachine:4444/console, where nameofmachine is the name of the machine hosting the hub. It shows what configured environments the hub can handle, what grid instances are available, and which instances are currently active. Now that we have the hub ready, we can have a look at starting up instances.

Adding instances to the hub

Now that we have successfully started the Selenium Grid Hub, we need to look at how to start adding Selenium Remote Controls to the hub, so that it starts forming the grid of computers that we are expecting. As with everything in Selenium Grid, we need Ant to start the instances that connect. In the next few Time for action sections we will see the different arguments needed to start instances to join the grid.

Time for action – adding a remote control with the defaults

In this section we are going to launch Selenium Remote Control and get it to register with the hub. We are going to assume that the browser you would like it to register for is Firefox, and that the hub is on the same machine as the Remote Control. We will pass in only one required argument: the port that we wish it to run on. When starting instances, we will always need to pass in the port, since Selenium cannot work out whether there are any free ports on the host machine.

1. Open a Command Prompt or console window.
2. Enter the command ant -Dport=5555 launch-remote-control and press Return.

You should see the following in your Command Prompt or console:

And this in the Selenium Grid Hub site:

What just happened?

We have added the first machine to our own Selenium Grid. It used all the defaults in the Ant build script and created a Selenium Remote Control that will take any Firefox requests, located on the same machine as the host of the Selenium Grid. This is a useful way to set up the grid if you just want a large number of Firefox-controlling Selenium Remote Controls.

An Overview of the Node Package Manager

Packt
11 Aug 2011
5 min read
Node Web Development

npm package format

An npm package is a directory structure with a package.json file describing the package. This is exactly what we just referred to as a Complex Module, except npm recognizes many more package.json tags than Node does. The starting point for npm's package.json is the CommonJS Packages/1.0 specification. The documentation for npm's package.json implementation is accessed with the following command:

$ npm help json

A basic package.json file is as follows:

{
  "name": "packageName",
  "version": "1.0.0",
  "main": "mainModuleName",
  "modules": {
    "mod1": "lib/mod1",
    "mod2": "lib/mod2"
  }
}

The file is in JSON format which, as a JavaScript programmer, you should already have seen a few hundred times. The most important tags are name and version. The name will appear in URLs and command names, so choose one that's safe for both. If you want to publish a package in the public npm repository, it's helpful to check whether a particular name is already in use, at http://search.npmjs.org or with the following command:

$ npm search packageName

The main tag is treated the same as in complex modules. It references the module that will be returned when invoking require('packageName'). Packages can contain many modules within themselves, and those can be listed in the modules list. Packages can be bundled as tar-gzip tarballs, especially to send them over the Internet.

A package can declare dependencies on other packages; that way npm can automatically install other modules required by the module being installed. Dependencies are declared as follows:

"dependencies": {
  "foo": "1.0.0 - 2.9999.9999",
  "bar": ">=1.0.2 <2.1.2"
}

The description and keywords fields help people find the package when searching an npm repository (http://search.npmjs.org). Ownership of a package can be documented in the homepage, author, or contributors fields:

"description": "My wonderful package walks dogs",
"homepage": "http://npm.dogs.org/dogwalker/",
"author": "dogwhisperer@dogs.org"

Some npm packages provide executable programs meant to be in the user's PATH. These are declared using the bin tag: a map of command names to the scripts which implement those commands. The command scripts are installed into the directory containing the node executable, using the names given:

"bin": {
  "nodeload.js": "./nodeload.js",
  "nl.js": "./nl.js"
},

The directories tag documents the package directory structure. The lib directory is automatically scanned for modules to load. There are other directory tags for binaries, manuals, and documentation:

"directories": {
  "lib": "./lib",
  "bin": "./bin"
},

The scripts tag lists script commands run at various events in the lifecycle of the package. These events include install, activate, uninstall, update, and more. For more information about script commands, use the following command:

$ npm help scripts

This was only a taste of the npm package format; see the documentation (npm help json) for more.

Finding npm packages

By default, npm modules are retrieved over the Internet from the public package registry maintained at http://npmjs.org. If you know the module name, it can be installed simply by typing the following:

$ npm install moduleName

But what if you don't know the module name? How do you discover the interesting modules? The website http://npmjs.org publishes an index of the modules in that registry, and the http://search.npmjs.org site lets you search that index.
npm also has a command-line search function to consult the same index:

$ npm search mp3
mediatags  Tools extracting for media meta-data tags  =coolaj86 util m4a aac mp3 id3 jpeg exiv xmp
node3p     An Amazon MP3 downloader for NodeJS.       =ncb000gt

Of course, upon finding a module, it's installed as follows:

$ npm install mediatags

After installing a module, one may want to see the documentation, which would be on the module's website. The homepage tag in package.json lists that URL. The easiest way to look at the package.json file is with the npm view command, as follows:

$ npm view zombie
...
{ name: 'zombie',
  description: 'Insanely fast, full-stack, headless browser testing using Node.js',
  ...
  version: '0.9.4',
  homepage: 'http://zombie.labnotes.org/',
  ... }
npm ok

You can use npm view to extract any tag from package.json, like the following, which lets you view just the homepage tag:

$ npm view zombie homepage
http://zombie.labnotes.org/

Using the npm commands

The main npm command has a long list of sub-commands for specific package management operations. These cover every aspect of the lifecycle of publishing packages (as a package author), and downloading, using, or removing packages (as an npm consumer).

Getting help with npm

Perhaps the most important thing is to learn where to turn to get help. The main help is delivered along with the npm command:

$ npm help

For most of the commands, you can access the help text for that command by typing the following:

$ npm help <command>

The npm website (http://npmjs.org/) has a FAQ that is also delivered with the npm software. Perhaps the most important question (and answer) is:

Why does npm hate me?
npm is not capable of hatred. It loves everyone, even you.
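Looping back to the package format for a moment: to make the main tag concrete, here is a minimal sketch (the package and function names are invented for illustration). With "main": "lib/mod1" in package.json, require('packageName') returns whatever lib/mod1.js exports:

// lib/mod1.js: the module named by the "main" tag in package.json
exports.walk = function (dog) {
    return dog + ' has been walked';
};

// consumer code, after running: npm install packageName
var pkg = require('packageName');
console.log(pkg.walk('Rex')); // prints "Rex has been walked"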

Creating a new forum

Packt
19 Aug 2013
6 min read
In the WordPress Administration, click on New Forum, which is a subpage of the Forums menu item on the sidebar. You will be taken to a screen that is quite similar to a WordPress post creation page, but slightly different, with a few extra areas. If you are not familiar with the WordPress post creation page, the following is a list of the page's features:

The Enter Title Here box

The long box at the top of the page is your forum title. On the forum page, this is what will be clicked on, and it also provides the basis for the forum's URL slug, with some changes, as URL slugs generally have to be letters, numbers, and dashes. For example, if your forum title is My Product's Support Section, your slug will probably be my-products-support-section. When you insert the forum title, the URL slug will be generated below. If you wish to change it, click on the yellow highlighted section, edit the slug, and then click on OK.

The Post box

Beneath the title box is the post box. This should contain your forum description, which will be shown beneath your forum's name on the forum index page. You can add rich text to this, such as bold or italicized text, but my advice is to keep it short. One or two lines of text would suffice; otherwise it could make your forum look peculiar.

Forum attributes

Towards the right-hand side of the screen, you should see a Forum Attributes section. bbPress allows you to set a number of different attributes for your created forum. The attributes are explained in detail as follows:

Forum type: Your forum can be one of two types: "Forum" or "Category". A category is a section of the site where you cannot post, but into which forums are grouped. For example, if you have forums for "Football", "Cricket", and "Athletics", you may group them into a "Sport" category. Unless you have a large forum with a number of different areas, you shouldn't need many categories. Normally you would begin with a few forums, and then introduce categories as your forums grow. If you create a category, any forum you create must be a subforum of the category. We will talk about creating subforums later in this article.

Status: Your forum's status indicates whether other users can post in the forum. If the status is "Open", any user can post in the forum. If the forum is "Closed", nobody can contribute other than Keymasters. Unless one of your forums is a "Forum Rules" forum, you would probably keep all forums Open.

Visibility: bbPress allows three types of forum visibility. These, as the names suggest, decide who gets to see the forums. The three options are as follows:

Public: This type allows anybody visiting the site to see the forum and its contents.
Private: This type allows users who are logged in to view and contribute to the forum, but the forum is hidden from users who are not logged in or who are blocked. Private forums are prefixed with the word "Private".
Hidden: This type allows only Moderators and Keymasters to view the forum.

Most sites will probably have the majority of their forums set to Public, with a selection that are Private or Hidden. Having a Hidden forum to discuss forum matters with Administrators or Moderators is usually a good thing, and a Private forum can help encourage people to register on the site.

Parent: You can have subforums of forums. By giving a parent to a forum, you make it a subforum.
An example of this would be a "Travel" forum with subforums dedicated to "Europe", "Australia", and "Asia". Again, you will probably start with just a few forums, but over time you will probably grow your forum to include subforums.

Order: The Order field helps define the order in which your forums are listed. By default, or if unspecified, the order is alphabetical. However, if you give a number, the order of the forums is determined by the Order number, from smallest to largest. It is good to put important forums at the top, and less important forums towards the bottom of the page. It's a good idea to number your orders in multiples of 10, rather than 1, 2, 3, and so on. That way, if you want to add a forum between two existing forums, you can give it a number between the two multiples of 10, thus saving time.

Now that you have set up a forum, click on Publish, and congratulations, you should have a forum!

Editing and deleting forums

Forums are a community, and like all good communities, they evolve over time depending on their users' needs. As such, over time you may need to restructure or delete forums. Luckily, this is easily done. First, click on Forums in the sidebar of the WordPress Administration. You should see a list of all the current forums on your site. If you hover over a forum, two options will appear. The first is Edit, which allows you to edit the forum: a screen similar to the New Forum page will appear, where you can make changes to your forum. The second option is Trash, which moves your forum into the Trash; after a while, it will be deleted from your site. When you click on Trash, you will trash everything associated with your forum (any topics, replies, or tags will be deleted). Be careful!

Summary

Right now, you should have a bustling forum, ably overseen by yourself and maybe even a couple of Moderators. Remember that all I have described so far is how to use bbPress to manage your forum, not how to manage your forum. Each forum will have its own rules and guidelines, and you will eventually learn how to manage your bbPress forum as more and more members join in. A general rule of thumb, though: set out your rules at the start of your forum, welcome change, act quickly on violations, and most importantly, treat your users with respect, as without users you will have a very quiet forum. Remember too that bbPress is a WordPress plugin, and is itself extensible: it can take advantage of plugins and themes, both those specifically designed for bbPress and those that work with WordPress.

Content Delivery in Alfresco 3

Packt
29 Sep 2010
11 min read
Alfresco 3 Web Content Management: create an infrastructure to manage all your web content, and deploy it to various external production systems.

A complete guide to web content creation and distribution
Understand the concepts and advantages of publishing-style Web CMS
Leverage a single installation to manage multiple websites
Integrate Alfresco web applications with external systems

Introduction to content delivery

Alfresco provides a framework for pushing content from a stage (or authoring) server to live and test servers, as shown in the following figure. The Alfresco content production environment produces an approved view of a web project called a snapshot; consider each snapshot a web project version. Alfresco deployment takes a snapshot and pushes it out to either live or test servers.

Consider a sample scenario, as shown in the following diagram, where content from the stage server is deployed to live servers. When snapshot version 2 is deployed to the live servers, the Alfresco deployment engine copies only the files that are new or modified, and removes the files that were deleted, compared to snapshot version 1. The deployment engine is smart in that it touches only a few files, rather than copying all of the files of a web project.

Once snapshot version 2 is live (deployed to the live servers), the editorial staff may work on a future version 3. Say that for some reason there is an issue with snapshot version 2, which is live: you have the option of rolling back to the previous good version, snapshot version 1. You can roll forward or roll back to any specific snapshot version of a web project. This feature is very powerful, even from a legal audit point of view, since it gives you the ability to reproduce the website as of a specific date. Further, the deployment process may be automated so that it happens automatically when content is approved for publishing. The deployment framework provides a flexible and highly configurable system that allows you to tailor it to your requirements. If the Alfresco-supplied components are not suitable, you can plug in your own authenticators, transport implementations, content transformers, and deployment targets.

Live server vs. test server

Alfresco WCM enables previewing content within the stage server environment. After content creation, the editorial staff may preview web pages to verify the content, as well as the look and feel. Similarly, content reviewers and business owners may preview and review the web pages during the workflow process. Because of this powerful feature, you may not need a separate test server to preview and approve content: the stage server itself is used for both authoring and testing, so content is authored and approved on the stage server and then deployed directly to the live servers.

However, there can be situations where you need a separate test server; for example, if you are deploying content to a frontend application outside of Alfresco, such as a PHP or .NET application, or when the virtualization server is not capable of providing the preview. Starting with the 2.2 release, Alfresco introduced the concept of a test server: you deploy content from the Staging Sandbox to the live server, and you deploy content from a User Sandbox or from a workflow to the test server.

Static vs. dynamic delivery model

Within the live or test server environment, you can push content out to a fat filesystem to be served up by Apache or IIS, or you can push your content into another runtime instance of Alfresco. Pushing content to a fat filesystem is also known as static deployment, and is achieved using the Alfresco File System Receiver (FSR). Pushing content to another runtime instance of Alfresco is also known as dynamic deployment, and is achieved using the Alfresco Server Receiver (ASR). In static deployment, the web pages are already rendered (or "baked") before deployment. In dynamic deployment, since the content lives in a runtime instance of Alfresco, the web pages are generated (or "fried") at runtime. The following is a summary of the static and dynamic delivery models:

                       Static "bake" model    Dynamic "fry" model
Delivery technology    Web servers            Application servers
Page compositing       Submission time        Request time
Content deployed to    File system            Alfresco runtime
Content search         Not supported          Supported out of the box
Content security       Not supported          Supported out of the box
Personalization        Limited                Unlimited
Performance            Ultimate               Less than the "bake" model

You can consider a hybrid deployment (both static and dynamic) for some business applications. You can define certain static content of the web project, such as images, videos, and scripts, to be deployed to the filesystem, and certain dynamic content, such as web pages, to be deployed to the Alfresco runtime. This approach gives you good performance as well as personalized, dynamically changing content in a production environment.

FSR for static delivery

A File System Receiver (FSR) must be installed and configured on each live or test server that is to receive published static content from the Alfresco staging server. The FSR is a small, standalone server that receives updates from an Alfresco repository running Web Content Management; content is published to a fat filesystem. The published fat files will typically be served by a web server such as Apache for static content, or an application server such as Tomcat, JBoss, or IIS for web applications (WARs, PHP files, and so on). The FSR requires filesystem access and must run as a user with appropriate rights to the target filesystem. It is a standalone Java daemon (no Tomcat or other app server required) with minimal resource requirements. The FSR supports the invocation of custom Java code and/or programs; therefore, it can be used to perform additional post-deployment tasks such as search engine indexing, pushing content to a Content Delivery Network (CDN), or replicating content to other systems or repositories. The destination file system receiver has to be running with its RMI registry port and service port (44100 and 44101 by default) open.

Installing FSR

If you refer to SourceForge at http://sourceforge.net/projects/alfresco/files/, you will notice three different downloads of the FSR: a Microsoft Windows installer file (Alfresco-DeploymentCommunity-3.3-Setup.exe), a Linux installer file (Alfresco-DeploymentCommunity-3.3-Linux-x86-Install) for automatic installation, and a ZIP file (alfresco-community-deployment-3.3.zip) for manual installation. I prefer using the ZIP file and manually installing the standalone deployment receiver; both the Windows and Linux installers have certain limitations, as they do not let you configure various deployment targets.
Unzip the deployment ZIP file into a convenient location on a live or test server (it does not make its own directory). Notice a file named deployment.properties, which contains the configuration information; the deployment folder includes the default target information. To configure the filesystem receiver, open the deployment.properties file in the text editor of your choice and choose locations for each of the following:

; filesystem receiver configuration
deployment.filesystem.datadir=D:/07_MUN_WORK/alfresco_book_wcm_32e/deployment-data/depdata
deployment.filesystem.logdir=D:/07_MUN_WORK/alfresco_book_wcm_32e/deployment-data/deplog
deployment.filesystem.metadatadir=D:/07_MUN_WORK/alfresco_book_wcm_32e/deployment-data/depmetadata
deployment.filesystem.autofix=true
deployment.filesystem.errorOnOverwrite=false

; Deployment Engine configuration
deployment.rmi.port=44100
deployment.rmi.service.port=44101

; Stand alone deployment server specific properties
deployment.user=admin
deployment.password=admin

deployment.filesystem.datadir: The location in which the filesystem deployment receiver stores deployed files during a deployment, before committing them to their final locations.

deployment.filesystem.logdir: The location in which the filesystem deployment receiver stores deployment-time log data.

deployment.filesystem.metadatadir: The location in which the filesystem deployment receiver stores metadata about deployed content.

deployment.filesystem.autofix: The filesystem deployment target can either issue an error upon detecting a problem or fix the problem automatically. The autofix parameter controls whether the File System Deployment Target will attempt to fix the metadata itself or just issue a warning. Set the value to true to fix, or false to not fix.

deployment.filesystem.errorOnOverwrite: The filesystem deployment target can issue an error upon overwriting files. Set the value to false to overwrite files, which is needed when updating existing files.

deployment.rmi.port: The port number to use for the RMI registry. Choose this so as not to conflict with any other services; by default, the standalone deployment receiver uses 44100.

deployment.rmi.service.port: The port number to use for the RMI service. Choose this so as not to conflict with any other services; by default, this is 44101.

Note that while specifying directory locations on Microsoft Windows, either use forward slashes or escape the backslashes, for example C:/dir1/dir2 or C:\\dir1\\dir2.

Configuring your deployment targets

You can configure as many target filesystem receivers as you need on a single live or test server. By default, a single filesystem receiver is defined, with simple configuration via deployment.properties. Deployment targets are placed in the deployment folder with the filename deployment/*target.xml; to define more targets, follow the pattern of deployment/default-target.xml. There are two steps involved:

1. Definition of your target information in the deployment.properties file.
2. Registration of your target with the deployment engine using an XML file.

Let's create a deployment target for the CIGNEX website and name it cignex-live1. As the first step in configuring the filesystem receiver, open the deployment.properties file in the text editor of your choice and add the cignex-live1 filesystem target configuration as follows:

; cignex-live1 filesystem target configuration
deployment.filesystem.cignex-live1.metadatadir=${deployment.filesystem.metadatadir}/cignex-live1
deployment.filesystem.cignex-live1.rootdir=D:/07_MUN_WORK/alfresco_book_wcm_32e/deployment-data/targets/cignex-live1
deployment.filesystem.cignex-live1.name=cignex-live1
deployment.filesystem.cignex-live1.user=admin
deployment.filesystem.cignex-live1.password=admin

Now, to register this new target, you need to create a target XML file in the deployment folder. You can refer to an existing target file, default-target.xml, in the deployment folder for more information. Copy deployment/default-target.xml to deployment/cignex-live1-target.xml, open deployment/cignex-live1-target.xml in your text editor of choice, and replace the keyword default with the keyword cignex-live1. With these two simple steps, you have configured a new target named cignex-live1.

Starting and stopping the deployment receiver

To run the receiver, execute deploy_start.sh (or deploy_start.bat) as the appropriate user on that server; remember, this user will be the owner of the deployed content. To stop the receiver, execute deploy_stop.sh (or deploy_stop.bat).

Using FSR from Alfresco WCM staging

Now that the FSR is configured and running, you can use it from Alfresco staging to deploy content.

Configuring a web project to use FSR

The following are the steps to configure a web project to use an FSR:

1. Navigate to Company Home | Web Projects | <web project name>.
2. Select Edit Web Project Settings from the Action menu.
3. Click on Next to reach the Configure Deployment Servers window.
4. Click on the Add Deployment Receiver link, as shown in the following screenshot.
5. Fill out the form as needed. The minimum required fields, assuming default settings, are the Host name where the FSR is located and the Target Name.

The following table describes each of the FSR configuration fields:

Type: Live Server or Test Server. You deploy content from the Staging Sandbox to a live server, and from a User Sandbox or a workflow to a test server.
Display Name: A descriptive label for the server, used by the UI.
Display Group: Deployment receivers configured with the same Display Group name are treated as one batch during deployment.
Transport Name: Name of the network protocol connection to the remote filesystem receiver. By default it is RMI.
Host: The host name of the destination server; can be a name or an IP address.
Port: The RMI port to connect to on the destination server.
URL: The runtime URL of the destination server. Can be used to preview the deployment, upon a successful deployment.
User Name: The username used to connect to the destination server.
Password: The password used to connect to the destination server.
Source Path: The path of the folder to deploy, for example /ROOT/site1.
Excludes: A single regular expression (multiple rules can be defined within the expression) of items to exclude from the deployment, for example .*\.jpg$|.*\.gif$.
Target Name: The name of a target to deploy to, as configured in the FSR.
Include in Auto Deployment: If checked, this target will be included in auto deployment.

Click on the Add and Finish buttons to complete the configuration.

Search Engines in ColdFusion

Packt
23 Oct 2009
5 min read
Built-in search engine

Verity comes packaged with ColdFusion, and the incredible power of this tool is one of the reasons people pay for ColdFusion: it is one of the most powerful standalone commercial search engines. Some of the biggest companies in the world have expanded their internal services with the help of the Verity tool that we will learn about here.

To start, we must create collections. Building search abilities is a three-step process, and there is a standard ColdFusion tag to help us with each of these functions:

1. Create collections
2. Index the collections
3. Search the collections

Collections can contain information about web pages and binary documents, and can even work as a powerful way to search cached query result information. Many document formats are supported. In the real business world, the latest bleeding-edge solutions will still store previous versions; archived and shared documents should be stored in appropriate formats and versions that can be searched.

Creating a collection

The first thing is to make our collection. See the ColdFusion Administrator under Data & Services. Here, we will be able to add collections and edit existing collections. There is one default collection included in ColdFusion installations: the bookclub demonstration application data. We will be creating a collection of PDF documents for this lesson; we have placed a collection of ColdFusion, Flex, and some Fusion Authority Quarterly periodicals in a directory for indexing. Here is the information screen for adding the collection through the administrator. We choose to select the Enable Category Support option. Also, there are libraries available for multiple languages, if that is appropriate for a collection.

We now see that there is a new collection for our devdocs. There are four icons for working with this collection; they are, from right to left, the index, optimize, purge, and remove actions. The Name link takes us to the index action. The collection screen gives us the number of documents present and the size of the index file on the server. The screen shows the details of the index: when it was last modified, the language in which it is stored, its categories, and the actual path where the index is stored.

Here is a code version of creating a collection that achieves the same thing. This means it is possible to create an entire administrative interface to manage collections; it is also possible to move from tags to objects, and wrap up all the functions in that style:

<cfcollection action="create"
    collection="devdocs"
    path="C:\ColdFusion8\verity\collections\documents" />

If we have categories in our collection, and we want to get a list of the categories, then the following code can be used:

<cfcollection action="categoryList"
    collection="bookClub"
    name="myCats" />
<cfdump var="#myCats#">

Indexing a collection

We can do this through the administration interface, but here we will do it as shown in the following screenshot. This is a limited directory that we have used as an example for searching. This is the result of the devdocs submitted above: 12 documents, with a search collection of size 4,611 KB. Now, we will look at how to do the same indexing using code, and build the index outside the administrator interface. This requires the collection to be built before we try to index files into it.
The creation of the collection can be done either inside the administration interface or in code. It should also be noted that ColdFusion includes a security feature called Sandbox Security: these three core tags for Verity searching, among many others, can be blocked if you find that better for your environment. Just consider what is actually getting indexed and what needs to be searched; hopefully, documents will be secured correctly and it will not be an issue.

When we make an index, we have to decide whether or not to use a recursive search. A recursive search means that all the subdirectories in a document or web page search will be included in our search. It should also be noted that the service will not index other websites; it is for indexing this server only.

<cfindex action="refresh"
    collection="devdocs"
    recurse="true"
    type="path"
    extensions=".html .htm .cfm .cfml"
    key="C:\inetpub\wwwroot\documents"
    urlpath="http://localhost/documents/" />
Your collection has been indexed.

It is important to note that there is no output from this tag, so we need to put some text on the screen to let the person using the site know that the task has been completed. If we want to index a single file rather than a whole directory path, we can do it with this code:

<cfindex action="refresh"
    collection="devdocs"
    type="file"
    extensions=".pdf"
    key="C:\inetpub\wwwroot\documents\ColdFusion\cf8_devguide.pdf"
    urlpath="http://localhost/documents/ColdFusion/" />
Your collection has been indexed.

Web Application Testing

Packt
14 Nov 2014
15 min read
This article is written by Roberto Messora, the author of the book Web App Testing Using Knockout.JS. It gives an overview of various design patterns used in web application testing, and also covers web development using jQuery.

Presentation design patterns in web application testing

The Web has changed a lot since HTML5 made its appearance. We are witnessing a gradual shift from classical full server-side web development to a new architectural asset that moves much of the application logic to the client side. The general objective is to deliver rich internet applications (commonly known as RIAs) with a desktop-like user experience. Think about web applications such as Gmail or Facebook: if you maximize your browser, they look like complete desktop applications in terms of usability, UI effects, responsiveness, and richness.

Once we have established that testing is a pillar of our solutions, we need to understand the best way to proceed, in terms of software architecture and development. In this regard, it's very important to determine the basic design principles that allow a proper approach to unit testing. In fact, even though HTML5 is a recent achievement, HTML in general and JavaScript are technologies that have been in use for quite some time. The problem here is that many developers tend to approach modern web development in the same old way. This is a grave mistake, because back then, client-side JavaScript development was much underrated and mostly confined to simple UI graphics management.

Client-side development has historically been driven by libraries such as Prototype, jQuery, and Dojo, whose primary feature is DOM (HTML Document Object Model, in other words HTML markup) management. They can work as-is in small web applications, but as soon as these grow in complexity, the code base starts to become unmanageable and unmaintainable. We can't really think that we can continue to develop JavaScript in the same way we did 10 years ago. In those days, we only had to apply some dynamic UI transformations; today we have to deliver complete working applications. We need a better design, but most of all, we need to reconsider client-side JavaScript development and apply advanced design patterns and principles.

jQuery web application development

JavaScript is the programming language of the web, but its native DOM API is rudimentary: we have to write a lot of code to manage and transform HTML markup to bring the UI to life with dynamic user interaction. The lack of full standardization also means that the same code can work differently (or not work at all) in different browsers. Over the past years, developers decided to resolve this situation: JavaScript libraries such as Prototype, jQuery, and Dojo came to light.

jQuery is one of the best-known open source JavaScript libraries, first published in 2006. Its huge success is mainly due to:

A simple and detailed API that allows you to manage HTML DOM elements
Cross-browser support
Simple and effective extensibility

Since its appearance, it has been used by thousands of developers as a foundation library, and a large amount of JavaScript code all around the world has been built with jQuery in mind. The jQuery ecosystem grew very quickly, and nowadays there are plenty of jQuery plugins that implement virtually everything related to web development.
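A hypothetical fragment in this classic jQuery style (the element IDs and the validation logic are invented for illustration) shows the shape of code the next section examines:

$(document).ready(function () {
    $("#saveButton").click(function () {
        // Presentation logic lives inside the event callback and is
        // tightly coupled to concrete page elements via live DOM lookups.
        var name = $("#nameInput").val();
        if (name.length === 0) {
            $("#errorLabel").text("Name is required").show();
            return;
        }
        $("#errorLabel").hide();
        $("#greeting").text("Hello, " + name);
    });
});

Every line here depends on a jQuery("something") call against a live page, a point that matters for what follows.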
Despite its simplicity, a typical jQuery web application is virtually untestable. There are two main reasons:

User interface items are tightly coupled with the user interface logic
User interface logic spans event handler callback functions

The real problem is that everything passes through a jQuery reference, which is a jQuery("something") call. This means that we always need a live reference to the HTML page, otherwise these calls will fail, and this is also true for a unit test case. We can't think about testing a piece of user interface logic by running an entire web application! Large jQuery applications tend to be monolithic, because jQuery itself allows callback function nesting far too easily and doesn't really promote any particular design strategy; the result is often spaghetti code. jQuery is a good option if you want to develop a specific custom plugin, and we will continue to use the library for pure user interface effects and animations, but we need something different to maintain the logic of a large web application.

Presentation design patterns

To move a step forward, we need to decide what's the best option in terms of testable code. The main topic here is application design: in other words, how we can build our code base following a general guideline that keeps testability in mind. In software engineering there's nothing better than not reinventing the wheel; we can rely on a safe and reliable resource: design patterns. Wikipedia provides a good definition of the term design pattern (http://en.wikipedia.org/wiki/Software_design_pattern):

In software engineering, a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. A design pattern is not a finished design that can be transformed directly into source or machine code. It is a description or template for how to solve a problem that can be used in many different situations. Patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.

There are tens of specific design patterns, but we need something related to the presentation layer, because that is where a JavaScript web application belongs. The most important aspect, in terms of design and maintainability of a JavaScript web application, is a clear separation between the user interface (basically, the HTML markup) and the presentation logic (the JavaScript code that turns a web page dynamic and responsive to user interaction). This is what we learned digging into a typical jQuery web application.

At this point, we need to identify an effective implementation of a presentation design pattern and use it in our web applications. In this regard, I have to admit that the JavaScript community has done an extraordinary job in the last two years: up to the present time, there are literally tens of frameworks and libraries that implement a particular presentation design pattern. We only have to choose the framework that fits our needs; for example, we can start by taking a look at the TodoMVC website (http://todomvc.com/), an open source project that shows how to build the same web application using a different library each time. Most of these libraries implement a so-called MV* design pattern (Knockout.JS does too). MV* means that every such design pattern belongs to a broader family with a common root: Model-View-Controller.
The MVC pattern is one of the oldest and most enduring architectural design patterns: originally designed by Trygve Reenskaug working on Smalltalk-80 back in 1979, it has been heavily refactored since then. Basically, the MVC pattern enforces the isolation of business data (Models) from user interfaces (Views), with a third component (Controllers) that manages the logic and user input. It can be described as follows (Addy Osmani, Learning JavaScript Design Patterns, http://addyosmani.com/resources/essentialjsdesignpatterns/book/#detailmvc):

A Model represented domain-specific data and was ignorant of the user-interface (Views and Controllers). When a model changed, it would inform its observers
A View represented the current state of a Model. The Observer pattern was used for letting the View know whenever the Model was updated or modified
Presentation was taken care of by the View, but there wasn't just a single View and Controller - a View-Controller pair was required for each section or element being displayed on the screen
The Controller's role in this pair was handling user interaction (such as key-presses and actions, e.g. clicks), making decisions for the View

This general definition has slightly changed over the years, not only to adapt its implementation to different technologies and programming languages, but also because changes have been made to the Controller part. Model-View-Presenter and Model-View-ViewModel are the best-known alternatives to the MVC pattern.

MV* presentation design patterns are a valid answer to our need: an architectural design guideline that promotes separation of concerns and isolation, the two most important factors needed for software testing. In this way, we can separately test models, views, and the third actor, whatever it is (a Controller, Presenter, ViewModel, and so on). On the other hand, adopting a presentation design pattern doesn't mean at all that we cease to use jQuery. jQuery is a great library; we will continue to add its reference to our pages, but we will also integrate its use wisely in a better design context.

Knockout.JS and Model-View-ViewModel

Knockout.JS is one of the most popular JavaScript presentation libraries; it implements the Model-View-ViewModel design pattern. The most important concepts featured in Knockout.JS are:

An HTML fragment (or an entire page) is considered as a View. A View is always associated with a JavaScript object called a ViewModel: this is a code representation of the View that contains the data (model) to be shown (in the form of properties) and the commands that handle View events triggered by the user (in the form of methods).
The association between View and ViewModel is built around the concept of data-binding, a mechanism that provides automatic bidirectional synchronization:
In the View, it's declared by placing data-bind attributes into DOM elements; the attributes' value must follow a specific syntax that specifies the nature of the association and the target ViewModel property/method.
In the ViewModel, methods are considered as commands, and properties are defined as special objects called observables: their main feature is the capability to notify every state modification

A ViewModel is a pure-code representation of the View: it contains data to show and commands that handle events triggered by the user.
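To make these concepts concrete, here is a minimal View/ViewModel sketch. It uses only documented Knockout.JS APIs (ko.observable, ko.computed, and ko.applyBindings), while the property and method names are our own illustrative choices. The View is nothing but markup with data-bind attributes:

<input data-bind="value: firstName" />
<input data-bind="value: lastName" />
<span data-bind="text: fullName"></span>
<button data-bind="click: clear">Clear</button>

The ViewModel is pure JavaScript, with no markup references:

function UserViewModel() {
    var self = this;
    self.firstName = ko.observable('John');
    self.lastName = ko.observable('Doe');
    // a computed observable: re-evaluates whenever a dependency changes
    self.fullName = ko.computed(function () {
        return self.firstName() + ' ' + self.lastName();
    });
    // a command: a plain method bound to a View event
    self.clear = function () {
        self.firstName('');
        self.lastName('');
    };
}

ko.applyBindings(new UserViewModel());

Note that the ViewModel never touches the DOM: the data-bind attributes are the only contact points between the two layers.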
It's important to remember that a ViewModel shouldn't have any knowledge about the View and the UI: pure-code representation means that a ViewModel shouldn't contain any reference to HTML markup elements (buttons, textboxes, and so on), but only pure JavaScript properties and methods. Model-View-ViewModel's objective is to promote a clear separation between View and ViewModel; this principle is called Separation of Concerns. Why is this so important? The answer is quite easy: because in this way a developer can achieve a real separation of responsibilities: the View is only responsible for presenting data to the user and reacting to user input; the ViewModel is only responsible for holding the data and providing the presentation logic.

The following diagram from Microsoft MSDN depicts the existing relationships between the three pattern actors very well (http://msdn.microsoft.com/en-us/library/ff798384.aspx):

Thinking about a web application in these terms leads to ViewModel development without any reference to DOM element IDs or any other markup-related code, as in the classic jQuery style. The two main reasons behind this are:

As the web application becomes more complex, the number of DOM elements increases, and it is not uncommon to reach a point where it becomes very difficult to manage all those IDs with the typical jQuery fluent interface style: the JavaScript code base turns into a spaghetti code nightmare very soon.
A clear separation between View and ViewModel allows a new way of working: JavaScript developers can concentrate on the presentation logic; UX experts, on the other hand, can provide HTML markup that focuses on user interaction and how the web application will look. The two groups can work quite independently and agree on the basic contact points using the data-bind tag attributes.

The key feature of a ViewModel is the observable object: a special object that is capable of notifying its state modifications to any subscribers. There are three types of observable objects:

The basic observable that is based on JavaScript data types (string, number, and so on)
The computed observable that is dependent on other observables or computed observables
The observable array that is a standard JavaScript array, with a built-in change notification mechanism

On the View side, we talk about declarative data-binding because we need to place the data-bind attributes inside HTML tags, and specify what kind of binding is associated with a ViewModel property/command.

MVVM and unit testing

Why is a clear separation between the user interface and presentation logic a real benefit? There are several possible answers, but if we want to remain in the unit testing context, we can assert that we can apply proper unit testing specifications to the presentation logic, independently of the concrete user interface. In Model-View-ViewModel, the ViewModel is a pure-code representation of the View. The View itself must remain a thin and simple layer, whose job is to present data and receive user interaction. This is a great scenario for unit testing: all the logic in the presentation layer is located in the ViewModel, and this is a JavaScript object. We can definitely test almost everything that takes place in the presentation layer.
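As a quick sketch of what such a test could look like, consider the UserViewModel from the previous example; the assert() helper is a stand-in for whatever test framework we adopt (QUnit, Jasmine, and so on), and no browser page is involved:

function assert(condition, message) {
    if (!condition) { throw new Error('FAILED: ' + message); }
}

var vm = new UserViewModel();
vm.firstName('Jane');
assert(vm.fullName() === 'Jane Doe', 'fullName tracks firstName');
vm.clear();
assert(vm.fullName() === ' ', 'clear() empties both names');

The specification runs in isolation: we instantiate the ViewModel, drive it through its observables and commands, and check the outcome.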
Ensuring a real separation between View and ViewModel means that we need to follow a particular development procedure:

Think about a web application page as a composition of sub-views: we need to embrace the divide et impera principle when we build our user interface; the more specific and simple the sub-views are, the more easily we can test them. Knockout.JS supports this kind of scenario very well.
Write a class for every View and a corresponding class for its ViewModel: the first one is the starting point to instantiate the ViewModel and apply bindings; after all, the user interface (the HTML markup) is what the browser loads initially.
Keep each View class as simple as possible, so simple that it might not even need to be tested; it should be just a container for:
    Its ViewModel instance
    Sub-View instances, in case of a bigger View that is a composition of smaller ones
    Pure user interface code, in case of particular UI JavaScript plugins that cannot take place in the ViewModel and simply provide graphical effects/enrichments (in other words, they don't change the logical functioning)

If we look carefully at a typical ViewModel class implementation, we can see that there are no HTML markup references: no tag names, no tag identifiers, nothing. All of these references are present in the View class implementation. In fact, if we were to test a ViewModel that holds a direct reference to a UI item, we would also need a live instance of the UI; otherwise, accessing that item reference would cause a null reference runtime error during the test. This is not what we want, because it is very difficult to test presentation logic while having to deal with a live instance of the user interface: there are many reasons, from the need for a web server that delivers the page, to the need for a separate instance of a web browser to load the page. This is not very different from debugging a live page with Mozilla Firebug or Google Chrome Developer Tools. Our objective is test automation, but we also want to run the tests easily and quickly in isolation: we don't want to run the page in any way!

An important application asset is the event bus: this is a global object that works as an event/message broker for all the actors that are involved in the web page (Views and ViewModels). The event bus is one of the alternative forms of the Event Collaboration design pattern (http://martinfowler.com/eaaDev/EventCollaboration.html):

Multiple components work together by communicating with each other by sending events when their internal state changes (Martin Fowler)

The main aspect of an event bus is that:

The sender is just broadcasting the event, the sender does not need to know who is interested and who will respond, this loose coupling means that the sender does not have to care about responses, allowing us to add behaviour by plugging new components (Martin Fowler)

In this way, we can keep all the different components of a web page completely separated: every View/ViewModel couple sends and receives events, but they don't know anything about all the other couples. Again, every ViewModel is completely decoupled from its View (remember that the View holds a reference to the ViewModel, but not the other way around) and in this case, it can trigger some events in order to communicate something to the View. Concerning unit testing, loose coupling means that we can test our presentation logic a single component at a time, simply ensuring that events are broadcast when they need to be.
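A minimal event bus can be sketched in a few lines (the object and topic names here are our own illustration of the pattern, not a specific library):

var EventBus = {
    topics: {},
    subscribe: function (topic, listener) {
        (this.topics[topic] = this.topics[topic] || []).push(listener);
    },
    publish: function (topic, data) {
        (this.topics[topic] || []).forEach(function (listener) {
            listener(data);
        });
    }
};

// one component reacts to an event...
EventBus.subscribe('user:selected', function (data) {
    console.log('selected user ' + data.id);
});

// ...another broadcasts it, without knowing who is listening
EventBus.publish('user:selected', { id: 42 });

The sender and the receiver never hold a reference to each other, which is exactly the loose coupling described above.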
Event buses can also be mocked, so we don't need to rely on the concrete implementation. In real-world development, the production process is an iterative task. Usually, we need to:

Define a View markup skeleton, without any data-bind attributes.
Start developing classes for the View and the ViewModel, which are empty at the beginning.
Start developing the presentation logic, adding observables to the ViewModel and their respective data bindings in the View.
Start writing test specifications.

This process repeats, adding more presentation logic at every iteration, until we reach the final result.

Summary

In this article, you learned about web development using jQuery, presentation design patterns, and unit testing using MVVM.

Resources for Article:

Further resources on this subject:

Big Data Analysis [Article]
Advanced Hadoop MapReduce Administration [Article]
HBase Administration, Performance Tuning [Article]
Creating our first bot, WebBot

Packt
03 Oct 2013
9 min read
(For more resources related to this topic, see here.)

With the knowledge you have gained, we are now ready to develop our first bot, which will be a simple bot that gathers data (documents) based on a list of URLs and the datasets (fields and field values) that we require. First, let's start by creating our bot package directory. Create a directory called WebBot so that the files in our project_directory/lib directory look like the following:

project_directory
|-- lib
|   |-- HTTP (our existing HTTP package)
|   |   '-- (HTTP package files here)
|   '-- WebBot
|       |-- bootstrap.php
|       |-- Document.php
|       '-- WebBot.php
|-- (our other files)
'-- 03_webbot.php

As you can see, we have a very clean and simple directory and file structure that any programmer should be able to easily follow and understand.

The WebBot class

Next, open the WebBot.php file and add the code from the project_directory/lib/WebBot/WebBot.php file.

In our WebBot class, we first use the __construct() method to pass in the array of URLs (or documents) we want to fetch, and the array of document fields that is used to define the datasets and regular expression patterns. Regular expression patterns are used to populate the dataset values (or document field values). If you are unfamiliar with regular expressions, now would be a good time to study them. Then, in the __construct() method, we verify whether there are URLs to fetch. If there aren't any, we set an error message stating this problem.

Next, we use the __formatUrl() method to properly format the URLs we fetch data from. This method will also set the correct protocol: either HTTP or HTTPS (Hypertext Transfer Protocol Secure). If the protocol is already set for the URL, for example http://www.[dom].com, we ignore setting the protocol. Also, if the class configuration setting conf_force_https is set to true, we force the HTTPS protocol, again unless the protocol is already set for the URL.

We then use the execute() method to fetch data for each URL, set and add the Document objects to the array of documents, and track document statistics. This method also implements fetch delay logic that will delay each fetch by x number of seconds, if set in the class configuration setting conf_delay_between_fetches. We also include logic that only allows distinct URL fetches, meaning that if we have already fetched data for a URL, we won't fetch it again; this eliminates duplicate URL data fetches.

The Document object is used as a container for the URL data, and we can use the Document object to access the URL data, the data fields, and their corresponding data field values. In the execute() method, you can see that we have performed an HTTPRequest::get() request using the URL and our default timeout value, which is set with the class configuration setting conf_default_timeout. We then pass the HTTPResponse object that is returned by the HTTPRequest::get() method to the Document object. The Document object then uses the data from the HTTPResponse object to build the document data.

Finally, we include the getDocuments() method, which simply returns all the Document objects in an array that we can use for our own purposes as we desire. A rough sketch of how this class fits together appears just after the steps below.

The WebBot Document class

Next, we need to create a class called Document that can be used to store document data and field names with their values. To do this, we will carry out the following steps:

We first pass the data retrieved by our WebBot class to the Document class.
Then, we define our document's fields and values using regular expression patterns.
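The following is a rough sketch of the overall shape the WebBot class takes, pieced together from the description above; the method bodies, error handling, and the HTTP package namespace are our assumptions for illustration, not the actual listing:

<?php

namespace WebBot;

class WebBot
{
    // class configuration settings (see the bootstrap file section)
    public static $conf_default_timeout = 30;
    public static $conf_delay_between_fetches = 0;
    public static $conf_force_https = false;
    public static $conf_include_document_field_raw_values = false;

    private $_urls = array();
    private $_fields = array();
    private $_documents = array();

    public function __construct(array $urls, array $fields)
    {
        // verify whether there are URLs to fetch
        if (empty($urls)) {
            // set an error message stating this problem (simplified here)
            trigger_error('No URLs to fetch', E_USER_WARNING);
        }
        $this->_urls = $urls;
        $this->_fields = $fields;
    }

    private function __formatUrl($url)
    {
        // leave the protocol alone if it is already set
        if (!preg_match('#^https?://#', $url)) {
            $url = (self::$conf_force_https ? 'https://' : 'http://') . $url;
        }
        return $url;
    }

    public function execute()
    {
        $fetched = array();
        foreach ($this->_urls as $id => $url) {
            $url = $this->__formatUrl($url);
            if (isset($fetched[$url])) {
                continue; // distinct URL fetches only
            }
            if (self::$conf_delay_between_fetches > 0) {
                sleep(self::$conf_delay_between_fetches); // fetch delay logic
            }
            // the HTTP namespace is assumed from our HTTP package layout
            $response = \HTTP\HTTPRequest::get($url, self::$conf_default_timeout);
            $this->_documents[] = new Document($response, $this->_fields, $id, $url);
            $fetched[$url] = true;
        }
    }

    public function getDocuments()
    {
        return $this->_documents;
    }
}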
Next, add the code from the project_directory/lib/WebBot/Document.php file.

Our Document class accepts the HTTPResponse object that is set in the WebBot class's execute() method, the document fields, and the document ID. In the Document __construct() method, we set our class properties: the HTTPResponse object, the fields (and regular expression patterns), the document ID, and the URL that we used to fetch the HTTP response. We then check whether the HTTP response is successful (status code 200), and if it isn't, we set the error with the status code and message. Lastly, we call the __setFields() method.

The __setFields() method parses out and sets the field values from the HTTP response body. For example, if in our fields we have a title field defined as $fields = ['title' => '<title>(.*)</title>'];, the __setFields() method will add the title field and pull all the values inside <title>...</title> tags from the HTML response body. So, if there were two title tags in the URL data, the __setFields() method would add the field and its values to the document as follows:

['title'] => [
    0 => 'title x',
    1 => 'title y'
]

If we have the WebBot class configuration variable conf_include_document_field_raw_values set to true, the method will also add the raw values (it will include the tags or other strings as defined in the field's regular expression patterns) as a separate element, for example:

['title'] => [
    0 => 'title x',
    1 => 'title y',
    'raw' => [
        0 => '<title>title x</title>',
        1 => '<title>title y</title>'
    ]
]

This is very useful when we want to extract specific data (or field values) from URL data.

To conclude the Document class, we have two more methods as follows:

getFields(): This method simply returns the fields and field values
getHttpResponse(): This method can be used to get the HTTPResponse object that was originally set by the WebBot execute() method

This will allow us to perform logical requests to internal objects if we wish.

The WebBot bootstrap file

Now we will create a bootstrap.php file (at project_directory/lib/WebBot/) to load the HTTP package and our WebBot package classes, and set our WebBot class configuration settings:

<?php

namespace WebBot;

/**
 * Bootstrap file
 *
 * @package WebBot
 */

// load our HTTP package
require_once './lib/HTTP/bootstrap.php';

// load our WebBot package classes
require_once './lib/WebBot/Document.php';
require_once './lib/WebBot/WebBot.php';

// set unlimited execution time
set_time_limit(0);

// set default timeout to 30 seconds
WebBot\WebBot::$conf_default_timeout = 30;

// set delay between fetches to 1 second
WebBot\WebBot::$conf_delay_between_fetches = 1;

// do not use HTTPS protocol (we'll use HTTP protocol)
WebBot\WebBot::$conf_force_https = false;

// do not include document field raw values
WebBot\WebBot::$conf_include_document_field_raw_values = false;

We use our HTTP package to handle HTTP requests and responses. You saw in the previous two sections how our WebBot class uses HTTP requests to fetch the data, and then uses the HTTPResponse object to store the fetched data. That is why we need to include the bootstrap file to load the HTTP package properly.

Then, we load our WebBot package files. Because our WebBot class uses the Document class, we load that class file first.

Next, we use the built-in PHP function set_time_limit() to tell the PHP interpreter that we want to allow unlimited execution time for our script. You don't necessarily have to use unlimited execution time.
However, for testing reasons, we will use unlimited execution time for this example. Finally, we set the WebBot class configuration settings. These settings are used by the WebBot object internally to make our bot work as we desire. We should always make the configuration settings as simple as possible to help other developers understand them. This means we should also include detailed comments in our code to ensure easy usage of the package configuration settings.

We have set up four configuration settings in our WebBot class. These are static and public variables, meaning that we can set them from anywhere after we have included the WebBot class, and once we set them, they will remain the same for all WebBot objects unless we change the configuration variables. If you do not understand the PHP keyword static, now would be a good time to research this subject.

The first configuration variable is conf_default_timeout. This variable is used to globally set the default timeout (in seconds) for all WebBot objects we create. The timeout value tells the HTTPRequest class how long it should continue trying to send a request before stopping and deeming it a bad request, or a timed-out request. By default, this configuration setting value is set to 30 (seconds).

The second configuration variable, conf_delay_between_fetches, is used to set a time delay (in seconds) between fetches (or HTTP requests). This can be very useful when gathering a lot of data from a website or web service. For example, say you had to fetch one million documents from a website. You wouldn't want to unleash your bot on that type of mission without fetch delays, because you could inevitably cause problems for that website due to the massive number of requests. By default, this value is set to 0, or no delay.

The third WebBot class configuration variable, conf_force_https, when set to true, can be used to force the HTTPS protocol. As mentioned earlier, this will not override any protocol that is already set in the URL. If the conf_force_https variable is set to false, the HTTP protocol will be used. By default, this value is set to false.

The fourth and final configuration variable, conf_include_document_field_raw_values, when set to true, will force the Document object to include the raw values gathered from the fields' regular expression patterns. We've discussed these configuration settings in detail in the WebBot Document class section earlier in this article. By default, this value is set to false.

Summary

In this article you learned how to get started with building your first bot using HTTP requests and responses.

Resources for Article:

Further resources on this subject:

Installing and Configuring Jobs! and Managing Sections, Categories, and Articles using Joomla! [Article]
Search Engine Optimization in Joomla! [Article]
Adding a Random Background Image to your Joomla! Template [Article]
CodeIgniter 1.7 and Objects

Packt
27 Nov 2009
9 min read
Objects confused us when we started using CodeIgniter. We came to CodeIgniter through PHP 4, which is a procedural language, not an object-oriented (OO) language. We duly looked up objects and methods, properties and inheritance, and encapsulation, but our early attempts to write CI code were plagued by the error message "Call to a member function on a non-object". We saw it so often that we were thinking of having it printed on a T-shirt. To save the world from a lot of boring T-shirts, this article covers the way in which CI uses objects, and the different ways you can write and use your own objects. Incidentally, we've used "variables/properties" and "methods/functions" interchangeably, as CI and PHP often do. You write "functions" in your controllers, for instance, when an OO purist would call them "methods". You define class "variables" when the purist would call them "properties".

Object-oriented programming

We assume that you have basic knowledge of OOP. You may have learned it as an afterthought to "normal" PHP 4. PHP 4 is not an OO language, though some OO functionality has been stacked on to it. PHP 5 is much better, with an underlying engine that was written from the ground up with OO in mind. You can do most of the basics in PHP 4, and CI manages to do everything it needs internally in either language.

The key thing to remember is that when an OO program is running, there is always one current object (but only one). Objects may call each other or hand over control to each other, in which case the current object changes, but only one of them can be current at any time. The current object defines the scope, in other words, the variables (properties) and methods (functions) that are available to the program at that moment. So it's important to know and control the current object.

PHP, being a mixture of functional and OO programming, also offers the possibility where no object is current. You can start off with a functional program, call an object, let it take charge for a while, and then return control to the program. Luckily, CI takes care of this for you.

The CI super-object

CI works by building one super-object: it runs the entire program as one big object, in order to eliminate scoping issues. When you start CI, a complex chain of events occurs. If you set your CI installation to create a log (in /codeigniter/application/config/config.php, set the $config['log_threshold'] = 4; value; this will generate a log file in /www/CI_system/logs/), you'll see something like this:

1 DEBUG - 2006-10-03 08:56:39 --> Config Class Initialized
2 DEBUG - 2006-10-03 08:56:39 --> No URI present. Default controller set.
3 DEBUG - 2006-10-03 08:56:39 --> Router Class Initialized
4 DEBUG - 2006-10-03 08:56:39 --> Output Class Initialized
5 DEBUG - 2006-10-03 08:56:39 --> Input Class Initialized
6 DEBUG - 2006-10-03 08:56:39 --> Global POST and COOKIE data sanitized
7 DEBUG - 2006-10-03 08:56:39 --> URI Class Initialized
8 DEBUG - 2006-10-03 08:56:39 --> Language Class Initialized
9 DEBUG - 2006-10-03 08:56:39 --> Loader Class Initialized
10 DEBUG - 2006-10-03 08:56:39 --> Controller Class Initialized
11 DEBUG - 2006-10-03 08:56:39 --> Helpers loaded: security
12 DEBUG - 2006-10-03 08:56:40 --> Scripts loaded: errors
13 DEBUG - 2006-10-03 08:56:40 --> Scripts loaded: boilerplate
14 DEBUG - 2006-10-03 08:56:40 --> Helpers loaded: url
15 DEBUG - 2006-10-03 08:56:40 --> Database Driver Class Initialized
16 DEBUG - 2006-10-03 08:56:40 --> Model Class Initialized

At start up, that is, each time a page request is received over the Internet, CI goes through the same procedure. You can trace the log through the CI files:

The index.php file receives a page request. The URL may indicate which controller is required; if not, CI has a default controller (line 2). The index.php file makes some basic checks and calls the codeigniter.php file (codeigniter/codeigniter.php):

require_once BASEPATH.'codeigniter/CodeIgniter'.EXT;

The codeigniter.php file instantiates the Config, Router, Input, URI, and other such classes (see lines 1, and 3 to 9). These are called the base classes—you rarely interact directly with them, but they underlie almost everything CI does.

/*
 * ------------------------------------------------------
 *  Instantiate the base classes
 * ------------------------------------------------------
 */
$CFG =& load_class('Config');
$URI =& load_class('URI');
$RTR =& load_class('Router');
$OUT =& load_class('Output');

The file codeigniter.php tests to see the version of PHP it is running on, and calls Base4 or Base5 (/codeigniter/Base4.php or /codeigniter/Base5.php):

if (floor(phpversion()) < 5)
{
    load_class('Loader', FALSE);
    require(BASEPATH.'codeigniter/Base4'.EXT);
}
else
{
    require(BASEPATH.'codeigniter/Base5'.EXT);
}

The above snippet creates an object—one which ensures that a class has only one instance. Each has a public &get_instance() function. Note the &—this is assignment by reference. So, if you assign using the &get_instance() method, it assigns to the single running instance of the class. In other words, it points to the same pigeonhole. So, instead of setting up a lot of new objects, you start building one super-object, which contains everything related to the framework:

function &get_instance()
{
    return CI_Base::get_instance();
}

A security check:

/*
 * ------------------------------------------------------
 *  Security check
 * ------------------------------------------------------
 *
 *  None of the functions in the app controller or the
 *  loader class can be called via the URI, nor can
 *  controller functions that begin with an underscore
 */
$class  = $RTR->fetch_class();
$method = $RTR->fetch_method();

if ( !class_exists($class)
    OR $method == 'controller'
    OR strncmp($method, '_', 1) == 0
    OR in_array(strtolower($method), array_map('strtolower', get_class_methods('Controller')))
    )
{
    show_404("{$class}/{$method}");
}

The file codeigniter.php instantiates the controller that was requested, or a default controller (line 10). The new class is called $CI:

$CI = new $class();

The function specified in the URL (or a default) is then called and life, as we know it, starts to wake up and happen.
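To see the super-object from the other side, here is a short illustrative snippet (our own example, not taken from the CI source) showing how a library class gets hold of the single running instance and then uses it just as a controller uses $this:

class Example_library
{
    var $CI;

    // PHP 4 style constructor, to match the discussion above
    function Example_library()
    {
        // note the =&, assignment by reference to the one super-object
        $this->CI =& get_instance();
    }

    function count_users()
    {
        // anything loaded into the super-object is reachable through $this->CI
        $this->CI->load->database();
        return $this->CI->db->count_all('users');
    }
}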
Depending on what you wrote in your controller, CI will initialize the classes you need, and "include" functional scripts you asked for. So, in the log, the model class is initialized (line 16). The boilerplate script, which is also shown in the log (line 13), is the one we wrote to contain standard chunks of text. It's a .php file, saved in the folder called scripts. It's not a class—just a set of functions. If you were writing pure PHP, you might use include or require to bring it into the namespace—CI needs to use its own load function to bring it into the super-object.

The concept of namespace or scope is crucial here. When you declare a variable, array, object, and so on, PHP holds the variable name in its memory and assigns a further block of memory to hold its contents. However, problems might arise if you define two variables with the same name. (In a complex site, this is easily done.) For this reason, PHP has several sets of rules. Some of them are as listed:

Each function has its own namespace or scope, and variables defined within a function are usually local to it. Outside the function, they are meaningless.
You can declare global variables, which are held in a special global namespace and are available throughout the program.
Objects have their own namespaces—variables exist inside the object as long as the object exists, and can only be referenced by using the object.

So, $variable, global $variable, and $this->variable are three different things. Remember, $variable and global $variable can't be used in the same scope. So, inside a function you will have to decide if you want to use $variable or global $variable. Particularly before OO, this could lead to all sorts of confusion—you may have too many variables in your namespace (so that conflicting names overwrite each other). You may also find that some variables are just not accessible from whatever scope you happen to be in.

Copying by reference

You may have noticed the function &get_instance() in the previous section. This is to ensure that, as the variables change, the variables of the original class also change. As assignment by reference can be confusing, here's a short explanation. We're all familiar with simple copying in PHP:

$one = 1;
$two = $one;
echo $two;

The previous snippet produces 1, because $two is a copy of $one. However, suppose you reassign $one:

$one = 1;
$two = $one;
$one = 5;
echo $two;

This code still produces $two = 1, because changes made to $one after assigning $two have not been reflected in $two.
This was a one-off assignment of the value that happened to be in variable $one at that time, to a new variable $two. Once that is done, the two variables lead separate lives (in just the same way, if we alter $two, $one doesn't change). In effect, PHP creates two pigeonholes—called $one and $two. A separate value lives in each. You may, on any occasion, make the values equal, but after that each does its own work.

PHP also allows copying by reference. If you add just a simple & to line 2 of the snippet as shown:

$one = 1;
$two =& $one;
$one = 5;
echo $two;

The code now echoes 5: the change we made to $one is reflected in $two. Changing the = to =& in the second line means that the assignment is "by reference". It looks as if there is only one pigeonhole, which has two names ($one and $two). Whatever happens to the contents of the pigeonhole is reflected in both $one and $two, as if they were just different names for the same variable.

The principle works for objects as well as simple string variables. You can copy or clone an object using the = operator in PHP 4, or you can use the clone keyword in PHP 5, in which case you make a simple one-off new copy, which then leads an independent life. You can also assign one to the other by reference, so the two objects point to each other. Any changes made to one will also happen to the other. Again, think of them as two different names for the same thing.
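The same behavior can be demonstrated with objects (PHP 4 style, to match the text; the class and property are our own example):

class Counter
{
    var $count = 0;
}

$a = new Counter();
$b =& $a;          // assignment by reference: one object, two names
$a->count = 5;
echo $b->count;    // prints 5

$c = $a;           // plain = copies the object in PHP 4
$a->count = 9;
echo $c->count;    // still prints 5; $c leads an independent life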
Using JavaScript and jQuery in Drupal Themes

Packt
10 Feb 2011
6 min read
Drupal 6 Theming Cookbook

(For more resources on Drupal, see here.)

Introduction

JavaScript libraries take out the majority of the hassle involved in writing code which will be executed in a variety of browsers, each with its own vagaries. Drupal, by default, uses jQuery, a lightweight, robust, and well-supported package which, since its introduction, has become one of the most popular libraries in use today. While it is possible to wax eloquent about its features and ease of use, its most appealing factor is that it is a whole lot of fun!

jQuery's efficiency and flexibility lies in its use of CSS selectors to target page elements and its use of chaining to link and perform commands in sequence. As an example, let us consider the following block of HTML which holds the items of a typical navigation menu.

<div class="menu">
  <ul class="menu-list">
    <li>Item 1</li>
    <li>Item 2</li>
    <li>Item 3</li>
    <li>Item 4</li>
    <li>Item 5</li>
    <li>Item 6</li>
  </ul>
</div>

Now, let us consider the situation where we want to add the class active to the first menu item in this list and, while we are at it, let us also color this element red. Using arcane JavaScript, we would have accomplished this with something like the following:

var elements = document.getElementsByTagName("ul");
for (var i = 0; i < elements.length; i++) {
  if (elements[i].className === "menu-list") {
    elements[i].childNodes[0].style.color = '#F00';
    if (!elements[i].childNodes[0].className) {
      elements[i].childNodes[0].className = 'active';
    }
    else {
      elements[i].childNodes[0].className = elements[i].childNodes[0].className + ' active';
    }
  }
}

Now, we would accomplish the same task using jQuery as follows:

$("ul.menu-list li:first-child").css('color', '#F00').addClass('active');

The statement we have just seen can be effectively read as: Retrieve all UL tags classed menu-list and having LI tags as children, take the first of these LI tags, style it with some CSS which sets its color to #F00 (red), and then add a class named active to this element.

For better legibility, we can format the previous jQuery with each chained command on a separate line:

$("ul.menu-list li:first-child")
  .css('color', '#F00')
  .addClass('active');

We are just scratching the surface here. More information and documentation on jQuery's features are available at http://jquery.com and http://www.visualjquery.com. A host of plugins which, like Drupal's modules, extend and provide additional functionality, are available at http://plugins.jquery.com.

Another aspect of JavaScript programming that has improved in leaps and bounds is the field of debugging. With its rising ubiquity, developers have introduced powerful debugging tools that are integrated into browsers and provide features, such as interactive debugging, flow control, logging and monitoring, and so on, which have traditionally only been available to developers of other high-level languages. Of the many candidates out there, the most popular and feature-rich is Firebug. It can be downloaded and installed from https://addons.mozilla.org/en-US/firefox/addon/1843.
Including JavaScript files from a theme

This recipe will list the steps required to include a JavaScript file from the .info file of the theme. We will be using the file to ensure that it is being included, by outputting the standard Hello World! string upon page load.

Getting ready

While the procedure is the same for all themes, we will be using the Zen-based myzen theme in this recipe.

How to do it...

The following steps are to be performed inside the myzen theme folder at sites/all/themes/myzen.

Browse into the js subfolder where JavaScript files are conventionally stored.
Create a file named hello.js and open it in an editor.
Add the following code:

alert("Hello World!!");

Save the file and exit the editor.
Browse back up to the myzen folder and open myzen.info in an editor.
Include our new script using the following syntax:

scripts[] = js/hello.js

Save the file and exit the editor.
Rebuild the theme registry and, if JavaScript optimization is enabled for the site, the cache will also need to be cleared.
View any page on the site to see our script taking effect.

How it works...

Once the theme registry is rebuilt and the cache cleared, Drupal adds hello.js to its list of JavaScript files to be loaded and embeds it in the HTML page. The JavaScript is executed before any of the content is displayed on the page, and the resulting page with the alert dialog box should look something like the following screenshot:

There's more...

While we have successfully added our JavaScript in this recipe, Drupal and jQuery provide efficient solutions to work around the issue of the JavaScript being executed as soon as the page is loaded.

Executing JavaScript only after the page is rendered

A solution to the problem of the alert statement being executed before the page is ready is to wrap our JavaScript inside jQuery's ready() function. Using it ensures that the code within is executed only once the page has been rendered and is ready to be acted upon.

if (Drupal.jsEnabled) {
  $(document).ready(function () {
    alert("Hello World!!");
  });
}

Furthermore, we have wrapped the ready() function within a check for Drupal.jsEnabled, which acts as a global killswitch. If this variable is set to false, then JavaScript is turned off for the entire site, and vice versa. It is set to true by default, provided that the user's browser meets Drupal's requirements.

Drupal's JavaScript behaviors

While jQuery's ready() function works well, Drupal recommends the use of behaviors to manage our use of JavaScript. Our Hello World example would now look like this:

Drupal.behaviors.myzenAlert = function (context) {
  alert("Hello World!!");
};

All registered behaviors are called automatically by Drupal once the page is ready. Drupal.behaviors also allows us to forego the call to the ready() function as well as the check for jsEnabled, as these are done implicitly. As with most things Drupal, it is always a good idea to namespace our behaviors based on the module or theme name to avoid conflicts. In this case, the behavior name has been prefixed with myzen as it is part of the myzen theme.
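Behaviors also accept a context argument, which is useful when combining them with jQuery. Here is a small sketch (reusing the menu markup from the introduction; the behavior name is our own): Drupal passes the affected part of the page as context, so scoping our selectors to it avoids touching unrelated markup.

Drupal.behaviors.myzenMenu = function (context) {
  $('ul.menu-list li:first-child', context)
    .css('color', '#F00')
    .addClass('active');
};

Passing context as the second argument to $() restricts the match to elements within that fragment.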
What is REST?

Packt
17 Sep 2014
12 min read
This article by Bhakti Mehta, the author of RESTful Java Patterns and Best Practices, starts with the basic concepts of REST, how to design RESTful services, and best practices around designing REST resources. It also covers the architectural aspects of REST. (For more resources related to this topic, see here.)

Where REST has come from

The confluence of social networking, cloud computing, and the era of mobile applications creates a generation of emerging technologies that allow different networked devices to communicate with each other over the Internet. In the past, there were traditional and proprietary approaches for building solutions encompassing different devices and components communicating with each other over a non-reliable network or through the Internet. Some of these approaches, such as RPC, CORBA, and SOAP-based web services, which evolved as different implementations for Service Oriented Architecture (SOA), required tighter coupling between components along with greater complexities in integration.

As the technology landscape evolves, today's applications are built on the notion of producing and consuming APIs instead of using web frameworks that invoke services and produce web pages. This requirement enforces the need for easier exchange of information between distributed services along with predictable, robust, well-defined interfaces. API-based architecture enables agile development, easier adoption and prevalence, scale, and integration with applications within and outside the enterprise.

HTTP 1.1 is defined in RFC 2616, and is ubiquitously used as the standard protocol for distributed, collaborative, and hypermedia information systems. Representational State Transfer (REST) is inspired by HTTP and can be used wherever HTTP is used. The widespread adoption of REST and JSON opens up the possibilities of applications incorporating and leveraging functionality from other applications as needed. The popularity of REST is mainly because it enables building lightweight, simple, cost-effective modular interfaces, which can be consumed by a variety of clients.

This article covers the following topics:

Introduction to REST
Safety and idempotence
HTTP verbs and REST
Best practices when designing RESTful services
REST architectural components

Introduction to REST

REST is an architectural style that conforms to Web Standards, such as using HTTP verbs and URIs. It is bound by the following principles:

All resources are identified by URIs.
All resources can have multiple representations.
All resources can be accessed/modified/created/deleted by standard HTTP methods.
There is no state on the server.

REST is extensible due to the use of URIs for identifying resources. For example, a URI to represent a collection of book resources could look like this:

http://foo.api.com/v1/library/books

A URI to represent a single book identified by its ISBN could be as follows:

http://foo.api.com/v1/library/books/isbn/12345678

A URI to represent a coffee order resource could be as follows:

http://bar.api.com/v1/coffees/orders/1234

A user in a system can be represented like this:

http://some.api.com/v1/user

A URI to represent all the book orders for a user could be:

http://bar.api.com/v1/user/5034/book/orders

All the preceding samples show a clear, readable pattern, which can be interpreted by the client. All these resources could have multiple representations. The resource examples shown here can be represented by JSON or XML and can be manipulated by the HTTP methods: GET, PUT, POST, and DELETE.
The following table summarizes the HTTP methods and descriptions for the actions taken on a resource, with a simple example of a collection of books in a library.

HTTP method | Resource URI                 | Description
GET         | /library/books               | Gets a list of books
GET         | /library/books/isbn/12345678 | Gets a book identified by ISBN "12345678"
POST        | /library/books               | Creates a new book order
DELETE      | /library/books/isbn/12345678 | Deletes a book identified by ISBN "12345678"
PUT         | /library/books/isbn/12345678 | Updates a specific book identified by ISBN "12345678"
PATCH       | /library/books/isbn/12345678 | Can be used to do a partial update for a book identified by ISBN "12345678"

REST and statelessness

REST is bound by the principle of statelessness. Each request from the client to the server must have all the details needed to understand the request. This helps to improve visibility, reliability, and scalability for requests:

Visibility is improved, as the system monitoring the requests does not have to look beyond one request to get details.
Reliability is improved, as there is no check-pointing/resuming to be done in case of partial failures.
Scalability is improved, as the number of requests that can be processed increases, because the server is not responsible for storing any state.

Roy Fielding's dissertation on the REST architectural style provides details on the statelessness of REST; check http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm

With this initial introduction to the basics of REST, we shall cover the different maturity levels and where REST falls in them in the following section.

Richardson Maturity Model

The Richardson Maturity Model, developed by Leonard Richardson, talks about the basics of REST in terms of resources, verbs, and hypermedia controls. The starting point for the maturity model is to use the HTTP layer as the transport.

Level 0 – Remote Procedure Invocation

This level contains SOAP or XML-RPC sending data as POX (Plain Old XML). Only POST methods are used. This is the most primitive way of building SOA applications, with a single POST method and using XML to communicate between services.

Level 1 – REST resources

This level uses POST methods and, instead of using a function and passing arguments, uses REST URIs. So it still uses only one HTTP method. It is better than Level 0 in that it breaks a complex functionality into multiple resources with one method.

Level 2 – more HTTP verbs

This level uses other HTTP verbs, such as GET, HEAD, DELETE, and PUT, along with POST methods. Level 2 is the real use case of REST, which advocates using different verbs based on the HTTP request methods, and the system can have multiple resources.

Level 3 – HATEOAS

Hypermedia as the Engine of Application State (HATEOAS) is the most mature level of Richardson's model. The responses to client requests contain hypermedia controls, which can help the client decide what next action it can take. Level 3 encourages easy discoverability and makes it easy for responses to be self-explanatory.

Safety and idempotence

This section discusses safe and idempotent methods in detail.

Safe methods

Safe methods are methods that do not change the state on the server. GET and HEAD are safe methods. For example, GET /v1/coffees/orders/1234 is a safe method. Safe methods can be cached. The PUT method is not safe, as it will create or modify a resource on the server. The POST method is not safe for the same reasons. The DELETE method is not safe, as it deletes a resource on the server.
Idempotent methods

An idempotent method is a method that will produce the same results irrespective of how many times it is called. For example, the GET method is idempotent, as multiple calls to the GET resource will always return the same response. The PUT method is idempotent, as calling the PUT method multiple times will update the same resource and not change the outcome. POST is not idempotent, and calling the POST method multiple times can have different results and will result in creating new resources. DELETE is idempotent because once the resource is deleted, it is gone, and calling the method multiple times will not change the outcome.

HTTP verbs and REST

HTTP verbs inform the server what to do with the data sent as part of the URL.

GET

GET is the simplest verb of HTTP, which enables access to a resource. Whenever the client clicks a URL in the browser, it sends a GET request to the address specified by the URL. GET is safe and idempotent. GET requests are cached. Query parameters can be used in GET requests. For example, a simple GET request is as follows:

curl http://api.foo.com/v1/user/12345

POST

POST is used to create a resource. POST requests are neither idempotent nor safe. Multiple invocations of POST requests can create multiple resources. POST requests should invalidate a cache entry if one exists. Query parameters with POST requests are not encouraged. For example, a POST request to create a user can be:

curl -X POST -d '{"name":"John Doe","username":"jdoe","phone":"412-344-5644"}' http://api.foo.com/v1/user

PUT

PUT is used to update a resource. PUT is idempotent but not safe. Multiple invocations of PUT requests should produce the same results by updating the resource. PUT requests should invalidate the cache entry if one exists. For example, a PUT request to update a user can be:

curl -X PUT -d '{"phone":"413-344-5644"}' http://api.foo.com/v1/user

DELETE

DELETE is used to delete a resource. DELETE is idempotent but not safe. DELETE is idempotent because, based on RFC 2616, "the side effects of N > 0 requests is the same as for a single request". This means once the resource is deleted, calling DELETE multiple times will get the same response. For example, a request to delete a user is as follows:

curl -X DELETE http://foo.api.com/v1/user/1234

HEAD

HEAD is similar to a GET request. The difference is that only the HTTP headers are returned and no content is returned. HEAD is idempotent and safe. For example, a request to send a HEAD request with curl is as follows:

curl -X HEAD http://foo.api.com/v1/user

It can be useful to send a HEAD request to see if the resource has changed before trying to get a large representation using a GET request.

PUT vs POST

According to the RFC, the difference between PUT and POST is in the Request URI. The URI identified by POST defines the entity that will handle the POST request. The URI in the PUT request includes the entity in the request. So, POST /v1/coffees/orders means to create a new resource and return an identifier to describe the resource. In contrast, PUT /v1/coffees/orders/1234 means to update the resource identified by "1234" if it exists; otherwise, create a new order and use the URI orders/1234 to identify it.

Best practices when designing resources

This section highlights some of the best practices when designing RESTful resources:

The API developer should use nouns to identify and navigate through resources, leaving the verbs to the HTTP methods. For example, the URI /user/1234/books is better than /user/1234/getBook.
Use associations in the URIs to identify sub-resources. For example, to get the authors for book 5678 for user 1234, use the following URI: /user/1234/books/5678/authors.
For specific variations, use query parameters. For example, to get all the books with 10 reviews, use /user/1234/books?reviews_counts=10.
Allow partial responses as part of query parameters if possible. An example of this case is to get only the name and age of a user; the client can specify ?fields as a query parameter and list the fields which should be sent by the server in the response, using the URI /users/1234?fields=name,age.
Have defaults for the output format of the response in case the client does not specify which format it is interested in. Most API developers choose to send JSON as the default response MIME type.
Use camelCase or use _ for attribute names.
Support a standard API for counts, for example users/1234/books/count, in the case of collections, so the client can get an idea of how many objects can be expected in the response. This will also help the client with pagination queries.
Support a pretty printing option, users/1234?pretty_print. Also, it is a good practice to not cache queries with the pretty print query parameter.
Avoid chattiness by being as verbose as possible in the response. This is because if the server does not provide enough details in the response, the client needs to make more calls to get additional details. That is a waste of network resources, as well as counting against the client's rate limits.

REST architecture components

This section will cover the various components that must be considered when building RESTful APIs.

As seen in the preceding screenshot, REST services can be consumed from a variety of clients and applications running on different platforms and devices, such as mobile devices and web browsers. These requests are sent through a proxy server. The HTTP requests are sent to the resources and, based on the various CRUD operations, the right HTTP method is selected. On the response side, there can be pagination, to ensure the server sends a subset of results. Also, the server can do asynchronous processing, thus improving responsiveness and scale. There can be links in the response, which deals with HATEOAS.

Here is a summary of the various REST architectural components:

HTTP requests use the REST API with HTTP verbs for the uniform interface constraint
Content negotiation allows selecting a representation for a response when there are multiple representations available
Logging helps provide traceability to analyze and debug issues
Exception handling allows sending application-specific exceptions with HTTP codes
Authentication and authorization with OAuth2.0 gives access control to other applications, to take actions without the user having to send their credentials
Validation provides support to send back detailed messages with error codes to the client, as well as validations for the inputs received in the request
Rate limiting ensures the server is not burdened with too many requests from a single client
Caching helps to improve application responsiveness
Asynchronous processing enables the server to asynchronously send back responses to the client
Micro services, which comprise breaking up a monolithic service into fine-grained services
HATEOAS improves usability, understandability, and navigability by returning a list of links in the response
Pagination allows clients to specify the items in a dataset that they are interested in
The REST architectural components in the image can be chained one after the other, as shown previously. For example, there can be a filter chain consisting of filters related to authentication, rate limiting, caching, and logging. This will take care of authenticating the user, checking if the requests from the client are within rate limits, and then a caching filter can check if the request can be served from the cache. This can be followed by a logging filter, which can log the details of the request. For more details, check RESTful Java Patterns and Best Practices.
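As an illustration of how these pieces come together in code, here is a minimal JAX-RS resource sketch for the coffee orders example used earlier; the Order and OrderService classes are hypothetical placeholders, not part of any framework:

import java.util.List;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("v1/coffees/orders")
@Produces(MediaType.APPLICATION_JSON)
public class CoffeeOrdersResource {

    @GET
    public List<Order> getOrders() {
        // safe and idempotent: reading the collection changes nothing
        return OrderService.findAll();
    }

    @GET
    @Path("{id}")
    public Order getOrder(@PathParam("id") String id) {
        return OrderService.find(id);
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response createOrder(Order order) {
        // neither safe nor idempotent: each call creates a new resource
        Order created = OrderService.create(order);
        return Response.status(Response.Status.CREATED).entity(created).build();
    }

    @DELETE
    @Path("{id}")
    public Response deleteOrder(@PathParam("id") String id) {
        // idempotent: repeating the call does not change the outcome
        OrderService.delete(id);
        return Response.noContent().build();
    }
}

The nouns live in the @Path URIs, while the verbs are supplied entirely by the HTTP method annotations.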
Working with Data Components

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.)

Introducing the DataList component

The DataList component displays a collection of data in a list layout with several display types, and supports AJAX pagination. The DataList component iterates through a collection of data and renders its child components for each item. Let us see how to use <p:dataList> to display a list of tag names as an unordered list:

<p:dataList value="#{tagController.tags}" var="tag" type="unordered" itemType="disc">
  #{tag.label}
</p:dataList>

The preceding <p:dataList> component displays tag names as an unordered list of elements marked with disc-type bullets. The valid type options are unordered, ordered, definition, and none. We can use type="unordered" to display items as an unordered collection along with various itemType options such as disc, circle, and square. By default, type is set to unordered and itemType is set to disc. We can set type="ordered" to display items as an ordered list with various itemType options such as decimal, A, a, and i, representing numbers, uppercase letters, lowercase letters, and roman numbers respectively.

Time for action – displaying unordered and ordered data using DataList

Let us see how to display tag names as unordered and ordered lists with various itemType options.

Create <p:dataList> components to display items as unordered and ordered lists using the following code:

<h:form>
  <p:panel header="Unordered DataList">
    <h:panelGrid columns="3">
      <h:outputText value="Disc"/>
      <h:outputText value="Circle" />
      <h:outputText value="Square" />
      <p:dataList value="#{tagController.tags}" var="tag" itemType="disc">
        #{tag.label}
      </p:dataList>
      <p:dataList value="#{tagController.tags}" var="tag" itemType="circle">
        #{tag.label}
      </p:dataList>
      <p:dataList value="#{tagController.tags}" var="tag" itemType="square">
        #{tag.label}
      </p:dataList>
    </h:panelGrid>
  </p:panel>
  <p:panel header="Ordered DataList">
    <h:panelGrid columns="4">
      <h:outputText value="Number"/>
      <h:outputText value="Uppercase Letter" />
      <h:outputText value="Lowercase Letter" />
      <h:outputText value="Roman Letter" />
      <p:dataList value="#{tagController.tags}" var="tag" type="ordered">
        #{tag.label}
      </p:dataList>
      <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="A">
        #{tag.label}
      </p:dataList>
      <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="a">
        #{tag.label}
      </p:dataList>
      <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="i">
        #{tag.label}
      </p:dataList>
    </h:panelGrid>
  </p:panel>
</h:form>

Implement the TagController.getTags() method to return a collection of tag objects:

public class TagController {
    private List<Tag> tags = null;

    public TagController() {
        tags = loadTagsFromDB();
    }

    public List<Tag> getTags() {
        return tags;
    }
}

What just happened?

We have created DataList components to display tag names as an unordered list using type="unordered" and as an ordered list using type="ordered" with the various supported itemType values. This is shown in the following screenshot:

Using DataList with pagination support

DataList has built-in pagination support that can be enabled by setting paginator="true". By enabling pagination, the various page navigation options will be displayed using the default paginator template. We can customize the paginator template to display only the desired options.
The paginator can be customized using the paginatorTemplate option, which accepts the following keys of UI controls:

FirstPageLink
LastPageLink
PreviousPageLink
NextPageLink
PageLinks
CurrentPageReport
RowsPerPageDropdown

Note that {RowsPerPageDropdown} has its own template, and the options to display are provided via the rowsPerPageTemplate attribute (for example, rowsPerPageTemplate="5,10,15"). Also, {CurrentPageReport} has its own template, defined with the currentPageReportTemplate option. You can use the {currentPage}, {totalPages}, {totalRecords}, {startRecord}, and {endRecord} keywords within the currentPageReport template. The default is "{currentPage} of {totalPages}". The default paginator template is "{FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink}". We can customize the paginator template to display only the desired options. For example:

{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}

The paginator can be positioned using the paginatorPosition attribute in three different locations: top, bottom, or both (default).

The DataList component provides the following attributes for customization:

rows: This is the number of rows to be displayed per page.
first: This specifies the index of the first row to be displayed. The default is 0.
paginator: This enables pagination. The default is false.
paginatorTemplate: This is the template of the paginator.
rowsPerPageTemplate: This is the template of the rowsPerPage dropdown.
currentPageReportTemplate: This is the template of the currentPageReport UI.
pageLinks: This specifies the maximum number of page links to display. The default value is 10.
paginatorAlwaysVisible: This defines if the paginator should be hidden when the total data count is less than the number of rows per page. The default is true.
rowIndexVar: This specifies the name of the iterator to refer to for each row index.
varStatus: This specifies the name of the exported request-scoped variable representing the state of the iteration, same as the <ui:repeat> attribute varStatus.

Time for action – using DataList with pagination

Let us see how we can use the DataList component's pagination support to display five tags per page.

Create a DataList component with pagination support along with a custom paginatorTemplate:

<p:panel header="DataList Pagination">
  <p:dataList value="#{tagController.tags}" var="tag" id="tags" type="none"
    paginator="true" rows="5"
    paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
    rowsPerPageTemplate="5,10,15">
    <f:facet name="header">
      Tags
    </f:facet>
    <h:outputText value="#{tag.id} - #{tag.label}" style="margin-left:10px" />
    <br/>
  </p:dataList>
</p:panel>

What just happened?

We have created a DataList component with pagination support by setting paginator="true". We have customized the paginator template to display additional information such as CurrentPageReport and RowsPerPageDropdown. Also, we have used the rowsPerPageTemplate attribute to specify the values for RowsPerPageDropdown.
The following screenshot displays the result:

Displaying tabular data using the DataTable component

DataTable is an enhanced version of the standard DataTable that provides various additional features such as:

  • Pagination
  • Lazy loading
  • Sorting
  • Filtering
  • Row selection
  • Inline row/cell editing
  • Conditional styling
  • Expandable rows
  • Grouping and SubTable

and many more.

In our TechBuzz application, the administrator can view a list of users and enable/disable user accounts. First, let us see how we can display a list of users using a basic DataTable, as follows:

<p:dataTable id="usersTbl" var="user" value="#{adminController.users}">
  <f:facet name="header">
    List of Users
  </f:facet>
  <p:column headerText="Id">
    <h:outputText value="#{user.id}" />
  </p:column>
  <p:column headerText="Email">
    <h:outputText value="#{user.emailId}" />
  </p:column>
  <p:column headerText="FirstName">
    <h:outputText value="#{user.firstName}" />
  </p:column>
  <p:column headerText="Disabled">
    <h:outputText value="#{user.disabled}" />
  </p:column>
  <f:facet name="footer">
    Total no. of Users: #{fn:length(adminController.users)}.
  </f:facet>
</p:dataTable>

The following screenshot shows us the result:

PrimeFaces 4.0 introduced the Sticky component and provides out-of-the-box support for DataTable to keep the header sticky while scrolling, using the stickyHeader attribute:

<p:dataTable var="user" value="#{adminController.users}" stickyHeader="true">
  ...
</p:dataTable>

Using pagination support

If there are a large number of users, we may want to display users in a page-by-page style. DataTable has in-built support for pagination.

Time for action – using DataTable with pagination

Let us see how we can display five users per page using pagination.

Create a DataTable component using pagination to display five records per page, using the following code:

<p:dataTable id="usersTbl" var="user" value="#{adminController.users}"
  paginator="true" rows="5"
  paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink}
    {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
  currentPageReportTemplate="( {startRecord} - {endRecord}) of {totalRecords} Records."
  rowsPerPageTemplate="5,10,15">
  <p:column headerText="Id">
    <h:outputText value="#{user.id}" />
  </p:column>
  <p:column headerText="Email">
    <h:outputText value="#{user.emailId}" />
  </p:column>
  <p:column headerText="FirstName">
    <h:outputText value="#{user.firstName}" />
  </p:column>
  <p:column headerText="Disabled">
    <h:outputText value="#{user.disabled}" />
  </p:column>
</p:dataTable>

What just happened?

We have created a DataTable component with the pagination feature to display five rows per page. Also, we have customized the paginator template and provided an option to change the page size dynamically using the rowsPerPageTemplate attribute.

Using column sorting support

DataTable comes with built-in support for sorting on a single column or on multiple columns.
You can define a column as sortable using the sortBy attribute as follows:

<p:column headerText="FirstName" sortBy="#{user.firstName}">
  <h:outputText value="#{user.firstName}" />
</p:column>

You can specify the default sort column and sort order using the sortBy and sortOrder attributes on the <p:dataTable> element:

<p:dataTable id="usersTbl2" var="user" value="#{adminController.users}"
  sortBy="#{user.firstName}" sortOrder="descending">
</p:dataTable>

The <p:dataTable> component's default sorting algorithm uses a Java comparator; you can use your own customized sort method as well:

<p:column headerText="FirstName" sortBy="#{user.firstName}"
  sortFunction="#{adminController.sortByFirstName}">
  <h:outputText value="#{user.firstName}" />
</p:column>

public int sortByFirstName(Object firstName1, Object firstName2) {
  // return -1, 0, or 1 if firstName1 is less than, equal to,
  // or greater than firstName2 respectively
  return ((String) firstName1).compareToIgnoreCase((String) firstName2);
}

By default, DataTable's sortMode is set to single; to enable sorting on multiple columns, set sortMode to multiple. In multiple-columns sort mode, clicking on a column while holding the meta key (Ctrl or command) adds that column to the sort order group:

<p:dataTable id="usersTbl" var="user" value="#{adminController.users}" sortMode="multiple">
</p:dataTable>

Using column filtering support

DataTable provides support for column-level filtering as well as global filtering (on all columns), and provides an option to hold the list of filtered records. In addition to the default match mode startsWith, we can use various other match modes such as endsWith, exact, and contains.

Time for action – using DataTable with filtering

Let us see how we can use filters with the users' DataTable.

Create a DataTable component and apply column-level filters, plus a global filter that applies to all columns:

<p:dataTable widgetVar="userTable" var="user" value="#{adminController.users}"
  filteredValue="#{adminController.filteredUsers}"
  emptyMessage="No Users found for the given Filters">
  <f:facet name="header">
    <p:outputPanel>
      <h:outputText value="Search all Columns:" />
      <p:inputText id="globalFilter" onkeyup="userTable.filter()" style="width:150px" />
    </p:outputPanel>
  </f:facet>
  <p:column headerText="Id">
    <h:outputText value="#{user.id}" />
  </p:column>
  <p:column headerText="Email" filterBy="#{user.emailId}"
    footerText="contains" filterMatchMode="contains">
    <h:outputText value="#{user.emailId}" />
  </p:column>
  <p:column headerText="FirstName" filterBy="#{user.firstName}"
    footerText="startsWith">
    <h:outputText value="#{user.firstName}" />
  </p:column>
  <p:column headerText="LastName" filterBy="#{user.lastName}"
    filterMatchMode="endsWith" footerText="endsWith">
    <h:outputText value="#{user.lastName}" />
  </p:column>
  <p:column headerText="Disabled" filterBy="#{user.disabled}"
    filterOptions="#{adminController.userStatusOptions}"
    filterMatchMode="exact" footerText="exact">
    <h:outputText value="#{user.disabled}" />
  </p:column>
</p:dataTable>

Initialize userStatusOptions in the AdminController managed bean.
import java.util.List;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;
import javax.faces.model.SelectItem;

@ManagedBean
@ViewScoped
public class AdminController {
  private List<User> users = null;
  private List<User> filteredUsers = null;
  private SelectItem[] userStatusOptions;

  public AdminController() {
    // loadAllUsersFromDB() fetches the users from the database (implementation not shown)
    users = loadAllUsersFromDB();
    this.userStatusOptions = new SelectItem[3];
    this.userStatusOptions[0] = new SelectItem("", "Select");
    this.userStatusOptions[1] = new SelectItem("true", "True");
    this.userStatusOptions[2] = new SelectItem("false", "False");
  }

  // setters and getters
}

What just happened?

We have used various filterMatchMode instances, such as startsWith, endsWith, and contains, while applying column-level filters. We have used the filterOptions attribute to specify the predefined filter values, which are displayed as a select drop-down list. As we have specified filteredValue="#{adminController.filteredUsers}", once the filters are applied, the filtered users list will be populated into the filteredUsers property. The following is the resultant screenshot:

Since PrimeFaces Version 4.0, we can specify the sortBy and filterBy properties as sortBy="emailId" and filterBy="emailId" instead of sortBy="#{user.emailId}" and filterBy="#{user.emailId}".

A couple of important tips

It is suggested to keep the bean that holds the filteredValue attribute in a scope longer than request, such as the view scope, so that the filtered list is still accessible after filtering.

The filter located in the header is a global one that applies to all fields; it is implemented by calling the client-side API method filter(). The important part is to specify the ID of the input text as globalFilter, which is a reserved identifier for DataTable.

Selecting DataTable rows

Selecting one or more rows from a table and performing operations such as editing or deleting them is a very common requirement. The DataTable component provides several ways to select a row(s).

Selecting a single row

We can use a PrimeFaces Command component, such as commandButton or commandLink, and bind the selected row to a server-side property using <f:setPropertyActionListener>, shown as follows:

<p:dataTable id="usersTbl" var="user" value="#{adminController.users}">
  <!-- Column definitions -->
  <p:column style="width:20px;">
    <p:commandButton id="selectButton" update=":form:userDetails"
      icon="ui-icon-search" title="View">
      <f:setPropertyActionListener value="#{user}"
        target="#{adminController.selectedUser}" />
    </p:commandButton>
  </p:column>
</p:dataTable>

<h:panelGrid id="userDetails" columns="2">
  <h:outputText value="Id:" />
  <h:outputText value="#{adminController.selectedUser.id}"/>
  <h:outputText value="Email:" />
  <h:outputText value="#{adminController.selectedUser.emailId}"/>
</h:panelGrid>

Selecting rows using a row click

Instead of having a separate button to trigger binding of a selected row to a server-side property, PrimeFaces provides another, simpler way to bind the selected row, using the selectionMode, selection, and rowKey attributes.
Also, we can use the rowSelect and rowUnselect events to update other components based on the selected row, shown as follows:

<p:dataTable var="user" value="#{adminController.users}"
  selectionMode="single" selection="#{adminController.selectedUser}"
  rowKey="#{user.id}">
  <p:ajax event="rowSelect" listener="#{adminController.onRowSelect}"
    update=":form:userDetails"/>
  <p:ajax event="rowUnselect" listener="#{adminController.onRowUnselect}"
    update=":form:userDetails"/>
  <!-- Column definitions -->
</p:dataTable>

<h:panelGrid id="userDetails" columns="2">
  <h:outputText value="Id:" />
  <h:outputText value="#{adminController.selectedUser.id}"/>
  <h:outputText value="Email:" />
  <h:outputText value="#{adminController.selectedUser.emailId}"/>
</h:panelGrid>

Similarly, we can select multiple rows using selectionMode="multiple" and bind the selection attribute to an array or list of user objects:

<p:dataTable var="user" value="#{adminController.users}"
  selectionMode="multiple" selection="#{adminController.selectedUsers}"
  rowKey="#{user.id}">
  <!-- Column definitions -->
</p:dataTable>

rowKey should be a unique identifier from your data model; it is used by DataTable to find the selected rows. You can either define this key using the rowKey attribute, or bind a data model that implements org.primefaces.model.SelectableDataModel.

When the multiple selection mode is enabled, we need to hold the Ctrl or command key and click on the rows to select multiple rows. If we click on a row without holding the Ctrl or command key, the previous selection will be cleared, with only the last clicked row selected. We can customize this behavior using the rowSelectMode attribute. If you set rowSelectMode="add", then when you click on a row it will keep the previous selection and add the currently selected row, even if you don't hold the Ctrl or command key. The default rowSelectMode value is new.

We can disable the row selection feature by setting disabledSelection="true".

Selecting rows using a radio button / checkbox

Another very common scenario is having a radio button or checkbox for each row, so that the user can select one or more rows and then perform actions such as edit or delete. The DataTable component provides radio-button-based single row selection using a nested <p:column> element with selectionMode="single":

<p:dataTable var="user" value="#{adminController.users}"
  selection="#{adminController.selectedUser}" rowKey="#{user.id}">
  <p:column selectionMode="single"/>
  <!-- Column definitions -->
</p:dataTable>

The DataTable component also provides checkbox-based multiple row selection using a nested <p:column> element with selectionMode="multiple":

<p:dataTable var="user" value="#{adminController.users}"
  selection="#{adminController.selectedUsers}" rowKey="#{user.id}">
  <p:column selectionMode="multiple"/>
  <!-- Column definitions -->
</p:dataTable>

In our TechBuzz application, the administrator would like to have a facility to select multiple users and disable them in one go. Let us see how we can implement this using checkbox-based multiple row selection.
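As a rough illustration of where this is heading, the checkbox selection column can be combined with a command button that acts on the checked rows. The following is a minimal sketch under stated assumptions, not the book's actual listing: disableSelectedUsers(), the selectedUsers property, and the setDisabled() mutator on User are assumed names.

<p:dataTable var="user" value="#{adminController.users}"
  selection="#{adminController.selectedUsers}" rowKey="#{user.id}">
  <p:column selectionMode="multiple"/>
  <!-- Column definitions -->
  <f:facet name="footer">
    <p:commandButton value="Disable Selected Users"
      action="#{adminController.disableSelectedUsers}" update="@form"/>
  </f:facet>
</p:dataTable>

On the server side, a matching action method on AdminController might look like this:

// Hypothetical members of AdminController; all names here are assumptions.
private List<User> selectedUsers; // populated via the selection attribute (getter/setter assumed)

public void disableSelectedUsers() {
  for (User user : selectedUsers) {
    user.setDisabled(true); // assumed mutator on the User model
    // persist the change through your service/DAO layer (not shown)
  }
}

Binding the button's action to the same bean that holds the selection keeps the round trip simple: the DataTable posts the checked rows into selectedUsers before the action method runs, so no extra lookup is needed.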
Page Management - Part One in CMS Design

Packt
14 Dec 2010
6 min read
CMS Design Using PHP and jQuery: build and improve your in-house PHP CMS by enhancing it with jQuery.

  • Create a completely functional and professional-looking CMS
  • Add a modular architecture to your CMS and create template-driven web designs
  • Use jQuery plugins to enhance the "feel" of your CMS
  • A step-by-step explanatory tutorial to get your hands dirty in building your own CMS

(For more resources on this subject, see here.)

How pages work in a CMS

A "page" is simply the main content which should be shown when a certain URL is requested. In a non-CMS website, this is easy to see, as a single URL returns a distinct HTML file. In a CMS, though, the page is generated dynamically, and may include features such as plugins, different views depending on whether the reader was searching for something, pagination, and other little complexities.

In most websites, a page is easily identified as the large content area in the middle (this is an over-simplification). In others, it's harder to tell, as the onscreen page may be composed of content snippets from other parts of the site.

We handle these differences by using page "types", each of which can be rendered differently on the frontend. Examples of types include gallery pages, forms, news contents, search results, and so on.

In this article, we will create the simplest type, which we will call "normal". This consists of a content-entry textarea in the admin area, and direct output of that content on the front-end. You could call this "default" if you want, but since a CMS is not always used by people from a technical background, it makes sense to use a word that they are more likely to recognize. I have been asked before by clients what "default" means, but I've never been asked what "normal" means.

At the very least, a CMS needs some way to create the simplest of web pages. This is why the "normal" type is not a plugin, but is built into the core.

Listing pages in the admin area

To begin, we will add Pages to the admin menu. Edit /ww.admin/header.php and add the following highlighted line:

<ul>
 <li><a href="/ww.admin/pages.php">Pages</a></li>
 <li><a href="/ww.admin/users.php">Users</a></li>

And one more thing—when we log into the administration part of the CMS, it makes sense to have the "front page" of the admin area be the Pages section. After all, most of the work in a CMS is done in the Pages section. So, we change /ww.admin/index.php so it is a synonym for /ww.admin/pages.php. Replace the /ww.admin/index.php file with this:

<?php
require 'pages.php';

Next, let's get started on the Pages section. First, we will create /ww.admin/pages.php:

<?php
require 'header.php';
echo '<h1>Pages</h1>';
// { load menu
echo '<div class="left-menu">';
require 'pages/menu.php';
echo '</div>';
// }
// { load main page
echo '<div class="has-left-menu">';
require 'pages/forms.php';
echo '</div>';
// }
echo '<style type="text/css"> @import "pages/css.css";</style>';
require 'footer.php';

Notice how I've commented blocks of code, using // { to open the comment at the beginning of the block, and // } at the end of the block. This is done because a number of text editors have a feature called "folding", which allows blocks enclosed within delimiters such as { and } to be hidden from view, with just the first line showing. For instance, the previous code example looks like this in my Vim editor:

What pages.php does is load the headers, load the menu and page form, and then load the footers.
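One small detail before moving on: pages.php imports pages/css.css, which is not created in this excerpt. To avoid a missing-file request, you can start with an empty stub or a minimal placeholder such as the following (the rule shown is purely illustrative):

/* /ww.admin/pages/css.css: placeholder until real styles are added */
#pages-wrapper ul {
  list-style: none;
}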
For now, create the directory /ww.admin/pages and create a file in it called /ww.admin/pages/forms.php:

<h2>FORM GOES HERE</h2>

And now we can create the page menu. Use the following code to create the file /ww.admin/pages/menu.php:

<?php
echo '<div id="pages-wrapper">';
$rs=dbAll('select id,type,name,parent from pages order by ord,name');
$pages=array();
foreach($rs as $r){
  if(!isset($pages[$r['parent']]))$pages[$r['parent']]=array();
  $pages[$r['parent']][]=$r;
}
function show_pages($id,$pages){
  if(!isset($pages[$id]))return;
  echo '<ul>';
  foreach($pages[$id] as $page){
    echo '<li id="page_'.$page['id'].'">'
      .'<a href="pages.php?id='.$page['id'].'">'
      .'<ins>&nbsp;</ins>'.htmlspecialchars($page['name'])
      .'</a>';
    show_pages($page['id'],$pages);
    echo '</li>';
  }
  echo '</ul>';
}
show_pages(0,$pages);
echo '</div>';

That will build up a <ul> tree of pages.

Note the use of the "parent" field in there. Most websites follow a hierarchical "parent-child" method of arranging pages, with all pages being a child of either another page, or the "root" of the site. The parent field is filled with the ID of the page within which it is situated.

There are two main ways to indicate which page is the "front" page (that is, what page is shown when someone loads up http://cms/ with no page name indicated):

1. You can have one single page in the database which has a parent of 0, meaning that it has no parent—this page is what is looked for when http://cms/ is called. In this scheme, pages such as http://cms/pagename have their parent field set to the ID of the one page which has a parent of 0.
2. You can have many pages which have 0 as their parent, and each of these is said to be a "top-level" page. One page in the database has a flag set in a special field which indicates that this is the front page. In this scheme, pages named like http://cms/pagename all have a parent of 0, and the page corresponding to http://cms/ can be located anywhere at all in the database.

Case 1 has a disadvantage, in that if you want to change which page is the front page, you need to move the current page under another one (or delete it), then move all the current page's child-pages so they have the new front page's ID as a parent. This can get messy if the new front page already has some sub-pages—especially if there are any with equal names.

Case 2 is a much better choice because you can change the front page whenever you want, and it doesn't cause any problems at all.

When you view the site in your browser now, it looks like this:
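Note that menu.php calls a dbAll() helper that is not defined anywhere in this excerpt. A minimal sketch of what it might look like, assuming a PDO connection is available in a global $dbh (the helper name comes from the listing above; this implementation is an assumption):

<?php
// Hypothetical stand-in for the CMS's dbAll() helper:
// runs a query and returns all rows as associative arrays.
function dbAll($query) {
  global $dbh; // assumed PDO instance, set up elsewhere in the CMS
  $stmt = $dbh->query($query);
  return $stmt->fetchAll(PDO::FETCH_ASSOC);
}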


Non-default Magento Themes

Packt
07 Oct 2009
4 min read
Uses of non-default themes

Magento's flexibility in themes gives a lot of scope for possible uses of non-default themes. Along with the ability to have seasonal themes on our Magento store, non-default themes have a range of uses:

  • A/B testing
  • Easily rolled-back themes
  • Changing the look and feel of specific pages, such as for a particular product within your store
  • Creating brand-specific stores within your store, distinguishing your store's products further, if you sell a variety of the same products from different brands

A/B testing

A/B testing allows you to compare two different aspects of your store. You can test different designs in different weeks, and can then compare which design attracted more sales. Magento's support for non-default themes allows you to do this relatively easily. Bear in mind that the results of such a test may not represent what actually drives your customers to buy your store's products, for a number of reasons: true A/B testing on websites is performed by presenting the different designs to your visitors at random. However, performing it this way may still give you an insight into what your customers prefer.

Easily rolled-back themes

If you want to make changes to your store's existing theme, then you can make use of a non-default theme to overwrite certain aspects of your store's look and feel, without editing your original theme. This means that if your customers don't like a change, or a change causes problems in a particular browser, then you can simply roll back the changes by changing your store's settings to display the original theme.

Non-default themes

A default theme is the default look and feel of your Magento store. That is, if no other styling or presentational logic is specified, then the default theme is the one that your store's visitors will see. Magento's default theme looks similar to the following screenshot:

Non-default themes are very similar to default themes in Magento. Like default themes, Magento's non-default themes can consist of one or more of the following elements:

  • Skins—images and CSS
  • Templates—the logic that inserts each block's content or feature (for example, the shopping cart) into the page
  • Layout—XML files that define where content is displayed
  • Locale—translations of your store

The major difference between a default and a non-default theme in Magento is that a default theme must have all of the layout and template files required for Magento to run. On the other hand, a non-default theme does not need all of these to function, as it relies on your store's default theme to some extent.

Locales in Magento: many themes are already partially or fully translated into a huge variety of languages. Locales can be downloaded from the Magento Commerce website at http://www.magentocommerce.com/langs.

Magento theme hierarchy

In its current releases, Magento supports two themes: a default theme, and a non-default theme. The non-default theme takes priority when Magento is deciding what it needs to display. Any elements not found in the non-default theme are then looked for in the default theme specified. Future versions of Magento should allow more than one default theme to be used at a time, as well as more detailed control over the hierarchy of themes in your store.

Magento theme directory structure

Every theme in Magento must maintain the same directory structure for its files. The skin, templates, and layout are stored in their own directories.
Templates

Templates are located in the app/design/frontend/interface/theme/template directory of your Magento store's installation, where interface is your store's interface (or package) name (usually default), and theme is the name of your theme (for example, cheese).

Templates are further organized in subdirectories by module. So, templates related to the catalog module are stored in the app/design/frontend/interface/theme/template/catalog directory, whereas templates for the checkout module are stored in the app/design/frontend/interface/theme/template/checkout directory.

Layout

Layout files are stored in app/design/frontend/interface/theme/layout. The name of each layout file refers to a particular module. For example, catalog.xml contains layout information for the catalog module, whereas checkout.xml contains layout information for the checkout module.
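Putting this together, the on-disk layout for a hypothetical non-default theme named cheese (the example theme name used above) under the default interface might look like the following. The individual subdirectories and files shown are illustrative; note that skins live under skin/ rather than app/design/:

app/design/frontend/default/cheese/
    template/
        catalog/
        checkout/
    layout/
        catalog.xml
        checkout.xml
skin/frontend/default/cheese/
    css/
    images/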